id | title | abstract | authors | published_date | link | markdown
---|---|---|---|---|---|---
2309.16868 | Analytical Computation of Sensitivity Coefficients in Hybrid AC/DC
Micro-Grid | In this paper, we present a closed-form model for the analytical computation
of the power flow sensitivity coefficients (SCs) for hybrid AC/DC networks. The
SCs are defined as the partial derivatives of the nodal voltages with respect to
the active and reactive power injections. The proposed method is inspired by an
existing SC computation process proposed for AC networks and here extended to
include both the DC grid and the relevant AC/DC Interfacing Converters (ICs).
The ICs can operate under different control modes i.e. voltage or power.
Additionally, the model is able to compute the SCs for three-phase networks
subjected to unbalanced loading conditions. The proposed method is numerically
validated by means of a comparison with a detailed time-domain simulation model
solved within the EMTP-RV simulation environment. Furthermore, we provide a
formal proof regarding the uniqueness of the proposed SCs computational model
for hybrid AC/DC networks. | Willem Lambrichts, Mario Paolone | 2023-09-28T21:54:43Z | http://arxiv.org/abs/2309.16868v2 | # Analytical Computation of the Sensitivity Coefficients in Hybrid AC/DC Networks
###### Abstract
In this paper, we present a closed-form model for the analytical computation of the power flow sensitivity coefficients (SCs) for hybrid AC/DC networks. The SCs are defined as the partial derivatives of the nodal voltages with respect to the active and reactive power injections. The proposed method is inspired by an existing SC computation process proposed for AC networks and here extended to include both the DC grid and the relevant AC/DC Interfacing Converters (ICs). The ICs can operate under different control modes i.e. voltage or power. Additionally, the model is able to compute the SCs for three-phase networks subjected to unbalanced loading conditions. The proposed method is numerically validated by means of a comparison with a detailed time-domain simulation model solved within the EMTP-RV simulation environment. Furthermore, we provide a formal proof regarding the uniqueness of the proposed SCs computational model for hybrid AC/DC networks.
Sensitivity coefficients, Hybrid AC/DC networks, Optimal power flow, Unbalanced networks, Microgrids.
## I Introduction
Hybrid AC/DC microgrids are a promising solution for future power grids relying heavily on renewable sources. Integrating AC and DC networks has several advantages: 1) an increased overall efficiency of the system, because DC sources and loads are directly connected in the DC grid and thus fewer power conversion sources are required [1]; 2) a lower infrastructure investment cost because of the material savings from cables and transformers and 3) a more flexible grid control that is mainly driven by the controllability of the AC/DC Interfacing Converters (ICs) [1, 2].
Grid-aware real-time control is a critical element that is desired for a secure and optimal operation of these hybrid AC/DC networks. One of the main blocks of any real-time control is the Optimal Power Flow (OPF), which aims at computing the optimal setpoints of the controllable Distributed Energy Resources (DER) in both the AC and DC grid and the optimal ICs' setpoints [3].
OPF-type controls require an accurate model of the full hybrid network that is typically defined by the power flow (PF) model. The PF equations for the AC and DC network are strongly non-convex and, therefore, difficult to solve in order to determine the global minimum. Additionally, accurate models of the ICs that link the AC and DC networks need to be included. The inner control loops of the ICs, which are typically operated as Voltage Source Converters (VSC), decouple the d and q frames. Therefore, two electrical quantities can be controlled simultaneously e.g. the DC voltage and the AC reactive power, or the AC active power and the AC reactive power. The control modes are referred to as voltage control: \(V_{dc}-Q_{ac}\) and power control: \(P_{ac}-Q_{ac}\). As a consequence, the ICs' model has to be suitably adapted depending on the control mode. Furthermore, the real-time control algorithm typically requires a fast and accurate computation of the OPF problem for time-critical control applications.
Various methods have been presented in the literature for the solution of the OPF problem in hybrid AC/DC networks. Typically, the convexification of the PF and ICs' models is achieved by either relaxing the non-convex constraints or by linearizing them. Most of the works on hybrid AC/DC OPF are based on the first method where relaxation techniques are deployed. Therefore, new equations are typically added to identify the structure of a new feasible set that is also convex. Reference [4] transforms the non-convex optimisation problem into a semi-definite program (SDP) and shows that the SDP relaxation is exact under specific technical conditions. The authors of [5] also propose an SDP relaxation to convexify the power losses model and operational constraints of the VSC. Reference [6] presents a second-order cone programming (SOCP) relaxation and models the ICs as dummy generators that absorb (or inject) active and reactive power into the AC network. The DC side is modelled similarly and includes a proportional loss. Reference [7] follows a similar approach. In [8] the authors use an SOCP for the OPF in stand-alone DC microgrids. Reference [9] proposes a method to include the ICs that are able to operate under different operation modes. The method is implemented as an extension of the MATPOWER package [10]. References [11] and [12] solve the OPF problem where the ICs are only operated using droop control. In [13] an open-source framework for the unified OPF of hybrid High-Voltage DC grids is presented. The work uses a state space relaxation that relaxes the DC states to voltage phasors and solves the hybrid network as an equivalent AC network.
In the second approach, the grid and ICs models are simplified by, for instance, linearizing the underlying models around their operating point. This approach approximates the model, and thus, may reduce its accuracy. However, because of the model's linear properties, the OPF problem can be solved very efficiently, which is important for real-time optimal control. The method is associated with the concept of sensitivity coefficients (SCs) i.e. the partial derivatives of |
2305.09445 | The Dirichlet series of the arithmetic derivative | The main object of this paper is to give the generalized von Mangoldt
function using the L-additive function, which makes it possible to
calculate the Dirichlet series of the arithmetic derivative $\delta$ and the
Dirichlet series defined by: $$\sum \limits_{n\geq 1}
\frac{f(n)\delta(n)}{n^s}$$ where $f$ is a classical arithmetic function. | Es-said En-naoui | 2022-12-11T11:41:04Z | http://arxiv.org/abs/2305.09445v1 | # The Dirichlet series of the arithmetic derivative
###### Abstract
The main object of this paper is to give the generalized von Mangoldt function using the L-additive function, which makes it possible to calculate the Dirichlet series of the arithmetic derivative \(\delta\) and the Dirichlet series defined by:
\[\sum_{n\geq 1}\frac{f(n)\delta(n)}{n^{s}}\]
where \(f\) is a classical arithmetic function.
## 1 Introduction
The arithmetic derivative of a natural number \(n\), denoted by \(\delta(n)\) or \(D(n)\) or \(n^{\prime}\), has been the subject of extensive study, from Barbeau, E. J (see, e.g., [4]) to P. Haukkanen, J. K. Merikoski, and T. Tossavainen (see, e.g., [2]).
In this work, we address two major results: the Dirichlet product of L-additive functions and the generalized von Mangoldt function built from an L-additive function \(f\). We use this generalized von Mangoldt function to find alternative proofs for many series expansions that depend on the arithmetic derivative. The methods readily generalize, and can be applied to other L-additive functions. Our principal result is that:
\[\sum_{n\geq 1}\frac{\delta(n)}{n^{s}}=\zeta(s-1)\sum_{p}\frac{1}{p^{s}-p}\]
where \(\delta\) is the arithmetic derivative function.
Let \(n\) be a positive integer. Its _arithmetic derivative_ is the function \(\delta\ :\ \mathbb{N}\rightarrow\mathbb{N}\), defined by the rules :
1. \(\delta(p)=1\) for all primes \(p\)
2. \(\delta(mn)=m\delta(n)+n\delta(m)\) for all positive integers \(m\) and \(n\) (the Leibniz rule)
Let \(n\) be a positive integer. If \(n=\prod_{i=1}^{s}p_{i}^{\alpha_{i}}\) is the prime factorization of \(n\), then the formula for computing the arithmetic derivative of \(n\) is (see, e.g., [4, 17]) given by:
\[\delta(n)=n\sum_{i=1}^{s}\frac{\alpha_{i}}{p_{i}}=n\sum_{p^{\alpha}||n}\frac{ \alpha}{p} \tag{1}\]
A brief summary on the history of arithmetic derivative and its generalizations to other number sets can be found, e.g., in [4, 17, 9].
Similarly, one can define _the arithmetic logarithmic derivative_[17] as
\[\operatorname{ld}(n)=\frac{\delta(n)}{n}.\]
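As a small computational illustration of formula (1) and the logarithmic derivative (not part of the original paper; the function names and the use of `sympy.factorint` are our own choices), a minimal sketch:

```python
from fractions import Fraction
from sympy import factorint  # prime factorisation: factorint(12) -> {2: 2, 3: 1}

def arithmetic_derivative(n: int) -> int:
    """delta(n) = n * sum(alpha_i / p_i) over the prime factorisation of n, Eq. (1)."""
    if n <= 1:
        return 0  # delta(1) = 0, the empty sum
    return int(n * sum(Fraction(a, p) for p, a in factorint(n).items()))

def logarithmic_derivative(n: int) -> Fraction:
    """ld(n) = delta(n) / n."""
    return Fraction(arithmetic_derivative(n), n)

# 12 = 2^2 * 3, so delta(12) = 12 * (2/2 + 1/3) = 16 and ld(12) = 4/3
assert arithmetic_derivative(12) == 16
assert logarithmic_derivative(12) == Fraction(4, 3)

# Leibniz rule check: delta(mn) = m*delta(n) + n*delta(m)
m, n = 6, 10
assert arithmetic_derivative(m * n) == m * arithmetic_derivative(n) + n * arithmetic_derivative(m)
```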
First of all, to cultivate analytic number theory one must acquire a considerable skill for operating with arithmetic functions. We begin with a few elementary considerations.
**Definition 1.1** (arithmetic function).: An **arithmetic function** is a function \(f:\mathbb{N}\longrightarrow\mathbb{C}\) with domain of definition the set of natural numbers \(\mathbb{N}\) and range a subset of the set of complex numbers \(\mathbb{C}\).
**Definition 1.2** (multiplicative function).: A function \(f\) is called an **multiplicative function** if and only if :
\[f(nm)=f(n)f(m) \tag{2}\]
for every pair of coprime integers \(n\),\(m\). In case (2) is satisfied for every pair of integers \(n\) and \(m\), which are not necessarily coprime, then the function \(f\) is called **completely multiplicative**.
Clearly, if \(f\) is a multiplicative function, then \(f(n)=f(p_{1}^{\alpha_{1}})\dots f(p_{s}^{\alpha_{s}})\) for any positive integer \(n\) such that \(n=p_{1}^{\alpha_{1}}\dots p_{s}^{\alpha_{s}}\), and if \(f\) is completely multiplicative, then \(f(n)=f(p_{1})^{\alpha_{1}}\dots f(p_{s})^{\alpha_{s}}\).
The functions defined above are widely studied in the literature, (see, e.g., [11, 13, 14, 15, 16]).
**Definition 1.3** (additive function).: A function \(f\) is called an **additive function** if and only if :
\[f(nm)=f(n)+f(m) \tag{3}\]
for every pair of coprime integers \(n\),\(m\). In case (3) is satisfied for every pair of integers \(n\) and \(m\), which are not necessarily coprime, then the function \(f\) is called **completely additive**.
Clearly, if \(f\) is an additive function, then \(f(n)=f(p_{1}^{\alpha_{1}})+\ldots+f(p_{s}^{\alpha_{s}})\) for any positive integer \(n\) such that \(n=p_{1}^{\alpha_{1}}\ldots p_{s}^{\alpha_{s}}\), and if \(f\) is completely additive, then \(f(n)=\alpha_{1}f(p_{1})+\ldots+\alpha_{s}f(p_{s})\).
**Definition 1.4** (L-additive function).: We say that an arithmetic function \(f\) is _Leibniz-additive_ (or, _L-additive_, in short) (see, e.g., [2]) if there is a completely multiplicative function \(h_{f}\) such that
\[f(mn)=f(m)h_{f}(n)+f(n)h_{f}(m) \tag{4}\]
for all positive integers \(m\) and \(n\).
Then \(f(1)=0\) since \(h_{f}(1)=1\). The property (4) may be considered a generalized Leibniz rule. For example, the arithmetic derivative \(\delta\) is L-additive with \(h_{\delta}(n)=n\), since it satisfies the usual Leibniz rule
\[\delta(mn)=n\delta(m)+m\delta(n)\]
for all positive integers \(m\) and \(n\), and the function \(h_{\delta}(n)=n\) is completely multiplicative. Similarly, the arithmetic partial derivative respect to the prime \(p\) is L-additive with \(h_{\delta_{p}}(n)=n\). Further, all completely additive functions \(f\) are L-additive with \(h_{f}(n)=1\). For example, the logarithmic derivative of \(n\) is completely additive since
\[\operatorname{ld}(mn)=\operatorname{ld}(m)+\operatorname{ld}(n).\]
The term "L-additive function" seems to be new in the literature, yet Chawla [5] has defined the concept of completely distributive arithmetic function meaning the same as we do with an L-additive function. However, this is a somewhat misleading term since a distributive arithmetic function usually refers to a property that
\[f(u*v)=(fu)*(fv), \tag{5}\]
i.e., the function \(f\) distributes over the Dirichlet convolution. This is satisfied by completely multiplicative arithmetic functions, not by completely distributive functions as Chawla defined them.
Because L-additivity is analogous with generalized additivity and generalized multiplicativity (defined in [8]), we could, alternatively, speak about generalized complete additivity (and also define the concept of generalized complete multiplicativity).
In this paper, we consider L-additive functions especially from the viewpoint that they are generalizations of the arithmetic derivative. In the next section, we present their basic properties. In the last section, we study L-additivity and the arithmetic derivative in terms of the Dirichlet convolution.
**Theorem 1.1**.: _Let \(f\) be an arithmetic function. If \(f\) is L-additive and \(h_{f}\) is nonzero-valued, then \(f/h_{f}\) is completely additive._
Proof.: If \(f\) satisfies (4) and \(h_{f}\) is never zero, then
\[\frac{f(mn)}{h_{f}(mn)}=\frac{f(m)h_{f}(n)+f(n)h_{f}(m)}{h_{f}(m)h_{f}(n)}=\frac{f(m)}{h_{f}(m)}+\frac{f(n)}{h_{f}(n)},\]
so \(f/h_{f}\) is completely additive.
**Theorem 1.2**.: _Let \(n\) a positive integer, if \(n=\prod_{i=1}^{s}p_{i}^{\alpha_{i}}\) is the prime factorization of \(n\) and \(f\) is L-additive with \(h_{f}(p_{1}),\ldots,h_{f}(p_{s})\neq 0\), then_
\[f(n)=h_{f}(n)\sum_{i=1}^{s}\frac{\alpha_{i}f(p_{i})}{h_{f}(p_{i})}.\]
Proof.: (see, e.g., [2, Theorem 2.4])
The next step is to extend L-additive functions to the set of rational numbers \(\mathbb{Q}^{*}\) in order to use them in the Dirichlet product of L-additive functions. We start from the positive rationals.
The shortest way is to use Theorem 1.1. Namely, if \(x=\prod_{i=1}^{s}p_{i}^{x_{i}}\) is a factorization of a rational number \(x\) into prime powers (where some \(x_{i}\) may be negative), then we put:
\[f(x)=h_{f}(x)\sum_{i=1}^{s}\frac{x_{i}f(p_{i})}{h_{f}(p_{i})}. \tag{6}\]
and the same proof as in Theorem 1.1 shows that this definition is still consistent with the Leibniz rule for every L-additive function \(f\) with \(h_{f}\neq 0\).
**Lemma 1.1**.: _Let \(n\) be a positive integer with prime factorization \(n=\prod_{i=1}^{s}p_{i}^{\alpha_{i}}\), and let \(f\) be an L-additive function with \(h_{f}\) nonzero-valued. Then:_
\[f\bigg{(}\frac{1}{n}\bigg{)}=\frac{-f(n)}{h_{f}^{2}(n)} \tag{7}\]
Proof.: Let \(n\) be a positive integer with prime factorization \(n=\prod_{i=1}^{s}p_{i}^{\alpha_{i}}\); then by formula (6) we have:
\[f\bigg{(}\frac{1}{n}\bigg{)}=h_{f}\bigg{(}\frac{1}{n}\bigg{)}\sum_{i=1}^{s}\frac {-\alpha_{i}f(p_{i})}{h_{f}(p_{i})}=-h_{f}\bigg{(}\frac{1}{n}\bigg{)}\sum_{i=1} ^{s}\frac{\alpha_{i}f(p_{i})}{h_{f}(p_{i})}\]
Since \(f(n)=h_{f}(n)\sum_{i=1}^{s}\frac{\alpha_{i}f(p_{i})}{h_{f}(p_{i})}\) then, \(\sum_{i=1}^{s}\frac{\alpha_{i}f(p_{i})}{h_{f}(p_{i})}=\frac{f(n)}{h_{f}(n)}\), so we have :
\[f\bigg{(}\frac{1}{n}\bigg{)}=-h_{f}\bigg{(}\frac{1}{n}\bigg{)}.\frac{f(n)}{h_{ f}(n)}=-h_{f}(n).h_{f}\bigg{(}\frac{1}{n}\bigg{)}.\frac{f(n)}{h_{f}^{2}(n)}= \frac{-f(n)}{h_{f}^{2}(n)}\]
because \(h_{f}\) is multiplicative and \(h_{f}(n).h_{f}\bigg{(}\frac{1}{n}\bigg{)}=h_{f}\bigg{(}\frac{n}{n}\bigg{)}=h_ {f}(1)=1\).
**Theorem 1.3**.: _Let \(n\) and \(m\) be two positive integers with \(m\neq 0\), and let \(f\) be an L-additive function with \(h_{f}\) nonzero-valued. Then we have:_
\[f\bigg{(}\frac{n}{m}\bigg{)}=\frac{f(n)h_{f}(m)-f(m)h_{f}(n)}{h_{f}^{2}(m)} \tag{8}\]
_An L-additive function can be well defined for rational numbers using this formula, and this is the only way to define an L-additive function over the rationals that preserves the Leibniz rule._
Proof.: If \(n\) and \(m\) are two positive integers with \(m\neq 0\) and \(h_{f}\) is never zero, then:
\[f\bigg{(}\frac{n}{m}\bigg{)}=f\bigg{(}n.\frac{1}{m}\bigg{)}=h_{f}(n)f\bigg{(} \frac{1}{m}\bigg{)}+h_{f}\bigg{(}\frac{1}{m}\bigg{)}f(n)\]
Since by the lemma 1.1 we have : \(f\big{(}\frac{1}{m}\big{)}=\frac{-f(m)}{h_{f}^{2}(m)}\), and \(h_{f}\big{(}\frac{1}{m}\big{)}=\frac{1}{h_{f}(m)}\), then
\[f\bigg{(}\frac{n}{m}\bigg{)}=\frac{f(n)}{h_{f}(m)}-\frac{h_{f}(n)f(m)}{h_{f}^ {2}(m)}=\frac{f(n)h_{f}(m)-f(m)h_{f}(n)}{h_{f}^{2}(m)}\]
for all positive integers \(n\) and \(m\). Theorem 1.3 may be considered a generalized Leibniz rule on the set of rational numbers \(\mathbb{Q}\). This terminology arises from the observation that the arithmetic derivative is L-additive with \(h_{\delta}(n)=n\); it satisfies the usual Leibniz quotient rule:
\[\delta(\frac{n}{m})=\frac{\delta(n)h_{\delta}(m)-\delta(m)h_{\delta}(n)}{h_{ \delta}^{2}(m)}=\frac{m\delta(n)-n\delta(m)}{m^{2}}\]
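As a quick worked instance of this quotient rule (our own check, not taken from the paper), with \(\delta(2)=\delta(3)=1\):

\[\delta\bigg{(}\frac{3}{2}\bigg{)}=\frac{2\,\delta(3)-3\,\delta(2)}{2^{2}}=\frac{2-3}{4}=-\frac{1}{4}.\]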
Further, all completely additive functions \(f\) are L-additive with \(h_{f}(n)=1\); hence we can extend any completely additive function to the set of rational numbers \(\mathbb{Q}\) by the formula:
\[f\left(\frac{n}{m}\right)=f(n)-f(m)\]
For example, the logarithmic derivative of \(n\) is completely additive, so we have:
\[ld\left(\frac{n}{m}\right)=ld(n)-ld(m)\]
## 2 L-additive functions in terms of the Dirichlet convolution
Above we have seen many fundamental properties of the extension of L-additive functions to the set of rational numbers. We complete this article by changing our point of view slightly and demonstrating that L-additive functions can also be studied in terms of the Dirichlet convolution by using Theorem 1.3.
Let \(f\) and \(g\) be arithmetic functions. Their _Dirichlet convolution_ is
\[(f*g)(n)=\sum_{\begin{subarray}{c}a,b=1\\ ab=n\end{subarray}}^{n}f(a)g(b)=\sum_{d|n}^{n}f(d)g\left(\frac{n}{d}\right).\]
where the sum extends over all positive divisors \(d\) of \(n\), or equivalently over all distinct pairs \((a,b)\) of positive integers whose product is \(n\).
In particular, we have \((f*g)(1)=f(1)g(1)\),\((f*g)(p)=f(1)g(p)+f(p)g(1)\) for any prime \(p\) and for any power prime \(p^{m}\) we have :
\[(f*g)(p^{m})=\sum_{j=0}^{m}f(p^{j})g(p^{m-j}) \tag{9}\]
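A minimal computational sketch of this definition (our own illustration; `sympy.divisors` is used for the divisor list, and the helper names are not from the paper):

```python
from sympy import divisors  # divisors(12) -> [1, 2, 3, 4, 6, 12]

def dirichlet(f, g):
    """Return the Dirichlet convolution (f * g)(n) = sum_{d | n} f(d) g(n / d)."""
    return lambda n: sum(f(d) * g(n // d) for d in divisors(n))

one = lambda n: 1                     # the constant function 1(n) = 1
tau = dirichlet(one, one)             # (1 * 1)(n) = tau(n), the number of divisors
sigma = dirichlet(one, lambda n: n)   # (1 * Id)(n) = sigma(n), the sum of divisors

assert tau(12) == 6 and sigma(12) == 28
```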
This product occurs naturally in the study of Dirichlet series such as the Riemann zeta function. It describes the multiplication of two Dirichlet series in terms of their coefficients:
\[\bigg{(}\sum_{n\geq 1}\frac{\big{(}f*g\big{)}(n)}{n^{s}}\bigg{)}=\bigg{(}\sum_ {n\geq 1}\frac{f(n)}{n^{s}}\bigg{)}\bigg{(}\sum_{n\geq 1}\frac{g(n)}{n^{s}} \bigg{)} \tag{10}\]
with Riemann zeta function or is defined by :
\[\zeta(s)=\sum_{n\geq 1}\frac{1}{n^{s}}\]
These functions are widely studied in the literature (see, e.g., [3, 6, 7]).
We let \(f(u*v)\) denote the product function of \(f\) and \(u*v\), i.e.,
\[(f(u*v))(n)=f(n)(u*v)(n).\]
**Theorem 2.1**.: _An arithmetic function \(f\) is completely additive if and only if_
\[f(u*v)=(fu)*v+u*(fv)\]
_for all arithmetic functions \(u\) and \(v\)._
Proof.: (see, e.g., [15, Proposition 2]).
The next theorems show the Dirichlet convolution of arithmetic functions with L-additive functions.
**Theorem 2.2**.: _Let \(f\) and \(g\) be two arithmetic functions. If \(f\) is L-additive and \(h_{f}\) is nonzero-valued, then:_
\[(f*g)(n)=\frac{f(n)}{h_{f}(n)}(h_{f}*g)(n)-\left(h_{f}*\frac{fg}{h_{f}}\right) (n) \tag{11}\]
Proof.: Let \(f\) and \(g\) be two arithmetic functions. If \(f\) is L-additive and \(h_{f}\) is nonzero-valued, then applying Theorem 1.3 to \(f\left(\frac{n}{d}\right)\) we have:
\[\begin{split}(g*f)(n)&=\sum_{d|n}g(d)f\Big{(}\frac{n}{d}\Big{)}=\sum_{d|n}g(d)\bigg{(}\frac{h_{f}(d)f(n)-h_{f}(n)f(d)}{h_{f}^{2}(d)}\bigg{)}\\ &=\sum_{d|n}g(d)\bigg{(}\frac{f(n)}{h_{f}(d)}-\frac{h_{f}(n)f(d)}{h_{f}^{2}(d)}\bigg{)}\\ &=f(n)\sum_{d|n}\frac{g(d)}{h_{f}(d)}-h_{f}(n)\sum_{d|n}\frac{g(d)f(d)}{h_{f}^{2}(d)}\\ &=f(n)\left(1*\frac{g}{h_{f}}\right)(n)-h_{f}(n)\left(1*\frac{fg}{h_{f}^{2}}\right)(n)\\ &=\frac{f(n)}{h_{f}(n)}\left(h_{f}*g\right)(n)-\left(h_{f}*\frac{fg}{h_{f}}\right)(n)\end{split}\]
We can prove this formula using the theorem (2.1), since the arithmetic function \(\frac{f}{h_{f}}\) is completely additive by the theorem (1.1), so we have :
\[\frac{f}{h_{f}}\left(h_{f}*g\right)=\left(\frac{f}{h_{f}}h_{f}*g\right)+ \left(h_{f}*\frac{f}{h_{f}}g\right)=(f*g)+\left(h_{f}*\frac{fg}{h_{f}}\right)\]
**Corollary 2.1**.: _If \(f\) is \(L\)-additive and \(h_{f}\) is nonzero-valued, then :_
\[f*\mu h_{f}=-h_{f}*\mu f\]
Proof.: Substituting \(g=\mu h_{f}\) into Theorem 2.2 gives:
\[(f*\mu h_{f})(n)=\frac{f(n)}{h_{f}(n)}(h_{f}*\mu h_{f})(n)-\left(h_{f}*\frac{f \mu h_{f}}{h_{f}}\right)(n)\]
Since :
\[\frac{f(n)}{h_{f}(n)}(h_{f}*\mu h_{f})(n)=f(n)\left(1*\mu\right)(n)=f(n)\epsilon(n)=0\]
then we have :
\[(f*\mu h_{f})(n)=-\left(h_{f}*\mu f\right)(n)\]
As we are aware, the arithmetic derivative \(\delta\) is an L-additive function with \(h_{\delta}(n)=Id(n)=n\); then by using Theorem 2.2 we have this corollary:
**Corollary 2.2**.: _Given an arithmetic function \(g\), for every positive integer \(n\) we have:_
\[(\delta*g)(n)=\frac{\delta(n)}{n}\bigg{(}Id*g\bigg{)}(n)-\bigg{(}Id*\frac{g. \delta}{Id}\bigg{)}(n) \tag{12}\]
Proof.: It suffices to notice that \(h_{\delta}(n)=Id(n)=n\).
Now, taking \(g(n)=Id(n)=n\), Corollary 2.2 becomes:
\[(Id*\delta)(n)=\frac{\delta(n)}{n}\bigg{(}Id*Id\bigg{)}(n)-\bigg{(}Id*\frac{Id.\delta}{Id}\bigg{)}(n)=\delta(n)\tau(n)-\left(Id*\delta\right)(n)\]
So we recover the formula proven in [10, Proposition 6] and [15, Proposition 2]:
\[(Id*\delta)(n)=\frac{1}{2}\tau(n)\delta(n) \tag{13}\]
where \(\tau(n)\) is the number-of-divisors function.
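A brute-force numerical check of formula (13) for small \(n\) (our own sketch, not from the paper; helper names are ours):

```python
from fractions import Fraction
from sympy import divisors, factorint

def delta(n):  # arithmetic derivative, Eq. (1)
    return int(n * sum(Fraction(a, p) for p, a in factorint(n).items())) if n > 1 else 0

for n in range(1, 500):
    lhs = sum(d * delta(n // d) for d in divisors(n))  # (Id * delta)(n)
    assert 2 * lhs == len(divisors(n)) * delta(n)      # tau(n) * delta(n), Eq. (13)
```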
We know that \(1*Id=\sigma\) where \(\sigma(n)\) is the sum of the (positive) divisors of \(n\) and \(1(n)=1\) for all positive integers \(n\), then by the equality 13 we have :
**Corollary 2.3**.: _For every positive integer \(n\) we have:_
\[\big{(}\sigma*\delta\big{)}(n)=\frac{1}{2}\big{(}1*\tau.\delta\big{)}(n) \tag{14}\]
**Corollary 2.4**.: _For every positive integer \(n\) we have:_
\[\delta(n)=\frac{1}{2}\big{(}Id.\mu*\tau.\delta\big{)}(n) \tag{15}\]
Proof.: Let \(n\) be a positive integer and \(\mu\) the Möbius function.
Since
\[\left(Id*\delta\right)(n)=\frac{1}{2}\tau(n)\delta(n)\]
And :
\[\left(Id.\mu*Id\right)(n)=\epsilon(n)\]
then we have :
\[Id.\mu*\left(Id*\delta\right)(n)=\left(Id.\mu*\frac{\tau\delta}{2}\right)(n)\]
So :
\[\delta(n)=\frac{1}{2}\big{(}Id.\mu*\tau.\delta\big{)}(n)\quad since\ \ \left( \epsilon*\delta\right)(n)=\delta(n)\]
where \(\epsilon\) is the multiplicative identity \(\big{(}\epsilon(n)=\lfloor\frac{1}{n}\rfloor\big{)}\)
**Corollary 2.5**.: _For every positive integer \(n\) we have:_
\[\big{(}Id*Id.\delta\big{)}(n)=\sigma(n)\delta(n)-\big{(}Id^{2}*\delta\big{)}(n) \tag{16}\]
Proof.: For \(g(n)=1(n)\), where \(1(n)=1\) for all positive integers \(n\), Corollary 2.2 reads:
\[(1*\delta)(n)=\frac{\sigma(n)\delta(n)-(Id^{2}*\delta)(n)}{n}\]
Corollary 2.5 is obtained by multiplying the previous equality by \(Id\).
**Corollary 2.6**.: _For every positive integer \(n\) we have:_
\[\big{(}Id.\mu*\delta\big{)}(n)=-\big{(}Id*\mu.\delta\big{)}(n) \tag{17}\]
Proof.: For \(g(n)=Id(n)\mu(n)\) and for all positive integers \(n\), Corollary 2.2 reads:
\[(\delta*Id.\mu)(n)=\frac{\delta(n)}{n}\big{(}Id*Id.\mu\big{)}(n)-\bigg{(}Id*\frac {Id.\mu.\delta}{Id}\bigg{)}(n)\]
Then :
\[(\delta*Id.\mu)(n)=\delta(n)\big{(}1*\mu\big{)}(n)-\big{(}Id*\mu.\delta\big{)}(n)\]
Since we know that :
\[\big{(}1*\mu\big{)}(n)=\epsilon(n)\]
Therefore :
\[(\delta*Id.\mu)(n)=\delta(n)\epsilon(n)-\big{(}Id*\mu.\delta\big{)}(n)=-\big{(} Id*\mu.\delta\big{)}(n)\]
Because \(\delta(n)\epsilon(n)=0\) for all positive integers \(n\).
We can also prove this corollary by substituting \(f(n)=\delta(n)\) with \(h_{\delta}(n)=n\) into Corollary 2.1.
**Corollary 2.7**.: _For every positive integer \(n\) we have:_
\[\big{(}Id.\phi*\delta\big{)}(n)=n\delta(n)-\big{(}Id*\phi.\delta\big{)}(n) \tag{18}\]
Proof.: For \(g(n)=Id(n)\phi(n)\) and for all positive integers \(n\), Corollary 2.2 reads:
\[(\delta*Id.\phi)(n)=\frac{\delta(n)}{n}\big{(}Id*Id.\phi\big{)}(n)-\bigg{(}Id* \frac{Id.\phi.\delta}{Id}\bigg{)}(n)\]
Then :
\[(\delta*Id.\phi)(n)=\delta(n)\big{(}1*\phi\big{)}(n)-\big{(}Id*\phi.\delta \big{)}(n)\]
Since we know that (see[19]) :
\[\big{(}1*\phi\big{)}(n)=Id(n)=n\]
Therefore :
\[(\delta*Id.\phi)(n)=n\delta(n)-\big{(}Id*\phi.\delta\big{)}(n)\]
On the other hand, a completely additive arithmetic function \(f\) is L-additive with \(h_{f}(n)=1(n)=1\) for every positive integer \(n\); then we have this corollary:
**Corollary 2.8**.: _Let \(f\) and \(g\) be two arithmetic functions. If \(f\) is completely additive, then for every positive integer \(n\), by using Theorem 1.3 we have:_
\[(f*g)(n)=f(n)(1*g)(n)-\left(1*fg\right)(n) \tag{19}\]
As we know, the arithmetic logarithmic derivative \(ld\) is completely additive with \(h_{ld}(n)=1(n)\); then, for every arithmetic function \(f\), we have this result:
\[(ld*f)(n)=ld(n)(1*f)(n)-\left(1*ld.f\right)(n) \tag{20}\]
Multiplying both sides of the previous equality by \(Id\) we get this formula:
\[\left(\delta*Id.f\right)(n)=\delta(n)\left(1*f\right)(n)-\left(Id*f\delta \right)(n) \tag{21}\]
In the same way we can give many results about the prime omega function \(\Omega\) and the function \(\log\).
## 3 Main Results: The generalized von Mangoldt function using L-additive functions
Let \(f\) be an L-additive function with \(h_{f}\) nonzero-valued. We now define the von Mangoldt function related to the function \(f\) by:
\[\Lambda_{f}(n)=\begin{cases}\frac{f(p)}{h_{f}(p)}&\text{if $n=p^{k}$ for some prime $p$ and integer $k\geq 1$},\\ 0&\text{otherwise}.\end{cases} \tag{22}\]
then we have this result :
**Theorem 3.1**.: _If \(n\geq 1\) then we have :_
\[f(n)=h_{f}(n)\sum_{d|n}\Lambda_{f}(d)\]
_That means, using the Dirichlet convolution: \(f=h_{f}*h_{f}\Lambda_{f}\)._
Proof.: If \(n=p_{1}^{\alpha_{1}}\dots p_{s}^{\alpha_{s}}\) then we have :
\[\sum_{d|n}\Lambda_{f}(d)=\sum_{i=1}^{s}\sum_{k=1}^{\alpha_{i}}\Lambda_{f}(p_{i}^{k})=\sum_{i=1}^{s}\sum_{k=1}^{\alpha_{i}}\frac{f(p_{i})}{h_{f}(p_{i})}=\sum_{i=1}^{s}\frac{\alpha_{i}f(p_{i})}{h_{f}(p_{i})}=\frac{f(n)}{h_{f}(n)}\]
as claimed
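For the arithmetic derivative (\(h_{\delta}=Id\)), Theorem 3.1 can be checked numerically as follows (a sketch of ours, not from the paper; it uses \(\Lambda_{\delta}(p^{k})=\delta(p)/p=1/p\)):

```python
from fractions import Fraction
from sympy import divisors, factorint

def delta(n):  # arithmetic derivative, Eq. (1)
    return int(n * sum(Fraction(a, p) for p, a in factorint(n).items())) if n > 1 else 0

def lambda_delta(n):
    """Generalised von Mangoldt function for f = delta, h_f = Id (Eq. 22): 1/p if n = p^k, else 0."""
    fac = factorint(n)
    if n > 1 and len(fac) == 1:
        p = next(iter(fac))
        return Fraction(1, p)
    return Fraction(0)

for n in range(1, 500):
    assert delta(n) == n * sum(lambda_delta(d) for d in divisors(n))  # Theorem 3.1
```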
**Theorem 3.2**.: _for every positive integer \(n\) we have :_
\[\Lambda_{f}(n)=\sum_{d|n}\frac{\mu\left(\frac{n}{d}\right)f\left(d\right)}{h_{f} \left(d\right)}=-\sum_{d|n}\frac{\mu(d)f(d)}{h_{f}(d)}\]
_That is we have \(\Lambda_{f}=\mu\ast\frac{f}{h_{f}}=-1\ast\frac{\mu f}{h_{f}}\)_
Proof.: Applying Möbius inversion to Theorem 3.1, we have \(\Lambda_{f}=\mu\ast\frac{f}{h_{f}}\). Note that:
\[\sum_{d|n}\frac{\mu\left(\frac{n}{d}\right)f\left(d\right)}{h_{f} \left(d\right)} =\sum_{d|n}\mu(d)\frac{f\left(\frac{n}{d}\right)}{h_{f}\left(\frac {n}{d}\right)}=\sum_{d|n}\mu(d)\frac{h_{f}(d)}{h_{f}(n)}\bigg{(}\frac{h_{f}(d )f(n)-h_{f}(n)f(d)}{h_{f}^{2}(d)}\bigg{)}\] \[=\sum_{d|n}\frac{\mu(d)h_{f}^{2}(d)f(n)}{h_{f}(n)h_{f}^{2}(d)}- \frac{\mu(d)h_{f}(n)h_{f}(d)f(d)}{h_{f}(n)h_{f}^{2}(d)}\] \[=\frac{f(n)}{h_{f}(n)}\sum_{d|n}\mu(d)-\sum_{d|n}\frac{\mu(d)f(d) }{h_{f}(d)}\] \[=\frac{f(n)\epsilon(n)}{h_{f}(n)}-\sum_{d|n}\frac{\mu(d)f(d)}{h_{ f}(d)}\] \[=-\sum_{d|n}\frac{\mu(d)f(d)}{h_{f}(d)}\]
Hence \(\Lambda_{f}(n)=-1\ast\frac{\mu f}{h_{f}}\). This completes the proof.
**Corollary 3.1**.: _Let \(f\) be an arithmetic function. If \(f\) is L-additive and \(h_{f}\) is nonzero-valued, then:_
\[(\tau\ast\Lambda_{f})(n)=\frac{f(n)\tau(n)}{2h_{f}(n)} \tag{23}\]
Proof.: Substituting \(g(n)=h_{f}(n)\) into Theorem 2.2 gives:
\[(f\ast h_{f})(n)=\frac{f(n)}{h_{f}(n)}(h_{f}\ast h_{f})(n)-\left(h_{f}\ast \frac{fh_{f}}{h_{f}}\right)(n)\]
Since \(h_{f}\) is completely multiplicative, \((h_{f}*h_{f})(n)=h_{f}(n)\tau(n)\) and \(\left(h_{f}\ast\frac{fh_{f}}{h_{f}}\right)(n)=(h_{f}\ast f)(n)\), so \(2(f\ast h_{f})(n)=f(n)\tau(n)\) and hence:
\[(f\ast h_{f})(n)=\frac{1}{2}f(n)\tau(n)\]
Since we have that from the theorem (3.1) :
\[f(n)=\left(h_{f}\ast h_{f}\Lambda_{f}\right)(n)\]
Then we find that :
\[\left(h_{f}\ast f\right)\left(n\right)=\left(h_{f}\ast h_{f}\ast h_{f}\Lambda_{f} \right)\left(n\right)=h_{f}(n)\left(1\ast 1\ast\Lambda_{f}\right)\left(n\right)\]
We conclude that :
\[h_{f}(n)\left(\tau\ast\Lambda_{f}\right)(n)=\frac{1}{2}\tau(n)f(n)\]
This completes the proof.
As we know, a completely additive arithmetic function \(f\) is L-additive with \(h_{f}(n)=1(n)=1\) for every positive integer \(n\); then the von Mangoldt function related to \(f\) is defined by:
\[\Lambda_{f}(n)=\begin{cases}f(p)&\text{if $n=p^{k}$ for some prime $p$ and integer $k\geq 1$},\\ 0&\text{otherwise}.\end{cases} \tag{24}\]
**Corollary 3.2**.: _Let \(f\) be a completely additive arithmetic function; then:_
\[f(n)=\sum_{d|n}\Lambda_{f}(d)\]
_That means, using the Dirichlet convolution: \(f=1\ast\Lambda_{f}\)._
**Corollary 3.3**.: _Let \(f\) be a completely additive arithmetic function; then we have:_
\[\Lambda_{f}(n)=\sum_{d|n}\mu\left(\frac{n}{d}\right)f\left(d\right)=-\sum_{d| n}\mu(d)f(d)\]
_That is we have \(\Lambda_{f}=\mu\ast f=-1\ast\mu f\)_
Definition (22) may be considered a generalized von Mangoldt function. This terminology arises from the observation that the logarithm is L-additive with \(h_{\log}(n)=1\); it recovers the usual von Mangoldt function, denoted by \(\Lambda\):
\[\Lambda(n)=\Lambda_{log}(n)=\begin{cases}\frac{log(p)}{h_{log}(p)}=log(p)& \text{if $n=p^{k}$ for some prime $p$ and integer $k\geq 1$},\\ 0&\text{otherwise}.\end{cases}\]
By using Corollaries 3.2 and 3.3, the classical properties of the von Mangoldt function are recovered and we have
\[log=1\ast\Lambda\ \ \ and\ \ \Lambda=\mu\ast\log=-1\ast\mu\log\]
Now we can define the von Mangoldt function associated with the arithmetic logarithmic derivative \(ld\) by:
\[\Lambda_{ld}(n)=\begin{cases}\frac{1}{p}&\text{if $n=p^{k}$ for some prime $p$ and integer $k\geq 1$,}\\ 0&\text{otherwise.}\end{cases}\]
Substituting \(f=ld\) (with \(h_{ld}(n)=1\)) into Corollaries 3.2 and 3.3 gives:
\[ld(n)=\left(1*\Lambda_{ld}\right)\left(n\right) \tag{25}\]
And :
\[\Lambda_{ld}(n)=\left(\mu*ld\right)\left(n\right)=-\left(1*ld\mu\right)\left(n\right) \tag{26}\]
For later convenience we introduce the prime function denoted by \(F\) defined by :
\[F(s)=\sum_{p}\frac{1}{p^{s+1}-p}\]
and note that it converges for \(Re(s)>0\). It is an analog of the Riemann zeta function (see, e.g., [20]), with the sum taken over prime numbers instead of all natural numbers. We then have this result about the von Mangoldt function associated with the logarithmic derivative:
**Lemma 3.1**.: _Let \(s\) be a complex number such that \(Re(s)>0\); then we have:_
\[\sum_{n\geq 1}\frac{\Lambda_{ld}(n)}{n^{s}}=F(s)\]
Proof.: Let \(s\) be a complex number such that \(Re(s)>0\); then
\[\sum_{n\geq 1}\frac{\Lambda_{ld}(n)}{n^{s}} =\frac{\Lambda_{ld}(1)}{1^{s}}+\frac{\Lambda_{ld}(2)}{2^{s}}+ \frac{\Lambda_{ld}(3)}{3^{s}}+\frac{\Lambda_{ld}(4)}{4^{s}}+\frac{\Lambda_{ld} (5)}{5^{s}}+\ldots+\frac{\Lambda_{ld}(16)}{16^{s}}+\ldots\] \[=\frac{1}{2^{s+1}}+\frac{1}{3^{s+1}}+\frac{1}{2^{2s+1}}+\frac{1}{ 5^{s+1}}+\frac{1}{7^{s+1}}+\frac{1}{2^{3s+1}}+\ldots+\frac{1}{2^{4s+1}}+\ldots\] \[=\sum_{p}\sum_{k\geq 1}\frac{1}{p^{ks+1}}=\sum_{p}\frac{1}{p} \sum_{k\geq 1}\frac{1}{p^{ks}}\] \[=\sum_{p}\frac{1}{p}\sum_{k\geq 1}\left(\frac{1}{p^{s}}\right)^{k}= \sum_{p}\frac{1}{p}.\frac{1}{p^{s}}.\frac{1}{1-\frac{1}{p^{s}}}\] \[=\sum_{p}\frac{1}{p^{s+1}-p}\]
which completes the proof
**Theorem 3.3**.: _Let \(s\) be a complex number such that \(Re(s)>2\); then we have:_
\[\sum_{n\geq 1}\frac{\delta(n)}{n^{s}}=\zeta(s-1)F(s-1)\]
Proof.: By Using (25) we have :
\[\delta(n)=\left(Id*Id.\Lambda_{ld}\right)(n)\]
then by the formula (10) we have :
\[\sum_{n\geq 1}\frac{\delta(n)}{n^{s}}=\sum_{n\geq 1}\frac{\left(Id*Id.\Lambda_ {ld}\right)(n)}{n^{s}}=\zeta(s-1)\sum_{n\geq 1}\frac{\Lambda_{ld}(n)}{n^{s-1}}= \zeta(s-1)F(s-1)\]
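A rough numerical sanity check of Theorem 3.3 with truncated sums (our own sketch; the truncation limits are arbitrary and only give a few correct digits):

```python
from fractions import Fraction
from sympy import factorint, primerange, zeta

def delta(n):  # arithmetic derivative, Eq. (1)
    return int(n * sum(Fraction(a, p) for p, a in factorint(n).items())) if n > 1 else 0

s = 4.0
lhs = sum(delta(n) / n**s for n in range(2, 20000))          # truncated Dirichlet series of delta
F_sm1 = sum(1.0 / (p**s - p) for p in primerange(2, 20000))  # truncated F(s-1) = sum_p 1/(p^s - p)
rhs = float(zeta(s - 1)) * F_sm1                             # Theorem 3.3: zeta(s-1) * F(s-1)
print(lhs, rhs)  # the two values should agree to a few decimal places
```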
**Corollary 3.4**.: _For every \(s\in\mathbb{C}\) where \(Re(s)>2\) we have:_
\[\sum_{n\geq 1}\frac{\tau(n)\delta(n)}{n^{s}}=2\zeta^{2}(s-1)F(s-1)\]
Proof.: If we apply relation (10) to equality (13), then for every complex number \(s\) we get that:
\[\sum_{n\geq 1}\frac{\tau(n)\delta(n)}{n^{s}}=2\zeta(s-1)\sum_{n\geq 1}\frac{ \delta(n)}{n^{s}}\qquad(for\ \ Re(s)>2)\]
then by Theorem 3.3, for every \(s\in\mathbb{C}\) where \(Re(s)>2\), we have:
\[\sum_{n\geq 1}\frac{\tau(n)\delta(n)}{n^{s}}=2\zeta^{2}(s-1)F(s-1)\]
**Corollary 3.5**.: _For every \(s\in\mathbb{C}\) where \(Re(s)>2\) we have:_
\[\sum_{n\geq 1}\frac{\mu(n)\delta(n)}{n^{s}}=\frac{-F(s-1)}{\zeta(s-1)}\]
In the same way, if we apply relation (10) to Corollary 2.6 we have that:
\[\zeta(s-1)\sum_{n\geq 1}\frac{\mu(n)\delta(n)}{n^{s}}=-\sum_{n\geq 1}\frac{\mu(n) }{n^{s-1}}\sum_{n\geq 1}\frac{\delta(n)}{n^{s}}\qquad(for\ \ Re(s)>2)\]
Since (see, e.g., [19]) :
\[\sum_{n\geq 1}\frac{\mu(n)}{n^{s}}=\frac{1}{\zeta(s)}\]
Therefore by using the result of theorem (3.3) we have that :
\[\sum_{n\geq 1}\frac{\mu(n)\delta(n)}{n^{s}}=\frac{-F(s-1)}{\zeta(s-1)}\qquad( for\ \ Re(s)>2)\]
**Corollary 3.6**.: _For every \(s\in\mathbb{C}\) where \(Re(s)>3\) we have_
\[\sum_{n\geq 1}\frac{\phi(n)\delta(n)}{n^{s}}=\frac{\zeta(s-2)}{\zeta(s-1)} \bigg{(}F(s-2)-F(s-1)\bigg{)}\]
Proof.: We know from ([19]) that :
\[\sum_{n\geq 1}\frac{\phi(n)}{n^{s}}=\frac{\zeta(s-1)}{\zeta(s)}\]
And by corollary (2.7) we have :
\[\big{(}Id.\phi*\delta\big{)}(n)=n\delta(n)-\big{(}Id*\phi.\delta\big{)}(n)\]
then :
\[\sum_{n\geq 1}\frac{\phi(n)\delta(n)}{n^{s}}=\frac{1}{\zeta(s-1)}\sum_{n\geq 1 }\frac{\delta(n)}{n^{s-1}}-\frac{\zeta(s-2)}{\zeta^{2}(s-1)}\sum_{n\geq 1} \frac{\delta(n)}{n^{s}}\qquad(for\ \ Re(s)>3)\]
On the other hand as a consequence of the corollary (2.5) we have :
\[\big{(}Id*Id.\delta\big{)}(n)=\sigma(n)\delta(n)-\big{(}Id^{2}*\delta\big{)}(n)\]
then by applying formula (10) and the result of Theorem 3.3 we have:
**Corollary 3.7**.: _For every \(s\in\mathbb{C}\) where \(Re(s)>3\) we have:_
\[\sum_{n\geq 1}\frac{\sigma(n)\delta(n)}{n^{s}}=\zeta(s-1)\zeta(s-2)\left(F(s-2) +F(s-1)\right)\]
By using equality (21) with \(f(n)=Id_{k}(n)\), we have
**Corollary 3.8**.: _Let \(k\in\mathbb{N}\), then for every \(s\in\mathbb{C}\) where \(Re(s)>k+2\) we have :_
\[\sum_{n\geq 1}\frac{\sigma_{k}(n)\delta(n)}{n^{s}}=\zeta(s-1)\zeta(s-k-1)\left(F (s-1)+F(s-k-1)\right)\]
Proof.: Let \(k\in\mathbb{N}\); applying equality (21) to \(f(n)=Id_{k}(n)\) we obtain:
\[\left(\delta*Id.Id_{k}\right)(n)=\delta(n)\left(1*Id_{k}\right)(n)-\left(Id*Id _{k}\delta\right)(n)\]
Since :
\[\left(1*Id_{k}\right)(n)=\sigma_{k}(n)\]
then we have :
\[\left(\delta*Id_{k+1}\right)(n)=\delta(n)\sigma_{k}(n)-\left(Id*Id_{k}\delta \right)(n)\]
Now applying equality (10) we obtain:
\[\sum_{n\geq 1}\frac{\left(\delta*Id_{k+1}\right)(n)}{n^{s}}=\sum_{n\geq 1} \frac{\delta(n)\sigma_{k}(n)}{n^{s}}-\sum_{n\geq 1}\frac{\left(Id*Id_{k} \delta\right)(n)}{n^{s}}\]
Therefore,
\[\sum_{n\geq 1}\frac{\delta(n)\sigma_{k}(n)}{n^{s}}=\zeta(s-k-1)\sum_{n\geq 1 }\frac{\delta(n)}{n^{s}}+\zeta(s-1)\sum_{n\geq 1}\frac{\delta(n)}{n^{s-k}}\]
Since by the theorem (3.3) we have:
\[\sum_{n\geq 1}\frac{\delta(n)}{n^{s}}=\zeta(s-1)F(s-1)\]
then :
\[\sum_{n\geq 1}\frac{\delta(n)\sigma_{k}(n)}{n^{s}}=\zeta(s-k-1)\zeta(s-1)F(s-1 )+\zeta(s-1)\zeta(s-k-1)F(s-k-1)\]
and the proof is complete.
The aim of this study is to give some results that may help us find a relation, defined via the Dirichlet product, which makes it possible to calculate the Dirichlet series \(\sum\limits_{n\geq 1}\frac{f(n)\delta(n)}{n^{s}}\) for many arithmetic functions \(f\).
Conclusion:
The von Mangoldt function \(\Lambda_{f}\) related to an L-additive function \(f\) is the best way for us to solve the problem of the Dirichlet series of the arithmetic derivative.
|
2309.11875 | Stochastic stiffness identification and response estimation of
Timoshenko beams via physics-informed Gaussian processes | Machine learning models trained with structural health monitoring data have
become a powerful tool for system identification. This paper presents a
physics-informed Gaussian process (GP) model for Timoshenko beam elements. The
model is constructed as a multi-output GP with covariance and cross-covariance
kernels analytically derived based on the differential equations for
deflections, rotations, strains, bending moments, shear forces and applied
loads. Stiffness identification is performed in a Bayesian format by maximising
a posterior model through a Markov chain Monte Carlo method, yielding a
stochastic model for the structural parameters. The optimised GP model is
further employed for probabilistic predictions of unobserved responses.
Additionally, an entropy-based method for physics-informed sensor placement
optimisation is presented, exploiting heterogeneous sensor position information
and structural boundary conditions built into the GP model. Results demonstrate
that the proposed approach is effective at identifying structural parameters
and is capable of fusing data from heterogeneous and multi-fidelity sensors.
Probabilistic predictions of structural responses and internal forces are in
closer agreement with measured data. We validate our model with an experimental
setup and discuss the quality and uncertainty of the obtained results. The
proposed approach has potential applications in the field of structural health
monitoring (SHM) for both mechanical and structural systems. | Gledson Rodrigo Tondo, Sebastian Rau, Igor Kavrakov, Guido Morgenthal | 2023-09-21T08:22:12Z | http://arxiv.org/abs/2309.11875v1 | Stochastic stiffness identification and response estimation of Timoshenko beams via physics-informed Gaussian processes
###### Abstract
Machine learning models trained with structural health monitoring data have become a powerful tool for system identification. This paper presents a physics-informed Gaussian process (GP) model for Timoshenko beam elements. The model is constructed as a multi-output GP with covariance and cross-covariance kernels analytically derived based on the differential equations for deflections, rotations, strains, bending moments, shear forces and applied loads. Stiffness identification is performed in a Bayesian format by maximising a posterior model through a Markov chain Monte Carlo method, yielding a stochastic model for the structural parameters. The optimised GP model is further employed for probabilistic predictions of unobserved responses. Additionally, an entropy-based method for physics-informed sensor placement optimisation is presented, exploiting heterogeneous sensor position information and structural boundary conditions built into the GP model. Results demonstrate that the proposed approach is effective at identifying structural parameters and is capable of fusing data from heterogeneous and multi-fidelity sensors. Probabilistic predictions of structural responses and internal forces are in closer agreement with measured data. We validate our model with an experimental setup and discuss the quality and uncertainty of the obtained results. The proposed approach has potential applications in the field of structural health monitoring (SHM) for both mechanical and structural systems.
## 1 Introduction
The classical Timoshenko beam theory is one of the most applied models in modern engineering, especially for structures with a high depth-to-length ratio, or composite and sandwich beams. Recent advances in material technology and controlled manufacturing have increased the use of such structures. From a sustainability point of view, it is important that these systems are maintained and that their life-cycle is extended. Thus, a large number of sensors and computer technologies are commonly applied in structural health monitoring (SHM) systems.
Traditionally, SHM depends on the understanding of the system's physical behaviour, generally described by partial differential equations (PDEs), and structural diagnostics relies on optimisation and model updating strategies [1, 2, 3, 4, 5]. Advances in machine learning allowed for the use of data-driven methods that lack a physical description of a process or structure, such as neural networks (NNs) and Gaussian process (GP) regression. These have been extensively and successfully used as surrogate models (cf. eg. [6, 7, 8, 9]) to accelerate model updating, effectively replacing expensive numerical simulations. Despite their good predictive quality, the lack of a physical description
can eventually lead to a poor performance, as the model is fully based on the data that it is trained with. Recently, physics-informed machine learning models have been introduced as a hybrid formulation to bridge the gap between PDE-based and data-driven strategies [10]. These models can learn from data, but they do so while conforming to specific descriptions of physical phenomena that are built into the machine-learning framework using partial differential equations. A schematic of the three different approaches is shown in Fig. 1. Combinations of PDE-based models and neural networks have been employed to solve problems in material science [11, 12, 13], fluid dynamics [14, 15, 16] and structural mechanics [17, 18, 19, 20].
Gaussian process (GP) regression [21] has gained popularity as a non-parametric probabilistic machine learning tool with a powerful learning framework that avoids overfitting by construction [22, 23, 24, 25]. A GP can be viewed as a single-layer neural network with an infinite number of neurons [26]. A physics-informed GP model is defined by a set of Gaussian processes with mean functions and covariance matrices derived based on a particular differential equation. The relation between different quantities defined by the PDE of interest is modelled by cross-covariance kernels [27, 28, 29]. Physics-informed GPs have been a popular tool to estimate the inverse solution of PDEs [30, 27], where the unknown model parameters are identified from noisy measurement data by incorporating them as a variable in the covariance kernels. This strategy has been applied to the system identification of Bernoulli beams [31, 24] and the Navier-Stokes equation parameters [30]. In a forward approach, PIGPs have been employed for the dynamic force identification in structural systems [32, 33, 28], applied load reconstruction in slender structures and for aerodynamic analysis of long-span bridges [34]. The probabilistic nature of Gaussian processes has also been used as a tool for sensor placement optimisation and online learning [35, 36, 37, 38], determining optimal data collection locations via a greedy entropy minimization approach [39]. To the best of the authors' knowledge, this strategy has not yet been employed within a physics-informed framework.
In this study, we introduce a physics-informed GP model based on the Timoshenko theory of static response of beams [40]. The model is constructed as a multi-output GP with covariances and cross-covariances analytically derived based on the differential equation models for the physical quantities (deflections, rotations, strains, bending moments, shear forces and applied loads). This approach allows the model to be trained based on combinations of heterogeneous datasets (e.g. a combination of displacements and strains) while accounting for their correlation, defined by the cross-covariance kernels. In addition, the model also allows for training based on several datasets of the same response with different quality levels, by determining individual optimal noise values for each dataset. Bending and shear stiffness are assumed constant throughout the length of the structure and their identification is carried out in a Bayesian manner using a Markov chain Monte Carlo approach to sample from the posterior parameter distribution. This procedure results in stochastic models for the structural parameters, providing uncertainty bounds instead of the point estimates that are usually obtained with the majority of the methods in the literature. Predictions of unobserved responses are done in a fully Bayesian framework, taking into account model hyperparameter uncertainties. Additionally, a novel heterogeneous physics-informed sensor placement optimisation framework is presented. The method is based on information theory, and can capture cross-domain influence (e.g. the information gained about deflections at a certain location after placing a rotation sensor nearby). This also extends similar sensor placement methodologies as it allows for the incorporation of boundary conditions prior to the optimisation.
The paper is organized as follows: section 2 briefly reviews the Timoshenko beam model, with a particular view on how the bending and shear contributions to the response are linearly combined. Section 3 presents the derivations of the physics-informed GP model, along with the framework for parameter learning, stiffness identification and prediction strategy. Section 4 defines the entropy minimization approach for sensor placement optimisation and discusses the advantages obtained from the physics-informed setting. To evaluate the model, section 5 presents numerical studies comparing the novel and standard entropy-based sensor placement strategies and uses the results for stiffness identification and further predictions of model responses. Influences of noise and structural rigidity are also discussed. Validation of the novel method is given in section 6 with an experimental setup, where heterogeneous multi-fidelity data sets are used for the inverse and forward problems within the GP framework. Lastly, section 7 presents the conclusions of this study, where limitations and possible improvements are discussed. The implementations of the model can be found in the Github repository available at [https://github.com/gledsonrt/PIGPTimoshenkoBeam](https://github.com/gledsonrt/PIGPTimoshenkoBeam).
Figure 1: The three main types of modelling approaches. PDE-based models represent physical phenomena in terms of differential equations, while data-driven models can only reflect the properties contained in the training data. Physics-informed machine learning models are hybrid in nature, and fit training data according to their mathematical description provided by PDEs.
## 2 Timoshenko beam theory
Traditional static beam models define the response of a line-like structure as a function of an externally applied load. The Timoshenko (TB) beam theory [40] considers the total deflection response \(w(x)\) along the length-wise position \(x\) as a contribution from bending \(w_{b}\) and shear \(w_{s}\) effects (see Fig. 2, left), such that
\[w(x)=w_{b}+w_{s}. \tag{1}\]
The deflection due to bending, assuming constant stiffness along the length of the beam, can be obtained from the applied load \(q(x)\) as
\[q(x)=EI\frac{d^{4}w_{b}}{dx^{4}}, \tag{2}\]
where \(E\) is the modulus of elasticity and \(I\) is the second moment of area of the cross-section. Assuming small deflections, the angle of rotation due to bending effects \(\varphi_{b}\) normal to the mid-surface of the beam is obtained by
\[\varphi_{b}(x)=\frac{dw_{b}}{dx}. \tag{3}\]
Bending moments \(M\) in the linear elastic range are related to the deflections via the bending stiffness \(EI\) by:
\[M(x)=EI\frac{d^{2}w_{b}}{dx^{2}}. \tag{4}\]
The shear forces are obtained through the derivative of the bending moment w.r.t. the spatial coordinate,
\[V(x)=\frac{dM}{dx}=EI\frac{d^{3}w_{b}}{dx^{3}}. \tag{5}\]
The total rotation of the cross-section, including the contribution from shear effects \(\varphi_{s}\)[40], is calculated as:
\[\varphi(x)=\varphi_{b}+\varphi_{s}=\frac{dw_{b}}{dx}-\frac{V}{kGA}, \tag{6}\]
where \(G\) is the shear modulus, \(A\) is the cross-section area, and \(k\) is the Timoshenko shear coefficient, which accounts for the differences between the average and the exact shear supported by a cross-section of arbitrary geometry. The deflection \(w\) is obtained integrating \(\varphi\) as
\[w(x)=\int\varphi dx. \tag{7}\]
Figure 2: Left: The Timoshenko beam model (blue) accounts for shear deformations of the cross-section. When the rotation due to shear \(\varphi_{s}=-V/kGA\) is negligible the traditional Euler-Bernoulli beam model (red) is recovered. Right: Relative deflection contribution due to shear effects, as a function of the structural rigidity. The inflexion point at \(r=0.3125\), for a simply supported beam under UDL, corresponds to the case where 50% of the total deflection \(w\) is due to shear effects.
The strain \(\epsilon\) is linear across the vertical direction \(z\) in the cross-section and can be obtained from the rotations as
\[\epsilon(x,z)=-z\frac{d\varphi}{dx}. \tag{8}\]
Boundary conditions (BCs) are accounted for by enforcing the responses at specific positions. For instance, at simple supports located at \(x=x_{\mathrm{BC}}\) that allow for structural rotation, the respective boundary condition is \(w(x_{\mathrm{BC}})=0\) m, or in the case of supports that restrict displacements and rotations, \(w(x_{\mathrm{BC}})=0\) m and \(\varphi(x_{\mathrm{BC}})=0\) rad. Following Timoshenko's beam relations, the total response of the model is a function of both the bending stiffness \(EI\) and the shear stiffness \(kGA\). This dependency can be described by a rigidity factor \(r\), calculated as [41, 42]:
\[r=\frac{3EI}{L^{2}kGA}, \tag{9}\]
where \(L\) is the beam length. If the shear stiffness \(kGA\) tends to infinity and \(r\ll 1\), the deflection contribution due to shear is negligible and the traditional Euler-Bernoulli beam model is obtained. For example, in the case of a simply supported beam under uniformly distributed load (UDL), at \(r=0.3125\) the combination of material and geometrical properties results in equal deflection contributions from shear and bending effects (c.f. Fig. 2, right).
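As a check of this inflexion point (our own short derivation, assuming the standard midspan deflection formulas \(w_{b}=5qL^{4}/384EI\) and \(w_{s}=qL^{2}/8kGA\) for a simply supported beam under UDL):

\[\frac{w_{s}}{w_{b}}=\frac{qL^{2}/8kGA}{5qL^{4}/384EI}=\frac{48EI}{5L^{2}kGA}=\frac{16}{5}r,\qquad\frac{w_{s}}{w_{b}}=1\;\Rightarrow\;r=\frac{5}{16}=0.3125.\]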
## 3 Physics-informed GP model
### Problem statement
Consider a heterogeneous data set \(\mathcal{I}=\{\mathbf{y},\mathbf{x}\}\) of concatenated noisy measurements of beam responses, internal forces and applied loads \(\mathbf{y}\) at locations \(\mathbf{x}\). The problems we address with our model are:
* to solve the inverse problem of identifying constant bending \(EI\) and shear \(kGA\) stiffness estimates based on collected heterogeneous and multi-fidelity data;
* to estimate beam unobserved responses and internal forces at arbitrary locations \(\mathbf{x}_{\star}\);
* to find an optimal finite set of locations \(\mathbf{x}\) where sensors are to be installed.
To this end, in Section 3.2 the physics-informed GP model is formulated, followed by the optimisation strategy, which includes solving the inverse problem in Section 3.3 (objective i). Section 3.4 defines the method for inference of unobserved responses (objective ii). Finally, Section 4 uses the defined model for the selection of sensor locations and elaborates on the novelties of the proposed method (objective iii).
### Problem formulation
We consider at first an Euler-Bernoulli beam with constant bending stiffness subjected to a static load, derive physics-informed models for all its quantities (deflections, rotations, strains, bending moments, shear forces and applied loads), and show how they can be extended to account for shear deformations according to the Timoshenko model, assuming a constant shear stiffness. The deflection at any given position is assumed to be drawn from a zero-mean Gaussian process
\[w_{b}\sim\mathcal{GP}_{w_{b}}\left(0,k_{w_{b}w_{b}}\right), \tag{10}\]
where \(k_{w_{b}w_{b}}=k_{w_{b}w_{b}}(x,x^{\prime})\) is a covariance kernel. The zero-mean assumption in the prior of Eq. 10 does not mean the deflections have zero mean, which is unrealistic from a structural mechanics perspective. Rather, it implies that the predictive mean (shown later in Eqs. 26 and 27) follows no specific parametrized model, and is fully based on the training data, captured by the GP's covariance functions. Several kernel options are available in the literature, and tailoring its structure to a specific problem is a powerful way to account for prior knowledge of the function to be approximated. The Squared Exponential (SE) kernel is herein used to model \(k_{w_{b}w_{b}}\) as it is continuous, smooth and infinitely differentiable [43, 44]. The covariance kernel is calculated as:
\[k_{w_{b}w_{b}}(x,x^{\prime};\sigma_{s},\ell)=\sigma_{s}^{2}\mathrm{exp}\left( -\frac{1}{2}\left(\frac{x-x^{\prime}}{\ell}\right)^{2}\right), \tag{11}\]
where \(x\) and \(x^{\prime}\) are spatial coordinates, \(\sigma_{s}^{2}\) is a variance measure and \(\ell\) controls the covariance's length scale. The applied load is in a linear relationship with the deflection response through the linear operator \(\mathcal{L}_{q}=EI\frac{\partial^{4}}{\partial x^{4}}\) (cf. Eq. 2), and when applied to \(\mathcal{GP}_{w_{b}}\) yields a Gaussian process for the load
\[q\sim\mathcal{GP}_{q}\left(0,k_{qq}\right), \tag{12}\]
with
\[k_{qq}=\mathcal{L}_{q}\mathcal{L}_{q}^{\prime}k_{w_{b}w_{b}}=EI\frac{\partial^{4}} {\partial x^{4}}\left(EI\frac{\partial^{4}}{\partial x^{\prime 4}}k_{w_{b}w_{b}}\right), \tag{13}\]
where the dependency on \(\sigma_{s}\), \(\ell\) and the spatial coordinates is omitted for simplicity. To describe the connection between the deflection and the loads, the cross-covariance kernels are calculated by:
\[\begin{split} k_{qw_{b}}=\mathcal{L}_{q}k_{w_{b}w_{b}}& =EI\frac{\partial^{4}}{\partial x^{4}}k_{w_{b}w_{b}},\\ k_{w_{b}q}=\mathcal{L}_{q}^{\prime}k_{w_{b}w_{b}}& =EI\frac{\partial^{4}}{\partial x^{\prime 4}}k_{w_{b}w_{b}}.\end{split} \tag{14}\]
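The derivative and cross-covariance kernels in Eqs. 13 and 14 can be generated symbolically from the SE kernel; the snippet below is a minimal sketch of that step using `sympy` (our own illustration, not the authors' implementation; see their repository for the full model):

```python
import sympy as sp

x, x_p, ell, sigma_s, EI = sp.symbols("x x_p ell sigma_s EI", positive=True)

# Squared-exponential prior kernel for the bending deflection w_b, Eq. (11)
k_wbwb = sigma_s**2 * sp.exp(-((x - x_p) / ell) ** 2 / 2)

# Cross-covariance deflection/load: apply L_q' = EI * d^4/dx'^4, Eq. (14)
k_wbq = EI * sp.diff(k_wbwb, x_p, 4)

# Load auto-covariance: apply L_q in both arguments, Eq. (13)
k_qq = sp.simplify(EI * sp.diff(k_wbq, x, 4))

print(k_qq)  # closed-form expression in (x - x_p), sigma_s, ell and EI
```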
In both formulations, the constant bending stiffness \(EI\) is built into the covariance kernel via the differential equation and, if not known, can be identified during the training of the model. The optimisation process is discussed in Sec. 3.3. The kernels for bending moments and shear forces are calculated similarly by applying the linear operators defined in Equations 4 and 5 to \(k_{w_{b}w_{b}}\), respectively, yielding
\[\begin{split} k_{M\!M}=\mathcal{L}_{M}\mathcal{L}_{M}^{\prime}k_ {w_{b}w_{b}}=EI\frac{\partial^{2}}{\partial x^{2}}\left(EI\frac{\partial^{2} }{\partial x^{\prime 2}}\ k_{w_{b}w_{b}}\right),\\ k_{VV}=\mathcal{L}_{V}\mathcal{L}_{V}^{\prime}k_{w_{b}w_{b}}= EI\frac{\partial^{3}}{\partial x^{3}}\left(EI\frac{\partial^{3}}{\partial x^{ \prime 3}}\ k_{w_{b}w_{b}}\right),\end{split} \tag{15}\]
while the remaining GP models for bending-related rotations and strains are obtained similarly. In Timoshenko's model, the total cross-section rotation is a combination of bending and shear effects. To relate the deflection due to bending with the combined (bending and shear) rotation in the cross-section, the operators in Eqs. 5 and 6 are applied to \(k_{w_{b}w_{b}}\) resulting in:
\[k_{w_{b}\varphi}=\left(\mathcal{L}_{\varphi_{b}}^{\prime}-\frac{\mathcal{L}_{ V}^{\prime}}{kGA}\right)k_{w_{b}w_{b}}=\left(\frac{\partial}{\partial x^{ \prime}}-\frac{EI}{kGA}\frac{\partial^{3}}{\partial x^{\prime 3}}\right)k_{w_{b}w_{b}}, \tag{16}\]
and the combined rotation kernel is obtained by
\[k_{\varphi\varphi}=\left(\mathcal{L}_{\varphi_{b}}-\frac{\mathcal{L}_{V}}{kGA }\right)k_{w_{b}\varphi}=\left(\frac{\partial}{\partial x}-\frac{EI}{kGA} \frac{\partial^{3}}{\partial x^{3}}\right)k_{w_{b}\varphi}, \tag{17}\]
where similarly to the bending stiffness \(EI\), the constant shear stiffness \(kGA\) is now a part of the covariance kernel and can be learned from data in case it is not known. The kernel for deflections on Timoshenko's model is obtained by expanding and integrating \(k_{\varphi\varphi}\), which yields
\[\begin{split} k_{ww}=&\iint k_{\varphi\varphi}\, \partial x\partial x^{\prime}=\left(1-\frac{EI}{kGA}\frac{\partial^{2}}{ \partial x^{2}}-\frac{EI}{kGA}\frac{\partial^{2}}{\partial x^{\prime 2}}\right.\\ &\left.+\left(\frac{EI}{kGA}\right)^{2}\frac{\partial^{2}}{ \partial x^{2}}\frac{\partial^{2}}{\partial x^{\prime 2}}\right)k_{w_{b}w_{b}},\end{split} \tag{18}\]
and the strain kernel in the Timoshenko model is obtained similarly by applying the linear operator defined in Eq. 8. The remaining covariance functions that define the physics-informed machine learning model are obtained through the application of the proper differential equations, as schematically shown in Fig. 3 and given in full in A. Equivalently as to the standard differential equation model, an Euler-Bernoulli physics-informed Gaussian process is obtained when the shear stiffness \(kGA\) is sufficiently high, as the shear rotations are consequently negligible and all the shear-related terms in the covariance kernels tend to zero. Generally, the true values of responses, internal forces and applied loads cannot be directly accessed, and can either be measured with some arbitrary quality depending on the sensing equipment, or assumed with a particular uncertainty. Taking deflections as an example, a measurement \(\mathbf{w}\) at finite locations \(\mathbf{x}\in\mathcal{R}^{N}\) takes the form
\[\mathbf{w}=f(\mathbf{x})+\mathbf{\delta}(\mathbf{x}), \tag{19}\]
where \(\mathbf{\delta}(\mathbf{x})\sim\mathcal{N}(0,\sigma_{n,w}^{2}\mathbf{I})\) accounts for the stationary white noise in the measurement system, defined by the variance \(\sigma_{n,w}^{2}\), and \(\mathbf{I}\in\mathcal{R}^{N\times N}\) is the identity matrix. Modelling \(f(\mathbf{x})\) as a zero-mean Gaussian process with the covariance kernel calculated by Eq. 18, an extended covariance matrix that accounts for the measurement uncertainty is given as:
\[\mathbf{K}_{ww}^{\sigma_{n,w}}=k_{ww}(\mathbf{x},\mathbf{x})+\sigma_{n,w}^{2}\mathbf{I}, \tag{20}\]
while the covariance matrices of the other quantities are defined analogously. With the complete set of covariance matrices, the prior physics-informed model can be represented as a multi-output Gaussian process,
\[\left[\mathbf{w},\mathbf{\varphi},\mathbf{\epsilon},\mathbf{M},\mathbf{V},\mathbf{q}\right]^{\rm T}= \mathcal{GP}\left(\mathbf{0},\mathbf{K}\right), \tag{21}\]
where the covariance matrix \(\mathbf{K}\) is calculated as
\[\mathbf{K}=\begin{bmatrix}\mathbf{K}_{ww}^{\sigma_{n,w}}&\mathbf{K}_{w\varphi}&\mathbf{K}_{w\epsilon}&\mathbf{K}_{wM}&\mathbf{K}_{wV}&\mathbf{K}_{wq}\\ \mathbf{K}_{\varphi w}&\mathbf{K}_{\varphi\varphi}^{\sigma_{n,\varphi}}&\mathbf{K}_{\varphi\epsilon}&\mathbf{K}_{\varphi M}&\mathbf{K}_{\varphi V}&\mathbf{K}_{\varphi q}\\ \mathbf{K}_{\epsilon w}&\mathbf{K}_{\epsilon\varphi}&\mathbf{K}_{\epsilon\epsilon}^{\sigma_{n,\epsilon}}&\mathbf{K}_{\epsilon M}&\mathbf{K}_{\epsilon V}&\mathbf{K}_{\epsilon q}\\ \mathbf{K}_{Mw}&\mathbf{K}_{M\varphi}&\mathbf{K}_{M\epsilon}&\mathbf{K}_{MM}^{\sigma_{n,M}}&\mathbf{K}_{MV}&\mathbf{K}_{Mq}\\ \mathbf{K}_{Vw}&\mathbf{K}_{V\varphi}&\mathbf{K}_{V\epsilon}&\mathbf{K}_{VM}&\mathbf{K}_{VV}^{\sigma_{n,V}}&\mathbf{K}_{Vq}\\ \mathbf{K}_{qw}&\mathbf{K}_{q\varphi}&\mathbf{K}_{q\epsilon}&\mathbf{K}_{qM}&\mathbf{K}_{qV}&\mathbf{K}_{qq}^{\sigma_{n,q}}\end{bmatrix}. \tag{22}\]
Structural boundary conditions are taken into account via the addition of an artificial, noise-less data set of the appropriate response. For example, in the case of a simply supported beam with length \(L\), an additional set of locations \(\mathbf{x}_{w}^{\rm BC}=\{0,L\}\) with boundary condition information \(\mathbf{y}_{w}^{\rm BC}=\{0,0\}\) and fixed noise \(\sigma_{n,w,\rm BC}^{2}=0\) is supplied to the model along with the measurement data. This constrains the predictive mean to match \(\mathbf{y}_{w}^{\rm BC}\), while the predictive variance at \(\mathbf{x}_{w}^{\rm BC}\) collapses to zero. Different approaches for this problem exist in the literature, for instance by deriving the covariance kernels using Green's functions or using specific basis functions for \(k_{w_{b}w_{b}}\)[28, 45], and although they provide a more mathematically elegant solution to the problem, they also constrain the GP model to a particular set of structural systems.
The formulation of \(\mathbf{K}\) assumes that one dataset from each of the random fields, i.e. displacements, rotations, strains, moments, shears and loads, is available. This assumption can be relaxed to accommodate, on one hand, lack of data when no information is available, but also to account for multiple datasets on the same data type, for instance, when multiple displacement sensor sets of different quality are used to monitor a structure. The framework provides, therefore, a robust and physics-informed model for data fusion.
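To make the derivation above concrete, the short sketch below shows how the kernels of Eqs. 14–18 can be generated by symbolic differentiation of a squared-exponential prior on the bending deflection (cf. Eq. 11). This is an illustrative sketch rather than the authors' implementation; the use of SymPy and the symbol names (`sigma_f`, `l`, `EI`, `kGA`) are assumptions.

```python
# Minimal sketch (not the authors' code): symbolic derivation of the
# Timoshenko covariance kernels from a squared-exponential prior on w_b.
import sympy as sp

x, xp = sp.symbols("x x_prime", real=True)
sigma_f, l, EI, kGA = sp.symbols("sigma_f l EI kGA", positive=True)

# Base kernel on the bending deflection w_b (cf. Eq. 11): squared exponential.
k_wbwb = sigma_f**2 * sp.exp(-(x - xp)**2 / (2 * l**2))

# Load kernel, Eq. 14: apply EI d^4/dx^4 to the base kernel.
k_qwb = EI * sp.diff(k_wbwb, x, 4)

# Cross-kernel bending deflection / total rotation, Eq. 16.
k_wb_phi = sp.diff(k_wbwb, xp, 1) - EI / kGA * sp.diff(k_wbwb, xp, 3)

# Combined rotation kernel, Eq. 17.
k_phiphi = sp.diff(k_wb_phi, x, 1) - EI / kGA * sp.diff(k_wb_phi, x, 3)

# Timoshenko deflection kernel, Eq. 18 (expanded operator form, no integral needed).
k_ww = (k_wbwb
        - EI / kGA * sp.diff(k_wbwb, x, 2)
        - EI / kGA * sp.diff(k_wbwb, xp, 2)
        + (EI / kGA)**2 * sp.diff(sp.diff(k_wbwb, x, 2), xp, 2))

# Turn the symbolic kernel into a numerical function for assembling K.
k_ww_fn = sp.lambdify((x, xp, sigma_f, l, EI, kGA), sp.simplify(k_ww), "numpy")
print(k_ww_fn(0.3, 0.7, 1.0, 0.5, 1.0, 3.0))
```

The remaining entries of Eq. 22 follow the same pattern, with the appropriate linear operators applied to each argument of the base kernel.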
Figure 3: Physics-informed Gaussian process model of a Timoshenko beam. The models are first derived for each physical property according to the Bernoulli beam theory (red box). Combining the rotation and shear effects from the Bernoulli model yields Timoshenko’s beam theory (blue box). The underlying structural stiffness \(EI\) and \(kGA\) are part of the hyperparameter vector \(\mathbf{\theta}\), and can be identified from collected data \(\mathbf{y}\) (red points) at locations \(\mathbf{x}\) while optimizing the GP model.
### Training and identification of the structural stiffness
The model parameters, including the covariance kernel variables, the structural stiffness and the data set noise levels, if not known a priori, are collected in a vector \(\mathbf{\theta}=\{\sigma_{s}^{2},\ell,EI,kGA,\sigma_{w}^{2},\sigma_{r}^{2},...\}\), and can be identified from the data via different optimisation schemes. In this work, a fully Bayesian approach is adopted, and the parameter posterior distribution is defined as:
\[p(\mathbf{\theta}|\mathbf{y},\mathbf{x})\propto p(\mathbf{y}|\mathbf{x},\mathbf{\theta})p(\mathbf{\theta}). \tag{23}\]
The parameter density \(p(\mathbf{\theta})\) can be arbitrarily defined based on prior knowledge of each of the variables in \(\mathbf{\theta}\), including their correlations if they exist. In the particular case of flat priors, that is, \(p(\theta_{i})=\mathcal{U}(-\infty,\infty)\) for all \(\theta_{i}\in\mathbf{\theta}\), no knowledge is assumed and the results are equivalent to a maximum likelihood estimation [46]. The likelihood \(p(\mathbf{y}|\mathbf{x},\mathbf{\theta})\) can be analytically calculated for Gaussian processes, and amounts in log form to [21]:
\[\log p(\mathbf{y}|\mathbf{x},\mathbf{\theta})=-\frac{1}{2}\mathbf{y}^{\mathrm{T}}\mathbf{K}^{-1} \mathbf{y}-\frac{1}{2}\mathrm{log}|\mathbf{K}|-\frac{n}{2}\mathrm{log}2\pi, \tag{24}\]
where \(|\cdot|\) denotes the determinant operation and \(n\) is the number of data points available during training. Sampling from \(p(\mathbf{\theta}|\mathbf{y},\mathbf{x})\) is typically achieved using variational methods or techniques based on Markov chain Monte Carlo [47, 48]. In this particular study, the Metropolis-Hastings (MH) algorithm [49] is used to draw from the posterior distribution. The MH algorithm creates a Markov chain by iteratively sampling candidate parameters \(\mathbf{\theta}_{*}\) from a proposal distribution \(g(\mathbf{\theta})\) and evaluating an acceptance ratio \(p\) based on the posterior and proposal densities. A candidate is accepted if \(p\geqslant a\), with \(a\sim\mathcal{U}(0,1)\), and rejected otherwise. A burn-in is applied to the initial \(n_{b}\) values of the chain to discard its transient behaviour, and a thinning of \(n_{t}\) samples is applied to reduce the correlation between successive samples. An overview of the MH algorithm is given in Alg. 1.
```
Initialize \(\mathbf{\theta}_{0}\)
for \(i=0,1,2,...,N-1\) do
    Sample \(\mathbf{\theta}_{*}\sim g(\mathbf{\theta}_{i})\)
    Compute the acceptance probability \(p=\text{min}\Bigg\{1,\,\frac{p(\mathbf{\theta}_{*}|\mathbf{y},\mathbf{x})g(\mathbf{\theta}_{i}|\mathbf{\theta}_{*})}{p(\mathbf{\theta}_{i}|\mathbf{y},\mathbf{x})g(\mathbf{\theta}_{*}|\mathbf{\theta}_{i})}\Bigg\}\)
    Sample \(a\sim\mathcal{U}(0,1)\)
    \(\mathbf{\theta}_{i+1}=\begin{cases}\mathbf{\theta}_{*},&\text{if }(p\geqslant a)\\ \mathbf{\theta}_{i},&\text{otherwise}\end{cases}\)
end for
\(\mathbf{\theta}\xleftarrow{\text{burn-in, thin}}\mathbf{\theta}_{i},\ i=\{n_{b},\,n_{b}+n_{t},\,n_{b}+2n_{t},\,...,\,N-1\}\)
```
**Algorithm 1** Metropolis-Hastings algorithm
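A compact NumPy version of Alg. 1, together with the Gaussian log marginal likelihood of Eq. 24, could look as follows. This is only a sketch: the random-walk proposal, the step size and the helper `build_K(theta, x)` (assumed to assemble the covariance matrix of Eq. 22 for a given hyperparameter vector) are illustrative choices, not part of the original work.

```python
# Illustrative sketch of Alg. 1 with the log marginal likelihood of Eq. 24.
# `build_K(theta, x)` is a hypothetical helper assembling K (Eq. 22).
import numpy as np

def log_marginal_likelihood(theta, y, x, build_K):
    K = build_K(theta, x)                          # full covariance, incl. noise
    L = np.linalg.cholesky(K)                      # stable factorisation of K
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * y @ alpha
            - np.sum(np.log(np.diag(L)))           # = 0.5 * log|K|
            - 0.5 * len(y) * np.log(2 * np.pi))

def metropolis_hastings(log_post, theta0, n_iter, step, n_burn, n_thin, rng):
    theta, lp = np.asarray(theta0, float), log_post(theta0)
    chain = []
    for _ in range(n_iter):
        proposal = theta + step * rng.standard_normal(theta.size)  # symmetric g
        lp_prop = log_post(proposal)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept with prob. min(1, ratio)
            theta, lp = proposal, lp_prop
        chain.append(theta.copy())
    return np.array(chain)[n_burn::n_thin]          # burn-in and thinning

def make_log_posterior(y, x, build_K, bounds):
    """bounds: (n_params, 2) array; flat prior inside, zero probability outside."""
    def log_post(theta):
        if np.any(theta < bounds[:, 0]) or np.any(theta > bounds[:, 1]):
            return -np.inf                          # outside the prior support
        return log_marginal_likelihood(theta, y, x, build_K)
    return log_post
```

With a symmetric random-walk proposal the ratio of proposal densities cancels, which is why only the posterior values appear in the acceptance test above.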
### Prediction of unobserved responses
After identifying the parameters, including the bending and shear stiffnesses, predictions of the quantity of interest \(f_{i}(x_{\star})\) at a location \(x_{\star}\) can be made by marginalising over the parameter posterior as
\[p(f_{i}(x_{\star})|x_{\star},\mathbf{y},\mathbf{x})=\int p(f_{i}(x_{\star})|x_{\star}, \mathbf{y},\mathbf{x},\mathbf{\theta})p(\mathbf{\theta}|\mathbf{y},\mathbf{x})d\mathbf{\theta}. \tag{25}\]
For fixed parameters \(\mathbf{\theta}\), the predictive posterior is a Gaussian process and takes the closed form
\[p(f_{i}(x_{\star})|x_{\star},\mathbf{y},\mathbf{x},\mathbf{\theta})=\mathcal{N}\left(\mu_{ \star},\sigma_{\star}^{2}\right), \tag{26}\]
where \(\mu_{\star}\) and \(\sigma_{\star}^{2}\) are the predictive mean and variance of quantity of interest \(i\) at location \(x_{\star}\), given by:
\[\mu_{\star}= k_{\star}^{\mathrm{T}}\mathbf{K}^{-1}\mathbf{y}, \tag{27}\] \[\sigma_{\star}^{2}= k_{\star\star}-k_{\star}^{\mathrm{T}}\mathbf{K}^{-1}\mathbf{k}_{\star}. \tag{28}\]
In the previous formulation, \(\mathbf{K}\) is the full covariance matrix of the measurement data set, including the noise parameters, as shown in Eq. 22, and \(\mathbf{k}_{\star}\) is the cross-covariance vector between the measurements and the prediction point, calculated by:
\[\mathbf{k}_{\star}= [k_{iw}(x_{\star},\mathbf{x}_{w}),k_{i\varphi}(x_{\star},\mathbf{x}_{ \varphi}),k_{i\epsilon}(x_{\star},\mathbf{x}_{\epsilon}),k_{iM}(x_{\star},\mathbf{x}_{ M}), \tag{29}\] \[k_{iV}(x_{\star},\mathbf{x}_{V}),k_{iq}(x_{\star},\mathbf{x}_{q})]^{ \mathrm{T}},\]
and \(k_{\star\star}=k_{ii}(x_{\star},x_{\star})\) is the covariance model for the specific quantity of interest at the unobserved location. The integral in Eq. 25 is generally intractable, and an approximate solution for the predictive model is obtained numerically through a Monte Carlo approach:
\[p(f_{i}(x_{\star})|x_{\star},\mathbf{y},\mathbf{x})\approx\frac{1}{N}\sum_{n=1}^{N}p(f_ {i}(x_{\star})|x_{\star},\mathbf{y},\mathbf{x},\mathbf{\theta}_{n}), \tag{30}\]
where \(\mathbf{\theta}_{n}\sim p(\mathbf{\theta}|\mathbf{y},\mathbf{x})\) are draws from the parameter posterior distribution. Due to the assumption of Gaussian noise, the predictive posterior takes the form of a multivariate mixture of Gaussians [50], and its first two moments are given respectively by
\[\mu_{\star}= \frac{1}{N}\sum_{n=1}^{N}\mu_{\star,n}, \tag{31}\] \[\sigma_{\star}^{2}= \frac{1}{N}\sum_{n=1}^{N}\sigma_{\star,n}^{2}+\frac{1}{N}\sum_{n= 1}^{N}(\mu_{\star,n}-\mu_{\star})^{2}, \tag{32}\]
with \(\mu_{\star,n}\) and \(\sigma_{\star,n}\) calculated by Eqs. 27 and 28 using the parameters \(\mathbf{\theta}_{n}\sim p(\mathbf{\theta}|\mathbf{y},\mathbf{x})\).
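For completeness, the Monte Carlo mixture of Eqs. 30–32 can be written in a few lines. The helpers `build_K`, `k_star` and `k_starstar` are assumed to return the training covariance (Eq. 22), the cross-covariance vector (Eq. 29) and the prior variance of the quantity of interest; their exact form depends on the kernels derived in Sec. 3.2, so this is a sketch, not a definitive implementation.

```python
# Sketch of the predictive mixture of Eqs. 30–32; the kernel helpers are
# assumed to implement Eqs. 27–29 for the chosen quantity of interest.
import numpy as np

def predict(x_star, y, x, theta_samples, build_K, k_star, k_starstar):
    mus, vars_ = [], []
    for theta in theta_samples:                      # draws from p(theta | y, x)
        K = build_K(theta, x)
        ks = k_star(theta, x_star, x)
        mus.append(ks @ np.linalg.solve(K, y))                       # Eq. 27
        vars_.append(k_starstar(theta, x_star) - ks @ np.linalg.solve(K, ks))  # Eq. 28
    mus, vars_ = np.array(mus), np.array(vars_)
    mu = mus.mean()                                  # Eq. 31
    var = vars_.mean() + ((mus - mu) ** 2).mean()    # Eq. 32
    return mu, var
```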
## 4 Physics-informed sensor placement
### Placement optimisation via entropy minimization
The physics-informed Gaussian process derived in Sec. 3 treats each of the physical quantities (deflections, rotations, moments, shear and loads) as probabilistic fields. Within that notion, a good sensor set \(\mathcal{S}\) is one that reduces the uncertainty in the remainder of the specific domain of interest \(\mathcal{D}\setminus\mathcal{S}\)[39]. Because each of the physical quantities is a GP, this uncertainty can be measured in terms of the domain entropy. Mathematically, this is calculated as
\[H\left(x_{\mathcal{D}\setminus\mathcal{S}}|x_{\mathcal{S}}\right)= \tag{33}\] \[-\iint p(x_{\mathcal{D}\setminus\mathcal{S}},x_{\mathcal{S}})\log \left(\frac{p(x_{\mathcal{D}\setminus\mathcal{S}},x_{\mathcal{S}})}{p(x_{ \mathcal{S}})}\right)dx_{\mathcal{D}\setminus\mathcal{S}}dx_{\mathcal{S}},\]
where \(x_{\mathcal{D}\setminus\mathcal{S}}\) and \(x_{\mathcal{S}}\) are spatial coordinates belonging to the sets of unobserved and observed locations, respectively. Minimizing the entropy of the unobserved locations, in turn, corresponds to finding the set of sensor positions that are maximally uncertain about each other [39]. Therefore, a set \(\mathcal{S}\) containing the selected locations is obtained by the following optimisation problem:
\[\mathcal{S}_{\mathrm{opt}}=\operatorname*{argmin}_{\mathcal{S}\subset \mathcal{D}}H\left(x_{\mathcal{D}\setminus\mathcal{S}}|x_{\mathcal{S}}\right) =\operatorname*{argmax}_{\mathcal{S}\subset\mathcal{D}}H\left(x_{\mathcal{S}} \right). \tag{34}\]
This type of combinatorial problem is common in many applied sciences and is proven to be NP-hard [51], making its direct solution impractical in most cases, as the computational cost grows in a non-polynomial manner with the problem size. Consequently, a greedy approach is generally adopted [52, 46, 53], where the location of maximum entropy is sought given the sensors that are currently placed, that is,
\[x_{k}=\operatorname*{argmax}_{x\subset\mathcal{D}\setminus\mathcal{S}}H\left( x_{\star}|\mathbf{x}_{\mathcal{S}}\right), \tag{35}\]
with \(k\in\{1,...,n_{s}\}\) for a total of \(n_{s}\) sensors, and the newly selected location \(x_{k}\) is added to the set of sensor positions at each iteration. For a normally distributed random variable, as is the case for each discrete position \(x\) in a GP, such entropy is calculated as:
\[H\left(x_{\star}|\mathbf{x}_{\mathcal{S}}\right)=\frac{1}{2}\mathrm{ln}\left(2\pi \mathrm{e}\sigma_{x_{\star}|\mathbf{x}_{\mathcal{S}}}^{2}\right), \tag{36}\]
where \(\sigma_{x_{\star}|\mathbf{x}_{\mathcal{S}}}\) is the standard deviation at the proposed location, conditioned on the previously placed sensors. It is worth noting that, for GPs, the total uncertainty and the corresponding entropy depend only on the input locations \(x\) and not on the measured values. Thus, sensor placement optimisation can be carried out before any data from the physical model is observed.
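A generic implementation of this greedy selection only needs the conditional variance of Eq. 28 at each candidate location, and can therefore run before any measurements exist. The sketch below is illustrative: `kernel(xa, xb)` stands for whichever (cross-)covariance function from Sec. 3.2 is used, the jitter term is a numerical-stability assumption, and boundary-condition locations can simply be pre-seeded into `selected` before the loop.

```python
# Greedy entropy-based placement (Eqs. 35–36): illustrative sketch.
import numpy as np

def greedy_entropy_placement(candidates, n_sensors, kernel, jitter=1e-9):
    """Pick locations one by one, maximising the conditional variance (Eq. 36)."""
    selected, remaining = [], list(candidates)
    for _ in range(n_sensors):
        best, best_var = None, -np.inf
        for xc in remaining:
            k_cc = kernel(np.array([xc]), np.array([xc]))[0, 0]
            if selected:
                xs = np.array(selected)
                K_ss = kernel(xs, xs) + jitter * np.eye(len(selected))
                k_sc = kernel(xs, np.array([xc])).ravel()
                var = k_cc - k_sc @ np.linalg.solve(K_ss, k_sc)   # Eq. 28 at xc
            else:
                var = k_cc
            if var > best_var:
                best, best_var = xc, var
        selected.append(best)
        remaining.remove(best)
    return selected
```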
### Physics-informed placement effects
For the physics-informed Gaussian process model from Sec. 3, the total variance \(\sigma_{\star}^{2}\) at position \(x_{\star}\) is calculated according to Eq. 28, such that the entropy is obtained as
\[H\left(x_{\star}|\mathbf{x}_{\mathcal{S}}\right)=\frac{1}{2}\mathrm{ln}\left(2\pi \mathrm{e}\left(k_{\star\star}-\mathbf{k}_{\star}^{\mathrm{T}}\mathbf{K}^{-1}\mathbf{k}_{ \star}\right)\right). \tag{37}\]
The covariance functions that generate \(\mathbf{k_{\star}}\), \(\mathbf{k_{\star\star}}\) and \(\mathbf{K}\) are derived using the underlying differential equations, yielding, in turn, a physics-informed sensor placement strategy. In addition to accounting for the physical laws governing each process, the GP model allows for the consideration of boundary conditions by including noise-less datasets for the appropriate physical quantity at specific locations. These boundary conditions will in turn affect the total entropy at position \(x_{\star}\) even for cases where the BC and the target physical quantity are not the same.
The variance at each candidate location, and therefore also the conditional entropy, is a function of the normalised sensor distance \(\Delta x=(x-x^{\prime})/\ell\) and will vary according to both the physical domain of interest and the structural stiffness. To illustrate these properties, Fig. 4 shows the covariance results of the displacements \(k_{ww}\), the rotations \(k_{\varphi\varphi}\), and the cross-covariance \(k_{w\varphi}\) between the two domains, as a function of the normalised distance \(\Delta x\) and the structural rigidity \(r\). Higher rigidity values progressively decrease the distance at which the covariance between two sensors becomes zero for both \(k_{ww}\) and \(k_{\varphi\varphi}\), and in turn increase the magnitude of the negative covariance that is observed after that point. The covariances approach zero as \(\Delta x\) grows, indicating a very low correlation between far-away sensors.
For high rigidity values, the covariances tend to oscillate between negative and positive values, reflecting a complex behaviour between spatial points. In a low rigidity range no negative covariances are observed for the deflection domain, as it is modelled purely by the Squared Exponential kernel (c.f. Eq. 11). The results of the cross-covariance \(k_{w\varphi}\) further indicate that, because of the physics-informed nature of the model, information is shared between physical domains and different types of installed sensors, effectively reducing the total entropy and affecting the placement results.
## 5 Numerical study
In this section, the model derived in Sections 3 and 4 is investigated with respect to the sensor placement strategy, stiffness identification, rigidity effects and noise influence. For this purpose, a simply supported beam under a uniformly distributed load (c.f. Fig. 5) is taken as a fundamental example. The beam has a length \(L\), and the bending stiffness \(EI\) and shear stiffness \(kGA\) are initially set so that \(r=1\).
For this particular structural system, the physics-informed Gaussian process model is built with the definition of the noiseless boundary conditions, such that, before any measurement data is collected, the model's covariance matrix is
Figure 4: Influence of the rigidity \(r\) on the physics-informed covariance kernels for (left to right) deflections \(k_{ww}\), rotations \(k_{\varphi\varphi}\), and the cross-covariance kernel \(k_{w\varphi}\) between the two, as a function of the normalised distance \(\Delta x=(x-x^{\prime})/\ell\).
Figure 5: A simply supported beam model of length \(L\), bending stiffness \(EI\) and shear stiffness \(kGA\), subjected to a uniform distributed load.
defined by
\[\mathbf{K}=k_{ww}(\mathbf{x}_{w}^{\rm BC},\mathbf{x}_{w}^{\rm BC^{\prime}}),\]
where \(\mathbf{x}_{w}^{\rm BC}=\left[0,L\right]^{T}\) contains the support locations.
### Sensor placement optimisation
Before any measurement data is collected, a definition of the sensor placement location set \(\mathcal{S}\) must be made. Assuming that a total of \(N_{s}=7\) sensors are to be placed at \(N_{p}=31\) equidistant nodes distributed throughout the beam's length, the number of different placement sets is determined by the binomial coefficient
\[\begin{pmatrix}N_{p}\\ N_{s}\end{pmatrix}=\frac{N_{p}!}{N_{s}!(N_{p}-N_{s})!},\]
and amounts to approximately \(2.6\cdot 10^{6}\) (on the order of \(10^{6}\)) distinct combinations of location sets.
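For reference, this count can be verified directly (illustrative only, using the values stated above):

```python
import math
print(math.comb(31, 7))  # 2629575 possible sets of 7 sensors among 31 candidate nodes
```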
The derived sensor placement algorithm is employed and its results are compared to the standard entropy and mutual information models using the SE kernel [46, 54, 39], as shown in Fig. 6. All three sensor placement algorithms can efficiently determine sets of locations that minimize the domain entropy, for both the deflection and rotation cases. The physics-informed model outperforms the two other algorithms, especially in the deflection domain, where a specific boundary condition is available. In the rotation domain, although no particular BC is defined, the PI model still gathers information from the cross-covariance kernel \(k_{\varphi w}\), which effectively informs the model via the underlying differential equations.
The effects of the boundary conditions are also observed in the physics-informed model's placement locations. In the deflection domain, the sensors are placed away from the support locations. Similar behaviour is produced by the mutual information algorithm, which seeks the same effect by default, in contrast to the entropy minimization criterion [39]. This might be considered beneficial, given a general-purpose sensing condition or a loosely bounded domain, but it has disadvantages for specific structural cases, e.g. a cantilever. For the rotation case of a simply supported beam, for instance, it is advantageous to have a sensor placed on top of the supports, given that rotations at those points are likely to be large. This is automatically achieved by the physics-informed and the entropy models, but not by the mutual information criterion. In addition, since no connections or distinctions across domains exist for either the entropy or the mutual information criterion, these algorithms produce the same sensor placement sets for both the deflection and rotation cases. This is evidently not the case for the physics-informed model, which draws information across different domains via the cross-covariance kernels.
### Stiffness identification
After placing sensors and collecting measurements, the physics-informed GP model can be used to solve the inverse problem of identifying the bending stiffness \(EI\) and the shear stiffness \(kGA\). Throughout this section, a UDL of
Figure 6: Left: Normalised physics-informed entropy map, for both deflection and rotation domains, for all possible combinations of 7 sensors placed at \(31\) locations, for each sensor placement criterion. Right: The corresponding sensor positions for each placement algorithm.
magnitude \(q\) is applied to the structure. Analytical responses are calculated at the sensor locations and are further contaminated by white noise. The noise variance \(\sigma_{n,w}^{2}\) for the deflection response is calculated by defining a signal-to-noise ratio
\[\mathrm{SNR}_{w}=\frac{\max\lvert w\rvert}{\sigma_{n,w}}, \tag{38}\]
while the same applies to the rotation case. In this study, we use \(\mathrm{SNR}_{w}=\mathrm{SNR}_{\varphi}=20\). No particular prior knowledge is assumed for the Gaussian process parameters and sensor noise standard deviations, and therefore flat prior models are used. In contrast to those, bounded uniform priors are defined for both stiffness parameters, such that
\[p(EI) =\mathcal{U}(0.5,1.5)EI_{\mathrm{true}}, \tag{39}\] \[p(kGA) =\mathcal{U}(0.5,1.5)kGA_{\mathrm{true}}, \tag{40}\]
where \(EI_{\mathrm{true}}\) and \(kGA_{\mathrm{true}}\) are the numerically determined bending and shear stiffness values. The value of the applied uniformly distributed load is supplied to the model at the same locations where the deflection measurements are made. The model's parameters are then identified by sampling from the posterior using the Metropolis-Hastings algorithm.
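The synthetic data generation used in this study can be sketched as below; `response_fn` stands for the analytical beam response (deflection or rotation) and is a placeholder, as are the variable names in the commented usage line.

```python
# Sketch: contaminate analytical responses with white noise at a target SNR.
import numpy as np

def noisy_measurements(response_fn, x_sensors, snr, n_rep=1, rng=None):
    rng = rng or np.random.default_rng()
    w = response_fn(np.asarray(x_sensors))
    sigma_n = np.max(np.abs(w)) / snr       # noise standard deviation implied by the SNR
    return w + sigma_n * rng.standard_normal((n_rep, w.size))

# e.g. y_w = noisy_measurements(w_analytical, x_pi_sensors, snr=20, n_rep=7)
```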
#### 5.2.1 Sensor placement effects
At first, we evaluate the stiffness identification results obtained from the three optimised sensor location sets. The results are shown graphically in Fig. 7 and in numerical form in Tab. 1.
Given the moderate level of noise and the relatively high number of installed sensors, all three sensor sets can effectively identify the stiffness parameters. The results for the bending stiffness deviate the most from the true value, which is a consequence of the defined \(r=1\). Rigidity effects will be discussed further in Sec. 5.2.4. The estimates based on the entropy and mutual information sensor sets are 25.8% and 20.0% higher than \(EI_{\mathrm{true}}\), while the physics-informed set has better accuracy, with a total error of 13.3%. In terms of uncertainty, all three sets have a similar standard deviation. For the shear stiffness, the model is more accurate and the identified values are closer to \(kGA_{\mathrm{true}}\), albeit with a higher standard deviation when compared to the bending stiffness. A 5.9% shift in the mean is obtained for the entropy criterion, while the mutual information sensor set produces a result with a 4.8% error. The physics-informed model, however, is more accurate in the mean sense and has a value with a 1.1% error from \(kGA_{\mathrm{true}}\). The mutual
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Model & \multicolumn{2}{c}{\(p(EI/EI_{\mathrm{true}})\)} & \multicolumn{2}{c}{\(p(kGA/kGA_{\mathrm{true}})\)} \\ & \(\mu\) [-] & \(\sigma\) [-] & \(\mu\) [-] & \(\sigma\) [-] \\ \hline Entropy & 1.258 & 0.026 & 0.941 & 0.073 \\ Mutual Information & 1.200 & 0.029 & 0.952 & 0.115 \\ Physics Informed & 1.133 & 0.027 & 0.989 & 0.107 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Normalised identified posterior probability distributions for the bending \(EI\) and shear \(kGA\) stiffness parameters, for the different sensor placement algorithms
Figure 7: Probability density functions for the normalised bending stiffness \(EI\) (left) and the shear stiffness \(kGA\) (right) for the sensor sets obtained from the entropy, mutual information and the physics-informed criteria.
information and the physics-informed models have similar uncertainty in the stiffness identification, while the entropy criterion, despite having a higher deviation in the mean sense, returns a smaller uncertainty. In this manner, the influence of the sensor placement is observed in the model's parameters, and a connection can be made between the total domain entropy \(H\left(x_{\mathrm{V}\setminus\mathcal{S}}|x_{\mathcal{S}}\right)\) and the posterior model \(p(EI,kGA|\mathbf{y},\mathbf{x})\).
#### 5.2.2 Boundary condition effects
Next, we evaluate the effects of informing the model with boundary conditions on the quality of the identified stiffness values. Although in general self-evident, the importance of providing BC information is worth investigating, as the physics-informed GP model does not incorporate boundary conditions by default, thus maintaining generality and not conforming to one specific structural system (c.f. Sec. 3.2). Boundary conditions are imposed, however, by including synthetic noiseless datasets with BC information (e.g. for a simply supported beam of length \(L\), \(x_{w}^{\mathrm{BC}}=[0,L]^{T}\), \(y_{w}^{\mathrm{BC}}=[0,0]^{T}\) and \(\sigma_{n,w}^{\mathrm{BC}}=0\)), effectively conditioning the GP model on the specified points and forcing it to collapse its uncertainty at the BC locations. The lack of boundary condition information leads, as generally expected, to a less accurate and more uncertain identification of the stiffness parameters \(EI\) and \(kGA\), as shown in Fig. 8. The GP model that includes BC information has increased performance in comparison to the one with no BC data sets, both in terms of the mean and the standard deviation values for \(p(EI/EI_{\mathrm{true}})\) and \(p(kGA/kGA_{\mathrm{true}})\). The numerical values of the identified parameters are shown in Tab. 2. The effects of boundary conditions on the model predictions of unobserved responses are further discussed in Sec. 5.3.
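In practice, imposing a boundary condition amounts to appending a small noiseless data set before building the covariance matrix, as sketched below (illustrative; variable names are placeholders):

```python
# Sketch: enforcing BCs by appending noiseless synthetic observations (Sec. 3.2).
import numpy as np

def append_boundary_conditions(x_meas, y_meas, noise_var_meas, x_bc, y_bc):
    """Concatenate noiseless BC 'observations' to a measured data set."""
    x = np.concatenate([x_meas, x_bc])
    y = np.concatenate([y_meas, y_bc])
    noise_var = np.concatenate([noise_var_meas, np.zeros(len(x_bc))])  # sigma_BC^2 = 0
    return x, y, noise_var

# Simply supported beam of length L: zero deflection at both supports.
# x_w, y_w, nv_w = append_boundary_conditions(x_w, y_w, nv_w, [0.0, L], [0.0, 0.0])
```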
#### 5.2.3 Noise effects
Informing the model with the appropriate boundary conditions and optimising the locations for measurement collection are essential elements for the proper identification of the structural parameters. Nevertheless, the accuracy of the results is directly influenced by the quality of the measurement data. To evaluate the noise effects on the identified stiffness parameters, the model with BCs from Sec. 5.2.2 is used in this section. Analytical measurements are contaminated with white noise defined by SNRs varying from 5 to 100. To account for stochastic effects, a Monte Carlo analysis with \(N_{\textit{MC}}=1000\) simulations is carried out, and the outputs, in terms of the mean and standard deviation of the learned \(p(EI,kGA|\mathbf{y},\mathbf{x})\), are shown in Fig. 9 (left). The results indicate that, for high amounts of noise, there is a slight deviation in the mean value for the shear stiffness and a more pronounced deviation in the bending stiffness case. As the noise level
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Model & \multicolumn{2}{c}{\(p(EI/EI_{\mathrm{true}})\)} & \multicolumn{2}{c}{\(p(kGA/kGA_{\mathrm{true}})\)} \\ & \(\mu\) [-] & \(\sigma\) [-] & \(\mu\) [-] & \(\sigma\) [-] \\ \hline Without BCs & 1.353 & 0.135 & 0.927 & 0.123 \\ With BCs & 1.079 & 0.091 & 0.984 & 0.062 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Normalised identified posterior probability distributions for the bending \(EI\) and shear \(kGA\) stiffness parameters, for the models with and without boundary condition information
Figure 8: Estimated probability density functions for the structural normalised bending and shear stiffness. The inclusion of BCs as synthetic noise-less measurement datasets is not mandatory, but leads to a more accurate stiffness identification.
decays, both structural parameters stabilize their means around the analytical values, and the model's uncertainty on the parameters decreases due to the provided measurements being less scattered around the analytical solution.
Improving the quality of the measuring devices enhances the quality of the stiffness identification. Nevertheless, this is not always a viable strategy from a technical or financial perspective. An alternative option proposed herein is to provide the model with additional measurements at each of the observed locations. This approach drastically increases the size of the data set and must be used with care, as the computational complexity of the GP model scales with \(\mathcal{O}(N^{3})\)[21]. Although alternatives to this issue exist in the literature, e.g. partitioning the data into several groups and optimizing them individually [55], approximating the covariance matrix via sparsely sampled points [56], or using stochastic variational inference methods [57], the standard GP regression using all the data points is still considered here. To evaluate the effects of the possibly large number of data points (\(\mathrm{NDP}\)) provided to the model, a constant \(\mathrm{SNR}=10\) is assumed, and different numbers of measurements of deflections and rotations at the physics-informed sensor locations are added to the model. The results are shown in Fig. 9 (right). The additionally provided data tends to stabilize the mean value for both the bending and shear stiffness, while simultaneously reducing the uncertainty of the parameters. Nevertheless, an asymptotic limit on the standard deviation seems to exist, which is directly related to the quality of the measurement data. This becomes evident from the relation between the two plots in Fig. 9, as \(\sigma_{EI}\) and \(\sigma_{kGA}\) reduce with the increase of \(\mathrm{NDP}\), but do not reach levels as low as the ones observed for high values of \(\mathrm{SNR}\) in Fig. 9 (left).
#### 5.2.4 Rigidity effects
The results discussed so far consider a fixed rigidity \(r=1\). The beam response, however, is a combination of bending and shear components and is therefore directly dependent on the rigidity parameter (see Fig. 2). Due to the varying response contribution as a function of \(r\), it is expected that the identified stiffness models \(p(EI,kGA|\mathbf{y},\mathbf{x})\) will also vary w.r.t. the rigidity. Furthermore, the previous models were all identified using data sets from deflections and rotations simultaneously. In Fig. 10, results are shown for models trained with deflections only, rotations only, or both simultaneously, considering a noise level of \(\mathrm{SNR}=10\). In all cases, the accuracy of the bending stiffness identification is increased for low rigidity levels (\(r<10^{-2}\)), reflecting the physical interpretation that, in this range, the response is governed by bending effects. For rigidity values between \(10^{-2}\) and \(10^{0}\), neither the bending nor the shear stiffness is reliably identified, as the deflection is a combination of both stiffness values and the solution for the deflection-only GP model is not unique. Conversely, for cases where \(r>10^{0}\), the shear stiffness accuracy increases progressively, while the bending stiffness results are unreliable.
When only deflection data is provided during training (Fig. 10, left), a higher standard deviation is observed over the stable rigidity ranges of \(p(EI,kGA|\mathbf{y},\mathbf{x})\), for both stiffness cases. The provision of rotations reduces the stiffness uncertainty (Fig. 10, centre), as rotations are in general more sensitive to changes in model parameters [58]. Combining the two datasets yields, however, the best prediction quality, as the physics-informed model correlates the different readings based on the full covariance matrix shown in Eq. 22.
Figure 9: Bending stiffness \(EI\) and shear stiffness \(kGA\) identification. Left: Influence of measurement noise (mean and 95% confidence interval) in terms of the signal-to-noise ratio (SNR). Right: Mean and 95% confidence interval as a function of the number of measurements (NDP) at each sensor location provided for training, considering a fixed \(\mathrm{SNR}=10\).
### Prediction of unobserved responses
The physics-informed GP model is derived for all the quantities of interest related via the differential equation. Even though available data may be limited for certain locations and specific physical responses, the nature of the GP model allows for predictions of all physical quantities. Using the sensor locations obtained via the physics-informed criterion, a comparison is now made for predictions between models learned with and without the presence of BCs. For that purpose, the simply supported beam model under UDL is defined with \(r=1\) and \(\mathrm{SNR}=20\).
The noisy measurements for deflections and rotations, along with the respective predictions for the cases with and without BCs are shown in Fig. 11. The inclusion of a noiseless boundary condition improves the quality of the predictions. The deflection uncertainty for the BC model reduces at locations closer to the supports and takes a maximum value at mid-span. In contrast, the model without BCs is characterized by a higher standard deviation throughout the length of the beam model. In addition, the prediction mean matches closely the analytical results in the case where BCs are present, while the model without BCs shows discrepancies for the mid-span prediction, and does not return a zero deflection at the supports. In the rotation case, both models stray from the analytical results. Nevertheless, the mean value of the model with BCs better approximates the true results, in comparison to the mean prediction of the model without BCs.
Figure 11: Simply supported beam with uniform loading: normalised deflection \(w\) (left) and rotation \(\varphi\) (right). The shaded area represents the 95% confidence interval. The boundary condition on displacement reduces uncertainty at the BCs and improves prediction quality.
Figure 10: Mean and 95% confidence interval of the identified bending stiffness \(EI\) and shear stiffness \(kGA\) in relation to the beam rigidity \(r=3EI/L^{2}kGA\). From left to right: Regression using only deflection data, only rotation data, and both simultaneously.
Although no data from strain measurements and internal forces are included during the training of the physics-informed GP model, the full covariance formulation given in Eq. 22 allows for a joint, multi-output prediction relating unobserved physical responses with the measurement data. Given the beam height \(h\) and a relative distance \(z\) from the neutral axis, the strain field is determined from the trained Gaussian processes models and compared to the analytical solution, as shown in Fig. 12. The model with BCs closely resembles the analytic solution for the strain field, especially at the cross-section extremes (\(z/h=\pm 0.50\)). The model without boundary condition information, however, can approximate the correct strains around the neutral axis level (\(z/h=0\)) but loses accuracy at the section borders, particularly close to the support locations.
The results for the internal forces are shown in Fig. 13. A significant bias is observed in the bending moments of the model with no BCs, along with substantial uncertainty in the predictions. Although less apparent, a deviation is also observed in the shear predictions. The model containing the information of the BCs, however, displays a virtually perfect result in terms of bending moments and shears, approximating with high precision and accuracy the two physical quantities, while their standard deviation collapses to a value substantially smaller than the non-informed model. For a
Figure 12: Simply supported beam with uniform loading: Normalised mean strain field as a function of length and section height for (left) the analytical model, (centre) the GP model with the inclusion of boundary conditions and (right) the GP model without BCs.
Figure 13: Simply supported beam with uniform loading: Normalised bending moment \(M\) (left) and shear \(V\) (right). The shaded area represents the 95% confidence interval. The inclusion of boundary conditions decreases internal force uncertainty and increases their accuracy.
better numerical comparison of the prediction means, the root mean squared error (RMSE) for the different models is shown in Tab. 3.
## 6 Experiments
### Setup
We now present a validation of the constructed GP framework based on an experimental setup. The test structure consists of a simply supported beam of \(3\) m in length, as shown in Fig. 14 (top). The beam has a rectangular hollow cross-section (c.f. dimensions in Fig. 14, bottom right). The structural rigidity is \(r=6\cdot 10^{-4}\), and the deflections are therefore primarily governed by bending effects. The assumed bending stiffness, calculated with the section and material properties, amounts to \(EI_{0}=11330\) Nm\({}^{2}\). The structure is subjected to a uniformly distributed load of magnitude \(q=670\) N/m, and the corresponding response, in terms of deflections, rotations and strains, was measured at various points along the length of the model.
The deflections were measured by various types of sensors, namely five laser displacement transducers (LDT), eight inductive displacement transducers (IDT) and two draw wire sensors (DWS). The measurements were obtained using a data acquisition (DAQ) system with 24-bit analogue-to-digital (AD) conversion. In addition, analogue deflection readings were obtained using two dial gauges (DG), as shown in detail in Fig. 14 (centre right). The rotation response was measured exclusively at the support locations (c.f. Fig. 14, centre left), simultaneously by a set of digital inclinometers (DI) on either side of the beam connected to the DAQ system and another set of analogue inclinometers (AI) at the same positions. Furthermore, strain measurements were taken at three locations at the bottom side of the structure using a Wheatstone half-bridge strain gauge (SG) circuit. Employing the DAQ system, data was acquired with a constant sampling rate of \(20\) Hz. Analogue readings (dial gauge and inclinometer) were taken once the reading stabilized in a single value after the loading was applied. Details of the employed sensors are given in Table 4, and the location of each installed sensor is indicated in Fig. 14 (bottom left).
A sample of the readings from each sensor is shown in Fig. 15 (left). Considering first the deflection readings, the inductive displacement transducer located at \(x_{\mathrm{IDT}}=2.50\) m malfunctioned during the experiment. Its time-series response, shown in Fig. 15 (right), displays an impulse-like excitation which did not take place during the static test of the structure and is not observed in the remaining sensors. In addition, the draw wire transducer at \(x_{\mathrm{DWS}}=1.88\) m has a biased response, characterized by a constant deviation from the qualitatively expected reading, when compared to the other sensors in its proximity. Lastly, the readings from the three laser sensors close to the midspan are higher in magnitude when compared to the readings from the IDTs installed at the same locations. The readings from the digital inclinometers are very consistent and have a low amount of noise. When compared to the analogue rotation readings, they are higher in magnitude at both support points.
\begin{table}
\begin{tabular}{l c c} \hline \hline Sensor set & Range & Resolution \\ \hline LDT & 400 \(\pm\) 100 mm & DAQ \\ IDT & 0 - 50 mm & DAQ \\ DWS & 0 - 2000 mm & DAQ \\ DG & 0 - 10 mm & 0.01 mm \\ DI & \(\pm\) 3\({}^{\circ}\) & DAQ \\ AI & \(\pm\) 180 \({}^{\circ}\) & 0.01 \({}^{\circ}\) \\ \hline SG & Circuit: Wheatstone half-bridge, 350 \(\Omega\) \\ \hline \hline \end{tabular}
\end{table}
Table 4: Sensor types and properties. The DAQ has a resolution of 24 bits.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Model & \(w/\mathrm{max}|w_{\mathrm{ana}}|\) & \(\varphi/\mathrm{max}|\varphi_{\mathrm{ana}}|\) & \(\epsilon/\mathrm{max}|\epsilon_{\mathrm{ana}}|\) & \(M/\mathrm{max}|M_{\mathrm{ana}}|\) & \(V/\mathrm{max}|V_{\mathrm{ana}}|\) \\ & [-] & [-] & [-] & [-] & [-] \\ \hline Without BCs & \(1.76\cdot 10^{-2}\) & \(6.65\cdot 10^{-2}\) & \(4.51\cdot 10^{-2}\) & \(2.16\cdot 10^{-1}\) & \(3.08\cdot 10^{-2}\) \\ With BCs & \(1.14\cdot 10^{-3}\) & \(3.65\cdot 10^{-2}\) & \(2.55\cdot 10^{-2}\) & \(1.93\cdot 10^{-7}\) & \(5.13\cdot 10^{-7}\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: RMSE comparison for the predictions from the models with and without boundary conditions, for all physical quantities
### Stiffness identification
At first, we consider different Gaussian process models with \(\mathrm{NDP}=7\) measurement values at each sensor location and train them with data from one individual set of deflection sensors at a time, without the inclusion of rotation readings. All the models return results around \(10\)% smaller than the analytically estimated \(EI_{0}\) (c.f. Fig. 16 (left) and Tab. 5). The model trained using the laser displacement transducers has the smallest variance, due to the low noise level in the readings. The results based on the inductive displacement transducers have a higher mean compared to the laser sensors, which is in agreement with the smaller deflections measured at midspan. In addition, a higher uncertainty is observed as a result of the faulty sensor at \(x_{\mathrm{IDT}}=2.50\) m. The dial gauge results have a mean value between those of the LDT and IDT sensor sets but display a higher uncertainty due to the limited number of installed sensors. Finally, the draw wire sensors have a similar mean value but a much bigger variance, since only two sensors are available and one of them, at \(x_{\mathrm{DWS}}=1.88\) m, has biased readings.
To demonstrate the physics-informed GP's ability to perform multi-fidelity sensor fusion, we now train a combined model containing the four different deflection data sets, one from each sensor type. To this end, a different noise parameter \(\sigma_{n}\) is assumed for each individual set of measurements, to account for differences in sensor quality. The combined stiffness model is shown in Fig. 16 (left). The results are guided by the model based on the laser sensor set, as it contained the
Figure 14: Top: deflection and rotation sensors positioned along the length of the beam, subjected to a uniformly distributed load. Centre: the analogue (AI) and digital (DI) inclination sensors (left) at one of the supports, and (right) a detail of three deflection sensors: an inductive displacement transducer (IDT), a draw-wire sensor (DWS) and a dial gauge (DG); the laser displacement transducers (LDT) were installed on the floor. Bottom: a sketch of the positions of all installed sensors (left) and the beam’s cross-section dimensions and thickness \(t\) (right).
highest number of sensors and the highest SNR. Nevertheless, the mean value of the combined model is shifted towards a higher stiffness, to account for the remaining measurements, especially at midspan.
Incorporating rotation readings from both the analogue and digital sensor sets in the GP model implies heterogeneous sensor fusion and alters the characteristics of the identified stiffness (c.f. Fig. 16 (right) and Tab. 5). In comparison to the pure deflection case, the quality of the identified stiffness model is increased, as rotations are generally more sensitive to changes in model parameters [58]. We first analyse the combination of individual deflection sensor sets with rotations. The laser sensor results have only a slight shift in mean, indicating a good agreement between the heterogeneous measurements. In all the other models, however, the mean values stabilize at a higher stiffness, reflecting the higher deflections measured by the laser displacement sensor. Notably, the variance of \(p(EI/EI_{0})\) for the draw-wire sensor set decreases by a factor of 11, which demonstrates how the inclusion of heterogeneous measurements adjusts the physics-informed GP model. In the case where all the measurements (deflections and rotations) are combined in a single GP model, a slight shift in the mean value of \(p(EI/EI_{0})\) is observed, along with an increase in variance. This may be the effect of the combination of heterogeneous measurements during the training process, as the GP model converges to an average stiffness that explains best the training data and accounts for measurement differences via an increase in uncertainty.
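The fusion described here can be realised by assigning each sensor set its own noise variance on the diagonal of the joint covariance matrix. The following sketch illustrates the idea for the deflection data sets; the kernel function signature and the variable names in the usage comment are assumptions.

```python
# Sketch: multi-fidelity fusion of deflection data sets via per-set noise terms.
import numpy as np

def fused_deflection_covariance(datasets, k_ww):
    """datasets: list of (locations, noise_std) pairs, one per sensor type.
    k_ww: physics-informed deflection kernel taking two location arrays."""
    x_all = np.concatenate([np.asarray(x) for x, _ in datasets])
    K = k_ww(x_all[:, None], x_all[None, :])            # shared kernel for all sets
    noise = np.concatenate([np.full(len(x), s**2) for x, s in datasets])
    return K + np.diag(noise)                           # per-set noise, cf. Eq. 20

# e.g. K = fused_deflection_covariance(
#     [(x_ldt, s_ldt), (x_idt, s_idt), (x_dws, s_dws), (x_dg, s_dg)], kernel)
```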
Figure 16: Normalised identified stiffness distributions for GP models without (left) and with (right) information of the rotation values measured at the supports. Models are trained separately with the laser displacement transducer (LDT), displacement transducer (IDT), draw wire sensor (DWS), dial gauge (DG) sensors, or all combined.
Figure 15: Left: Deflection measurements from the laser displacement transducer (LDT), displacement transducer (IDT), draw wire sensor (DWS), dial gauge (DG) sensors, along with the digital (DI) and analogue (AI) inclinometer measurements. Right: The corresponding deflection measurement time series for \(x_{\rm{IDT}}=2.50\) m, \(x_{\rm{LDT}}=1.50\) m, \(x_{\rm{DWS}}=1.88\) m and \(x_{\rm{DG}}=1.00\) m.
### Predictions of unobserved responses
With the probability distributions for the stiffness in hand, we can further use the model for predictions of physical quantities. To this end, the model for \(p(EI)\) considering the combined case of all sensors from Sec. 6.2 is employed. At first, we focus on the deflection and rotation predictions (c.f. Fig. 17), for which data at different positions was used during training. Because the identified stiffness is lower than \(EI_{0}\), an increase in response amplitude is observed for both deflections and rotations, when compared to the analytical model using the original assumption. The deflection predictions fit the training points accurately, and at midspan they fall between the readings of the LDT and the IDT, which is in agreement with the intermediate average value of \(EI\) (see Fig. 16, right). The prediction uncertainty decreases towards the supports, due to the effect of the noise-less boundary conditions supplied to the model during training. The noisy readings from the IDT at \(x_{\mathrm{IDT}}=2.50\) m and the bias on the DWS measurements at \(x_{\mathrm{DWS}}=1.88\) m are ignored by the model, and do not influence the predictions at their respective locations.
The cross-covariances of the physics-informed GP allow predictions for physical quantities that were not included as part of the training data. In this validation case, the unobserved responses amount to strains, bending moments and shear forces. In the case of strains (c.f. Fig. 18, left), the predicted magnitude is higher than that of the original model with the assumed \(EI_{0}\). Similar to the deflections and rotations, this is explained by the lower stiffness value identified based on the GP learning. The three strain gauges installed at the bottom fibre of the cross-section are now used for the validation of the predictions. The measured data are in good agreement with the GP's strain values, falling within the 95% confidence interval for all three locations. For the internal forces, a nearly identical result is obtained between the original and the predicted bending moments and shear forces. Because the structure is statically determinate, the internal forces do not depend on the structural stiffness and are solely determined by the applied load, which is reflected by the GP model outputs. Since the load values are analytically calculated and assumed noise-less, the uncertainty in the internal forces follows the same pattern and their variance is insignificant in comparison to their magnitude.
Figure 17: Inference in the physics-informed Gaussian process using the experimental data set. Left: deflection predictions, along with the boundary condition information and the readings from the laser displacement transducer (LDT), inductive displacement transducer (IDT), draw wire sensor (DWS) and dial gauge (DG) sensors. Right: rotation predictions with the training data from the digital (DI) and analogue (AI) inclinometers.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Sensor set & \multicolumn{2}{c}{No rotations} & \multicolumn{2}{c}{With rotations} \\ & \(\mu\) [-] & \(\sigma\) [-] & \(\mu\) [-] & \(\sigma\) [-] \\ \hline Combined & 0.899 & \(1.7\cdot 10^{-3}\) & 0.902 & \(3.8\cdot 10^{-3}\) \\ Laser displacement transducer (LDT) & 0.897 & \(1.9\cdot 10^{-3}\) & 0.899 & \(2.1\cdot 10^{-3}\) \\ Inductive displacement transducer (IDT) & 0.912 & \(3.6\cdot 10^{-3}\) & 0.918 & \(1.6\cdot 10^{-3}\) \\ Draw wire sensor (DWS) & 0.906 & \(23.2\cdot 10^{-3}\) & 0.920 & \(2.1\cdot 10^{-3}\) \\ Dial gauge (DG) & 0.906 & \(7.3\cdot 10^{-3}\) & 0.919 & \(2.4\cdot 10^{-3}\) \\ \hline \hline \end{tabular}
\end{table}
Table 5: Mean and standard deviation from the bending stiffness distributions \(p(EI/EI_{0})\) identified from GPs with single sensor data sets, and the model with all data sets combined
## 7 Conclusions
This paper presented a physics-informed Gaussian process model for Timoshenko beam elements and its application to stiffness identification and the probabilistic prediction of unobserved responses. The developed model is able to learn from heterogeneous data types, such as deflections, rotations and strains, building correlations between them via analytically derived cross-covariance kernels. The model seamlessly incorporates multi-fidelity measurements, aggregating data of various quality levels to reach an optimal training state. In addition, an entropy-based method for sensor placement optimisation was extended to account for the physics-informed aspect of the GP model, allowing for heterogeneous sensor placement and identifying information gains from measurement points across physical domains.
The identification of stiffness was observed to depend on the quality of the sensor placement. Furthermore, it is also connected to the structural rigidity, which was built into the GP model by construction, and the accuracy of the prediction was correlated with the stiffness component that governs the response. It is worth noting that a basic assumption is that the collected data originates from a process described by the target PDE, in this case, the Timoshenko static beam theory, and deviations from it, e.g. cases where normal forces create additional deflections, may lead to poor stiffness identification results. In addition, due to the modelling assumptions, non-linear effects on the beam's static responses are not accounted for and are a topic for future research. The identified stiffness was subsequently used to obtain probabilistic predictions of unobserved responses, such as internal forces or deflections and rotations where sensors were not installed. This allowed for full state estimation and can be applied to locations where placing a sensor is not viable. The current strategy is limited to smooth and continuous responses, however, due to the intrinsic assumptions of the squared exponential kernel.
The probabilistic aspect of the GP model yielded a physics-informed sensor placement optimisation method that surpassed conventional entropy-based approaches due to its ability to integrate prior structural knowledge, such as boundary conditions. This was accomplished through the inclusion of noise-free synthetic data points placed strategically within the structure. The proposed method is particularly effective for optimizing heterogeneous sensors, due to the use of cross-covariance kernels that describe the correlation of different sensor types, effectively carrying information from sensors placed across different physical domains. Results demonstrated that the proposed approach outperforms standard techniques found in the literature.
Validation of the developed model was given in the form of an experimental static test. The Gaussian process model was highly effective at identifying outliers in the form of noisy sensor readings and was capable of fusing data from heterogeneous multi-fidelity sensor types, leading to optimized stiffness values that are in closer agreement with the measured data. Nevertheless, for practical applications, the amount of measured data remains a challenge, as it demands significant computational resources.
The model presented in this paper has potential applications in the field of Structural Health Monitoring (SHM) for both mechanical and structural systems. It is particularly useful in cases involving deep and rigid beams, where traditional system identification techniques may yield poor results. In addition, the stochastic aspect of the identified parameters
Figure 18: Prediction of unobserved responses in the physics-informed Gaussian process using the experimental data set: Strains at the bottom fibre with three strain gauge measurements used for prediction only (left), bending moments (centre) and shear forces (right). The shaded area represents the 95% confidence interval.
and the fully Bayesian response predictions provide results with valuable confidence intervals, instead of point estimates as is usually the case with traditional methods. Future studies may extend the model to more complex structural systems, opening up new avenues of research.
## Acknowledgements
IK gratefully acknowledges the support of the German Research Foundation (DFG) [Project No. 491258960], Darwin College and the Department of Engineering, University of Cambridge.
2306.00125 | Graph Colouring is Hard for Algorithms Based on Hilbert's
Nullstellensatz and Gröbner Bases | We consider the graph $k$-colouring problem encoded as a set of polynomial
equations in the standard way over $0/1$-valued variables. We prove that there
are bounded-degree graphs that do not have legal $k$-colourings but for which
the polynomial calculus proof system defined in [Clegg et al '96, Alekhnovich
et al '02] requires linear degree, and hence exponential size, to establish
this fact. This implies a linear degree lower bound for any algorithms based on
Gröbner bases solving graph $k$-colouring using this encoding. The same
bound applies also for the algorithm studied in a sequence of papers [De Loera
et al '08,'09,'11,'15] based on Hilbert's Nullstellensatz proofs for a slightly
different encoding, thus resolving an open problem mentioned in [De Loera et al
'08,'09,'11] and [Li '16]. We obtain our results by combining the polynomial
calculus degree lower bound for functional pigeonhole principle (FPHP) formulas
over bounded-degree bipartite graphs in [Mikša and Nordström '15] with
a reduction from FPHP to $k$-colouring derivable by polynomial calculus in
constant degree. | Massimo Lauria, Jakob Nordström | 2023-05-31T19:03:00Z | http://arxiv.org/abs/2306.00125v1 | # Graph Colouring is Hard for Algorithms Based on
###### Abstract
We consider the graph \(k\)-colouring problem encoded as a set of polynomial equations in the standard way over \(0/1\)-valued variables. We prove that there are bounded-degree graphs that do not have legal \(k\)-colourings but for which the polynomial calculus proof system defined in [Clegg et al. '96, Alekhnovich et al. '02] requires linear degree, and hence exponential size, to establish this fact. This implies a linear degree lower bound for any algorithms based on Gröbner bases solving graph \(k\)-colouring using this encoding. The same bound applies also for the algorithm studied in a sequence of papers [1, 10, 11, 12] based on Hilbert's Nullstellensatz proofs for a slightly different encoding, thus resolving an open problem mentioned in [10, 11, 12] and [13]. We obtain our results by combining the polynomial calculus degree lower bound for functional pigeonhole principle (FPHP) formulas over bounded-degree bipartite graphs in [14] with a reduction from FPHP to \(k\)-colouring derivable by polynomial calculus in constant degree.
## 1 Introduction
Given an undirected graph \(G=(V,E)\) and a positive integer \(k\), can the vertices \(v\in V\) be coloured with at most \(k\) colours so that no vertices connected by an edge have the same colour? This _graph colouring problem_ is perhaps one of the most extensively studied NP-complete problems. It is widely believed that any algorithm for this problem has to run in exponential time in the worst case, and indeed the currently fastest algorithm for \(3\)-colouring runs in time \(O(1.3289^{n})\)[1]. A survey on various algorithms and techniques for so-called exact algorithms is [14].
Many graph colouring instances of interest might not exhibit worst-case behaviour, however, and therefore it makes sense to study algorithms without worst-case guarantees and examine how they perform in practice. Dually, it can be of interest to study weak models of computation, which are nevertheless strong enough to capture the power of such algorithms, and prove unconditional lower bounds for these models. Obtaining such lower bounds is the goal of this work.
### Brief Background
Since current state-of-the-art algorithms for propositional satisfiability such as _conflict-driven clause learning (CDCL)_[1, 2, 15] are ultimately based on the _resolution proof system_[1], it
is perhaps not so surprising that this approach can be used to solve graph colouring problems as well. According to [1], McDiarmid developed a method for deciding \(k\)-colourability that captures many concrete algorithms [10]. This method, viewed as a proof system, is simulated by resolution.
There are exponential lower bounds for resolution proofs of non-\(k\)-colourability that apply to any such method. In particular, the paper [1] presents average-case exponential lower bounds for random graph \(k\)-colouring instances sampled so that the graphs are highly likely not to be \(k\)-colourable. This result is obtained by proving width lower bounds, i.e., lower bounds on the size of a largest clause in any resolution refutations of the formulas, and then using that linear width lower bounds implies exponential size lower bounds [1].
Another possible approach is to attack the \(k\)-colouring problem using algebra. Various algebraic methods have been considered in [14, 15, 16, 17]. The thesis [1] contains the first explicit attempt we know of to encode the \(3\)-colouring problem using Hilbert's Nullstellensatz. At a high level, the idea is to write the problem as a set of polynomial equations \(\{f_{i}(x_{1},\ldots,x_{n})=0\mid i\in[m]\}\) over a suitable field \(\mathbb{F}\) so that legal colourings correspond to solutions, and if this is done in the right way it holds that this system of equations has no solution if and only if there are polynomials \(g_{1},\ldots,g_{m}\) such that \(\sum_{i=1}^{m}g_{i}f_{i}=1\). This latter equality is referred to as a _Nullstellensatz certificate_ of non-colourability, and the _degree_ of this certificate is the largest degree of any polynomial \(g_{i}f_{i}\) in the sum. Later papers based on Nullstellensatz and Gröbner bases such as [1, 16, 17] have attracted a fair amount of attention. For this work, we are particularly interested in the sequence of papers [15, 16, 17, 18], which uses an encoding of the \(k\)-colouring problem that will be discussed in more detail later in the paper.
There seem to be no formally proven lower bounds for these algebraic methods. On the contrary, the authors of [15] report that essentially all of the benchmarks they have studied have Nullstellensatz certificates of constant (and very small) degree. Indeed, no lower bounds for graph colouring are known for the corresponding proof systems _Nullstellensatz_[10] or the stronger system _polynomial calculus_[12, 1]. Intriguingly, in a close parallel to the case for resolution it is known that strong enough lower bounds on polynomial calculus degree imply exponential lower bounds on proof size [10], but the techniques for proving degree lower bounds are much less developed than the width lower bound techniques for resolution.
There have been degree lower bounds proven for polynomial calculus refutations of concrete systems of polynomial equations, but in most cases these have been encodings of obviously false statements (such as negations of the pigeonhole principle or graph handshaking lemma), rather than computationally hard problems. For some of these problems degree lower bounds can be obtained by making an affine transformation from \(\{0,1\}\)-valued variables to \(\{-1,+1\}\)-valued variables [1, 2], but this only works for polynomial equations with the right structure and only for fields of characteristic distinct from \(2\). A general and powerful method, which is independent of the field characteristic, was developed in [1], but has turned out to be not so easy to apply (except in a few papers such as [1, 1]). A slightly different, and formally speaking somewhat incomparable, version of the approach in [1] was recently presented in [17], and this latter paper also more clearly highlighted the similarities and differences between resolution width lower bound techniques and polynomial calculus degree lower bound techniques. The new framework in [17] was used to establish a new degree lower bound which plays a key role in our paper.
### Our Contributions
We exhibit explicit families of non-\(k\)-colourable graphs of bounded vertex degree such that the canonical encoding of the corresponding \(k\)-colouring instances into systems of polynomial equations over \(\{0,1\}\)-valued variables require linear degree to be refuted in polynomial calculus.
**Theorem 1.1** (informal).: _For any constant \(k\geq 3\) there are explicit families of graphs \(\{G_{n}\}_{n\in\mathbb{N}}\) of size \(\mathrm{O}(n)\) and constant vertex degree, which are not \(k\)-colourable but for which the polynomial calculus proof system requires linear degree, and hence exponential size, to prove this fact, regardless of the
underlying field._
Our degree lower bound also applies to a slightly different encoding with primitive \(k\)th roots of unity used in [10, 11] to build \(k\)-colouring algorithms based on Hilbert's Nullstellensatz. These algorithms construct certificates of non-\(k\)-colourability by solving linear systems of equations over the coefficients of all monomials up to a certain degree.
Just as the algorithms in [10, 11], our lower bound does not work for all fields (the field must have an extension field in which there is a primitive \(k\)th root of unity). For simplicity, we state below a concrete result for Nullstellensatz certificates over \(\mathrm{GF}(2)\) for non-\(3\)-colourability, which is one of the main cases considered in [10, 11].
**Corollary 1.2**.: _There are explicit families of non-\(3\)-colourable graphs such that the algorithms based on Hilbert's Nullstellensatz over \(\mathrm{GF}(2)\) in [10, 11] need to find certificates of linear degree, and hence must solve systems of linear equations of exponential size, in order to certify non-\(3\)-colourability._
We remark that Corollary 1.2 answers an open question raised in, for example, [10, 11, 12].
Finally, we want to mention that the graph colouring instances that we construct turn out to be easy for the proof system _cutting planes_[13], which formalizes the integer linear programming algorithm in [14, 15] and underlies so-called _pseudo-Boolean_ SAT solvers such as, for instance, _Sat4j_[1, 16].
**Proposition 1.3**.: _The graph colouring instances for the non-\(k\)-colourable graphs in Theorem 1.1 have polynomial-size refutations in the cutting planes proof system._
### Techniques
Perhaps somewhat surprisingly, no heavy-duty machinery is required to establish Theorem 1.1. Instead, all that is needed is a nifty reduction. Our starting point is the so-called _functional pigeonhole principle (FPHP) formula_ restricted to a bipartite graph of bounded left degree \(k\). This formula expresses the claim that a set of pigeons \(i\in I\) can be mapped to a set of pigeonholes \(j\in J\) in a one-to-one fashion, where in addition the pigeons are constrained so that every pigeon can choose not between all available holes but only between a set of \(k\) holes as specified by the bipartite graph. Clearly, FPHP formulas are unsatisfiable when \(|I|>|J|\).
Any instance of a graph FPHP formula can be viewed as a constraint satisfaction problem by ordering the available holes for every pigeon in some arbitrary but fixed way, and then keeping track of where each pigeon is mapped by recording the ordinal number of its chosen pigeonhole. If the \(c\)th hole for pigeon \(i\) and the \(c^{\prime}\)th hole for pigeon \(i^{\prime}\) are one and the same hole \(j\), then pigeons \(i\) and \(i^{\prime}\) cannot be allowed to make choices \(c\) and \(c^{\prime}\) simultaneously. If we view this constraint as an edge in a graph with the pigeons \(I\) as vertices, this is already close to a graph colouring instance, except that what is forbidden for the neighbours \(i\) and \(i^{\prime}\) is not the same colour \(c\) but some arbitrary pair of possibly distinct colours \((c,c^{\prime})\). However, the idea outlined above can be turned into a proper reduction from graph FPHP formulas to \(k\)-colouring instances by using appropriately constructed gadgets of constant size.
We then combine this reduction with the recent polynomial calculus degree lower bound in [13], which works as long as the underlying bipartite graph is a _boundary expander_ (a.k.a. _unique-neighbour expander_). More precisely, we show that the reduction from FPHP to graph \(k\)-colouring sketched above can be computed in polynomial calculus in low degree. Therefore, any low-degree polynomial calculus refutations of the graph \(k\)-colouring instances could be used to obtain low-degree refutations of FPHP instances, but [13] tells us that FPHP instances over expander graphs require linear degree.
In order to obtain Corollary 1.2, we assume that we have a low-degree Nullstellensatz certificate (or, more generally, a polynomial calculus proof) of non-colourability for the roots-of-unity encoding in [10, 11]. Then it is not hard to show that if the field we are working in contains a primitive \(k\)th root of unity, we can apply a linear variable substitution to obtain a polynomial calculus
refutation in essentially the same degree of the colouring instance in the encoding with \(\{0,1\}\)-valued variables. The corollary now follows from Theorem 1.1.
As should be clear from the discussion above, the hardness of our graph colouring instances ultimately derives from the pigeonhole principle. However, this combinatorial principle is well-known to be easy for cutting planes. We establish Proposition 1.3 by showing that cutting planes can unpack the reduction described above to recover the original pigeonhole principle instance, after which this instance can be efficiently refuted.
### Outline of This Paper
The rest of this paper is organized as follows. We start by presenting some proof complexity preliminaries and discussing how to encode the graph colouring problem in Section 2. In Section 3 we describe our graph \(k\)-colouring instances and prove that they are hard for polynomial calculus, and in Section 4 we show that the same instances are easy for cutting planes. We conclude in Section 5 by discussing some directions for future research.
## 2 Preliminaries
Throughout this paper \(x_{1},\ldots,x_{n}\) denote \(\{0,1\}\)-valued variables, where we think of \(1\) as true and \(0\) as false. We write \(\mathbb{N}=\{0,1,2,\ldots\}\) for the natural numbers and denote \(\mathbb{N}^{+}=\mathbb{N}\setminus\{0\}\). For \(n\in\mathbb{N}^{+}\) we use the standard notation \([n]=\{1,2,\ldots,n\}\). For a set \(E\), we use the shorthand \(e\neq e^{\prime}\in E\) to index over pairs of distinct elements \(e,e^{\prime}\in E\), \(e\neq e^{\prime}\).
### Proof Complexity
_Polynomial calculus (PC)_[1] is a proof system based on algebraic reasoning where one expresses constraints over Boolean variables as polynomial equations and applies algebraic manipulations to deduce new equations. The constraints are over \(\{0,1\}\)-valued variables \(x_{1},\ldots,x_{n}\), and each constraint is encoded as a polynomial \(Q\) in the ring \(\mathbb{F}[x_{1},\ldots,x_{n}]\), where \(\mathbb{F}\) is some fixed field. The intended meaning is that \(Q=0\) if and only if the constraint is satisfied, but we omit "\(=0\)" below and only write the polynomial \(Q\). A _PC derivation_ of a polynomial \(R\) from a set of polynomials \(\mathcal{S}=\{Q_{1},\ldots,Q_{m}\}\) is a sequence \((P_{1},\ldots,P_{\tau})\) such that \(P_{\tau}=R\) and for \(1\leq t\leq\tau\) the polynomial \(P_{t}\) is obtained by one of the following derivation rules:
* **Boolean axiom:**\(P_{t}\) is \(x^{2}-x\) for some variable \(x\);
* **Initial axiom:**\(P_{t}\) is one of the polynomials \(Q_{j}\in\mathcal{S}\);
* **Linear combination:**\(P_{t}=\alpha P_{i}+\beta P_{j}\) for \(1\leq i,j<t\) and some \(\alpha,\beta\in\mathbb{F}\);
* **Multiplication:**\(P_{t}=xP_{i}\) for \(1\leq i<t\) and some variable \(x\).
A _PC refutation_ of \(\mathcal{S}\) is a derivation of the multiplicative identity \(1\) of \(\mathbb{F}\) from \(\mathcal{S}\). Note that the Boolean axioms make sure that variables can only take values \(0\) and \(1\). For this reason, we can assume without loss of generality that all polynomials appearing in PC derivations are multilinear.
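To make the derivation rules concrete, the following minimal Python sketch (not part of the paper; it assumes the sympy library and works over the rationals) represents proof lines as sympy expressions, keeps them multilinear by applying the Boolean axioms implicitly, and uses the rules to refute a small unsatisfiable system.

```python
# Minimal sketch (not from the paper): polynomial calculus steps with sympy.
import sympy as sp

def multilinearize(p, variables):
    """Reduce p modulo the Boolean axioms x**2 - x, i.e. replace x**d by x."""
    p = sp.expand(p)
    for v in variables:
        d = sp.degree(p, v)
        while d > 1:
            p = sp.expand(p.subs(v ** d, v))
            d = sp.degree(p, v)
    return p

def linear_combination(p, q, alpha, beta, variables):
    """Rule: from lines p and q derive alpha*p + beta*q."""
    return multilinearize(alpha * p + beta * q, variables)

def multiply(p, v, variables):
    """Rule: from line p derive v*p for a variable v."""
    return multilinearize(v * p, variables)

x, y = sp.symbols("x y")
variables = [x, y]
# Axioms x + y - 1 and x - y are jointly unsatisfiable over {0,1}.
a1, a2 = x + y - 1, x - y
t1 = linear_combination(a1, a2, 1, 1, variables)    # 2*x - 1
t2 = multiply(t1, x, variables)                     # 2*x**2 - x reduces to x
t3 = linear_combination(t1, t2, 1, -2, variables)   # (2*x - 1) - 2*x = -1
t4 = linear_combination(t3, t3, -1, 0, variables)   # scale by -1: the constant 1
print(t4)  # 1, i.e. a degree-2 PC refutation of {a1, a2}
```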
The _size_ of a polynomial \(P\) is the number of distinct monomials in it when it is expanded out as a linear combination of monomials,1 and the _degree_ of \(P\) is the largest (total) degree of any monomial in \(P\). The size of a PC derivation \(\pi\) is the sum of the sizes of all polynomials in \(\pi\), and the degree is the maximal degree of any polynomial in \(\pi\). One can also define the _length_ of a PC derivation as the number of derivation steps in it, but this is not so interesting a measure since it may fail to take account of
polynomials of exponential size.2 A fundamental fact about PC is that the size and degree measures are tightly related as stated next.
Footnote 2: Indeed, if multiplication is defined to multilinearize polynomials automatically, as in, e.g., [1], then any unsatisfiable CNF formula encoded into polynomials in the natural way can be refuted in linear length—see [13] for details.
**Theorem 2.1** ([Impagliazzo, Pudlák, and Sgall '99]).: _For any set \(\mathcal{S}\) of inconsistent polynomials of degree at most \(d^{\prime}\) over \(n\) variables it holds that if the minimum degree of any PC refutation for \(\mathcal{S}\) is at least \(d\), then any PC refutation of \(\mathcal{S}\) has size \(\exp\bigl{(}\Omega\bigl{(}(d-d^{\prime})^{2}/n\bigr{)}\bigr{)}\)._
In particular, if the polynomials in \(\mathcal{S}\) have constant degree but require refutations of degree linear in the number of variables \(n\), then any refutation must have exponential size.
We remark that there is also a slightly more general version of this proof system known as _polynomial calculus (with) resolution (PCR)_[1]. The difference is that PCR has separate formal variables \(x\) and \(\overline{x}\) to represent both positive and negative literals when translating CNF formulas into sets of polynomials, as well as _complementarity axioms_\(x+\overline{x}-1\) to ensure that \(x\) and \(\overline{x}\) take opposite values. This yields a nicer and more well-behaved proof system. The change from PC to PCR does not affect the degree needed to refute an inconsistent set of polynomial equations, however, and Theorem 2.1 holds also for PCR. Therefore, the lower bounds we show in this paper apply both to PC and PCR. The presence of Boolean axioms allows us to derive \(\prod_{i}x_{i}^{\ell_{i}}-\prod_{i}x_{i}\) for \(\ell_{i}\geq 1\) in degree \(\sum_{i}\ell_{i}\) and polynomial size. Therefore, whenever we need to derive some polynomial it is sufficient to derive its multilinear version.
Another aspect worth noticing is that it makes perfect sense to define polynomial calculus also for sets of polynomial equations that do not include Boolean axioms \(x^{2}-x\). One variant studied in the literature is to instead include axioms \(x^{k}-1\), i.e., to insist that the value of \(x\) should be a \(k\)th root of unity. In such a setting it is no longer necessarily true that large degree implies large size, however.
In this paper we will also consider _cutting planes (CP)_[10], which is a proof system based on manipulation of inequalities \(\sum_{i}a_{i}x_{i}\geq\gamma\) where \(a_{i}\) and \(\gamma\) are integers and \(x_{1},\ldots,x_{n}\) are \(\{0,1\}\)-valued variables. A _CP derivation_ of an inequality \(B\) from a set of inequalities \(\mathcal{S}=\{A_{1},\ldots,A_{m}\}\) is a sequence \((B_{1},\ldots,B_{\tau})\) such that \(B_{\tau}=B\) and for \(1\leq t\leq\tau\) the inequality \(B_{t}\) is obtained by one of the following derivation rules:
* **Variable axiom:**\(B_{t}\) is either \(x\geq 0\) or \(-x\geq-1\) for some variable \(x\).
* **Initial axiom:**\(B_{t}\) is some \(A_{j}\in\mathcal{S}\);
* **Sum:**\(B_{t}=B_{i}+B_{j}\) for \(1\leq i,j<t\).
* **Scalar multiplication:**\(B_{t}=cB_{i}\) for \(1\leq i<t\) and \(c\in\mathbb{N}\);
* **Division:** The inequality \(B_{t}\) is \[\sum_{i}\frac{a_{i}}{c}x_{i}\geq\Bigl{\lceil}\frac{\gamma}{c}\Bigr{\rceil}\] (2.1) where \(c\) divides all \(a_{1},\ldots,a_{n}\) and \(\sum_{i}a_{i}x_{i}\geq\gamma\) is some inequality \(B_{i}\) for \(1\leq i<t\).
A CP refutation of \(\mathcal{S}=\{A_{1},\ldots,A_{m}\}\) is a derivation from \(\mathcal{S}\) of the inequality \(0\geq 1\). In what follows, we will often write \(\sum_{i}a_{i}x_{i}\leq\gamma\) as an alias for \(\sum_{i}-a_{i}x_{i}\geq-\gamma\), and we will also use \(\sum_{i}a_{i}x_{i}=\gamma\) as a shorthand for the two inequalities \(\sum_{i}a_{i}x_{i}\leq\gamma\) and \(\sum_{i}a_{i}x_{i}\geq\gamma\).
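For comparison, here is a minimal sketch (not part of the paper) of the cutting planes rules on inequalities \(\sum_{i}a_{i}x_{i}\geq\gamma\), stored as a pair of a coefficient dictionary and the constant \(\gamma\); the variable names and the small example system are illustrative only.

```python
# Minimal sketch (not from the paper): cutting planes steps on inequalities
# sum_i a_i * x_i >= gamma, represented as (coefficient dict, gamma).
def cp_sum(A, B):
    coeffs_a, gamma_a = A
    coeffs_b, gamma_b = B
    coeffs = dict(coeffs_a)
    for var, a in coeffs_b.items():
        coeffs[var] = coeffs.get(var, 0) + a
    return coeffs, gamma_a + gamma_b

def cp_scale(A, c):                      # c must be a non-negative integer
    coeffs, gamma = A
    return {v: c * a for v, a in coeffs.items()}, c * gamma

def cp_divide(A, c):                     # c must divide every coefficient
    coeffs, gamma = A
    assert all(a % c == 0 for a in coeffs.values())
    return {v: a // c for v, a in coeffs.items()}, -(-gamma // c)   # integer ceiling

# Deriving 0 >= 1 from the inconsistent system {x + y >= 1, -x >= 0, -y >= 0}.
A1 = ({"x": 1, "y": 1}, 1)
A2 = ({"x": -1}, 0)
A3 = ({"y": -1}, 0)
print(cp_sum(cp_sum(A1, A2), A3))        # ({'x': 0, 'y': 0}, 1), i.e. 0 >= 1
# The division rule makes a cut sharper: from 2x + 2y >= 1 derive x + y >= 1.
print(cp_divide(({"x": 2, "y": 2}, 1), 2))
```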
The _length_ of a CP derivation is the number of derivation steps in it. The _size_ of a linear inequality \(\sum_{i}a_{i}x_{i}\geq\gamma\) is the number of variables plus the bit size of representations of the constant term \(\gamma\) and all coefficients \(a_{i}\), and the size of a CP derivation \(\pi\) is the sum of the sizes of all inequalities in \(\pi\). We do not know of any degree-like measure for CP that would yield a relation such as that between size and degree for PC in Theorem 2.1. One usually does not distinguish too carefully between length and size for CP since by [1] all coefficients in a CP refutation can be assumed to have at most exponential size, and are hence representable with a linear number of bits.
For a partial mapping \(\rho:D\to R\) from a domain \(D\) to a range \(R\) we let \(\mathrm{dom}(\rho)\) denote the set of elements with an image. For \(d\in D\setminus\mathrm{dom}(\rho)\) we write \(\rho(d)=*\). Given a partial assignment or
_restriction_\(\rho\) of variables \(x_{1},\ldots,x_{n}\) to values in \(\{0,1\}\) and a polynomial \(P\) or a linear inequality \(A\), we denote by \(P\!\upharpoonright_{\rho}\) and \(A\!\upharpoonright_{\rho}\) the polynomial and linear inequality obtained from \(P\) and \(A\) by restricting the variables in the domain of \(\rho\) to the corresponding values and making obvious syntactic simplifications. Given a derivation \(\pi\) in PC or CP, we denote by \(\pi\upharpoonright_{\rho}\) the sequence of restricted polynomials or linear inequalities, respectively. It is straightforward to verify that if \(\pi\) is a CP derivation of an inequality \(A\) from \(\mathcal{S}\), then \(\pi\upharpoonright_{\rho}\) can be viewed (after simple syntactic manipulations) as a derivation of \(A\!\upharpoonright_{\rho}\) from \(\mathcal{S}\!\upharpoonright_{\rho}\) of at most the same length, and the same holds for PC with respect to size and degree.
### The Graph Colouring Problem
A _legal \(k\)-colouring_ of an undirected graph \(G=(V,E)\) with vertices \(V(G)=V\) and edges \(E(G)=E\) is a mapping \(\chi:V\to[k]\) such that for every edge \((u,v)\in E\) it holds that \(\chi(u)\neq\chi(v)\). The _chromatic number_\(\chi(G)\) of \(G\) is the smallest \(k\) such that a legal \(k\)-colouring of \(G\) exists. In the rest of this paper, colourings will often be assumed to be legal unless specified otherwise, so we will sometimes omit this prefix when no misunderstanding can occur. Also, it will sometimes be convenient to number the \(k\) colours \(0,1,\ldots,k-1\) instead of \(1,2,\ldots,k\), and we will be fairly relaxed about this issue, implicitly identifying colours \(0\) and \(k\) whenever convenient.
Given a graph \(G\) we can encode the \(k\)-colourability problem in a natural way as a system of polynomial equations over Boolean variables
\[\sum_{j=1}^{k}x_{v,j}=1\qquad v\in V(G)\text{,} \tag{2.2a}\]
\[x_{v,j}x_{v,j^{\prime}}=0\qquad v\in V(G)\text{, }j\neq j^{\prime}\in[k]\text{,} \tag{2.2b}\]
\[x_{u,j}x_{v,j}=0\qquad(u,v)\in E(G)\text{, }j\in[k]\text{,} \tag{2.2c}\]
with the intended meaning that \(x_{v,j}=1\) if vertex \(v\) has colour \(\chi(v)=j\). It is clear that this system of equations has a solution if and only if the graph \(G\) is \(k\)-colourable.
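The encoding is entirely mechanical; the following Python sketch (not part of the paper, assuming sympy and a graph given as vertex and edge lists) builds the polynomials (2.2a)-(2.2c) and evaluates them at the 0/1 point corresponding to a candidate colouring.

```python
# Minimal sketch (not from the paper): the 0/1 encoding (2.2a)-(2.2c) with sympy.
import sympy as sp

def colouring_polynomials(vertices, edges, k):
    x = {(v, j): sp.Symbol(f"x_{v}_{j}") for v in vertices for j in range(1, k + 1)}
    polys = []
    for v in vertices:
        polys.append(sum(x[v, j] for j in range(1, k + 1)) - 1)              # (2.2a)
        polys += [x[v, j] * x[v, jp]
                  for j in range(1, k + 1) for jp in range(j + 1, k + 1)]    # (2.2b)
    for (u, v) in edges:
        polys += [x[u, j] * x[v, j] for j in range(1, k + 1)]                # (2.2c)
    return x, polys

def is_legal(vertices, edges, k, chi):
    """All encoding polynomials vanish at the 0/1 point given by colouring chi."""
    x, polys = colouring_polynomials(vertices, edges, k)
    point = {x[v, j]: int(chi[v] == j) for v in vertices for j in range(1, k + 1)}
    return all(p.subs(point) == 0 for p in polys)

triangle_vertices, triangle_edges = [1, 2, 3], [(1, 2), (2, 3), (1, 3)]
print(is_legal(triangle_vertices, triangle_edges, 3, {1: 1, 2: 2, 3: 3}))  # True
print(is_legal(triangle_vertices, triangle_edges, 3, {1: 1, 2: 1, 3: 3}))  # False
```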
We will also be interested in an alternative algebraic representation of the \(k\)-colouring problem appearing, e.g., in [10, 11, 12]. In this encoding every vertex \(v\in V\) has a single associated variable \(y_{v}\) which takes values in \(\{1,\omega,\omega^{2},\ldots,\omega^{k-1}\}\), where \(\omega\) is a primitive \(k\)th root of unity. The intended meaning is that \(y_{v}=\omega^{j}\) if vertex \(v\) has colour \(j\in\{0,1,\ldots,k-1\}\). The colouring constraints are enforced by the polynomial equations
\[y_{v}^{k} =1 v\in V(G)\text{,} \tag{2.3a}\] \[\sum_{j=0}^{k-1}(y_{u})^{j}(y_{v})^{k-1-j} =0 (u,v)\in E(G)\text{,} \tag{2.3b}\]
where the polynomials live in a polynomial ring over a field of characteristic that is not a positive number dividing \(k\). Clearly, Equation (2.3a) forces the vertex \(v\) to take some colour. A moment of thought reveals that Equation (2.3b) correctly encodes an edge constraint: if \(y_{u}=\omega^{a}\) and \(y_{v}=\omega^{b}\), then the sum evaluates to \(\omega^{b(k-1)}\sum_{j=0}^{k-1}\omega^{j(a-b)}\), which equals \(0\) when \(a\neq b\), and equals \(k\omega^{b(k-1)}\neq 0\) otherwise. The latter formulation of \(k\)-colouring only makes sense if the characteristic of the underlying field \(\mathbb{F}\) is either \(0\) or a positive integer that does not divide \(k\). In this case, we also know that there exists an extension field \(\mathbb{E}\) of \(\mathbb{F}\) that contains a primitive \(k\)th root of unity \(\omega\) [14, Chapter VI.3].
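This case analysis for (2.3b) is easy to confirm numerically; the short sketch below (not part of the paper) checks it over the complex numbers for \(k=3\) using cmath.

```python
# Quick numeric check (not from the paper) of the edge polynomial (2.3b).
import cmath

k = 3
omega = cmath.exp(2j * cmath.pi / k)      # a primitive k-th root of unity

def edge_poly(a, b):                       # colours a, b in {0, ..., k-1}
    yu, yv = omega ** a, omega ** b
    return sum(yu ** j * yv ** (k - 1 - j) for j in range(k))

for a in range(k):
    for b in range(k):
        expected = 0 if a != b else k * omega ** (b * (k - 1))
        assert abs(edge_poly(a, b) - expected) < 1e-9
print("the sum vanishes exactly when the endpoint colours differ")
```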
A simple but important observation for us is that the choice of the polynomial encoding is not too important if we want to study how large degree is needed in polynomial calculus when proving that some graph \(G\) is not \(k\)-colourable, provided that the field we are in contains, or can be extended to contain, a primitive \(k\)th root of unity.
**Proposition 2.2**.: _Suppose that Equations (2.3a)-(2.3b) have a polynomial calculus refutation of degree \(d\) over some field \(\mathbb{F}\) of characteristic that is not a positive number dividing \(k\). Then \(\mathbb{F}\) can be extended to a field \(\mathbb{E}\) containing a primitive \(k\)th root of unity \(\omega\), and it holds that Equations (2.2a)-(2.2c) have a polynomial calculus refutation over \(\mathbb{E}\) of degree \(\max\{2k,d\}\)._
Proof.: By the assumption on the characteristic of \(\mathbb{F}\), we already argued that there exists some extension field \(\mathbb{E}\) of \(\mathbb{F}\) that contains a primitive \(k\)th root of unity \(\omega\). We plan to translate a polynomial calculus refutation \(\pi\) of Equations (2.3a)-(2.3b), into a refutation of Equations (2.2a)-(2.2c), and the first step of this process is to apply the linear substitutions
\[y_{v}\mapsto\sum_{j=1}^{k}x_{v,j}\omega^{j} \tag{2.4}\]
to all variables in all polynomials in \(\pi\) to obtain a new sequence of polynomials \(\pi^{\prime}\) in variables \(x_{v,j}\). These substituted polynomials in \(\pi^{\prime}\) have coefficients in \(\mathbb{E}\) and we will use them to form the skeleton of our new refutation. In order to turn \(\pi^{\prime}\) into a refutation of Equations (2.2a)-(2.2c) we are going to
* show that for every application of a derivation rule in \(\pi\), it is possible to derive the corresponding substituted consequence from the substituted premises with no increase in the degree;
* show that the substituted axioms in \(\pi^{\prime}\) can be derived from Equations (2.2a)-(2.2c) in degree \(2k\).
The first item is almost immediate. All applications of the linear combination rule in \(\pi\) remain valid in \(\pi^{\prime}\), since the substitution is a linear operator. When \(y_{u}p\) is derived from \(p\) in \(\pi\) by means of an application of the multiplication rule, in the new refutation we need to derive \(\sum_{j=1}^{k}x_{u,j}\omega^{j}p^{\prime}\) from \(p^{\prime}\), where \(p^{\prime}\) is the substituted version of \(p\). To do that it is sufficient to derive each \(x_{u,j}p^{\prime}\) from \(p^{\prime}\) and then take a linear combination. Notice that the degree of this derivation is the same as in \(\pi\).
It remains to argue that the substituted versions of the initial axioms (2.3a)-(2.3b) in \(\pi\) can be derived from the axioms (2.2a)-(2.2c) available to the new refutation. We first derive the substituted version of the axiom \(y_{v}^{k}-1\) (which is just axiom (2.3a) with the additive \(1\) moved to the left side), namely
\[\left(\sum_{j=1}^{k}x_{v,j}\omega^{j}\right)^{k}-1\enspace, \tag{2.5}\]
which after expansion becomes
\[\sum_{j=1}^{k}x_{v,j}^{k}\omega^{jk}-1+Q\enspace. \tag{2.6}\]
where each monomial in \(Q\) contains some factor of the form \(x_{v,j}x_{v,j^{\prime}}\) for \(j\neq j^{\prime}\). Using that \(\omega^{k}=1\), we rewrite (2.6) as
\[\sum_{j=1}^{k}\sum_{\ell=0}^{k-2}x_{v,j}^{\ell}(x_{v,j}^{2}-x_{v,j})+\left( \sum_{j=1}^{k}x_{v,j}-1\right)+Q\enspace, \tag{2.7}\]
The \(Q\) part is derivable from axioms (2.2b), and the rest can be derived from the Boolean axioms and axiom (2.2a).
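The step from (2.6) to (2.7) rests on the telescoping identity \(x^{k}-x=\sum_{\ell=0}^{k-2}x^{\ell}(x^{2}-x)\); a quick sympy check (not part of the paper) for small values of \(k\):

```python
# Quick check (not from the paper) of the telescoping identity behind (2.6)->(2.7).
import sympy as sp

x = sp.Symbol("x")
for k in range(2, 8):
    lhs = x ** k - x
    rhs = sum(x ** l * (x ** 2 - x) for l in range(k - 1))   # l = 0, ..., k-2
    assert sp.expand(lhs - rhs) == 0
print("x**k - x == sum_{l=0}^{k-2} x**l * (x**2 - x) for k = 2, ..., 7")
```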
Now we focus on the substituted axiom (2.3b), which is
\[\sum_{j=0}^{k-1}\left(\sum_{a=1}^{k}x_{u,a}\omega^{a}\right)^{j}\left(\sum_{b =1}^{k}x_{v,b}\omega^{b}\right)^{k-1-j}\enspace. \tag{2.8}\]
It will be more convenient for us to first derive the polynomial of degree \(2k-1\)
\[\sum_{j=0}^{k-1}\left(\sum_{a=1}^{k}x_{u,a}\omega^{a}\right)^{k+j}\left(\sum_ {b=1}^{k}x_{v,b}\omega^{b}\right)^{2k-1-j}\enspace, \tag{2.9}\]
and then show that it is equivalent to (2.8) using the equation (2.5) proved above. After expansion polynomial (2.9) becomes
\[\sum_{j=0}^{k-1}\sum_{a=1}^{k}\sum_{b=1}^{k}x_{u,a}^{k+j}x_{v,b}^{(2k-1-j)}\omega^ {(k+j)a}\omega^{(2k-1-j)b}\quad+\quad Q^{\prime}\enspace, \tag{2.10}\]
where each monomial in \(Q^{\prime}\) contains either some factor \(x_{u,a}x_{u,a^{\prime}}\) for \(a\neq a^{\prime}\) or some \(x_{v,b}x_{v,b^{\prime}}\) for \(b\neq b^{\prime}\). Since \(k+j\) and \(2k-1-j\) are both greater than zero for \(j\in[0,k-1]\), the Boolean axioms can be used to prove the equivalence of each \(x_{u,a}^{k+j}x_{v,b}^{(2k-1-j)}\) with \(x_{u,a}x_{v,b}\). This fact, together with the fact that \(\omega^{k}=1\), allows us to reduce the task to deriving
\[\sum_{j=0}^{k-1}\sum_{a=1}^{k}\sum_{b=1}^{k}x_{u,a}x_{v,b}\omega^{ja}\omega^{( k-1-j)b}\quad+\quad Q^{\prime}\enspace. \tag{2.11}\]
All monomials in \(Q^{\prime}\) are derivable from axioms (2.2b). To derive the first summand we change the order of summation and split it into two parts, depending on whether \(a=b\), to obtain
\[\sum_{a=1}^{k}x_{u,a}x_{v,a}\sum_{j=0}^{k-1}\omega^{ja}\omega^{(k-1-j)a}\quad +\quad\sum_{a=1}^{k}\sum_{b=1,b\neq a}^{k}x_{u,a}x_{v,b}\underbrace{\left( \omega^{b(k-1)}\sum_{j=0}^{k-1}\omega^{j(a-b)}\right)}_{\text{equal to 0 for a $\neq b$}}\enspace. \tag{2.12}\]
The axioms (2.2c) allow us to derive the first part of (2.12), while the second part is identically zero. In conclusion, we have shown how to derive the substituted axioms (2.5) and (2.8) in degree at most \(2k\), and therefore we have completed the translation of the refutation \(\pi\).
For later use, we note that we can also encode the \(k\)-colourability problem for a graph \(G\) as a system of linear inequalities
\[\sum_{j=1}^{k}x_{v,j}\geq 1\qquad v\in V(G), \tag{2.13a}\]
\[x_{v,j}+x_{v,j^{\prime}}\leq 1\qquad v\in V(G),\,j\neq j^{\prime}\in[k], \tag{2.13b}\]
\[x_{u,j}+x_{v,j}\leq 1\qquad(u,v)\in E(G),\,j\in[k], \tag{2.13c}\]
in a format amenable to cutting planes reasoning.
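The inequalities (2.13a)-(2.13c) can be generated and, for very small graphs, checked by brute force; the following sketch (not part of the paper) does this for the complete graph \(K_{4}\), which is 4-colourable but not 3-colourable.

```python
# Minimal sketch (not from the paper): inequalities (2.13a)-(2.13c) plus a
# brute-force feasibility check over all 0/1 assignments (tiny instances only).
from itertools import combinations, product

def colouring_inequalities(vertices, edges, k):
    """Each inequality is (coeffs, bound), meaning sum coeffs[var]*var >= bound."""
    ineqs = []
    for v in vertices:
        ineqs.append(({(v, j): 1 for j in range(k)}, 1))            # (2.13a)
        for j, jp in combinations(range(k), 2):
            ineqs.append(({(v, j): -1, (v, jp): -1}, -1))           # (2.13b)
    for (u, v) in edges:
        for j in range(k):
            ineqs.append(({(u, j): -1, (v, j): -1}, -1))            # (2.13c)
    return ineqs

def feasible(vertices, edges, k):
    ineqs = colouring_inequalities(vertices, edges, k)
    variables = [(v, j) for v in vertices for j in range(k)]
    for bits in product([0, 1], repeat=len(variables)):
        point = dict(zip(variables, bits))
        if all(sum(c * point[var] for var, c in coeffs.items()) >= bound
               for coeffs, bound in ineqs):
            return True
    return False

K4_vertices, K4_edges = list(range(4)), list(combinations(range(4), 2))
print(feasible(K4_vertices, K4_edges, 3))   # False: K4 is not 3-colourable
print(feasible(K4_vertices, K4_edges, 4))   # True:  K4 is 4-colourable
```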
## 3 Worst-Case Lower Bound for Polynomial Calculus
We now show how to explicitly construct a family of graphs which are not \(k\)-colourable but for which polynomial calculus proofs of this fact (over any field) require degree linear in the number of vertices in the graphs. We do this in three steps:
1. First, we show how to reduce instances of _functional pigeonhole principle (FPHP) formulas_ defined over bipartite graphs of bounded degree to graph colouring instances so that there is a one-to-one mapping of pigeons to holes if and only if the graph is \(k\)-colourable.
2. Then we show that polynomial calculus is able to carry out this reduction in constant degree, so that a low-degree PC proof of graph non-colourability can be used to obtain a low-degree refutation of the corresponding FPHP instance.
3. Finally, we appeal to a linear lower bound on degree for refuting FPHP instances over bipartite expander graphs from [16].
Let us start by giving a precise description of our functional pigeonhole principle instances. We have a set of pigeons \(I\) which want to fly into a set of holes \(J\), with each pigeon flying into exactly one hole in a one-to-one fashion. However, the choices of holes for the pigeons are constrained, so that pigeon \(i\) can fly only to the holes in \(J(i)\subseteq J\), where we have \(|J(i)|=k\). If we use variables \(p_{i,j}\) to denote that pigeon \(i\) flies into hole \(j\), we can write the constraints on such a mapping as a set of polynomial equations
\[\sum_{j\in J(i)}p_{i,j}=1\qquad i\in I, \tag{3.1a}\]
\[p_{i,j}p_{i,j^{\prime}}=0\qquad i\in I,\,j\neq j^{\prime}\in J(i), \tag{3.1b}\]
\[p_{i,j}p_{i^{\prime},j}=0\qquad i\neq i^{\prime}\in I,\,j\in J(i)\cap J(i^{\prime}). \tag{3.1c}\]
Note that an instance encoded by Equations (3.1a)-(3.1c) can also be naturally viewed as a bipartite graph \(B\) with left vertex set \(I\), right vertex set \(J\), and edges from each \(i\in I\) to all \(j\in J(i)\). In what follows, we will mostly reason about FPHP instances in terms of their representations as bipartite graphs.
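For very small instances the constraints (3.1a)-(3.1c) can be decided directly from this bipartite-graph view; in the sketch below (not part of the paper) the dictionary \(J\) mapping each pigeon to its list of allowed holes is an ad hoc representation chosen purely for illustration.

```python
# Minimal sketch (not from the paper): brute-force satisfiability of a graph
# FPHP instance given as a dict J mapping each pigeon to its allowed holes.
from itertools import product

def fphp_satisfiable(J):
    pigeons = sorted(J)
    for choice in product(*(J[i] for i in pigeons)):
        # (3.1a)/(3.1b) hold by construction (each pigeon picks exactly one hole);
        # (3.1c) is the injectivity check below.
        if len(set(choice)) == len(pigeons):
            return True
    return False

print(fphp_satisfiable({1: [1, 2], 2: [1, 2], 3: [1, 2]}))   # False: 3 pigeons, 2 holes
print(fphp_satisfiable({1: [1, 2], 2: [1, 2]}))              # True
```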
In the standard setting, we let \(I=[n]\) and \(J=[n-1]\) for some \(n\in\mathbb{N}\), in which case the collection of constraints (3.1a)-(3.1c) is clearly unsatisfiable. Nevertheless, it was shown in [14] that if the underlying bipartite graph is a so-called _boundary expander_, then any PC refutation of Equations (3.1a)-(3.1c) requires degree \(\Omega(n)\) and thus, by Theorem 2.1, exponential size. The _boundary_ of some set \(X\) of vertices in a (either simple or bipartite) graph \(G\) is the set of vertices in \(V(G)\setminus X\) that have exactly one neighbor in \(X\). Informally, a (bipartite) boundary expander is a (bipartite) graph for which every set of (left) vertices of reasonable size has large boundary. For our results we do not need to go into the technical details of the lower bound for FPHP. It suffices to use the following claim as a black box.
**Theorem 3.1** (By the proof of [14]).: _For any integer \(k\geq 3\), consider a family of bipartite graphs \(\{B_{n}\}_{n\in\mathbb{N}}\) with_
* \(n\) _vertices on the left side,_ \(n-1\) _vertices on the right side, left degree_ \(k\)_, and right degree_ \(\mathrm{O}(k)\)_,_
* _there are universal constants_ \(\alpha,\delta\) _so that for any_ \(n\in\mathbb{N}\) _and any set_ \(I\) _of at most_ \(\alpha n\) _vertices on the left side of_ \(B_{n}\)_, the size of the boundary of_ \(I\) _is at least_ \(\delta|I|-1\)_._
_Any polynomial calculus refutation of the constraints (3.1a)-(3.1c) corresponding to \(B_{n}\) requires degree \(\Omega(n)\)._
To be precise, the lower bound in Theorem 3.1 was proven for a slightly different encoding of Equations (3.1a)-(3.1c), namely the one obtained from the natural translation of CNF formulas into polynomial equations, but the two encodings imply each other and can be used to derive each other in degree \(\mathrm{O}(k)\) by the implicational completeness of polynomial calculus. Hence, the lower bound holds for both encodings.
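The boundary condition in Theorem 3.1 is straightforward to evaluate on a concrete bipartite graph; here is a minimal sketch (not part of the paper), with the dictionary representation of \(B\) again chosen purely for illustration.

```python
# Minimal sketch (not from the paper): boundaries of left-vertex sets in a
# bipartite graph B given as a dict mapping left vertices to their neighbours.
def boundary(B, left_set):
    """Right vertices with exactly one neighbour inside left_set."""
    counts = {}
    for i in left_set:
        for j in B[i]:
            counts[j] = counts.get(j, 0) + 1
    return {j for j, c in counts.items() if c == 1}

B = {1: [1, 2, 3], 2: [2, 3, 4], 3: [3, 4, 1]}
print(boundary(B, {1, 2}))      # {1, 4}: holes seen by exactly one of the two pigeons
print(boundary(B, {1, 2, 3}))   # set(): every hole is seen at least twice
```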
We proceed to describe the reduction from functional pigeonhole principle instances to graph colouring instances. Our starting point is an FPHP instance on a bipartite graph \(B\) with pigeons \(I=[n]\) and holes \(J\) where every pigeon has exactly \(d_{I}=k\) holes to choose from and every hole can take \(\mathrm{O}(k)\) pigeons; i.e., the bipartite graph \(B\) is left-regular of degree \(k\) and has right degree \(\mathrm{O}(k)\). Based on this instance we construct a graph \(G=G(B)\) such that \(G\) is \(k\)-colourable if and only if the functional pigeonhole principle on \(B\) is satisfiable.
By way of overview, the graph \(G(B)\) has \(n\) special vertices corresponding to the pigeons, and the colours of these vertices encode how the pigeons are mapped to holes. For every pair of pigeons \(i,i^{\prime}\) that can be mapped to the same hole \(j\) we add a gadget that forbids the colouring of the pigeon vertices \(i\) and \(i^{\prime}\) that corresponds to them being mapped to hole \(j\). These gadgets have a couple of pre-coloured vertices, but we eliminate such pre-colouring by adding one more simple gadget.
In more detail, the main idea behind the reduction is to view the choices \(J(i)\) for each pigeon \(i\in I\) as taking the first, second, ..., \(k\)th edge. We fix an arbitrary enumeration of the elements of \(J(i)\) for each \(i\in I\), associating distinct numbers \(1,2,\ldots,k\) to the edges out of the vertex \(i\) in \(B\). We say that _pigeon \(i\) flies to hole \(j\) using its \(c\)th edge_ if the edge connecting pigeon \(i\) to hole \(j\) is labelled by \(c\in[k]\), and use the notation \(i\gets c\) for this (suppressing the information about the hole \(j\)). Pigeon \(i\) taking the \(c\)th edge corresponds to the special \(i\)th pigeon vertex being coloured with colour \(c\).
Consider two distinct pigeons \(i\neq i^{\prime}\in I\) and a hole \(j\in J(i)\cap J(i^{\prime})\). If pigeon \(i\) flies to hole \(j\) using its \(c\)th edge and pigeon \(i^{\prime}\) flies to hole \(j\) using its \(c^{\prime}\)th edge, then the translation of the injectivity constraint (3.1c) expressed in terms of \(k\)-colourings is that vertices \(i\) and \(i^{\prime}\) cannot be simultaneously coloured by colours \(c\) and \(c^{\prime}\), respectively.
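Reading off these constraints from the bipartite graph is purely mechanical once each pigeon's edges are enumerated; the following sketch (not part of the paper) lists, for an illustrative instance, all pairs of edge numbers that the reduction has to forbid.

```python
# Minimal sketch (not from the paper): the forbidden colour pairs of the reduction.
# J[i] is the *ordered* list of holes of pigeon i, so "pigeon i takes its c-th edge"
# means that i flies to hole J[i][c - 1].
from itertools import combinations

def forbidden_pairs(J):
    """Tuples (i, c, ip, cp): pigeons i and ip may not take colours c and cp
    simultaneously, because both choices lead to the same hole."""
    pairs = []
    for i, ip in combinations(sorted(J), 2):
        for c, hole in enumerate(J[i], start=1):
            for cp, hole_p in enumerate(J[ip], start=1):
                if hole == hole_p:
                    pairs.append((i, c, ip, cp))
    return pairs

J = {1: ["a", "b", "c"], 2: ["b", "c", "d"], 3: ["c", "d", "a"]}
for constraint in forbidden_pairs(J):
    print(constraint)
# For instance (1, 2, 2, 1): if pigeon 1 takes its 2nd edge and pigeon 2 its 1st,
# both land in hole "b", so the colour pair (2, 1) is forbidden for (1, 2).
```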
Let us now give a precise description of the graph gadgets we employ to enforce such injectivity constraints. These will be partially pre-coloured graphs \(G_{(i,i^{\prime})\neq(c,c^{\prime})}\) as depicted in Figures 1(a) and 1(b). The gadget constructions start with two disjoint \(k\)-cliques for pigeons \(i\) and \(i^{\prime}\). For the sake of exposition we call the former the left clique and the latter the right clique, although the construction is fully symmetric. We refer to the vertices in the left clique as \(\ell_{1},\dots,\ell_{k}\) numbered in a clockwise fashion starting with the first vertex at the bottom, and in a symmetric fashion the vertices in the right clique are referred to as \(r_{1},\dots,r_{k}\) numbered anti-clockwise starting at the bottom.
To the first vertex \(\ell_{1}\) in the left \(k\)-clique we connect the vertex \(i\). To vertices \(\ell_{2},\dots,\ell_{k-1}\) we connect a new vertex pre-coloured with colour \(c\). For the right \(k\)-clique we do a similar construction: to the first vertex \(r_{1}\) we connect the vertex \(i^{\prime}\) and to the next \(k-2\) vertices \(r_{2},\dots,r_{k-1}\) we connect a new vertex pre-coloured with colour \(c^{\prime}\).
The final step of the construction depends on whether \(c=c^{\prime}\) or not. If \(c=c^{\prime}\), then we add an edge between the final two vertices \(\ell_{k}\) and \(r_{k}\) in the cliques. If \(c\neq c^{\prime}\), then we instead merge these two vertices into a single vertex as shown in Figure 1(b). We want to stress that except for \(i\) and \(i^{\prime}\) all vertices in the construction are new vertices that do not occur in any other gadget. Let us collect for the record some properties of this gadget construction.
**Claim 3.2**.: The pre-coloured graph gadget \(G_{(i,i^{\prime})\neq(c,c^{\prime})}\) has the following properties:
1. \(G_{(i,i^{\prime})\neq(c,c^{\prime})}\) has \(\operatorname{O}(k)\) vertices.
2. \(G_{(i,i^{\prime})\neq(c,c^{\prime})}\) has two pre-coloured vertices of degree \(\operatorname{O}(k)\).
3. For every \((b,b^{\prime})\neq(c,c^{\prime})\) there is a legal \(k\)-colouring \(\chi\) of \(G_{(i,i^{\prime})\neq(c,c^{\prime})}\) extending the pre-colouring and satisfying \(\chi(i)=b\) and \(\chi(i^{\prime})=b^{\prime}\). No such legal \(k\)-colouring of \(G_{(i,i^{\prime})\neq(c,c^{\prime})}\) exists for \((b,b^{\prime})=(c,c^{\prime})\).
Proof.: The first two properties obviously hold by construction.
To prove Property 3, let us focus on the left clique in either of the two variants of the gadget. If \(\chi(i)=c\), then clearly vertex \(\ell_{1}\) in the left clique cannot take colour \(c\). Since the pre-coloured vertex connected to vertices \(\ell_{2},\dots,\ell_{k-1}\) of the clique also has colour \(c\), and since any legal colouring must use all available colours for the clique, this forces \(\chi(\ell_{k})=c\). If \(\chi(i)\neq c\), however, then we can colour vertex \(\ell_{1}\) with colour \(c\), and then choose any permutation of the remaining colours for the other vertices in the left clique, giving the vertex \(\ell_{k}\) at least two distinct colours to choose between after the other clique vertices \(\ell_{2},\dots,\ell_{k-1}\) connected to the pre-coloured vertex have been coloured.
Consider now the case \(c=c^{\prime}\), so that we have the graph gadget \(G_{(i,i^{\prime})\neq(c,c)}\) in Figure 1(a). By symmetry, if \(\chi(i^{\prime})=c^{\prime}\), then this forces \(\chi(r_{k})=c\), but there are at least two choices for the colour of \(r_{k}\) if \(\chi(i^{\prime})\neq c^{\prime}\). It follows that if \(i\gets c\) and \(i^{\prime}\gets c\), then vertices \(\ell_{k}\) and \(r_{k}\) both have to get the same colour \(c\) to avoid
conflicts in the left and right \(k\)-cliques, respectively, which causes a conflict along the edge \((\ell_{k},r_{k})\). As long as one of \(i\) and \(i^{\prime}\) is assigned a colour other than \(c\), however, \(G_{(i,i^{\prime})\neq(c,c)}\) can be legally \(k\)-coloured.
For \(c\neq c^{\prime}\) we instead have the graph gadget \(G_{(i,i^{\prime})\neq(c,c^{\prime})}\) in Figure 1(b). Following the same line of reasoning as above, if \(i\gets c\), then the left \(k\)-clique forces \(\ell_{k}\) to take colour \(c\), and if \(i^{\prime}\gets c^{\prime}\), then the right \(k\)-clique forces \(r_{k}=\ell_{k}\) to take colour \(c^{\prime}\neq c\), which is a conflict. If \(i\gets c\) but \(i^{\prime}\not\leftarrow c^{\prime}\), however, then \(\ell_{k}\) can first be coloured with colour \(c\) to satisfy the left \(k\)-clique, after which the \(k\)-colouring of the right \(k\)-clique can be completed taking \(\chi(\ell_{k})\) into consideration, and the reasoning is symmetric if \(i^{\prime}\gets c^{\prime}\) but \(i\not\leftarrow c\). The case when both \(i\not\leftarrow c\) and \(i^{\prime}\not\leftarrow c^{\prime}\) is dealt with in a similar way. This establishes the claim.
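Since the gadgets have constant size, Claim 3.2(3) can also be verified exhaustively for a fixed small \(k\); the sketch below (not part of the paper) rebuilds the gadget from the description above, using string names such as "i", "ip", "w" and "wp" purely for illustration, and enumerates all possible extensions of the pre-colouring.

```python
# Exhaustive check (not from the paper) of Claim 3.2, property 3, for k = 3.
from itertools import combinations, product

def build_gadget(k, c, cp):
    """Edges and pre-colouring of the gadget forbidding colours (c, cp) for (i, ip)."""
    L = [f"l{m}" for m in range(1, k + 1)]
    R = [f"r{m}" for m in range(1, k + 1)]
    if c != cp:
        R[-1] = L[-1]                                     # merge l_k and r_k
    edges = {frozenset(e) for e in combinations(L, 2)}    # left k-clique
    edges |= {frozenset(e) for e in combinations(R, 2)}   # right k-clique
    edges.add(frozenset(("i", L[0])))                     # connect pigeon vertex i to l_1
    edges.add(frozenset(("ip", R[0])))                    # connect pigeon vertex i' to r_1
    edges |= {frozenset(("w", L[m])) for m in range(1, k - 1)}    # w  to l_2, ..., l_{k-1}
    edges |= {frozenset(("wp", R[m])) for m in range(1, k - 1)}   # w' to r_2, ..., r_{k-1}
    if c == cp:
        edges.add(frozenset((L[-1], R[-1])))              # extra edge between l_k and r_k
    return edges, {"w": c, "wp": cp}                      # w, w' are pre-coloured

def extendable(k, edges, precolouring, b, bp):
    """Is there a legal k-colouring extending the pre-colouring with i -> b, ip -> bp?"""
    fixed = dict(precolouring, i=b, ip=bp)
    free = sorted({v for e in edges for v in e} - set(fixed))
    for colours in product(range(1, k + 1), repeat=len(free)):
        chi = dict(fixed, **dict(zip(free, colours)))
        if all(chi[u] != chi[v] for u, v in map(tuple, edges)):
            return True
    return False

k = 3
for c, cp in product(range(1, k + 1), repeat=2):
    edges, pre = build_gadget(k, c, cp)
    for b, bp in product(range(1, k + 1), repeat=2):
        assert extendable(k, edges, pre, b, bp) == ((b, bp) != (c, cp))
print("Claim 3.2(3) verified exhaustively for k =", k)
```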
We write \(\widehat{G}=\widehat{G}(B)\) to denote the graph consisting of the union of all gadgets \(G_{(i,i^{\prime})\neq(c,c^{\prime})}\) for all \(i\neq i^{\prime}\in I\) and all \(c,c^{\prime}\) such that if pigeon \(i\) uses its \(c\)th edge and pigeon \(i^{\prime}\) uses its \(c^{\prime}\)th edge in \(B\), then they both end up in the same hole \(j\in J\). All vertices corresponding to pigeons \(i\in I\) are shared between gadgets \(G_{(i,i^{\prime})\neq(c,c^{\prime})}\) in \(\widehat{G}\), but apart from this all subgraphs \(G_{(i,i^{\prime})\neq(c,c^{\prime})}\) are vertex-disjoint. We next state some properties of \(\widehat{G}\).
**Lemma 3.3**.: _Consider an FPHP instance encoded by Equations (3.1a)-(3.1c) for a left-regular bipartite graph with left degree \(d_{I}=k\) and bounded right degree \(d_{J}=\operatorname{O}(k)\), and let \(\widehat{G}\) be the partially \(k\)-coloured graph obtained as described above. Then \(\widehat{G}\) has \(\operatorname{O}\!\left(k^{4}|I|\right)\) vertices and maximal vertex degree \(\operatorname{O}(k^{2})\), and the number of pre-coloured vertices is \(\operatorname{O}\!\left(k^{2}|I|\right)\). Furthermore, the partial \(k\)-colouring of \(\widehat{G}\) can be extended to a complete, legal \(k\)-colouring of \(\widehat{G}\) if and only if there is a way to map each pigeon \(i\in I\) to some hole \(j\in J\) without violating any constraint in (3.1a)-(3.1c)._
Proof.: Without loss of generality we can assume that \(|J|\leq k|I|\) (otherwise there are holes that cannot be used by any pigeon and that can thus be discarded). Each gadget \(G_{(i,i^{\prime})\neq(c,c^{\prime})}\) has \(\operatorname{O}\!\left(k\right)\) vertices and there are at most \((d_{J})^{2}=\operatorname{O}\!\left(k^{2}\right)\) distinct pairs of pigeons that can fly to any single hole \(j\), meaning that we have a total of at most \(\operatorname{O}\!\left(k^{2}|J|\right)\) injectivity constraint gadgets \(G_{(i,i^{\prime})\neq(c,c^{\prime})}\). Therefore, by a crude estimate \(\widehat{G}\) has at most \(\operatorname{O}\!\left(k^{4}|I|\right)\) vertices in total.
By Claim 3.2 at most \(\operatorname{O}\!\left(k^{2}|I|\right)\) vertices in \(\widehat{G}\) are pre-coloured. Each pigeon vertex labelled by \(i\in I\) is involved in at most \(d_{I}d_{J}=\operatorname{O}\!\left(k^{2}\right)\) injectivity constraint gadgets, so such vertices have degree \(\operatorname{O}\!\left(k^{2}\right)\), while all other vertices have degree \(\operatorname{O}\!\left(k\right)\).
For any complete colouring of \(\widehat{G}\) extending the pre-colouring, the colours \(\chi(i)=c_{i}\) assigned to pigeon vertices \(i\in I\) define a mapping from pigeons to holes via the chosen edges \(c_{i}\). It follows from Claim 3.2 that this colouring is legal only if pigeons are mapped to holes in a one-to-one fashion, which implies that Equations (3.1a)-(3.1c) are satisfiable. In the other direction, for any one-to-one mapping of pigeons to holes we can colour vertex \(i\) by the colour \(c_{i}\) corresponding to the edge it uses to fly to its hole, and such a colouring can be combined with the pre-colouring to produce a complete, legal \(k\)-colouring.
To finalize our reduction we need to get rid of the pre-coloured vertices in \(\widehat{G}\). To this end, we first make the following observation. Recall that for every pigeon \(i\in I\) we fixed an enumeration of the edges to holes \(j\in J(i)\) in \(B\), so that the choice of an edge corresponds to the choice of a colour. Suppose we apply some arbitrary but fixed permutation \(\sigma\) on \([k]\) to all such enumerations for the pigeons \(i\in I\). Clearly, this does not change the instance in any significant way. If it was the case before that pigeons \(i\) and \(i^{\prime}\) could not simultaneously take the \(c\)th and \(c^{\prime}\)th edges, respectively, then now these pigeons cannot simultaneously take the \(\sigma(c)\)th and \(\sigma(c^{\prime})\)th edges, respectively. In other words, Lemma 3.3 is invariant with respect to any permutation of the colours \([k]\), and we could imagine the reduction as first picking some permutation \(\sigma\) and then constructing \(\widehat{G}\) with respect to this permutation.
A simple way of achieving this effect would be to construct a separate "pre-colouring \(k\)-clique" consisting of \(k\) special vertices \(\gamma_{1},\dots,\gamma_{k}\), and then identify all vertices in \(\widehat{G}\) pre-coloured with colour \(c\) with the vertex \(\gamma_{c}\). It is not hard to see that the resulting graph would be \(k\)-colourable if and only if the pre-colouring of \(\widehat{G}\) could be extended to a complete, legal \(k\)-colouring, and using Lemma 3.3 we would
obtain a valid reduction from the functional pigeonhole principle to graph \(k\)-colouring. However, the final graph would have degree \(\Omega\big{(}k^{3}|I|\big{)}\), and we would like to obtain a graph of bounded degree.
To keep the graph vertex degree independent of the size \(|I|\) of the left-hand side of the FPHP bipartite graph \(B\), we instead construct a pre-colouring gadget using a slight modification of the above idea. Consider a set \(\{\gamma_{1},\gamma_{2},\ldots,\gamma_{M}\}\) of new vertices, for \(M\) to be fixed later. For every segment of \(k\) consecutive vertices \(\{\gamma_{t},\gamma_{t+1},\ldots,\gamma_{t+k-1}\}\) we add all edges \(\big{\{}(\gamma_{c},\gamma_{c^{\prime}})\big{|}\,c\neq c^{\prime}\in\{t,t+1, \ldots,t+k-1\}\big{\}}\) so that they form a \(k\)-clique as illustrated in Figure 2 (where as in Figure 1 we have \(k=4\)). Next, we go through all the pre-coloured vertices in \(\widehat{G}\): if a vertex should be pre-coloured by \(c\), then we identify it with the first vertex \(\gamma_{t}\) such that \(t\equiv c\pmod{k}\) and such that \(\gamma_{t}\) has not already been assigned at a previous step to some other pre-coloured vertex. If we choose \(M=\mathrm{O}\big{(}k^{3}|I|\big{)}\), then we are guaranteed to have enough vertices \(\gamma_{t}\) to be able to process all pre-coloured vertices in this way.
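Here is a short sketch (not part of the paper) of this gadget's edge set, together with a check of the property that no edge ever joins two vertices with the same index modulo \(k\), which is what makes the identification of pre-coloured vertices consistent.

```python
# Minimal sketch (not from the paper): the pre-colouring gadget on gamma_1..gamma_M,
# where every k consecutive vertices form a k-clique.
from itertools import combinations

def precolouring_gadget_edges(M, k):
    edges = set()
    for t in range(1, M - k + 2):                          # segment {t, ..., t+k-1}
        edges |= {frozenset(e) for e in combinations(range(t, t + k), 2)}
    return edges

k, M = 4, 20
edges = precolouring_gadget_edges(M, k)
# Adjacent gamma's never agree modulo k, so identifying each pre-coloured vertex
# with some gamma_t with t = c (mod k) never creates a monochromatic edge.
assert all(t % k != tp % k for t, tp in map(tuple, edges))
print(len(edges), "edges; no edge joins vertices that are congruent modulo", k)
```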
Our final graph \(G=G(B)\) is the previous graph \(\widehat{G}\) with pre-coloured vertices identified with (uncoloured) vertices in the additional pre-colouring gadget as just described. Clearly, \(G\) is \(k\)-colourable if and only if the pre-colouring of \(\widehat{G}\) can be completed to a legal \(k\)-colouring. We summarize the properties of our reduction in the following proposition.
**Proposition 3.4**.: _Given a graph FPHP formula over a left-regular bipartite graph \(B\) with left degree \(d_{I}=k\) and bounded right degree \(d_{J}=\mathrm{O}(k)\), there is an explicit construction of a graph \(G=G(B)\) such that \(G\) has \(\mathrm{O}\big{(}k^{4}|I|\big{)}\) vertices of maximal vertex degree \(\mathrm{O}(k^{2})\) and is \(k\)-colourable if and only if it is possible to map pigeons to holes in accordance with the restrictions in \(B\) in a one-to-one fashion, i.e., if and only if Equations (3.1a)-(3.1c) are simultaneously satisfiable._
Proof.: The number of vertices of \(G=G(B)\) is at most the number of vertices of \(\widehat{G}=\widehat{G}(B)\) plus \(M=\mathrm{O}\big{(}k^{3}|I|\big{)}\) additional vertices enforcing the pre-colouring. The pre-coloured vertices in the injectivity constraint gadgets get at most \(2k-1\) new neighbours, so their degree is still \(\mathrm{O}(k)\). Hence, the number of vertices and the vertex degree bound in Lemma 3.3 remain valid.
To prove the soundness and completeness of the reduction, note that any colouring \((c_{1},\ldots,c_{M})\) of \((\gamma_{1},\ldots,\gamma_{M})\) is completely determined by the colouring of the first \(k\)-clique \(\{\gamma_{1},\ldots,\gamma_{k}\}\), so that \(c_{t}=c_{t^{\prime}}\) holds whenever \(t\equiv t^{\prime}\pmod{k}\).
Assume that \(G\) has a \(k\)-colouring \(\chi\). Every vertex that was pre-coloured with colour \(c\in[k]\) in \(\widehat{G}\) has colour \(\chi(\gamma_{c})\) in \(G\). This means that the colouring of \(G\) induces, up to renaming of colours, a completion of the partial colouring of \(\widehat{G}\). By Lemma 3.3, this implies that the FPHP instance is satisfiable.
In the other direction, if the FPHP instance is satisfiable we do as in the proof of Lemma 3.3 and colour each vertex \(i\) by the colour \(c_{i}\) corresponding to the edge it uses to fly to its hole in some one-to-one mapping. We also colour vertices \(\gamma_{1},\ldots,\gamma_{k}\) with colours \(\chi(\gamma_{t})=t\), and extend this colouring to the rest of the vertices in the pre-colouring gadget in Figure 2. By Lemma 3.3 this partial colouring can be completed to obtain a legal \(k\)-colouring.
Since our reduction encodes local injectivity constraints into local colouring constraints, it stands to reason that we should be able to translate between these two types of constraints using low degree derivations. In particular, it seems reasonable to expect that any low-degree refutation of the \(k\)-colouring problem for \(G(B)\) should yield a low-degree refutation for the functional pigeonhole principle on \(B\). This is indeed the case, as stated in the next lemma.
Figure 2: Pre-colouring gadget with uncoloured vertices to be identified with the pre-coloured vertices in \(\widehat{G}\).
**Lemma 3.5**.: _Consider the graph \(G=G(B)\) obtained from a bipartite graph \(B\) as in Proposition 3.4. If the \(k\)-colourability constraints (2.2a)-(2.2c) for \(G\) have a PC refutation in degree \(d\), then the functional pigeonhole principle constraints (3.1a)-(3.1c) defined over \(B\) have a PC refutation of degree at most \(2d\)._
We will spend what remains of this section on proving this lemma. The proof is quite similar in spirit to that of Proposition 2.2. We start by assuming that we have a PC refutation of Equations (2.2a)-(2.2c) in degree \(d\). Our first step is to substitute all variables \(x_{v,j}\) in this refutation with polynomials of degree at most \(2\) in variables \(p_{i,j}\). In the second step, we argue that if we apply this substitution to the axioms in (2.2a)-(2.2c), then we can derive the resulting substituted polynomials from Equations (3.1a)-(3.1c) by PC derivations in low degree. Taken together, this yields a PC refutation in low degree of the FPHP instance (3.1a)-(3.1c).
To describe the substitution, let us focus on a single gadget \(G_{(i,i^{\prime})\neq(c,c^{\prime})}\). The first step is to express all equations for this gadget as equations over variables \(x_{i,1},\ldots,x_{i,k},x_{i^{\prime},1},\ldots,x_{i^{\prime},k}\). Note that these variables are essentially the same as those from the pigeonhole principle instance, except that instead of \(p_{i,j}\) we use the variable \(x_{i,c}\) where \(c\) is the number of the edge pigeon \(i\) uses to fly to hole \(j\), but for the sake of exposition we want to keep using the language of colourings.
Let \(w\) and \(w^{\prime}\) be the vertices that are supposed to be pre-coloured with colours \(c\) and \(c^{\prime}\), respectively. We stress that now we are considering the graph \(G\) which has no pre-coloured vertices, and in particular all the variables mentioning the vertices \(w\) and \(w^{\prime}\) are unassigned. Recall that \(w\) and \(w^{\prime}\) also appear in the gadget depicted in Figure 2, where they are identified with some vertices \(\gamma_{t}\) and \(\gamma_{t^{\prime}}\) such that \(t\equiv c\) and \(t^{\prime}\equiv c^{\prime}\pmod{k}\).
For any pair \((b,b^{\prime})\) of colours different from \((c,c^{\prime})\), Claim 3.2 guarantees that we can pick some colouring \(\chi_{(b,b^{\prime})}\) for the gadget \(G_{(i,i^{\prime})\neq(c,c^{\prime})}\) such that \(\chi_{(b,b^{\prime})}(i)=b\), \(\chi_{(b,b^{\prime})}(i^{\prime})=b^{\prime}\), \(\chi_{(b,b^{\prime})}(w)=c\) and \(\chi_{(b,b^{\prime})}(w^{\prime})=c^{\prime}\). Fix for the rest of this proof such a colouring \(\chi_{(b,b^{\prime})}\) for the gadget \(G_{(i,i^{\prime})\neq(c,c^{\prime})}\) for every \((b,b^{\prime})\neq(c,c^{\prime})\). Then we can write the colour of any vertex \(v\) in \(G_{(i,i^{\prime})\neq(c,c^{\prime})}\) other than the pigeon vertices \(i\) and \(i^{\prime}\) as a function of \((b,b^{\prime})\). In more detail, we can express every variable \(x_{v,j}\), for \(v\not\in\{i,i^{\prime}\}\), as a degree-\(2\) polynomial over the variables \(x_{i,1},\ldots,x_{i,k},x_{i^{\prime},1},\ldots,x_{i^{\prime},k}\) by summing over the monomials \(x_{i,b}x_{i^{\prime},b^{\prime}}\) corresponding to the choices of colours \((b,b^{\prime})\) for \((i,i^{\prime})\) for which the colouring \(\chi_{(b,b^{\prime})}\) assigns colour \(j\) to vertex \(v\), or in symbols
\[x_{v,j}\mapsto\sum_{(b,b^{\prime})\not=(c,c^{\prime}),\,\chi_{(b,b^{\prime}) }(v)=j}x_{i,b}x_{i^{\prime},b^{\prime}}\enspace. \tag{3.2}\]
Notice that for the vertices \(w\) and \(w^{\prime}\) the substitutions we obtain from (3.2) are
\[x_{w,c}\mapsto\sum_{(b,b^{\prime})\neq(c,c^{\prime})}x_{i,b}x_{i^{\prime},b^{\prime}}\enspace, \tag{3.3a}\]
\[x_{w^{\prime},c^{\prime}}\mapsto\sum_{(b,b^{\prime})\neq(c,c^{\prime})}x_{i,b}x_{i^{\prime},b^{\prime}}\enspace, \tag{3.3b}\]
\[x_{w,b}\mapsto 0\qquad\text{(for $b\neq c$),} \tag{3.3c}\]
\[x_{w^{\prime},b^{\prime}}\mapsto 0\qquad\text{(for $b^{\prime}\neq c^{\prime}$),} \tag{3.3d}\]
since \(w\) always gets colour \(c\) and \(w^{\prime}\) always gets colour \(c^{\prime}\) in any colouring \(\chi_{(b,b^{\prime})}\).
**Example 3.6**.: Let us give a concrete example to make clearer how the substitution in (3.2) works. Suppose \(k=3\) and consider an arbitrary (non-pigeon) vertex \(v\) in some gadget \(G_{(i,i^{\prime})\not=(3,1)}\). For every pair of colours \((b,b^{\prime})\in([3]\times[3])\setminus\{(3,1)\}\) for \((i,i^{\prime})\) we have fixed some (legal) colouring of the whole gadget. Suppose that in these fixed colourings it holds for some vertex \(v\) in the gadget that
1. \(v\) takes colour \(1\) for \((b,b^{\prime})\in\{(1,2),(2,2),(3,2),(3,3)\}\),
2. \(v\) takes colour \(2\) for \((b,b^{\prime})\in\{(1,1)\}\) and
3. \(v\) takes colour \(3\) for \((b,b^{\prime})\in\{(1,3),(2,1),(2,3)\}\).
In this case, the substitutions for the variables \(x_{v,j}\) mentioning \(v\) become
\[x_{v,1} \mapsto\ x_{i,1}x_{i^{\prime},2}+x_{i,2}x_{i^{\prime},2}+x_{i,3}x_{i ^{\prime},2}+x_{i,3}x_{i^{\prime},3}\enspace, \tag{3.4a}\] \[x_{v,2} \mapsto\ x_{i,1}x_{i^{\prime},1}\enspace,\] (3.4b) \[x_{v,3} \mapsto\ x_{i,1}x_{i^{\prime},3}+x_{i,2}x_{i^{\prime},1}+x_{i,2}x_ {i^{\prime},3}\enspace. \tag{3.4c}\]
We hope that the substitutions in this example can serve as a guide for following the rest of the proof.
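As a sanity check (not part of the paper), the sympy snippet below verifies that the substitutions (3.4a)-(3.4c) from Example 3.6 turn the vertex axiom \(x_{v,1}+x_{v,2}+x_{v,3}=1\) into exactly the expression \(\bigl(\sum_{b}x_{i,b}\bigr)\bigl(\sum_{b^{\prime}}x_{i^{\prime},b^{\prime}}\bigr)-x_{i,3}x_{i^{\prime},1}\), i.e. an instance of the identity (3.6) discussed below.

```python
# Quick sympy check (not from the paper) of the substitution in Example 3.6.
import sympy as sp

xi = sp.symbols("xi1:4")    # x_{i,1}, x_{i,2}, x_{i,3}
xp = sp.symbols("xp1:4")    # x_{i',1}, x_{i',2}, x_{i',3}

xv1 = xi[0]*xp[1] + xi[1]*xp[1] + xi[2]*xp[1] + xi[2]*xp[2]   # (3.4a)
xv2 = xi[0]*xp[0]                                             # (3.4b)
xv3 = xi[0]*xp[2] + xi[1]*xp[0] + xi[1]*xp[2]                 # (3.4c)

substituted_axiom = xv1 + xv2 + xv3
target = sum(xi) * sum(xp) - xi[2]*xp[0]    # all colour pairs except (3, 1)
assert sp.expand(substituted_axiom - target) == 0
print("the substituted vertex axiom covers every colour pair except (3, 1)")
```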
Let us next discuss how the polynomials obtained from (2.2a)-(2.2c) after the substitution (3.2) can be derived in PC from (3.1a)-(3.1c). More precisely we argue that all substituted axioms can be derived from the equations
\[\sum_{b=1}^{k}x_{i,b}=1\enspace, \tag{3.5a}\]
\[x_{i,b}x_{i,b^{\prime}}=0\qquad\text{(for $b\neq b^{\prime}$),} \tag{3.5b}\]
\[\sum_{b^{\prime}=1}^{k}x_{i^{\prime},b^{\prime}}=1\enspace, \tag{3.5c}\]
\[x_{i^{\prime},b}x_{i^{\prime},b^{\prime}}=0\qquad\text{(for $b\neq b^{\prime}$),} \tag{3.5d}\]
\[x_{i,c}x_{i^{\prime},c^{\prime}}=0\enspace, \tag{3.5e}\]
which are just the same, except for variable renaming, as the pigeon axioms (3.1a) and (3.1b) for pigeons \(i\) and \(i^{\prime}\) plus the collision axiom (3.1c) for the hole which is the common neighbour of \(i\) and \(i^{\prime}\). In what follows we will need the equation
\[\sum_{b=1}^{k}\sum_{b^{\prime}=1}^{k}x_{i,b}x_{i^{\prime},b^{\prime}}-x_{i,c}x _{i^{\prime},c^{\prime}}-1=0 \tag{3.6}\]
which has the degree-\(2\) proof
\[\sum_{b=1}^{k}x_{i,b}\left(\sum_{b^{\prime}=1}^{k}x_{i^{\prime},b^{\prime}}-1 \right)+\left(\sum_{b=1}^{k}x_{i,b}-1\right)-x_{i,c}x_{i^{\prime},c^{\prime}}=0 \tag{3.7}\]
from (3.5a)-(3.5e).
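This algebraic identity is easy to confirm with sympy; a quick check (not part of the paper) for \(k=3\), taking \(c=c^{\prime}=1\) as the illustrative common-hole pair:

```python
# Quick sympy check (not from the paper) that the combination (3.7) equals (3.6).
import sympy as sp

k = 3
xi = sp.symbols(f"xi1:{k + 1}")     # x_{i,1}, ..., x_{i,k}
xp = sp.symbols(f"xp1:{k + 1}")     # x_{i',1}, ..., x_{i',k}
collision = xi[0] * xp[0]           # stands in for x_{i,c} * x_{i',c'} with c = c' = 1

proof_line = sum(xi) * (sum(xp) - 1) + (sum(xi) - 1) - collision        # (3.7)
target = sum(a * b for a in xi for b in xp) - collision - 1             # (3.6)
assert sp.expand(proof_line - target) == 0
print("(3.7) is a degree-2 derivation of (3.6)")
```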
We first consider axioms \(\sum_{j=1}^{k}x_{v,j}=1\) as in (2.2a) for vertices \(v\) that are not a pigeon vertex \(i\) or \(i^{\prime}\). It is straightforward to verify that such an axiom after substitution as in (3.2) becomes an equality of the form (3.6).3 If \(v\) is a pigeon vertex \(i\) or \(i^{\prime}\), then no substitution is made and we simply keep the axiom (3.5a) or (3.5c), respectively.
Footnote 3: Consider the example of substitution we saw in Example 3.6. The equation \(x_{v,1}+x_{v,2}+x_{v,3}=1\) after substitution becomes
\[(x_{i,1}x_{i^{\prime},2}+x_{i,2}x_{i^{\prime},2}+x_{i,3}x_{i^{\prime},2}+x_{i, 3}x_{i^{\prime},3})+(x_{i,1}x_{i^{\prime},1})+(x_{i,1}x_{i^{\prime},3}+x_{i,2} x_{i^{\prime},1}+x_{i,2}x_{i^{\prime},3})=1 \tag{3.8}\]
which is equivalent to \[(x_{i,1}+x_{i,2}+x_{i,3})\left(x_{i^{\prime},1}+x_{i^{\prime},2}+x_{i^{\prime},3}\right)-x_{i,3}x_{i^{\prime},1}=1\enspace. \tag{3.9}\]
Let us finally consider axioms of the form \(x_{u,j}x_{v,j}=0\) for \((u,v)\in E(G)\) as in (2.2c). There is no edge between \(i\) and \(i^{\prime}\) in our constructed graph, so for the size of the intersection between \(\{u,v\}\) and \(\{i,i^{\prime}\}\) it holds that \(0\leq\big{|}\{u,v\}\cap\{i,i^{\prime}\}\big{|}\leq 1\).
If \(\big{|}\{u,v\}\cap\{i,i^{\prime}\}\big{|}=0\), then after substitution the axiom (2.2c) becomes a sum of degree-\(4\) terms of the form \(x_{i,b_{1}}x_{i^{\prime},b^{\prime}_{1}}x_{i,b_{2}}x_{i^{\prime},b^{\prime}_{2}}\). Consider any such term. If either \(b_{1}\neq b_{2}\) or \(b^{\prime}_{1}\neq b^{\prime}_{2}\), then the term can be derived from (3.5b) or (3.5d). We claim that no term can have \(b_{1}=b_{2}\) and \(b^{\prime}_{1}=b^{\prime}_{2}\). To see this, note that this would imply that when performing substitution as in (3.2) the variables \(x_{u,j}\) and \(x_{v,j}\) both get expanded to a sum containing \(x_{i,b_{1}}x_{i^{\prime},b^{\prime}_{1}}\). But this would in turn mean that the colouring \(\chi_{(b_{1},b^{\prime}_{1})}\) that we fixed for the gadget \(G_{(i,i^{\prime})\neq(c,c^{\prime})}\) at the start of the proof assigned colours \(\chi_{(b_{1},b^{\prime}_{1})}(u)=\chi_{(b_{1},b^{\prime}_{1})}(v)\), which is impossible since there is an edge between \(u\) and \(v\) and \(\chi_{(b_{1},b^{\prime}_{1})}\) was chosen to be a legal colouring.
The remaining case is when we have intersection size \(\big{|}\{u,v\}\cap\{i,i^{\prime}\}\big{|}=1\). Without loss of generality because of symmetry we can assume that we have an axiom \(x_{u,j}x_{v,j}=0\) for \(u\notin\{i,i^{\prime}\}\) and \(v=i\). The axiom becomes after substitution a sum of terms of the form \(x_{i,b}x_{i^{\prime},b^{\prime}}x_{i,j}\). If for some term we would have \(b=j\), then \(\chi_{(j,b^{\prime})}\) would assign the same colour \(j\) to both \(u\) and \(i\). This is again impossible since \(\chi_{(j,b^{\prime})}\) is a legal colouring of the gadget by construction. Hence we have \(b\neq j\) and it follows that \(x_{i,b}x_{i^{\prime},b^{\prime}}x_{i,j}\) is derivable from (3.5b).
We are now almost done with the proof of Lemma 3.5. We have defined how to substitute variables \(x_{v,j}\) in (2.2a)-(2.2c) and have shown that the equations that we obtain after these substitutions can be derived from Equations (3.1a)-(3.1c) in low degree. The final issue that remains is to get rid of all vertices \(\gamma_{t}\) in the pre-colouring gadget in Figure 2 that are not members of any injectivity constraint gadget \(G_{(i,i^{\prime})\neq(c,c^{\prime})}\). For such variables the substitution is simply an assignment: we let \(x_{\gamma_{t},b}\mapsto 1\) when \(t\equiv b\pmod{k}\) and \(x_{\gamma_{t},b}\mapsto 0\) otherwise.4 This immediately satisfies all axioms (2.2a) and (2.2b) for these vertices, removing these axioms from the refutation. It remains to check the axioms (2.2c) for any pair of connected vertices \(\gamma_{t}\) and \(\gamma_{t^{\prime}}\). But by construction, if \(\gamma_{t}\) and \(\gamma_{t^{\prime}}\) are connected it holds that \(t\not\equiv t^{\prime}\pmod{k}\). Therefore, for every \(b\in[k]\) we have that either \(x_{\gamma_{t},b}\mapsto 0\) or \(x_{\gamma_{t^{\prime}},b}\mapsto 0\) holds, regardless of whether these two vertices are in some gadget \(G_{(i,i^{\prime})\neq(c,c^{\prime})}\) or not.
Footnote 4: Note that here the substitution for \(x_{\gamma_{t},b}\) where \(t\equiv b\pmod{k}\) is different from the one used for vertices that are members of some gadget \(G_{(i,i^{\prime})\neq(c,c^{\prime})}\) in (3.3a) and (3.3b). For variables \(x_{\gamma_{t},b}\) where \(t\not\equiv b\pmod{k}\) the substitution is the same as in (3.3c) and (3.3d), though.
To summarize what we have done, we started with any arbitrary refutation of (2.2a)-(2.2c) and substituted all variables with degree-\(2\) polynomials over the variables \(x_{i,j}\) for \(i\in[n]\). Then we proved that all these substituted axioms (and therefore the whole refutation) follow from Equations (3.5a)-(3.5c). It is straightforward to verify that, up to variable renaming, these axioms are nothing other than the FPHP axioms in (3.1a)-(3.1c). This concludes the proof of Lemma 3.5. Putting everything together, we can now state and prove our main theorem.
**Theorem 3.7**.: _For any integer \(k\geq 3\) there is an efficiently constructible family of graphs \(\{G_{n}\}_{n\in\mathbb{N}}\) with \(\mathrm{O}(k^{4}n)\) vertices of degree \(\mathrm{O}(k^{2})\) that do not possess \(k\)-colourings, but for which the corresponding system of polynomial equations (2.2a)-(2.2c) requires degree \(\Omega(n)\), and hence size \(\exp(\Omega(n))\), to be refuted in polynomial calculus._
Proof.: We need to build a family of bipartite graphs \(\{B_{n}\}_{n\in\mathbb{N}}\) with the properties needed to apply Theorem 3.1 and Proposition 3.4. This would yield a family of graphs \(\{G_{n}\}_{n\in\mathbb{N}}\) as in the theorem statement. Indeed, any sublinear degree refutation for \(k\)-colouring of \(G_{n}\) would imply, by Lemma 3.5, a sublinear degree refutation of the functional pigeonhole principle for \(B_{n}\), but this is impossible since \(B_{n}\) is constructed so as to trigger the lower bound in Theorem 3.1.
For every \(k\geq 3\) the starting point of our construction is the paper by Alon and Capalbo [1], which provides universal constants \(\alpha,\delta\) and simple graphs \(\{H_{n}\}_{n\in\mathbb{N}}\) of degree at most \(k\) such that \(H_{n}\) has \(n\) vertices and, for every set \(X\) of at most \(\alpha n\) vertices in \(H_{n}\), the boundary of \(X\) in \(V(H_{n})\setminus X\) has size at least \(\delta|X|\). Consider the bipartite graph obtained by taking two copies of the vertices of \(H_{n}\), one identified with the left side and one with the right side, and adding an edge from the left copy of \(u\) to the right copy of \(v\) for every edge \(\{u,v\}\in E(H_{n})\). Then take an arbitrary vertex \(\hat{u}\in V(H_{n})\) and remove the corresponding right
copy from the graph. The resulting graph, which we define to be \(B_{n}\), has \(n\) vertices on the left side, \(n-1\) vertices on the right side and degree at most \(k\) on both sides. Moreover, for every set \(I\) of left vertices of size at most \(\alpha n\) the boundary of \(I\) consists of the right copies of the corresponding boundary in \(H_{n}\), minus vertex \(\hat{u}\). Therefore \(B_{n}\) satisfies the properties needed to apply Theorem 3.1.
## 4 Polynomial-Length Proofs for \(k\)-Colouring Instances in Cutting Planes
Theorem 3.7 tells us that there are non-\(k\)-colourable graphs \(G_{n}\) for which it is impossible for polynomial calculus to certify non-\(k\)-colourability efficiently. As is clear from our reduction, the \(k\)-colouring formulas for these graphs are essentially obfuscated instances of the functional pigeonhole principle.
It is well-known that cutting planes can easily prove that pigeonhole principle formulas are unsatisfiable by just counting the number of pigeons and holes and deduce that the pigeons are too many to fit in the holes [11]. As we show in this section, the instances of \(k\)-colouring obtained via the reduction from FPHP also have short cutting planes refutations. What these refutations do is essentially to "de-obfuscate" the \(k\)-colouring formulas to recover the original functional pigeonhole principle instances, which can then be efficiently refuted.
We are going to describe our cutting planes refutation as a decision tree such that at every leaf we have a cutting planes refutation of the formula restricted by the partial assignment defined by the tree branch reaching that leaf. These refutations of the restricted versions of the formula can then be combined to yield a refutation of the original, unrestricted formula as stated in Lemma 4.1 and Proposition 4.2. The proofs of these statements are fairly routine, but we include them below for completeness.
We recall that as discussed in Section 2 we will use \(\sum_{i}a_{i}x_{i}\leq\gamma\) as an alias for \(\sum_{i}-a_{i}x_{i}\geq-\gamma\) and \(\sum_{i}a_{i}x_{i}=\gamma\) as an alias for the combination of \(\sum_{i}a_{i}x_{i}\leq\gamma\) and \(\sum_{i}a_{i}x_{i}\geq\gamma\). In particular, we will frequently write \(x=b\) for some variable \(x\) and \(b\in\{0,1\}\) as a shorthand for the pair of inequalities \(x\leq b\) and \(-x\leq-b\).
**Lemma 4.1**.: _Let \(b\in\{0,1\}\) and suppose that there exists a cutting planes derivation \((B_{1},\ldots,B_{L})\) in length \(L\) of the inequality \(\sum_{i}a_{i}x_{i}\leq\gamma\) from the system of inequalities \(\mathcal{S}\cup\{x=b\}\). Then for some \(K\in\mathbb{N}\) there is a CP derivation in length \(\mathrm{O}(L)\) of the inequality_
\[(-1)^{1-b}K\cdot(x-b)+\sum_{i}a_{i}x_{i}\leq\gamma \tag{4.1}\]
_from \(\mathcal{S}\)._
Proof.: We only consider the case of \(b=1\), since the case of \(b=0\) is essentially the same. Observe that \(x\leq 1\) is a boolean axiom, and so the only axiom lost when passing from \(\mathcal{S}\cup\{x=b\}\) to \(\mathcal{S}\) is \(-x\leq-1\). The proof is by forward induction over the derivation. We use the notation \(B_{t}=\sum_{i}a_{i}^{(t)}x_{i}\leq\gamma^{(t)}\) for the linear inequalities \(B_{t}\), \(t\in[L]\), in the original derivation. We show how to derive the inequalities
\[K^{(t)}\cdot(x-1)+\sum_{i}a_{i}^{(t)}x_{i}\leq\gamma^{(t)} \tag{4.2}\]
for \(t\in[L]\) with a constant number of rule applications per step, assuming that the preceding inequalities have already been derived. We proceed by cases depending on which rule was used to derive \(B_{t}\) in the original derivation.
If \(B_{t}\) is the axiom \(-x\leq-1\) we can substitute it with the axiom \(0\leq 0\), which can be written as \((x-1)-x\leq-1\). Notice that, technically speaking, \(0\leq 0\) is not an axiom of the cutting planes proof system, but it is convenient to allow such lines in our derivation and remove them in a final postprocessing step. If \(B_{t}\) is a variable axiom \(x\geq 0\) or an initial axiom \(A_{j}\in\mathcal{S}\), then it is already on the form (4.2) with \(K^{(t)}=0\).
If \(B_{t}\) is derived as a sum \(B_{t}=B_{t_{1}}+B_{t_{2}}\) for some \(1\leq t_{1}<t_{2}<t\), then by the induction hypothesis we have already derived
\[K^{(t_{1})}(x-1)+\sum_{i}a_{i}^{(t_{1})}x_{i}\leq\gamma^{(t_{1})} \tag{4.3}\]
and
\[K^{(t_{2})}(x-1)+\sum_{i}a_{i}^{(t_{2})}x_{i}\leq\gamma^{(t_{2})} \tag{4.4}\]
and the sum of these two inequalities is already on the form (4.2) with \(K^{(t)}=K^{(t_{1})}+K^{(t_{2})}\).
If \(B_{t}\) is derived by multiplication \(B_{t}=\alpha B_{t^{\prime}}\) for some \(1\leq t^{\prime}<t\), then by the induction hypothesis we have already derived
\[K^{(t^{\prime})}(x-1)+\sum_{i}a_{i}^{(t^{\prime})}x_{i}\leq\gamma^{(t^{\prime} )}\enspace, \tag{4.5}\]
and so \(B_{t}=\alpha B_{t^{\prime}}\) is on the form (4.2) with \(K^{(t)}=\alpha K^{(t^{\prime})}\).
If \(B_{t}\) is obtained by the application of the division rule to some previously derived inequality \(B_{t^{\prime}}\), then \(B_{t}\) has the form
\[\sum_{i}\frac{a_{i}^{(t^{\prime})}}{c}x_{i}\leq\left\lfloor\frac{\gamma^{(t^{ \prime})}}{c}\right\rfloor\enspace. \tag{4.6}\]
By the induction hypothesis we have already derived
\[K^{(t^{\prime})}(x-1)+\sum_{i}a_{i}^{(t^{\prime})}x_{i}\leq\gamma^{(t^{\prime })} \tag{4.7}\]
and we want to divide this inequality by \(c\). In order to do so, however, we need to ensure that all coefficients in (4.7) are divisible by \(c\). Since by assumption the application of the division rule in (4.6) was legal, we have that \(c\) divides all of \(a_{1}^{(t^{\prime})},\ldots,a_{n}^{(t^{\prime})}\). Choose the smallest \(K^{\prime}\) divisible by \(c\) such that \(K^{\prime}-K^{(t^{\prime})}\geq 0\), multiply \(x\leq 1\), which we can also write as \(x-1\leq 0\), by \(K^{\prime}-K^{(t^{\prime})}\) and then add to (4.7) to obtain
\[K^{\prime}(x-1)+\sum_{i}a_{i}^{(t^{\prime})}x_{i}\leq\gamma^{(t^{\prime})}\enspace. \tag{4.8}\]
In order to apply division we have to collect all constant terms on the right-hand side, meaning that we rewrite (4.8) as
\[K^{\prime}x+\sum_{i}a_{i}^{(t^{\prime})}x_{i}\leq\gamma^{(t^{\prime})}+K^{ \prime}\enspace, \tag{4.9}\]
and division now yields
\[\frac{K^{\prime}}{c}x+\sum_{i}\frac{a_{i}^{(t^{\prime})}}{c}x_{i}\leq\left \lfloor\frac{\gamma^{(t^{\prime})}+K^{\prime}}{c}\right\rfloor \tag{4.10}\]
which we can write as
\[\frac{K^{\prime}}{c}(x-1)+\sum_{i}\frac{a_{i}^{(t^{\prime})}}{c}x_{i}\leq \left\lfloor\frac{\gamma^{(t^{\prime})}+K^{\prime}}{c}\right\rfloor=\left\lfloor \frac{\gamma^{(t^{\prime})}}{c}\right\rfloor \tag{4.11}\]
since \(c\) divides \(K^{\prime}\). The inequality (4.11) is on the form (4.2), and so we are done with our analysis of the division step.
By the induction principle we obtain a derivation in length \(\operatorname{O}(L)\) of (4.1), as claimed. As the last step we remove all occurrences of lines \(0\leq 0\). It is clear that any addition of such an inequality to another inequality can simply be ignored.
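To make the bookkeeping of the division step explicit, here is a small Python sketch (purely illustrative and not part of the proof; the function name and the toy numbers are ours) that carries the coefficient of \((x-1)\) through one application of the division rule, following (4.7)-(4.11).

```python
# Illustrative sketch: carrying the K*(x-1) term through one division step.
from math import floor, ceil

def divide_with_slack(K, coeffs, gamma, c):
    """Divide K*(x-1) + sum_i coeffs[i]*x_i <= gamma by c, assuming c | coeffs[i]."""
    assert all(a % c == 0 for a in coeffs)
    # choose the smallest K' >= K that is divisible by c, cf. the text after (4.7)
    Kp = c * ceil(K / c)
    # adding (K'-K) copies of the axiom x - 1 <= 0 keeps the inequality valid; then divide by c
    new_K = Kp // c
    new_coeffs = [a // c for a in coeffs]
    new_gamma = floor((gamma + Kp) / c) - new_K   # equals floor(gamma / c), cf. (4.11)
    return new_K, new_coeffs, new_gamma

# toy example: 5*(x-1) + 3*x1 + 6*x2 <= 7, divided by c = 3
K, coeffs, gamma, c = 5, [3, 6], 7, 3
print(divide_with_slack(K, coeffs, gamma, c))   # -> (2, [1, 2], 2), and floor(7/3) = 2
```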
We next show how Lemma 4.1 can be used to piece together refutations of restricted versions of a \(k\)-colouring formula to one refutation of the unrestricted formula.
**Proposition 4.2**.: _Let \(G\) be a graph and \(k\geq 2\) be a positive integer, and let \(\mathcal{S}\) be the set of inequalities (2.13a)-(2.13c) for \(G\) and \(k\). If for a fixed set of vertices \(u_{1},u_{2},\ldots,u_{\ell}\) in \(G\) and every choice of colours \((c_{1},c_{2},\ldots,c_{\ell})\in[k]^{\ell}\) for these vertices there is a CP refutation in length at most \(L\) of the set of inequalities \(\mathcal{S}\cup\{x_{u_{1},c_{1}}=1,x_{u_{2},c_{2}}=1,\ldots,x_{u_{\ell},c_{\ell}}=1\}\), then there is a CP refutation of \(\mathcal{S}\) in length \(k^{\mathrm{O}(\ell)}\cdot L\)._
Proof.: We prove the claim by induction on \(\ell\). If \(\ell=0\) then the statement is vacuous. For \(\ell>0\) we assume that we can derive \(1\leq 0\) in length at most \(L\) from \(\mathcal{S}\cup\{x_{u_{1},c_{1}}=1,x_{u_{2},c_{2}}=1,\ldots,x_{u_{\ell},c_{\ell}}= 1,x_{u,c}=1\}\) for a fixed vertex \(u\) and every \(c\in[k]\).
For each \(c\in[k]\) we use Lemma 4.1 to construct a CP derivation of the inequality \(K_{c}(x_{u,c}-1)+1\leq 0\) from \(\mathcal{S}\cup\{x_{u_{1},c_{1}}=1,x_{u_{2},c_{2}}=1,\ldots,x_{u_{\ell},c_{ \ell}}=1\}\) in length \(\kappa L\), where \(\kappa\) is a universal constant. By dividing each such inequality by \(K_{c}\) we get \(x_{u,c}\leq 0\) for all \(c\in[k]\). By summing all these inequalities with the initial axiom \(\sum_{c}x_{u,c}\geq 1\) we obtain \(0\geq 1\). The total length of this refutation is \(k\kappa L+2k\). The proposition follows by the induction principle.
We can now state the main result of this section, namely that the hard \(k\)-colouring instances for polynomial calculus constructed in Section 3 are easy for cutting planes.
**Proposition 4.3**.: _Let \(B\) be a left-regular bipartite graph with left degree \(k\) and bounded right degree \(\mathrm{O}(k)\), and consider the graph \(G=G(B)\) in Proposition 3.4. If there is no complete matching of the left-hand side of \(B\) into the right-hand side, then the set of inequalities (2.13a)-(2.13c) encoding the \(k\)-colouring problem on \(G\) has a cutting planes refutation in length \(k^{\mathrm{O}(k)}\cdot|V(B)|^{\mathrm{O}(1)}\)._
Proof sketch.: Consider the first \(k\) vertices \(\gamma_{1},\ldots,\gamma_{k}\) in the pre-colouring gadget in \(G\) as depicted in Figure 2, which form a \(k\)-clique. For every partial colouring \((c_{1},c_{2},\ldots,c_{k})\in[k]^{k}\) of this \(k\)-clique we build a cutting planes refutation of
\[\mathcal{S}\cup\{x_{\gamma_{1},c_{1}}=1,x_{\gamma_{2},c_{2}}=1,\ldots,x_{ \gamma_{k},c_{k}}=1\}\enspace. \tag{4.12}\]
The result then follows by combining all of these refutations using Proposition 4.2.
Fix a choice of colours \((c_{1},c_{2},\ldots,c_{k})\in[k]^{k}\). Notice that if some colour occurs twice in this tuple, then we can derive contradiction in length \(\mathrm{O}(1)\) from (4.12) since one of the edge axioms (2.13c) is violated. Suppose therefore that \((c_{1},c_{2},\ldots,c_{k})\) is a permutation of \([k]\). We will construct a CP refutation of (4.12) in length \(k^{\mathrm{O}(k)}\cdot|V(B)|^{\mathrm{O}(1)}\).
The system of inequalities \(\mathcal{S}\) is symmetric with respect to permutations of the colour indices, so without loss of generality we focus on giving a refutation for
\[\mathcal{S}\cup\{x_{\gamma_{1},1}=1,x_{\gamma_{2},2}=1,\ldots,x_{\gamma_{k},k }=1\}\enspace. \tag{4.13}\]
The equations \(\{x_{\gamma_{1},1}=1,x_{\gamma_{2},2}=1,\ldots,x_{\gamma_{k},k}=1\}\) taken together with \(\mathcal{S}\) allow us to efficiently infer \(x_{\gamma_{i},i\bmod k}=1\) for all the vertices \(\gamma_{i}\), \(i\in[M]\), in the gadget in Figure 2 (where we recall from Section 2 that we identify colours \(0\) and \(k\) when convenient). The resulting set of equalities and inequalities \(\mathcal{S}\cup\{x_{\gamma_{i},i\bmod k}=1\mid i\in[M]\}\) is essentially an encoding of the \(k\)-colouring problem for the partially colored graph \(\widehat{G}\) in Lemma 3.3 consisting of the gadgets in Figure 1. Indeed, since the partial assignment \(\{x_{\gamma_{1},1}=1,x_{\gamma_{2},2}=1,\ldots,x_{\gamma_{k},k}=1\}\) forces the colours of all vertices \(\gamma_{i}\), \(i\in[M]\), in Figure 2, this gives us back the pre-coloured vertices in the gadgets in Figure 1.
As argued in (the proof of) Lemma 3.3, \(\widehat{G}\) is the union of at most \(\mathrm{O}\big{(}k^{2}|V(B)|\big{)}\) injectivity constraint gadgets \(G_{(i,i^{\prime})\not=(c,c^{\prime})}\) that forbid that pigeons \(i\) and \(i^{\prime}\), taking their \(c\)th and \(c^{\prime}\)th edges respectively, collide in some hole \(j\). If we introduce the alias \(p_{i,j}\) for \(x_{i,c}\), where \(j\) is the hole to which the \(c\)th edge from pigeon \(i\) leads, then our goal can be described as deriving the pigeonhole axiom \(p_{i,j}+p_{i^{\prime},j}=x_{i,c}+x_{i^{\prime},c^{\prime}}\leq 1\) from the set of inequalities of the corresponding gadget \(G_{(i,i^{\prime})\not=(c,c^{\prime})}\). We will see shortly how to do so in length \(k^{\mathrm{O}(k)}\). Once we extract these pigeonhole inequalities, we observe that the collection of these inequalities together with the inequalities (2.13a) forms a cutting plane encoding
\[\sum_{j\in J(i)}p_{i,j}\geq 1\qquad i\in I, \tag{4.14a}\]
\[p_{i,j}+p_{i^{\prime},j}\leq 1\qquad i\neq i^{\prime}\in I,\;j\in J(i)\cap J(i^{\prime}). \tag{4.14b}\]
of the graph pigeonhole principle on the bipartite graph \(B\) with left-hand side \(I\) and right-hand side \(J\). Such a system of inequalities has a cutting plane refutation in length \(\mathrm{O}\big{(}|V(B)|^{3}\big{)}\)[CCT87].
In order to derive \(x_{i,c}+x_{i^{\prime},c^{\prime}}\leq 1\) we consider the inequalities involving vertices of \(G_{(i,i^{\prime})\neq(c,c^{\prime})}\) plus the equations \(x_{i,c}=1\) and \(x_{i^{\prime},c^{\prime}}=1\). By Claim 3.2 this is an unsatisfiable system of inequalities of size \(\mathrm{O}(k)\). By the refutational completeness of cutting planes, and using Lemma 4.1 twice, we obtain a derivation of \(K_{1}(x_{i,c}-1)+K_{2}(x_{i^{\prime},c^{\prime}}-1)\leq-1\) in length \(\exp(\mathrm{O}(k))\). Adding multiples of axioms on the form \(x-1\leq 0\) we get the inequality \(K(x_{i,c}-1)+K(x_{i^{\prime},c^{\prime}}-1)\leq-1\) for some positive integer \(K\), and division by \(K\) yields \(x_{i,c}+x_{i^{\prime},c^{\prime}}\leq 1\).
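As a purely arithmetical remark on the last division (our own illustration), rewriting \(K(x_{i,c}-1)+K(x_{i^{\prime},c^{\prime}}-1)\leq-1\) as \(Kx_{i,c}+Kx_{i^{\prime},c^{\prime}}\leq 2K-1\) and dividing by \(K\) rounds the right-hand side down to \(1\) for every positive integer \(K\):

```python
# floor((2K - 1)/K) = 1 for every positive integer K, hence x_{i,c} + x_{i',c'} <= 1
for K in range(1, 10):
    assert (2 * K - 1) // K == 1
print("rounding check passed for K = 1..9")
```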
We have shown how to derive contradiction in length \(k^{\mathrm{O}(k)}|V(B)|^{\mathrm{O}(1)}\) for any given colouring of the vertices \(\gamma_{1},\ldots,\gamma_{k}\). We take such refutations for all \(k^{k}\) possible ways of assigning colours to these vertices and join them together using Proposition 4.2 into a refutation of the original, unrestricted formula. The proposition follows.
## 5 Concluding Remarks
In this work we exhibit explicitly constructible graphs which are non-\(k\)-colourable but which require large degree in polynomial calculus to certify this fact for the canonical encoding of the \(k\)-colouring problem into polynomial equations over \(\{0,1\}\)-valued variables. This, in turn, implies that the size of any polynomial calculus proof of non-\(k\)-colourability for these graphs must be exponential measured in the number of vertices.
Our degree lower bound also applies to a slightly different encoding with primitive \(k\)th roots of unity used in [10, 11] to build \(k\)-colouring algorithms based on Hilbert's Nullstellensatz. These algorithms construct certificates of non-\(k\)-colourability by solving linear systems of equations over the coefficients of all monomials up to a certain degree. The current paper yields explicit instances for which this method needs to consider monomials up to a very large degree, and therefore has to produce a linear system of exponential size. This answers an open question raised in, for example, [10, 11, 12].
This leads to an important observation, however. The degree lower bound applies to both polynomial encodings discussed above, but the size lower bound only applies to the encoding using \(\{0,1\}\)-valued variables. It is still conceivable that proofs of non-\(k\)-colourability in the roots-of-unity encoding can be small although they must have large degree. This raises the following question.
**Open Problem 1**.: _Is there a family of non-\(3\)-colourable graphs such that any polynomial calculus proof of non-\(3\)-colourability using the roots of unity encoding must require large size?_
If the answer to the question is positive, then no matter how we choose the monomials to consider for the linear system construction in [10, 11], the size of the system will have to be large.
To further reduce the size of the linear system, the algorithms in [10, 11] make use of the symmetries in the graphs. It is a natural question how much such an approach could help for our non-\(k\)-colourable instances. It seems plausible that if we apply our construction to a randomly generated bipartite graph with appropriate parameters, then the final graph will not have many symmetries except for the local symmetries inside the gadgets. In that case our lower bound might apply for the improved version of the algorithm as well.
The work in [1] addresses the proof complexity of refuting constraint satisfaction problems (CSPs) and shows that standard reductions between CSPs preserve hardness, to some extent, and in particular degree. These reductions are able to translate between various encodings, but they need not preserve monomial size.
One limitation of our result is that our hard graphs are very specific, and arguably somewhat artificial. For the weaker resolution proof system an average-case exponential lower bound has been shown for Erdős–Rényi random graphs \(\mathcal{G}(n,p)\) where \(p\) is slightly above the threshold value \(p_{k}(n)\) at which the graph becomes highly likely to be non-\(k\)-colourable [1]. It is natural to ask whether these instances are hard for polynomial calculus too.
**Open Problem 2**.: _Consider a random graph sampled according to \(\mathcal{G}(n,p)\) with \(p>p_{k}(n)\), so that the
graph is non-\(k\)-colourable with high probability. Does polynomial calculus require large degree to certify non-\(k\)-colourability of such graphs with high probability?_
Some progress on this open problem may come from the recent work [14], which shows degree lower bounds in Nullstellensatz for large classes of graphs, relying just on the girth.
In this paper, we also show that the graph colouring instances that are provably hard for polynomial calculus are very easy for the cutting planes proof system. It does not seem very likely that graph colouring would be an easy problem for cutting planes, however, and so it would be interesting to find explicit candidates for hard instances for cutting planes, even if proving the actual lower bounds may be very hard. This question is also interesting for the Lasserre/Sums-of-Squares proof system. Our instances seem likely to be easy for Lasserre, since they are based on the hardness of the pigeonhole principle and this combinatorial principle is easy for Lasserre.
**Open Problem 3**.: _Find candidates for explicit hard instances of non-\(3\)-colourability for cutting planes and for Lasserre/Sums-of-squares proof systems, and then prove formally that these instances are indeed hard._
A final, intriguing observation, which is somewhat orthogonal to the rest of this discussion, is that even though the graph colouring instances in our paper are easy for cutting planes, results from the _Pseudo-Boolean Competition 2016_ indicate that they are quite hard in practice for state-of-the-art pseudo-Boolean solvers [13]. This is even more interesting considering that the cutting planes refutations that we construct have small rank (i.e., the maximum number of applications of the division rule along any path in the proof graph is small).
## Acknowledgements
We are grateful to Mladen Miksa and Alexander Razborov for stimulating discussions and helpful feedback during various stages of this project. We would also like to thank Jan Elffers for running experiments with pseudo-Boolean solvers on instances obtained from our reduction from functional pigeonhole principle formulas to graph colouring, demonstrating that these formulas are hard in practice. Last but not least, a big thanks to the anonymous CCC reviewers, who helped us catch some typos and bugs that really should not have been there, and whose suggestions helped improve the exposition considerably.
Part of this research was done while the first author was at KTH Royal Institute of Technology funded by the European Research Council (ERC) under the European Union's Seventh Framework Programme (FP7/2007-2013) / ERC grant agreement no. 279611. Later work at Universitat Politècnica de Catalunya for this project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement ERC-2014-CoG 648276 AUTAR). The second author was supported by the European Research Council under the European Union's Seventh Framework Programme (FP7/2007-2013) / ERC grant agreement no. 279611, by the Swedish Research Council grants 621-2012-5645 and 2016-00782, and by the Independent Research Fund Denmark grant 9040-00389B.
**Finite temperature spin diffusion in the Hubbard model in the strong coupling limit**

Oleksandr Gamayun, Arthur Hutsalyuk, Balázs Pozsgay, Mikhail B. Zvonarev
## Abstract
**We investigate finite temperature spin transport in one spatial dimension by considering the spin-spin correlation function of the Hubbard model in the limiting case of infinitely strong repulsion. We find that in the absence of bias the transport is diffusive, and derive the spin diffusion constant. Our approach is based on asymptotic analysis of a Fredholm determinant representation, which is entirely analytic and free of phenomenological assumptions of Generalized Hydrodynamics.**
## 1 Introduction
Quantum transport in the integrable systems attracts ever increasing attention of the physics community [1]. Distinctive features of these systems - a completely elastic and factorized (two-body reducible) scattering, and a presence of an infinite number of conservation laws - combined with basic principles of hydrodynamics resulted in the formulation of the Generalized Hydrodynamics (GHD) [2, 3]. In less than a decade, the GHD evolved into a matured field of research [4]. It offers a systematic treatment of ballistic transport in integrable models [2, 3, 5, 6], an example being calculation of finite temperature Drude weights [5, 7, 8], until then requiring case-by-case approach [9, 10, 11]. The analysis of non-ballistic (that is, diffusive) transport, along with computation of diffusion constants, can also be tackled within the GHD framework [12, 13, 14, 15]. This includes treating anomalous diffusion found in systems possessing special nonabelian symmetries, reviewed in Ref. [16]. The use of the GHD for systems quenched far from equilibrium is also possible [17].
The GHD is an asymptotically exact theory aimed at capturing the dynamics at large distances and past a long-time evolution. It is desirable to complement its findings with first-principle microscopic calculations, making use of exact solvability of integrable models. This has been done for current mean values [18, 19, 20] and for the Drude weights in some cases [11, 21]. As a general rule, however, diffusion constants have not been extracted from exact solutions of many-body integrable quantum systems so far. The reason is that the structure of the exact (Bethe-ansatz) wave functions is complicated, and getting closed-form tractable expressions
for dynamical correlation functions requires extremely involved resummation procedures for the matrix elements [22]. For example, there exist expressions for dynamical correlation functions in the Heisenberg spin-1/2 chain [23, 24, 25, 26, 27, 28, 29, 30], but its finite temperature diffusion constants have not yet been found in this manner.
A way to proceed further is to shift the focus to models having a particularly simple Bethe Ansatz solution and yet non-trivial interparticle interactions. In the case of classical cellular automata such selected models include the Rule54 model [31], box-ball systems [32, 33], and a particle hopping model with two-color excitations [34, 35, 36]. A number of physical quantities (relevant also to the transport of conserved quantities) were derived exactly in these models, starting from the fundamental equations of motion. In the case of quantum spin chains promising candidates are their large coupling limits. High temperature transport of the Heisenberg spin chain in the large anisotropy limit is described by the folded XXZ model [37, 38, 39]. Another useful model with infinite dimensional local spaces is the infinite coupling limit of the \(q\)-boson model (the phase model), whose real-time dynamics is tractable within the Bethe ansatz approach [40, 41, 42]. Finally, predictions of GHD for the one-dimensional Hubbard model can be tested in the limiting case of infinitely strong repulsion. Studying the spin transport and deriving exact analytic formulas for the diffusion constant in this limiting case, free of any assumptions, is the subject of our work.
The Hubbard model is one of the basic models in physics. It is exactly solvable in one spatial dimension by the Bethe ansatz [43, 22, 44], providing full information about the many-body excitation spectrum and collective phenomena, such as spin-charge separation. The integrability of the one-dimensional Hubbard model is proven within the Yang-Baxter framework using the \(R\)-matrix of Shastry [45, 46]. The exact solution involves an interplay of fermion and spin degrees of freedom, and is consequently more complicated than those for some other well known integrable models, including the Heisenberg spin-1/2 chain, and the \(q\)-boson model. Tractable analytic results for correlation functions at and far from equilibrium exist for rather particular observables, and initial conditions [47]. The GHD solution of the Hubbard model has been worked out in Refs. [48, 49, 50, 51], and is not yet complemented by the use of the exact solution for dynamical correlations.
The Hubbard model in the limiting case of infinitely strong repulsion, known as the \(t-0\) model or the restricted hopping model, has been discussed extensively in the literature [44, 52] (as well as its bosonic counterpart, the Maassarani-Mathieu spin chain, also known as the \(SU(3)\) XX model [53, 54, 55]). Its coordinate Bethe Ansatz solution has been used to calculate finite temperature correlation functions in Ref. [56]. An alternative representation for its solution, further elaborated in the works [57, 58], has provided grounds for the investigation of real time dynamics in Refs. [59, 60] followed by [61, 62].
In the infinite coupling limit double occupancies of the Hubbard model are forbidden: they are projected out of the Hilbert space. As a result, the \(t-0\) model has a three-dimensional local Hilbert space: the local basis states are the vacuum and the two different single-particle states, corresponding to the original Hubbard fermions with the two different spin orientations. The special dynamical properties of the \(t-0\) model follow from the projection procedure and the allowed hopping terms of the original Hamiltonian: one can easily show that the spatial ordering of the spins of the electrons is not changed during time evolution, and that the time evolution of the positions of the electrons does not depend on the spin configuration. These dynamical phenomena were called "single-file property" and "charge inertness" in [63]. These properties underlie the exact solvability of the real-time dynamics of the model.
In this work we focus on the finite temperature spin-spin correlation function in the
\(t-0\) model. We start with the derivation of the exact results using spin-charge separation. The correlation function can be presented as an integral of Fredholm determinants, for which we perform the asymptotic analysis using a heuristic method of effective form factors [64, 65, 66]. Performing a saddle point analysis of the obtained expressions, we observe that, depending on the initial profile, the correlation function in question contains both ballistic and diffusive parts. From these expressions we extract the values of the Drude weight and the diffusion constant, respectively. The appendices contain all necessary technical derivations. Our results agree with those obtained from GHD. At infinite temperature the value of the diffusion constant agrees with the one given in [67]. This is the first time that finite temperature spin diffusion is treated in an interacting lattice model via exact formulas valid in the thermodynamic limit at all times and distances. It is also the first quantum mechanical extension of the results of [34, 35, 36] regarding models with the "single-file" property.
## 2 Model and spin-charge separation
In this section we introduce the model and a basis separating spin and charge excitations. This basis is well suited to calculate dynamical correlation functions of the model exactly, which we do in section 3.
We consider the Hubbard model describing interacting spin-\(1/2\) fermions on a one-dimensional lattice. The Hamiltonian reads
\[H=-\sum_{\begin{subarray}{c}j=-\infty\\ \alpha=\uparrow,\downarrow\end{subarray}}^{\infty}\left(\psi_{j\alpha}^{ \dagger}\psi_{j+1\alpha}+\psi_{j+1\alpha}^{\dagger}\psi_{j\alpha}\right)-hN+ 2BS_{z}+U\sum_{j=-\infty}^{\infty}n_{j\uparrow}n_{j\downarrow}. \tag{1}\]
The fermionic creation, \(\psi_{j\alpha}^{\dagger}\), and annihilation, \(\psi_{j\alpha}\), operators (\(\alpha\) is a spin index, \(\alpha=\uparrow,\downarrow\)) satisfy canonical equal-time anti-commutation relations,
\[\psi_{j\alpha}\psi_{j^{\prime}\alpha^{\prime}}^{\dagger}+\psi_{j^{\prime} \alpha^{\prime}}^{\dagger}\psi_{j\alpha}=\delta_{jj^{\prime}}\delta_{\alpha \alpha^{\prime}}, \tag{2}\]
where
\[\delta_{ab}=\begin{cases}1&a=b\\ 0&a\neq b\end{cases} \tag{3}\]
is the Kronecker delta symbol. The operator \(n_{j\alpha}=\psi_{j\alpha}^{\dagger}\psi_{j\alpha}\) is the density operator for the spin-up (\(\alpha=\uparrow\)) and spin-down (\(\alpha=\downarrow\)) fermions, and
\[n_{j}=n_{j\uparrow}+n_{j\downarrow}, \tag{4}\]
counts the total density of fermions on site \(j\). The local spin vector \(\mathbf{s}(j)=(s_{x}(j),s_{y}(j),s_{z}(j))\) is defined as
\[\mathbf{s}(j)=\frac{1}{2}\left(\psi_{j\uparrow}^{\dagger}\quad\psi_{j \downarrow}^{\dagger}\right)\boldsymbol{\sigma}\begin{pmatrix}\psi_{j\uparrow }\\ \psi_{j\downarrow}\end{pmatrix}, \tag{5}\]
where
\[\boldsymbol{\sigma}=(\sigma_{x},\sigma_{y},\sigma_{z}) \tag{6}\]
is the vector composed of the three Pauli matrices. In particular, \(s_{z}(j)=(n_{j\uparrow}-n_{j\downarrow})/2\). The spin-ladder operators \(s_{\pm}(j)=s_{x}(j)\pm is_{y}(j)\) flip the \(z\) component of a local spin, and read
\(s_{+}(j)=\psi^{\dagger}_{j\uparrow}\psi_{j\downarrow}\) and \(s_{-}(j)=\psi^{\dagger}_{j\downarrow}\psi_{j\uparrow}\), respectively. The total number of particles,
\[N=\sum_{j=-\infty}^{\infty}n_{j}, \tag{7}\]
and the \(z\) projection of the total spin,
\[S_{z}=\sum_{j=-\infty}^{\infty}s_{z}(j), \tag{8}\]
are conserved quantities.
In the present work, we focus on the infinitely strong repulsion limit, \(U\to\infty\), of the Hubbard model (1). It would cost infinite energy to put two particles on any site in this limit, due to the on-site interaction term \(U\sum_{j}n_{j\uparrow}n_{j\downarrow}\) in Eq. (1). We thus arrive at the no double occupancy (NDO) constraint, which can be fulfilled by applying the projection operator
\[P=\prod_{j=-\infty}^{\infty}(1-n_{j\uparrow}n_{j\downarrow}) \tag{9}\]
to the Hamiltonian (1). This results in the \(t-0\) model [68],
\[H=P\left[-\sum_{\begin{subarray}{c}j=-\infty\\ \alpha=\uparrow,\downarrow\end{subarray}}^{\infty}(\psi^{\dagger}_{j\alpha} \psi_{j+1\alpha}+\psi^{\dagger}_{j+1\alpha}\psi_{j\alpha})-hN+2BS_{z}\right]P. \tag{10}\]
Each site of the lattice can now be either empty, or occupied by one spin-up or one spin-down fermion.
Any eigenstate of the Hamiltonian (1) can be constructed of basis states
\[|\mathbf{j},\boldsymbol{\alpha}\rangle=\psi^{\dagger}_{j_{1},\alpha_{1}} \ldots\psi^{\dagger}_{j_{N},\boldsymbol{\alpha}_{N}}|0\rangle,\qquad j_{1} \leq j_{2}\cdots\leq j_{N}, \tag{11}\]
where \(|0\rangle\) is the vacuum state, which contains no fermions. Only those states satisfying
\[P[\mathbf{j},\boldsymbol{\alpha}\rangle\neq 0, \tag{12}\]
which is equivalent to the NDO constraint
\[j_{1}<j_{2}\cdots<j_{N} \tag{13}\]
can be used to construct the eigenstates of the Hamiltonian (10). Taking the coordinates \(j_{1},\ldots,j_{N}\) and the spin orientations \(\alpha_{1},\ldots,\alpha_{N}\) from a state (11) satisfying the NDO constraint we define the state
\[|f\rangle=c^{\dagger}_{j_{1}}\ldots c^{\dagger}_{j_{N}}|0\rangle \tag{14}\]
made of spinless fermions (\(c^{\dagger}_{j}\) creates, and \(c_{j}\) annihilates a fermion on site \(j\)), and the state
\[|\ell\rangle=|\alpha_{1},\ldots,\alpha_{N}\rangle \tag{15}\]
of a spin-\(1/2\) chain of length \(N\) uniquely. The reverse is also true: having defined \(|f\rangle\neq 0\) by Eq. (14) and \(|\ell\rangle\) by Eq. (15) one can reconstruct \(|{\bf j},\mathbf{\alpha}\rangle\), which will satisfy the NDO constraint. Thus, we can write
\[|{\bf j},\mathbf{\alpha}\rangle=|f\rangle\otimes_{f}|\ell\rangle,\qquad j_{1}<j_{2 }<\cdots<j_{N}. \tag{16}\]
The subscript \(f\) in \(\otimes_{f}\) indicates that the tensor product \(\otimes\) is equipped with a constraint: the number of spinless fermions in the charge part of the wave function, \(|f\rangle\), determines the number of sites of the spin chain in the spin part of the wave function, \(|\ell\rangle\).
The operators \(\psi^{\dagger}_{j\alpha}\) and \(\psi_{j\alpha}\) can be expressed via \(c^{\dagger}_{j}\), \(c_{j}\), and the local spin operators
\[\mathbf{\ell}(m)=1\otimes\cdots\otimes\frac{\mathbf{\sigma}(m)}{2}\otimes\cdots \otimes 1, \tag{17}\]
acting onto \(|\ell\rangle\), where \(\mathbf{\sigma}\) is defined by Eq. (6). The explicit formulas are given in Ref. [52]. The local spin operators (5) preserve the fermion number \(N\), and their representation is consequently simpler [69]. Its key ingredient is the counting operator
\[\mathcal{N}_{j}=\sum_{a=-\infty}^{j}n_{a}. \tag{18}\]
The value of \(\mathcal{N}_{j}\) increases by one each time an occupied lattice site is encountered as \(j\) runs from minus infinity to infinity. The local density operator (4) expressed via the spinless fermion operators reads
\[n_{j}=c^{\dagger}_{j}c_{j}. \tag{19}\]
The local spin operator can be represented via \(\mathbf{\ell}(m)\) and \(n_{j}\) using Eq. (18):
\[{\bf s}(j)=n_{j}\sum_{m=-\infty}^{\infty}\mathbf{\ell}(m)\delta_{m,\mathcal{N}_{j}}. \tag{20}\]
Let us illustrate how Eq. (20) works for \(|\Psi\rangle=\psi^{\dagger}_{1,\alpha_{1}}\psi^{\dagger}_{5,\alpha_{2}}|0\rangle\). Following Eq. (16) we write \(|\Psi\rangle=c^{\dagger}_{1}c^{\dagger}_{5}|0\rangle\otimes_{f}|\alpha_{1}, \alpha_{2}\rangle\). Applying \({\bf s}(j)\) to \(|\Psi\rangle\) we get zero for \(j\) other than one and five, because of vanishing \(n_{j}\) (naturally, there are no spins at the lattice sites not occupied by the fermions). We have \(n_{j}=1\) for sites one and five; \(\mathcal{N}_{1}=1\) and \(\mathcal{N}_{5}=2\) imply \({\bf s}(1)=\mathbf{\ell}(1)\) and \({\bf s}(5)=\mathbf{\ell}(2)\), respectively, and the action of the operator \(\mathbf{\ell}\) is defined by Eq. (17).
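To make the bookkeeping of Eq. (20) explicit, the following minimal Python sketch (our own illustration, not part of the derivation) reproduces the example above: given the set of occupied sites, it returns the spin-chain site \(m=\mathcal{N}_{j}\) on which \(\mathbf{s}(j)\) acts, or nothing if site \(j\) is empty.

```python
# Minimal illustration of Eq. (20): s(j) acts on spin-chain site N_j whenever
# site j is occupied, where N_j counts the fermions on sites <= j, Eq. (18).
def spin_chain_index(occupied_sites, j):
    """Return m such that s(j) = l(m), or None if site j is empty (n_j = 0)."""
    if j not in occupied_sites:
        return None
    return sum(1 for a in occupied_sites if a <= j)   # this is N_j

occupied = {1, 5}           # the state psi^+_{1,a1} psi^+_{5,a2}|0> of the example
for j in range(0, 7):
    print(j, spin_chain_index(occupied, j))
# sites 1 and 5 map to spin-chain sites 1 and 2; all other sites give None
```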
We rewrite Eq. (20) as
\[{\bf s}(j)=\sum_{m=-\infty}^{\infty}\int_{-\pi}^{\pi}\frac{d\lambda}{2\pi}\,n _{j}e^{i\lambda(\mathcal{N}_{j}-m)}\mathbf{\ell}(m) \tag{21}\]
using the integral representation of the Kronecker delta symbol. The Hamiltonian (10) expressed via \(c^{\dagger}_{j}\), \(c_{j}\), and \(\mathbf{\ell}(m)\) reads
\[H=-\sum_{j=-\infty}^{\infty}(c^{\dagger}_{j}c_{j+1}+c^{\dagger}_{j+1}c_{j})-hN +2BS_{z}, \tag{22}\]
where \(N\) is the number operator (7) written via the spinless fermion density (19). We have
\[S_{z}|\Psi\rangle=|f\rangle\otimes_{f}L_{z}|\ell\rangle,\qquad L_{z}=\sum_{m= 1}^{N}\ell_{z}(m). \tag{23}\]
The eigenbasis of the Hamiltonian (22) is formed by the vectors \(\left|\mathbf{k}\right\rangle\otimes_{f}\left|\ell\right\rangle\), where
\[\left|\mathbf{k}\right\rangle=c_{k_{1}}^{\dagger}\ldots c_{k_{N}}^{\dagger}|0\rangle \tag{24}\]
are the momentum-space components of the vector (14), and
\[c_{j}^{\dagger}=\frac{1}{\sqrt{2\pi}}\sum_{k}e^{-ikj}c_{k}^{\dagger}. \tag{25}\]
Therefore,
\[H\left(\left|\mathbf{k}\right\rangle\otimes_{f}\left|\ell\right\rangle\right) =\left(E+E_{\ell}\right)\left(\left|\mathbf{k}\right\rangle\otimes_{f} \left|\ell\right\rangle\right), \tag{26}\]
where
\[E=\sum_{i=1}^{N}\varepsilon(k_{i}),\qquad\varepsilon(k)=-2\cos(k). \tag{27}\]
and
\[E_{\ell}=-h(N_{\uparrow}+N_{\downarrow})+B(N_{\uparrow}-N_{\downarrow}). \tag{28}\]
Here, \(\left|\ell\right\rangle\) is a state of a spin chain containing \(N_{\uparrow}\) spin-up and \(N_{\downarrow}=N-N_{\uparrow}\) spin-down sites.
The use of the representation (16) for the \(t-0\) model is called the spin-charge separation in some literature [70]. A few words of caution should be mentioned about this terminology. Indeed, \(\left|f\right\rangle\) and \(\left|\ell\right\rangle\) can be chosen independently from each other, with the only constraint defining the length of the spin chain via the fermion number \(N\). A separation can also be seen in the Hamiltonian (22): \(S_{z}\) acts non-trivially onto \(\left|\ell\right\rangle\), Eq. (23), the remaining terms act onto \(\left|f\right\rangle\), and \(L_{z}\) depends on \(N\). However, Eq. (21) and the formulas for \(\psi_{j\alpha}^{\dagger}\) and \(\psi_{j\alpha}\), Ref. [52], cannot be split into a product of operators containing only spin, \(\boldsymbol{\ell}(m)\), and charge, \(c_{j}^{\dagger}\) and \(c_{j}\), parts. Although the bosonization offers splitting of the local operators into spin and charge parts at low energies and momenta (this procedure is also called the spin-charge separation in the literature), it requires the linearity of the excitation spectrum [71, 72]. Thus, the spin-charge separation understood in the sense of the transformation (16)-(22) works far beyond the bosonization in the model (10). It captures, in particular, the polaron [69, 73] and the spin-incoherent [74] physics of the model. A limitation of the transformation (16)-(22) is the need for the NDO constraint, resulting in its failure for the finite \(U\) Hubbard model, Eq. (1), where the bosonization works. There exists a transformation aimed to separate spin and charge degrees of freedom for the finite \(U\) Hubbard model beyond the bosonization paradigm [57, 58, 75, 76, 77, 78, 79], but its analysis lies out of the scope of the present work.
## 3 Dynamical correlation functions
In this section we evaluate the connected, two point dynamical correlation function of the \(z\)-projection of spins,
\[\sigma^{(\mathrm{c})}(j-j^{\prime},t)=\langle s_{z}(j,t)s_{z}(j^{\prime},0) \rangle_{T}-\langle s_{z}(j,t)\rangle_{T}\langle s_{z}(j^{\prime},0)\rangle_{ T}. \tag{29}\]
The average
\[\langle\cdots\rangle_{T}=\frac{1}{Z}\sum_{N=0}^{\infty}\sum_{f,\ell}\Big{(} \langle\ell|\otimes_{f}\langle f|\Big{)}e^{-\beta H}\cdots\Big{(}|f\rangle \otimes_{f}|\ell\Big{)} \tag{30}\]
is computed in the grand canonical ensemble at temperature \(T\), chemical potential \(h\), and magnetic field \(B\). Note that the right hand side of this expression is the trace of the equilibrium density matrix \(e^{-\beta H}/Z\), where \(Z\) is the grand partition function, and \(\beta=1/T\) is the inverse temperature. The trace is invariant with respect to the choice of the basis, therefore \(\sum_{f}\langle f|\cdots|f\rangle\) can be replaced with \(\sum_{\mathbf{k}}\langle\mathbf{k}|\cdots|\mathbf{k}\rangle\), where \(|\mathbf{k}\rangle\) is defined by Eq. (24). We represent the function (29) as a Fredholm determinant of an integrable integral operator. This representation is exact for any value of relative coordinate \(j-j^{\prime}\) and time \(t\).
### Local magnetization
We start the evaluation of Eq. (29) by considering the local magnetization \(\langle s_{z}(j,t)\rangle_{T}\), which does not depend on time at equilibrium. We substitute the representation (21) into Eq. (30) and first calculate \(\sum_{\ell}\langle\ell|\cdots|\ell\rangle\):
\[\sum_{\ell}e^{-\beta E_{\ell}}=e^{\beta hN}[2\cosh(\beta B)]^{N} \tag{31}\]
and
\[\sum_{\ell}e^{-\beta E_{\ell}}\langle\ell|\ell_{z}(m)|\ell\rangle=-\frac{1}{ 2}\tanh(\beta B)e^{\beta hN}[2\cosh(\beta B)]^{N}. \tag{32}\]
We see that the right hand side of Eq. (32) is independent of \(m\). Taking into account that
\[\sum_{m=-\infty}^{\infty}e^{i\lambda m}=2\pi\delta(\lambda),\qquad-\pi\leq \lambda\leq\pi \tag{33}\]
we arrive at the sum over spinless fermion states, which we write in the basis (24):
\[\langle s_{z}(j,t)\rangle_{T}=-\frac{\tanh(\beta B)}{2}\frac{\sum_{N=0}^{ \infty}\sum_{\mathbf{k}}e^{-\beta\tilde{E}_{\mathbf{k}}}\langle\mathbf{k}|n_{ j}|\mathbf{k}\rangle}{\sum_{N=0}^{\infty}\sum_{\mathbf{k}}e^{-\beta\tilde{E}_{ \mathbf{k}}}}. \tag{34}\]
The energy \(\tilde{E}_{\mathbf{k}}\) is a sum of single-particle energies \(\tilde{\varepsilon}(k_{i})\) which are shifted relative to \(\varepsilon(k_{i})\) defined by Eq. (27):
\[\tilde{E}_{\mathbf{k}}=\sum_{i=1}^{N}\tilde{\varepsilon}(k_{i}),\qquad\tilde{ \varepsilon}(k)=\varepsilon(k)-h-\frac{\log[2\cosh(\beta B)]}{\beta}. \tag{35}\]
Indeed, such a modification accounts for the charge-dependent prefactors in the spin average (32). After accounting for these subtleties the rest of the computations are performed as in a free Fermi gas and result in the following expression in the thermodynamic limit
\[\langle s_{z}(j,t)\rangle_{T}=-\frac{\tanh(\beta B)}{2}\int\limits_{-\pi}^{ \pi}\frac{dk}{2\pi}\rho(k), \tag{36}\]
where \(\rho(k)\) is a Fermi-Dirac distribution with the modified energies
\[\rho(k)=\frac{1}{e^{\beta\tilde{\varepsilon}(k)}+1}=\frac{2\cosh(\beta B)}{2 \cosh(\beta B)+e^{\beta[\varepsilon(k)-\hbar]}}. \tag{37}\]
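For reference, Eqs. (36)-(37) are elementary to evaluate numerically; a minimal sketch (ours, not part of the original text; the parameter values are arbitrary) reads:

```python
# Numerical evaluation of the local magnetization (36) with the distribution (37).
import numpy as np
from scipy.integrate import quad

def magnetization(beta, h, B):
    eps = lambda k: -2.0 * np.cos(k)
    rho = lambda k: 2*np.cosh(beta*B) / (2*np.cosh(beta*B) + np.exp(beta*(eps(k) - h)))
    n, _ = quad(rho, -np.pi, np.pi)
    return -0.5 * np.tanh(beta * B) * n / (2*np.pi)

print(magnetization(beta=0.5, h=2.0, B=1.0))   # <s_z>; arbitrary illustrative parameters
```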
### The two-point function
Now we turn to the two-point correlation function
\[\sigma(j-j^{\prime},t)=\langle s_{z}(j,t)s_{z}(j^{\prime},0)\rangle_{T}. \tag{38}\]
Using the same arguments and employing the representation (21), we factorize the average in Eq. (38) into the spin and charge sectors
\[\sigma(j-j^{\prime},t)=\frac{1}{Z}\sum_{N=0}^{\infty}\sum_{\mathbf{k},\ell}\, \sum_{m,m^{\prime}=-\infty}^{\infty}\int_{-\pi}^{\pi}\frac{d\lambda}{2\pi} \frac{d\lambda^{\prime}}{2\pi}e^{-i\lambda m+i\lambda^{\prime}m^{\prime}}e^{- \beta E_{\mathbf{k}}}\mathcal{C}_{p}(\lambda,\lambda^{\prime};j-j^{\prime};t) \mathcal{S}(m,m^{\prime}). \tag{39}\]
Similar separation formulas, though approximate, appear in the description of the tracer dynamics [80]. The charge part is the correlation function of the free spinless fermions
\[\mathcal{C}_{p}(\lambda,\lambda^{\prime};j-j^{\prime};t)=\langle\mathbf{k}|n_{ j}(t)e^{i\lambda\mathcal{N}_{j}(t)}e^{-i\lambda\mathcal{N}_{j^{\prime}}(0)}n_{ j^{\prime}}(0)|\mathbf{k}\rangle. \tag{40}\]
The spin part is formally defined as
\[\mathcal{S}(m,m^{\prime})=e^{-\beta E_{\ell}}\langle\ell|s_{z}(m)s_{z}(m^{ \prime})|\ell\rangle. \tag{41}\]
Notice that here the time dependence has canceled out, since \(s_{z}(m)\) commutes with the Hamiltonian. For chains of length \(N\), similarly to (32), we can write
\[\sum_{\ell}\mathcal{S}(m,m^{\prime})=\mathrm{Tr}\left[s_{z}(m)s_{ z}(m^{\prime})e^{-\beta(2S_{z}B-hN)}\right]\\ =\frac{1}{4}e^{\beta hN}(2\cosh\beta B)^{N}\left(\frac{\delta_{m, m^{\prime}}}{\cosh^{2}\beta B}+\tanh^{2}(\beta B)\right). \tag{42}\]
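Equation (42) can be verified directly for small chains. The following short numerical check (ours; the chain length and the parameters are arbitrary) builds the \(2^{N}\)-dimensional trace explicitly and compares it with the right-hand side of (42).

```python
# Direct check of Eq. (42) on a small spin chain (N, m, m', beta, B, h arbitrary).
import numpy as np
from functools import reduce

N, beta, B, h = 4, 0.7, 0.9, 1.3
sz = np.diag([0.5, -0.5])
I2 = np.eye(2)

def ell_z(m):  # l_z acting on site m of an N-site chain (m = 1..N)
    ops = [sz if site == m else I2 for site in range(1, N + 1)]
    return reduce(np.kron, ops)

Sz = sum(ell_z(m) for m in range(1, N + 1))
boltz = np.diag(np.exp(-beta * (2 * B * np.diag(Sz) - h * N)))

for (m, mp) in [(2, 2), (1, 3)]:
    lhs = np.trace(ell_z(m) @ ell_z(mp) @ boltz)
    rhs = 0.25 * np.exp(beta*h*N) * (2*np.cosh(beta*B))**N * (
        (m == mp)/np.cosh(beta*B)**2 + np.tanh(beta*B)**2)
    print(m, mp, np.isclose(lhs, rhs))
```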
Using relation (33) we arrive at the following representation for the total correlation function
\[\sigma(j-j^{\prime},t)=\sigma_{0}(j-j^{\prime},t)+\sigma_{1}(j-j^{\prime},t), \tag{43}\]
where
\[\sigma_{0}(j-j^{\prime},t)=\frac{\tanh^{2}(\beta B)}{4Z}\sum_{N=0}^{\infty} \sum_{\mathbf{k}}e^{-\beta\tilde{E}_{\mathbf{k}}}\langle\mathbf{k}|n_{j}(t)n_ {j^{\prime}}(0)|\mathbf{k}\rangle, \tag{44}\]
\[\sigma_{1}(j-j^{\prime},t)=\frac{1}{4\cosh^{2}(\beta B)}\frac{1}{Z}\sum_{N=0} ^{\infty}\sum_{\mathbf{k}}e^{-\beta\tilde{E}_{\mathbf{k}}}\int\limits_{-\pi} ^{\pi}\frac{d\lambda}{2\pi}\langle\mathbf{k}|n_{j}(t)e^{i\lambda\mathcal{N}_ {j}(t)}e^{-i\lambda\mathcal{N}_{j^{\prime}}(0)}n_{j^{\prime}}(0)|\mathbf{k}\rangle. \tag{45}\]
Here as above the energy \(\tilde{E}_{\mathbf{k}}\) is constructed from the quasienergies (35).
The first contribution \(\sigma_{0}\) can be computed immediately by applying Wick's theorem, but instead we proceed with the computation of \(\sigma_{1}\) and then take the limit \(\lambda\to 0\) of the function under the integral. To compute the average in \(\sigma_{1}\) we notice that
\[n_{j}(t)=\frac{e^{i\lambda n_{j}(t)}-1}{e^{i\lambda}-1},\qquad n_{j^{\prime}}( 0)=\frac{e^{-i\lambda n_{j^{\prime}}(0)}-1}{e^{-i\lambda}-1}. \tag{46}\]
This way if we have a string correlator
\[\mathcal{F}_{\lambda}^{(\mathbf{k})}(j-j^{\prime},t)=\langle\mathbf{k}|e^{i \lambda\mathcal{N}_{j}(t)}e^{-i\lambda\mathcal{N}_{j^{\prime}}(0)}|\mathbf{k}\rangle, \tag{47}\]
then
\[\langle\mathbf{k}|n_{j}(t)e^{i\lambda\mathcal{N}_{j}(t)}e^{-i \lambda\mathcal{N}_{j^{\prime}}(0)}n_{j^{\prime}}(0)|\mathbf{k}\rangle\\ =\frac{2\mathcal{F}_{\lambda}^{(\mathbf{k})}(j-j^{\prime},t)- \mathcal{F}_{\lambda}^{(\mathbf{k})}(j-j^{\prime}-1,t)-\mathcal{F}_{\lambda}^ {(\mathbf{k})}(j-j^{\prime}+1,t)}{2(1-\cos\lambda)}. \tag{48}\]
The string correlator \(\mathcal{F}_{\lambda}^{(\mathbf{k})}(j-j^{\prime},t)\) can be expressed as a single determinant, which in the thermodynamic limit takes the form of a Fredholm determinant.
\[\mathcal{F}_{\lambda}(x,t)=\det(1+\hat{\mathcal{U}}). \tag{49}\]
The kernel of the operator \(\hat{\mathcal{U}}\) reads
\[\mathcal{U}(k,q)=\frac{\ell_{+}(x,k)\ell_{-}(x,q)-\ell_{-}(x,k)\ell_{+}(x,q)} {2\pi\sin\frac{k-q}{2}}, \tag{50}\]
where
\[\ell_{+}(x,k)=\sqrt{\rho(k)}\left(\frac{1-\cos\lambda}{2}E_{+}(x,k)+\frac{\sin \lambda}{2}E_{-}^{-1}(x,k)\right) \tag{51}\]
\[\ell_{-}(x,k)=E_{-}(x,k)\sqrt{\rho(k)},\qquad E_{-}(x,k)=e^{it\varepsilon(k)/2-ixk/2} \tag{52}\]
with
\[E_{+}(x,k)=E(x,k)E_{-}(x,k), \tag{53}\]
\[E(x,k)=\int\limits_{-\pi}^{\pi}\frac{dq}{2\pi}\frac{e^{-it\varepsilon(q)+ixq} }{\tan\frac{q-k}{2}}. \tag{54}\]
The integral is taken in the principal value sense. We present the derivation in Appendix A (see also [81]).
Taking into account a special structure of the coordinate dependence in (48), it is useful to introduce the shift operator \(\hat{S}\) acting on the functions of the discrete variable \(x\)
\[\hat{S}f(x)=2f(x)-f(x+1)-f(x-1), \tag{55}\]
which is nothing but a discrete analog of the second derivative, taken with a minus sign.
\[\sigma_{1}(x,t)=\frac{1}{4\cosh^{2}(\beta B)}\int\limits_{-\pi}^{\pi}\frac{d \lambda}{2\pi}\frac{\hat{S}\mathcal{F}_{\lambda}(x,t)}{2(1-\cos\lambda)}, \tag{56}\]
and \(\sigma_{0}\) can be presented as
\[\sigma_{0}(x,t)=\frac{\tanh^{2}(\beta B)}{4}\frac{\hat{S}\mathcal{F}_{\lambda} (x,t)}{2(1-\cos\lambda)}\Big{|}_{\lambda=0}. \tag{57}\]
Expanding the Fredholm determinants (see Appendix B) we obtain the following expressions
\[\sigma_{0}(x,t)\\ =\frac{\tanh^{2}(\beta B)}{4}\left(\left[\int\limits_{-\pi}^{\pi} \frac{dk}{2\pi}\rho(k)\right]^{2}+\int\limits_{-\pi}^{\pi}\frac{dk}{2\pi}\rho(k) e^{it\varepsilon(k)-ikx}\int\limits_{-\pi}^{\pi}\frac{dq}{2\pi}e^{-it\varepsilon(q) +iqx}(1-\rho(q))\right). \tag{58}\]
This way, taking into account (36) the connected correlation function (29) reads
\[\sigma^{(c)}(x,t)=\sigma_{0}^{(c)}(x,t)+\sigma_{1}(x,t) \tag{59}\]
with
\[\sigma_{0}^{(c)}(x,t)=\frac{\tanh^{2}(\beta B)}{4}\int\limits_{-\pi}^{\pi} \frac{dk}{2\pi}\rho(k)e^{it\varepsilon(k)-ikx}\int\limits_{-\pi}^{\pi}\frac{dq }{2\pi}e^{-it\varepsilon(q)+iqx}(1-\rho(q)). \tag{60}\]
Formula (59) is an exact expression for the spin-spin correlation function (29) in the thermodynamic limit. The Fredholm determinants can be evaluated numerically in an efficient way [82] at any values of \(x\) and \(t\). The universal physical characteristics can be extracted from (59) by studying its asymptotic behavior for large \(x\) and \(t\). We perform this analysis in the next section.
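As an illustration of such a numerical evaluation we give a minimal Python sketch (ours, in the spirit of the quadrature method of [82]; the number of nodes, \(\lambda\), \(x\) and the thermodynamic parameters are arbitrary). For simplicity it is written for \(t=0\), where the kernel reduces to the form (63) derived in the next section; in this case the result can be cross-checked against the exact \(|x|\times|x|\) lattice determinant \(\det\big[\delta_{ab}+(e^{i\lambda}-1)C(a-b)\big]\) with \(C(m)=\int_{-\pi}^{\pi}\frac{dk}{2\pi}\,\rho(k)e^{ikm}\), which follows from Wick's theorem for the equal-time string correlator.

```python
# Illustrative sketch (ours): quadrature evaluation of the Fredholm determinant (49)
# at t = 0, cross-checked against the exact |x| x |x| lattice determinant.
import numpy as np
from numpy.polynomial.legendre import leggauss
from scipy.integrate import quad

beta, h, B = 0.5, 2.0, 1.0      # arbitrary illustrative parameters
lam, x = 0.9, 6                 # string parameter lambda and separation x = j - j'

def rho(k):                     # the distribution (37)
    return 2*np.cosh(beta*B) / (2*np.cosh(beta*B) + np.exp(beta*(-2*np.cos(k) - h)))

# --- Fredholm determinant det(1 + U) discretized on Gauss-Legendre nodes on (-pi, pi)
n = 200
z, w = leggauss(n)
k = np.pi * z
w = np.pi * w
d = 0.5 * (k[:, None] - k[None, :])
d_safe = np.where(np.abs(d) < 1e-12, 1.0, d)            # avoid 0/0 on the diagonal
ratio = np.where(np.abs(d) < 1e-12, float(abs(x)),      # limit of sin(|x|d)/sin(d) at d = 0
                 np.sin(abs(x)*d_safe) / np.sin(d_safe))
U = ((np.exp(1j*lam*np.sign(x)) - 1) / (2*np.pi)
     * np.sqrt(rho(k))[:, None] * ratio * np.sqrt(rho(k))[None, :])
F_quad = np.linalg.det(np.eye(n) + U * w[None, :])

# --- exact t = 0 reference: det[ delta_{ab} + (e^{i lam} - 1) C(a-b) ], a,b = 1..|x|
def C(m):
    return quad(lambda q: rho(q) * np.cos(m*q), -np.pi, np.pi)[0] / (2*np.pi)

T = np.array([[(a == b) + (np.exp(1j*lam) - 1) * C(a - b) for b in range(abs(x))]
              for a in range(abs(x))])
print(F_quad, np.linalg.det(T))   # the two numbers should agree
```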
## 4 Transport coefficients and comparison with GHD
Having derived the two-point function, we now turn to its asymptotic analysis. In this way we derive the key transport properties of the model. We show that in the general case the model supports both ballistic and diffusive spin transport, and we derive the characteristic quantities, the Drude weight and the diffusion constant. We use the notations of Ref. [83].
Let us start with the static covariance defined as
\[\mathsf{C}=\sum_{x}\sigma^{(c)}(x,t=0). \tag{61}\]
First, we simplify the kernel of the Fredholm determinant. The integral in (54) can be evaluated exactly
\[E(x,k)\Big{|}_{t=0}=i\text{sgn}(x)e^{ikx}, \tag{62}\]
so the full kernel (50) simplifies into
\[\mathcal{U}(k,q)=\frac{e^{i\lambda\text{sgn}(x)}-1}{2\pi}\sqrt{\rho(k)}\frac{ \sin\frac{|x|(k-q)}{2}}{\sin\frac{k-q}{2}}\sqrt{\rho(q)}. \tag{63}\]
In this form, the kernel is identical to that of the effective fermions with the constant phase shift \(\lambda\)[65]. The Fredholm determinant \(\mathcal{F}_{\lambda}\) can be expanded as a series of traces of the antisymmetric powers of \(\hat{\mathcal{U}}\). The first few terms read
\[\mathcal{F}_{\lambda}=1+(e^{i\lambda\text{sgn}(x)}-1)|x|\int\limits_{-\pi}^{ \pi}\frac{dk}{2\pi}\rho(k)+O((e^{i\lambda\text{sgn}(x)}-1)^{2}). \tag{64}\]
Taking into account that for \(n>1\)
\[\frac{(e^{\pm i\lambda}-1)^{n}}{1-\cos\lambda}=-2e^{\pm i\lambda}(e^{\pm i\lambda} -1)^{n-2}, \tag{65}\]
we see that terms in the remainder in (64) vanish after the integration over \(\lambda\). Further, we compute the action of the shift operator on the first two terms.
\[\hat{S}1=0,\qquad\hat{S}(e^{i\lambda\text{sgn}(x)}-1)|x|=2(1-\cos\lambda)\delta _{x,0}. \tag{66}\]
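A quick numerical check of the identity (66) (ours):

```python
# Check of (66): S applied to g(x) = (e^{i lam sgn(x)} - 1)|x| gives 2(1 - cos lam) delta_{x,0}.
import numpy as np

lam = 0.7
g = lambda x: (np.exp(1j * lam * np.sign(x)) - 1.0) * abs(x)
S = lambda f, x: 2*f(x) - f(x + 1) - f(x - 1)

for x in range(-3, 4):
    print(x, np.round(S(g, x), 12), 2*(1 - np.cos(lam)) * (x == 0))
```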
Therefore performing summation over \(x\) we obtain contribution to the static covariance from \(\sigma_{1}\)
\[\sum_{x}\sigma_{1}(x,t=0)=\frac{1}{4\cosh^{2}(\beta B)}\int\limits_{-\pi}^{\pi }\frac{dk}{2\pi}\rho(k). \tag{67}\]
Now let us turn to the evaluation of the \(\sigma_{0}^{(c)}\) part in (59). Using (33) we arrive at
\[\mathsf{C}_{b}\equiv\sum_{x}\sigma_{0}^{c}(x,t=0)=\frac{\tanh^{2}(\beta B)}{4} \int\limits_{-\pi}^{\pi}\frac{dk}{2\pi}\rho(k)(1-\rho(k)). \tag{68}\]
Notice that this evaluation remains valid even at \(t\neq 0\). The same statement can be demonstrated also for \(\sigma_{1}\), using the fact that \(\hat{S}\mathcal{F}\) plays the role of a second derivative, so after the summation over \(x\) one has to take into account only boundary terms at large distances, for which one can use the asymptotics in the space-like regime (see for instance [64, 65, 66]). Overall, we obtain
\[\mathsf{C}=\sum_{x}\sigma^{(c)}(x,t)=\frac{1}{4}\int\limits_{-\pi}^{\pi}\frac{ dk}{2\pi}\rho(k)-\frac{\tanh^{2}(\beta B)}{4}\int\limits_{-\pi}^{\pi}\frac{ dk}{2\pi}\rho(k)^{2}. \tag{69}\]
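The combination of (67) and (68) into (69) relies only on the identity \(1/\cosh^{2}(\beta B)+\tanh^{2}(\beta B)=1\); a short symbolic check (ours) of the integrands reads:

```python
# Check that the integrands of (67) and (68) combine into the integrand of (69).
import sympy as sp

rho, b = sp.symbols('rho b', positive=True)   # b stands for beta*B
diff = (rho/(4*sp.cosh(b)**2) + sp.tanh(b)**2*rho*(1 - rho)/4
        - (rho/4 - sp.tanh(b)**2*rho**2/4))
assert sp.simplify(diff.rewrite(sp.exp)) == 0
print("(67) + (68) = (69)")
```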
Further, following [83], we define the Drude weight \(\mathsf{D}\) and Onsager matrix \(\mathfrak{L}\) via the asymptotic at long times of the second moment, namely
\[\frac{1}{2}\sum_{x}\,x^{2}\left(\sigma^{(c)}(x,t)+\sigma^{(c)}(x,-t)\right)= \mathsf{D}t^{2}+\mathfrak{L}t+o(t). \tag{70}\]
To account for the contributions from \(\sigma_{0}\) we use the second derivative of the relation (33) to arrive at
\[\frac{1}{2}\sum_{x}\,x^{2}\left(\sigma_{0}^{(c)}(x,t)+\sigma_{0}^{(c)}(x,-t) \right)=\mathsf{D}t^{2}+o(t) \tag{71}\]
with
\[\mathsf{D}=\frac{\tanh^{2}(\beta B)}{4}\int\limits_{-\pi}^{\pi}\frac{dk}{2\pi }\varepsilon^{\prime}(k)^{2}\rho(k)(1-\rho(k)). \tag{72}\]
More specifically, we can describe not only the second moment but also the full asymptotic behavior of \(\sigma_{0}(x,t)\) on the ballistic scale \(x,t\to\infty\) with \(x/t=\text{const}\). For \(x>2t\) the integrals vanish exponentially, so
\[\sigma_{0}(x,t)=O(e^{-\#x}), \tag{73}\]
while for \(0<x<2t\) they are dominated by the two saddle points \(k_{0}=\arcsin(x/2t)\) and \(k_{1}=\pi-k_{0}\). In this way, introducing
\[\rho_{\pm}=\frac{2\cosh(\beta B)}{2\cosh(\beta B)+e^{-\beta(h\pm 2\sqrt{1-x^{2} /(2t)^{2}})}}, \tag{74}\]
we obtain
\[\sigma_{0}(x,t)\approx\frac{\tanh^{2}(\beta B)}{4}\sum_{s=\pm}\frac{\rho_{s}(1 -\rho_{s}+(-1)^{x}(1-\rho_{-s})e^{-2is\Phi})}{\sqrt{(2\pi)((2t)^{2}-x^{2})}} \tag{75}\]
with
\[\Phi=\sqrt{(2t)^{2}-x^{2}}+xk_{0}-\frac{\pi}{4}. \tag{76}\]
The integral of the Fredholm determinant in \(\sigma_{1}\) is expected to produce diffusive terms in the region \(x\sim\sqrt{t}\). To proceed with the asymptotics of the determinant we notice that the kernel (50) also appears in the correlation function of one-dimensional impenetrable anyons upon the identification \(\gamma\theta(k)=n_{F}(k)\)[84, 85, 64]. Moreover, this kernel is nothing but a generalized sine kernel on a lattice, so its asymptotic behavior can be found rigorously by solving the corresponding Riemann-Hilbert problem [84, 86], or obtained heuristically by using the effective form factors approach [64, 65, 66]. The result for \(x<2t\) reads
\[\mathcal{F}_{\lambda}(x,t)\approx\frac{C(x/t)}{t^{(\delta\nu)^{2}}}\exp\left( \int_{-\pi}^{\pi}\frac{dq}{2\pi}|x-\varepsilon^{\prime}(q)t|\log(1+\rho(q)(e^{ i\lambda\text{sgn}(x-\varepsilon^{\prime}(q)t)}-1))\right). \tag{77}\]
Here
\[\delta\nu\sim\frac{\log(1+\rho(k_{*})(e^{i\lambda}-1))}{2\pi i}-\left(-\frac{ \log(1+\rho(k_{*})(e^{-i\lambda}-1))}{2\pi i}\right), \tag{78}\]
where \(k_{*}\) is one of the critical points \(k_{0}\) or \(\pi-k_{0}\) introduced after Eq. (73). In principle, we have to sum over all these points; however, below we will see that the integral is dominated by \(\lambda\sim x/t\sim 1/\sqrt{t}\), so the power-law prefactors are of the order \(t^{-(\delta\nu)^{2}}\sim\exp\left(O((\log t)/\sqrt{t})\right)\) and can be regarded as constant, as can the prefactor \(C(x/t)\approx C(0)\). We compute the integral in (56) by means of the saddle-point method. To this end, let us expand the expression in the exponential for small \(\lambda\)
\[\log\mathcal{F}_{\lambda}(x,t)\approx i\lambda x\int\frac{dq}{2\pi}\,\rho(q)- \frac{\lambda^{2}}{2}t\int\frac{dq}{2\pi}\,|\varepsilon^{\prime}(q)|\rho(q)( 1-\rho(q)). \tag{79}\]
Here we assume that \(x\sim\sqrt{t}\) or smaller. We also note that, due to the symmetry properties of \(\varepsilon(q)\) and \(\rho(q)\) (see (37)), we have
\[\int\varepsilon^{\prime}(q)\rho(q)dq=0. \tag{80}\]
So after integration over \(\lambda\) we obtain
\[\sigma_{1}(x,t)=\frac{C(0)\int\limits_{-\pi}^{\pi}\frac{dq}{2\pi}\rho(q)}{4 \cosh^{2}(\beta B)}\frac{e^{-x^{2}/(2\mathcal{D}t)}}{\sqrt{2\pi\mathcal{D}t}} \tag{81}\]
with
\[\mathcal{D}=\frac{\int\limits_{-\pi}^{\pi}|\varepsilon^{\prime}(q)|\rho(q)(1-\rho (q))\frac{dq}{2\pi}}{\left[\int\limits_{-\pi}^{\pi}\rho(q)\frac{dq}{2\pi}\right]^ {2}}. \tag{82}\]
In Fig. 1 we compare the theoretical prediction (81) with numerical results obtained from the exact expression (56) using the numerical methods described in [82]. This allows us not only to compute the diffusion constant \(\mathcal{D}\) but also to conclude that the constant \(C(0)\approx 1\) in various regimes.
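For readers who wish to reproduce such checks, Fredholm determinants of this type are usually evaluated with a quadrature (Nyström-type) discretization, \(\det(1+V)\approx\det\big(\delta_{ij}+\sqrt{w_{i}}\,V(k_{i},k_{j})\sqrt{w_{j}}\big)\). The minimal Python sketch below illustrates only this discretization; the kernel of Eq. (50) involves the functions \(E_{\pm}\) introduced earlier and is not reproduced here, so a smooth placeholder kernel is used instead.

```python
import numpy as np

def fredholm_det(kernel, a=-np.pi, b=np.pi, n=200):
    """Nystrom-type approximation of det(1 + V) for an integral operator on [a, b]."""
    # Gauss-Legendre nodes and weights mapped from [-1, 1] to [a, b]
    x, w = np.polynomial.legendre.leggauss(n)
    k = 0.5 * (b - a) * x + 0.5 * (b + a)
    w = 0.5 * (b - a) * w
    K = kernel(k[:, None], k[None, :])                 # kernel matrix V(k_i, k_j)
    sw = np.sqrt(w)
    M = np.eye(n) + sw[:, None] * K * sw[None, :]      # symmetrized discretization
    return np.linalg.det(M)

# Placeholder kernel (NOT the kernel of Eq. (50)): a smooth rank-one example,
# for which det(1 + V) = 1.05 exactly, so convergence in n is easy to observe.
example = lambda k, q: 0.1 * np.cos(k) * np.cos(q) / (2 * np.pi)
for n in (50, 100, 200):
    print(n, fredholm_det(example, n=n))
```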
For \(B=0\), or in the case of infinite temperature, the spin-spin correlation function is given only by \(\sigma_{1}\) and has a diffusive shape. Then the condition \(C(0)=1\) comes naturally via the connection with the initial profile. For infinite temperature \(\rho(q)=\rho\) and
\[\mathcal{D}=2(\rho^{-1}-1)/\pi. \tag{83}\]
In the absence of magnetic fields we have \(\rho=2/3\), thus
\[\mathcal{D}=1/\pi. \tag{84}\]
This coincides with the results obtained with the tracer dynamics in [80]. Note that the normalization of the Hamiltonian in [80] includes an extra factor of \(1/2\) (see eq. (48) in that work), therefore their diffusion constant also differs from ours by a factor of \(1/2\).
Figure 1: Coordinate dependence of the diffusive part of the spin-spin correlation function. Solid lines show the analytic answer (81) and dots correspond to the numerical evaluation of (56) for the density given by (37) with \(h=2\), \(B=1\), \(T=2\), for the times shown in the legend. The inset shows the diffusion constant \(\mathcal{D}\) obtained by fitting the results of (56), for \(B=0\), \(h=2\) and temperatures according to the legend. Dashed lines show the analytic answer (82).
Notice that if we formally replace the summation by an integration of the profile (81) and put \(C(0)=1\), we recover the static correlation result (67). Similarly, we can compute the Onsager matrix \(\mathfrak{L}\) in (70)
\[\mathfrak{L}=\frac{\int\limits_{-\pi}^{\pi}|\varepsilon^{\prime}(q)|\rho(q)(1- \rho(q))\frac{dq}{2\pi}}{4\cosh^{2}(\beta B)\int\limits_{-\pi}^{\pi}\rho(q) \frac{dq}{2\pi}}. \tag{85}\]
We observe that the diffusion constant \(\mathfrak{D}=\mathfrak{L}/\mathsf{C}\) coincides with \(\mathcal{D}\) only when the ballistic part is absent (i.e. for \(B=0\)).
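The transport coefficients above reduce to one-dimensional integrals that are easy to evaluate numerically. The following minimal Python sketch transcribes Eqs. (69), (72), (82) and (85) into quadrature, assuming the thermal occupation \(\rho(k)\) of Eq. (37)/(97) and the free dispersion \(\varepsilon(k)=-2\cos k\) used in the appendices; all prefactors follow the formulas exactly as printed, so any normalization conventions are inherited from them.

```python
import numpy as np

def transport_coefficients(beta, B, h, n=20000):
    """Quadrature transcription of Eqs. (69), (72), (82) and (85),
    assuming rho(k) from Eq. (37)/(97) and eps(k) = -2 cos k."""
    k = -np.pi + (np.arange(n) + 0.5) * 2 * np.pi / n   # midpoint grid on the Brillouin zone
    eps = -2.0 * np.cos(k)
    deps = 2.0 * np.sin(k)                              # eps'(k)
    rho = 2 * np.cosh(beta * B) / (2 * np.cosh(beta * B) + np.exp(beta * (eps - h)))
    avg = lambda f: f.mean()                            # (1/2pi) * integral over the zone

    static_cov = 0.25 * avg(rho) - 0.25 * np.tanh(beta * B) ** 2 * avg(rho ** 2)   # Eq. (69)
    drude = 0.25 * np.tanh(beta * B) ** 2 * avg(deps ** 2 * rho * (1 - rho))       # Eq. (72)
    diffusion = avg(np.abs(deps) * rho * (1 - rho)) / avg(rho) ** 2                # Eq. (82)
    onsager = avg(np.abs(deps) * rho * (1 - rho)) / (4 * np.cosh(beta * B) ** 2 * avg(rho))  # Eq. (85)
    return static_cov, drude, diffusion, onsager

print(transport_coefficients(beta=0.5, B=1.0, h=2.0))
```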
In our approach, we did not have to introduce the Euler and the diffusive scales, but they appear naturally from the exact expressions. Nevertheless, we can also compare our results with predictions of the generalized hydrodynamics. We start with the general expression for the Drude weight [83] (for the Hubbard model see also [48])
\[\mathsf{D}^{\text{GHD}}=\int dk\rho_{p}(k)(1-n(k))(v^{\text{eff}}(k))^{2}(m^{ \text{dr}}(k))^{2}. \tag{86}\]
Identifying the dressed velocity \(v^{\text{eff}}\), magnetization \(m^{\text{dr}}\) and root density \(n(k)\) with the corresponding undressed quantities
\[\rho_{p}(k)\leftrightarrow\frac{\rho(k)}{2\pi},\quad v^{\text{eff}}(k) \leftrightarrow\varepsilon^{\prime}(k),\qquad n(k)\leftrightarrow\rho(k), \qquad m^{\text{dr}}(k)\leftrightarrow-\tanh(\beta B)/2. \tag{87}\]
we recover the Drude weight (72).
Let us elaborate on how the matching (87) arises at the microscopic level. For this, we require the Bethe equations for the Hamiltonian (10). In the notation of [56], every eigenstate is parameterized by \(N\) unequal quasimomenta \(k_{1},\ldots,k_{N}\) and by a set of \(M\) auxiliary momenta \(\lambda_{1},\ldots,\lambda_{M}\), satisfying
\[e^{ik_{a}L}= e^{i\Lambda},\quad a=1,\ldots N \tag{88}\] \[e^{i\lambda_{b}N}= (-1)^{M+1},\quad b=1,\ldots M,\qquad\Lambda=\sum_{b=1}^{M}\lambda _{b}. \tag{89}\]
The corresponding state has \(M\) spins down and \(N-M\) spins up. To formulate these equations as a Thermodynamic Bethe Ansatz, we introduce the corresponding densities of the quasiparticles
\[\rho_{p}(k_{i})=\frac{1}{L(k_{i+1}-k_{i})},\qquad\sigma_{p}(\lambda_{j})= \frac{1}{L(\lambda_{j+1}-\lambda_{j})}. \tag{90}\]
Then the corresponding energy density reads
\[\frac{E}{L}=\int\limits_{-\pi}^{\pi}(\varepsilon(k)-h)\rho_{p}(k)dk+B\left(\int\limits_{-\pi}^{\pi}\rho_{p}(k)dk-2\int\limits_{-\pi}^{\pi}\sigma_{p}(\lambda)d\lambda\right). \tag{91}\]
The total densities are constant
\[\rho_{t}(k)=\frac{1}{2\pi},\qquad\sigma_{t}(\lambda)=\int\limits_{-\pi}^{\pi} \rho_{p}(k)\frac{dk}{2\pi}. \tag{92}\]
Notice that the only term describing an interaction between the quasimomenta and the auxiliary momenta enters through the normalization of the latter. In all other respects both kinds of particles can be considered as fermions, so the free energy takes the following form
\[F=LE-TLs(\rho_{t},\rho_{p})-TLs(\sigma_{t},\sigma_{p}) \tag{93}\]
with
\[s(\rho_{t},\rho_{p})=\int\limits_{-\pi}^{\pi}dk(\rho_{t}\log\rho_{t}-\rho_{p} \log\rho_{p}-\bar{\rho}_{p}\log\bar{\rho}_{p}),\qquad\bar{\rho}_{p}=\rho_{t}- \rho_{p} \tag{94}\]
and identically for \(s(\sigma_{t},\sigma_{p})\). To describe thermodynamic equilibrium we compute variations over \(\rho_{p}\) and \(\sigma_{p}\), which leads to the following equations, respectively
\[\varepsilon(k)-h-B-T\log\frac{\rho_{p}}{\rho_{t}-\rho_{p}(k)}-\frac{T}{2\pi} \int\limits_{-\pi}^{\pi}\log\frac{\sigma_{t}}{\sigma_{t}-\sigma_{p}(\lambda)} d\lambda=0, \tag{95}\]
\[2B+T\log\frac{\sigma_{p}(\lambda)}{\sigma_{t}-\sigma_{p}(\lambda)}=0, \tag{96}\]
which leads to
\[\frac{\sigma_{t}}{\sigma_{t}-\sigma_{p}(\lambda)}=1+e^{-2B/T},\qquad\frac{\rho_{p}(k)}{ \rho_{t}}=\frac{2\cosh(\beta B)}{2\cosh(\beta B)+e^{\beta(\varepsilon(k)-h)}}. \tag{97}\]
The last expression is identical to (37). The relation \(n(k)=\rho_{p}(k)/\rho_{t}(k)\) explains the matching of densities in (87). Moreover, we can define the \(Y\) function via the relation \(n(k)=1/(1+Y(k))\). The derivative of this function for the thermal ensemble defines the dressed magnetization and the effective velocity
\[m^{\rm dr}=\beta^{-1}\frac{\partial\log Y}{\partial(2B)}=-\tanh(\beta B)/2,\qquad v ^{\rm eff}=\beta^{-1}\frac{\partial\log Y}{\partial k}=\varepsilon^{\prime}(k), \tag{98}\]
which finishes the matching in (87).
We see that these quantities do not get any dressing due to interactions. Apart from the Drude weight, we can also reproduce universal formulas for the average spin (36) and the ballistic part of the static covariance (see [83]). Namely,
\[\langle s_{z}\rangle=\int dk\rho_{p}(k)m^{\rm dr}(k),\qquad{\sf C}_{b}=\int dk \rho_{p}(k)(1-n(k))(v^{\rm eff}(k))^{2}(m^{\rm dr}(k))^{2}. \tag{99}\]
The expression for the diffusion constant is available at \(B=0\)[51] (see also [87, 15])
\[{\cal D}^{\rm GHD}=\int dk\,\rho_{p}(k)(1-n(k))|v^{\rm eff}(k)|\frac{\partial _{B}^{2}(m^{\rm dr}(k))^{2}}{16\chi_{B}^{2}}\Big{|}_{B=0}, \tag{100}\]
here \(\chi_{B}\) is the spin susceptibility, which can be computed from Eq. (36) as \(\chi_{B}=\partial_{B}\langle s_{z}\rangle\). Evaluating this expression at \(B=0\) and comparing with (82), we obtain
\[{\cal D}^{\rm GHD}={\cal D}/8. \tag{101}\]
We see that this discrepancy cannot be removed by a rescaling of the magnetic field, but only by an appropriate change of the time scale. This might affect both the definition of the diffusion constant via Eq. (81) and the effective velocity. We hope to clarify this minor mismatch in the future.
## Summary and outlook
In this work we computed the key physical properties of spin transport in the t-0 model. Our computations are based on the exact representation of the correlation functions in the thermodynamic limit in terms of Fredholm determinants, followed by their asymptotic analysis. In this way we provide the first rigorous computation of spin diffusion for interacting quantum lattice systems. The results confirm the diffusion constant obtained earlier by semi-classical methods for infinite temperature [80], as well as the formula suggested by generalized hydrodynamics [51] (up to a numerical prefactor).
In closely related deterministic models it was found that the fluctuations of spin transport are anomalous, even though the mean transport is still diffusive [33, 63, 88, 89, 90]. Therefore, it would be interesting to consider the full counting statistics also in our quantum mechanical model.
Furthermore, it would be interesting to extend the present methods to spin diffusion in the folded XXZ model, which describes the infinite temperature dynamics of the XXZ model in the large anisotropy limit.
We hope to return to these questions in future work.
## Acknowledgements
We are thankful to Johannes Feldmeier, Jacopo De Nardis, Benjamin Doyon, Enej Ilievski, Milosz Panfil for useful discussions. O.G. acknowledges support from the Polish National Agency for Academic Exchange (NAWA) through the Grant No. PPN/ULM/2020/1/00247.
## Appendix A Correlation functions of spinless fermions: Fredholm determinant representation
In this section we revisit the derivation of the Fredholm determinant obtained in [81], making "universal" use of Wick's theorem according to [91]. We use a finite-lattice regularization and perform a minimal generalization of the string correlator (47) to consider the following correlation function of vertex operators
\[D_{\lambda\mu}(m,n;t)=\langle V^{(m)}_{\mu}(t)V^{(n)}_{\lambda}(0)\rangle \tag{102}\]
where
\[V^{(m)}_{\mu}(0)\equiv V^{(m)}_{\mu}=\exp\left(i\mu\sum_{l=-L}^{m-1}c^{+}_{l}c _{l}\right). \tag{103}\]
The lattice fermions are normalized as usual
\[\{c^{+}_{m},c_{n}\}=\delta_{nm}. \tag{104}\]
The Fourier-transformed fermions \(C_{k}\), defined as
\[C_{k}=\frac{1}{\sqrt{2L}}\sum_{m=-L}^{L}e^{-ikm}c_{m},\ \ \ \ c_{m}=\frac{1}{ \sqrt{2L}}\sum_{k}e^{ikm}C_{k} \tag{105}\]
make the Hamiltonian diagonal, \(H=\sum_{k}\varepsilon(k)C_{k}^{+}C_{k}\). The summation over momenta is taken over the Brillouin zone, meaning that
\[k=\frac{2\pi}{2L}n,\quad n\in\mathds{Z},\quad-\pi\leq k<\pi. \tag{106}\]
For now, we assume that the average is computed over the vector that is given by
\[|\mathbf{q}\rangle\equiv|q_{1}\ldots q_{n}\rangle=C_{q_{1}}^{+}\ldots C_{q_{n} }^{+}|0\rangle. \tag{107}\]
The vertex operator defined in (103) is a particular case of the group-like element \(G(B)\)[91], which can be roughly defined as
\[G(B)=:e^{\sum_{k,p}C_{p}^{+}B_{pk}C_{k}}: \tag{108}\]
where the normal ordering is taken with respect to the mathematical vacuum \(|0\rangle\). The matrix \(B\) can be extracted from the action on an individual fermion
\[G(B)C_{k}^{+}=\sum_{p}(\delta_{pk}+B_{pk})C_{p}^{+}G(B). \tag{109}\]
In fact, \(G(B)\) can be defined via relation (109), which is valid also for non-invertible group-like elements. The "group" property is reflected in the composition law
\[G(B^{\prime})G(B)=G(B^{\prime}+B+B^{\prime}B), \tag{110}\]
which readily follows from (109). Finally, to evaluate (102) we will need the following corollary of Wick's theorem regarding the average of the group-like element on the state (107)
\[\langle\mathbf{q}|G(B)|\mathbf{q}\rangle=\det_{q\in\mathbf{q},q^{\prime}\in \mathbf{q}}(\delta_{qq^{\prime}}+B_{qq^{\prime}}). \tag{111}\]
Now let us compute the corresponding \(B\) matrix for the vertex (103). Commuting it with the fermion creation operator, we obtain
\[V_{\mu}^{(m)}c_{a}^{+}=\left[1+(e^{i\mu}-1)\theta(a<m)\right]c_{a}^{+}V_{\mu} ^{(m)}, \tag{112}\]
or for the Fourier modes
\[V_{\mu}^{(m)}C_{k}^{+}=\sum_{p}(\delta_{pk}+B_{pk})C_{p}^{+}V_{\mu}^{(m)}. \tag{113}\]
where
\[[B_{\mu}^{(m)}]_{pk}=\frac{e^{i\mu}-1}{2L}\frac{e^{im(k-p)}-e^{-iL(k-p)}}{e^{ i(k-p)}-1} \tag{114}\]
for \(p\neq k\), while diagonal components are given by
\[[B_{\mu}^{(m)}]_{kk}=\frac{e^{i\mu}-1}{2L}(L+m). \tag{115}\]
Note that if we keep the \(L\) dependence explicit, then the diagonal part follows from l'Hopital's rule.
Time dependence can be easily included as well
\[[B_{\mu}^{(m)}]_{pk}(t)=[B_{\mu}^{(m)}]_{pk}e^{it(\varepsilon(p)-\varepsilon( k))}. \tag{116}\]
Now employing (110) and (111) we arrive at
\[D_{\lambda\mu}(m,n;t)=\det\mathcal{A} \tag{117}\]
with
\[\mathcal{A}_{ij}=\delta_{q_{i}q_{j}}+[B^{(m)}_{\mu}]_{q_{i}q_{j}}e^{i(\varepsilon(q_{i})-\varepsilon(q_{j}))t/2}+[B^{(n)}_{\lambda}]_{q_{i}q_{j}}e^{i(\varepsilon(q_{i})-\varepsilon(q_{j}))t/2}\\ +e^{i\varepsilon(q_{i})t/2}\sum_{k}[B^{(m)}_{\mu}]_{q_{i}k}e^{-i\varepsilon(q_{k})t}[B^{(n)}_{\lambda}]_{kq_{j}}e^{i\varepsilon(q_{j})t/2}. \tag{118}\]
Let us evaluate the sum in this expression treating \(L\) as a large parameter. First, we rewrite the sum identically
\[\sum_{k}[B^{(m)}_{\mu}]_{q_{i}k}e^{-i\varepsilon(q_{k})t}[B^{(n)}_{\lambda}]_ {kq_{j}}=\frac{(e^{i\lambda}-1)(e^{i\mu}-1)}{(2L)^{2}}\sum_{k}\frac{\Theta(q_{ i},q_{j};k)}{(e^{ik}-e^{iq_{i}})(e^{iq_{j}}-e^{ik})}e^{-it\varepsilon(q_{k})+iq_{ i}+ik} \tag{119}\]
with
\[\Theta(q_{i},q_{j};k)=e^{i(q_{j}n-q_{i}m+k(m-n))}+e^{iL(q_{i}-q_{j})}-e^{i(nq_ {j}+Lq_{i}-(L+n)k)}-e^{i((L+m)k-mq_{i}-Lq_{j})}. \tag{120}\]
For \(q_{i}\neq q_{j}\) we present this expression as
\[\sum_{k}[B^{(m)}_{\mu}]_{q_{i}k}e^{-i\varepsilon(k)t}[B^{(n)}_{\lambda}]_{kq_ {j}}=\frac{(e^{i\lambda}-1)(e^{i\mu}-1)}{2L}\frac{V-W}{e^{iq_{i}}-e^{iq_{j}}}, \tag{121}\]
with
\[V=\frac{L+m}{2L}(e^{iL(q_{i}-q_{j})}-e^{in(q_{j}-q_{i})})e^{-it\varepsilon(q_ {i})+iq_{i}}+\frac{1}{2L}\sum_{k\neq q_{i}}\frac{\Theta(q_{i},q_{j};k)}{e^{iq_ {i}}-e^{ik}}e^{-it\varepsilon(k)+iq_{i}+ik}, \tag{122}\]
\[W=\frac{L+n}{2L}(e^{im(q_{j}-q_{i})}-e^{iL(q_{i}-q_{j})})e^{-it\varepsilon(q_ {j})+iq_{i}}+\frac{1}{2L}\sum_{k\neq q_{j}}\frac{\Theta(q_{i},q_{j};k)}{e^{iq_ {j}}-e^{ik}}e^{-it\varepsilon(k)+iq_{i}+ik}. \tag{123}\]
After these preparations, let us evaluate the limit of these sums as a Riemann integral. The highly oscillating terms can be discarded and the corresponding superficial divergences should be treated in the principal-value sense (as we demonstrate in Appendix A.1)
\[\frac{1}{2L}\sum_{k\neq q}\frac{e^{iL(k-q)}-1}{e^{iq}-e^{ik}}f_{k}=-\frac{ \text{v.p.}}{2\pi}\int_{-\pi}^{\pi}dk\frac{f_{k}}{e^{iq}-e^{ik}}=-\frac{1}{2 \pi}\int_{-\pi}^{\pi}dk\frac{f_{k}-f_{q}}{e^{iq}-e^{ik}}. \tag{124}\]
For \(q_{i}=q_{j}\) we can formally compute l'Hopital's limit to obtain
\[\mathcal{A}_{ii}=\frac{1+e^{i\lambda+i\mu}}{2}+O(1/L) \tag{125}\]
which means that
\[\det\mathcal{A}\sim(\cos(\lambda+\mu))^{2L}e^{iL(\lambda+\mu)}. \tag{126}\]
So we have to demand \(\mu=-\lambda\) to obtain a non-zero answer as \(L\to\infty\). Once this condition is imposed, we can drop the terms proportional to \(e^{i(q_{i}-q_{j})L}\) in all expressions in Eq. (118), similarly to (124). Further assuming that \(m,n\ll L\), we write
\[V=-\frac{e^{-it\varepsilon(q_{i})+iq_{i}+in(q_{j}-q_{i})}}{2}-e^{inq_{j}-i(m-1) q_{i}}\hat{E}(q_{i}) \tag{127}\]
\[W=\frac{e^{im(q_{j}-q_{i})-it\varepsilon(q_{j})+iq_{i}}}{2}-e^{inq_{j}-i(m-1) q_{i}}\hat{E}(q_{j}) \tag{128}\]
with
\[\hat{E}(q)=\frac{\text{v.p.}}{2\pi}\int_{-\pi}^{\pi}dk\frac{e^{i(m-n+1)k-it \varepsilon(k)}}{e^{iq}-e^{ik}}, \tag{129}\]
and finally
\[\mathcal{A}_{ij}=-\frac{|e^{i\lambda}-1|^{2}}{2L}\frac{\hat{E}(q_ {i})-\hat{E}(q_{j})}{e^{iq_{i}}-e^{iq_{j}}}e^{inq_{j}-i(m-1)q_{i}+i( \varepsilon(q_{i})+\varepsilon(q_{j}))t/2}-\\ i\sin(\lambda)\frac{e^{im(q_{j}-q_{i})+iq_{i}+i(\varepsilon(q_ {i})-\varepsilon(q_{j}))t/2}-e^{i(\varepsilon(q_{j})-\varepsilon(q_{i}))t/2 +iq_{i}+in(q_{j}-q_{i})}}{2L(e^{iq_{i}}-e^{iq_{j}})}. \tag{130}\]
To literally reproduce results of [81] we would need the following relation
\[\frac{e^{iq_{i}}}{e^{iq_{i}}-e^{iq_{j}}}=\frac{1}{2}\frac{e^{iq_{i}}+e^{iq_{j }}}{e^{iq_{i}}-e^{iq_{j}}}+\frac{1}{2}=\frac{1}{2i\tan((q_{i}-q_{j})/2)}+\frac {1}{2}. \tag{131}\]
Moreover, using this relation we can connect \(\hat{E}(q)\) with \(E(m-n,q)\) defined in (54):
\[\hat{E}(q)=-\frac{E(m-n,q)}{2i}+\frac{G(m-n)}{2}, \tag{132}\]
where \(G(x)\) is defined as
\[G(x)=\int\limits_{-\pi}^{\pi}\frac{dq}{2\pi}e^{-it\varepsilon(q)+ixq}=i^{x}J_ {x}(2t). \tag{133}\]
Since \(G(x)\) does not depend on \(q\), it does not contribute to the matrix elements \(\mathcal{A}_{ij}\) in (130). Finally, setting \(x=m-n\) and using the notations (52) and (53), we obtain
\[\mathcal{A}_{ij}=e^{-i(m+n+1)q_{i}/2}\left(\delta_{ij}+\frac{1}{2L}\frac{E_{+ }(x,q_{i})E_{-}(x,q_{j})-E_{+}(x,q_{j})E_{-}(x,q_{i})}{\sin\frac{q_{i}-q_{j}}{ 2}}\right)e^{i(m+n+1)q_{j}/2}. \tag{134}\]
The conjugation factors cancel in the determinant \(\det\mathcal{A}\). Further, taking into account the level spacing (106) and introducing the density of states \(\rho(k)\) in the \(L\to\infty\) limit, we recover (50). For a formal proof of the validity of inserting the density distribution after averaging over the thermal ensemble see, for instance, Appendix A in [92].
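As an independent consistency check of the Bessel-function representation of \(G(x)\) used in this appendix (Eqs. (133) and (148)), one can compare a direct quadrature of the defining integral with \(i^{x}J_{x}(2t)\). The short Python sketch below does this, assuming the dispersion \(\varepsilon(q)=-2\cos q\) stated in Appendix C and that SciPy is available; the sampled values of \(x\) and \(t\) are arbitrary.

```python
import numpy as np
from scipy.special import jv

def G_quadrature(x, t, n=20001):
    """Direct quadrature of G(x) = (1/2pi) int_{-pi}^{pi} dq exp(-i t eps(q) + i x q),
    assuming eps(q) = -2 cos q."""
    q = -np.pi + (np.arange(n) + 0.5) * 2 * np.pi / n   # midpoint rule on the Brillouin zone
    return np.mean(np.exp(2j * t * np.cos(q) + 1j * x * q))

for x, t in [(0, 1.5), (3, 2.0), (7, 0.7)]:
    lhs = G_quadrature(x, t)
    rhs = (1j ** x) * jv(x, 2 * t)    # i^x J_x(2t)
    print(x, t, lhs, rhs)
```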
### Proof of the lemma
Here we present some comments on the transformation of the sum (124) into integrals. First, we notice the following identity
\[\frac{1}{2L}\sum_{k}=\frac{1}{2\pi}\oint_{C}\frac{dk}{e^{2ikL}-1} \tag{135}\]
where the counterclockwise contour \(C\) encircles the solutions of \(e^{2ikL}-1=0\) that lie inside the first Brillouin zone (\(-\pi<k\leq\pi\)). For the summation of a smoothly varying function on this interval we can represent \(C\) as a combination of contours above and below the real axis
\[C=\gamma_{1}\cup\gamma_{2} \tag{136}\]
\[\gamma_{1}=\{k+i\epsilon|k\in[\pi,-\pi]\},\ \ \ \ \gamma_{2}=\{k-i\epsilon|k\in[- \pi,\pi]\} \tag{137}\]
where \(\epsilon\ll 1\ll L\epsilon\). This way we may ignore the contribution from the contour \(\gamma_{2}\), while the contribution from \(\gamma_{1}\) gives an ordinary Riemann integral
\[\frac{1}{2L}\sum_{k}f_{k}=\frac{1}{2\pi}\int_{\gamma_{1}}dk\frac{f_{k}}{-1}= \int\limits_{-\pi}^{\pi}\frac{dk}{2\pi}f_{k} \tag{138}\]
Let us consider
\[S_{q}\equiv\frac{1}{2L}\sum_{k\neq q}\frac{e^{iL(k-q)}-1}{e^{iq}-e^{ik}}f_{k} =\frac{1}{2\pi}\oint_{C_{q}}\frac{dk}{e^{2ikL}-1}\frac{e^{iL(k-q)}-1}{e^{iq}- e^{ik}}f_{k} \tag{139}\]
where by \(C_{q}\) we emphasize that the point \(k=q\) is not encircled. Taking into account that \(e^{2iqL}=1\), we can write
\[S_{q}=\frac{1}{2\pi}\oint_{C_{q}}\frac{dk}{e^{2i(k-q)L}-1}\frac{e^{iL(k-q)}-1} {e^{iq}-e^{ik}}f_{k}=\frac{1}{2\pi}\oint_{C_{q}}\frac{dk}{e^{i(k-q)L}+1}\frac{ f_{k}}{e^{iq}-e^{ik}}, \tag{140}\]
or, including the residue and transforming as in the regular case, we get
\[S_{q}=\frac{1}{2\pi}\oint_{C}\frac{dk}{e^{i(k-q)L}+1}\frac{f_{k}}{e^{iq}-e^{ ik}}+\frac{e^{-iq}}{2}=-\frac{1}{2\pi}\int_{-\pi+i\epsilon}^{\pi+i\epsilon} dk\frac{f_{k}}{e^{iq}-e^{ik}}+\frac{e^{-iq}}{2} \tag{141}\]
Transforming further, we can write
\[S_{q}=-\frac{1}{2\pi}\int_{-\pi}^{\pi}dk\frac{f_{k}}{q-k-i\epsilon}\frac{q-k} {e^{iq}-e^{ik}}+\frac{e^{-iq}}{2}=-\frac{\text{v.p.}}{2\pi}\int_{-\pi}^{\pi} dk\frac{f_{k}}{e^{iq}-e^{ik}}, \tag{142}\]
which means that one can drop \(e^{ikL}\) from the integration provided everything is treated in the principal-value sense. At the final step we use the following identity
\[\frac{\text{v.p.}}{2\pi}\int_{-\pi}^{\pi}dk\frac{1}{e^{iq}-e^{ik}}=0 \tag{143}\]
leading to Eq. (124).
## Appendix B Series expansion
Let us expand the string correlator \(\mathcal{F}_{\lambda}(x,t)\) in (49) at \(\lambda=0\). Taking into account the following expansion of the determinant
\[\det(1+R)=1+\mathrm{Tr}R+\frac{(\mathrm{Tr}R)^{2}-\mathrm{Tr}R^{2}}{2}+O(R^{3}), \tag{144}\]
we obtain
\[\mathcal{F}_{\lambda}(x,t)=1-i\lambda\int\limits_{-\pi}^{\pi} \frac{dk}{2\pi}\rho(k)(t\varepsilon^{\prime}(k)-x)\\ +\frac{\lambda^{2}}{2}\int\limits_{-\pi}^{\pi}\frac{dk}{2\pi}\rho (k)e^{it\varepsilon(k)}\partial_{k}[e^{-ikx}E(k)]+i\frac{\lambda^{2}}{2}\int \limits_{-\pi}^{\pi}\frac{dk}{2\pi}\rho(k)xe^{it\varepsilon(k)-ikx}E(k)\\ +\frac{\lambda^{2}}{2}\int\limits_{-\pi}^{\pi}\frac{dk}{2\pi}\int \limits_{-\pi}^{\pi}\frac{dq}{2\pi}\rho(k)\rho(q)\left(\frac{\sin\left[\frac{t }{2}(\varepsilon(k)-\varepsilon(q))-\frac{x(k-q)}{2}\right]}{\sin\frac{k-q}{2 }}\right)^{2}\\ -\frac{\lambda^{2}}{2}\int\limits_{-\pi}^{\pi}\frac{dk}{2\pi} \rho(k)(t\varepsilon^{\prime}(k)-x)\int\limits_{-\pi}^{\pi}\frac{dq}{2\pi} \rho(q)(t\varepsilon^{\prime}(q)-x). \tag{145}\]
This way,
\[\hat{S}\mathcal{F}_{\lambda}(x,t)=\lambda^{2}\int\limits_{-\pi}^ {\pi}\frac{dk}{2\pi}\rho(k)\int\limits_{-\pi}^{\pi}\frac{dq}{2\pi}\rho(q)\\ +\lambda^{2}\int\limits_{-\pi}^{\pi}\frac{dk}{2\pi}\rho(k)\int \limits_{-\pi}^{\pi}\frac{dq}{2\pi}e^{it(\varepsilon(k)-\varepsilon(q))-ix(k-q )}(1-\rho(q))+O(\lambda^{3}). \tag{146}\]
## Appendix C Kernels
In this appendix we compare our answers with those in Ref. [56]. To do so, we have to introduce one more kernel
\[\mathcal{Q}(x,\lambda|k,q)=\frac{\ell_{+}(x,k)\ell_{-}(x,q)-\ell_{-}(x,k)\ell _{+}(x,q)}{2\pi\tan\frac{k-q}{2}}-\frac{1}{2}\frac{1-\cos\lambda}{2\pi}G(x) \ell_{-}(x,k)\ell_{-}(x,q). \tag{147}\]
where
\[G(x)=\int\limits_{-\pi}^{\pi}\frac{dq}{2\pi}e^{-it\varepsilon(q)+ixq}=i^{x}J_ {x}(2t), \tag{148}\]
with \(\varepsilon(q)=-2\cos(q)\). Further one can notice that
\[E(x+1,k)=e^{ik}E(x,k)+ie^{ik}G(x)+iG(x+1) \tag{149}\]
or
\[E(x-1,k)=e^{-ik}E(x,k)-ie^{-ik}G(x)-iG(x-1). \tag{150}\]
This leads to
\[\ell_{+}(x+1,k)=e^{ik/2}\ell_{+}(x,k)+\sqrt{\rho(k)}\frac{1-\cos\lambda}{2}iE_{- }(x,k)\left(e^{ik/2}G(x)+e^{-ik/2}G(x+1)\right) \tag{151}\]
or
\[\ell_{+}(x+1,k)=e^{ik/2}\ell_{+}(x,k)+\frac{1-\cos\lambda}{2}i\ell_{-}(x,k)e^{ ik/2}G(x)+\frac{1-\cos\lambda}{2}i\ell_{-}(x+1,k)G(x+1). \tag{152}\]
This way
\[\mathcal{U}(x+1,\lambda|k,q)=\mathcal{Q}(x,\lambda|k,q)\\ +\frac{i\ell_{+}(x,k)\ell_{-}(x,q)+i\ell_{+}(x,q)\ell_{-}(x,k)}{2 \pi}-\frac{1}{2}\frac{1-\cos\lambda}{2\pi}G(x)\ell_{-}(x,k)\ell_{-}(x,q), \tag{153}\]
\[\mathcal{U}(x-1,\lambda|k,q)=\mathcal{Q}(x,\lambda|k,q)\\ -\frac{i\ell_{+}(x,k)\ell_{-}(x,q)+i\ell_{+}(x,q)\ell_{-}(x,k)}{2 \pi}-\frac{1}{2}\frac{1-\cos\lambda}{2\pi}G(x)\ell_{-}(x,k)\ell_{-}(x,q). \tag{154}\]
Additionally, we can present
\[e^{i(k-q)/2}\mathcal{U}(x,\lambda|k,q)=\mathcal{Q}(x,\lambda|k, q)+\frac{i\ell_{+}(x,k)\ell_{-}(x,q)-i\ell_{+}(x,q)\ell_{-}(x,k)}{2\pi}\\ +\frac{1}{2}\frac{1-\cos\lambda}{2\pi}G(x)\ell_{-}(x,k)\ell_{-}(x,q). \tag{155}\]
Let us introduce three rank-one operators
\[R_{1}(k,q)=\frac{i}{2\pi}\left(\frac{1}{1+\mathcal{Q}}l_{+}\right)(k)\left( \frac{1}{1+\mathcal{Q}}l_{-}\right)^{T}(q), \tag{156}\]
\[R_{2}(k,q)=\frac{i}{2\pi}\left(\frac{1}{1+\mathcal{Q}}l_{-}\right)(k)\left( \frac{1}{1+\mathcal{Q}}l_{+}\right)^{T}(q), \tag{157}\]
\[R_{3}(k,q)=\frac{1-\cos\lambda}{4\pi}G(x)\left(\frac{1}{1+\mathcal{Q}}l_{-} \right)(k)\left(\frac{1}{1+\mathcal{Q}}l_{-}\right)^{T}(q). \tag{158}\]
Then, taking into account that
\[\det(1+e^{i(k-q)/2}\mathcal{U}(x,\lambda|k,q))=\det(1+\mathcal{U}(x,\lambda| k,q))=\mathcal{D}(x,t) \tag{159}\]
we obtain
\[\frac{\mathcal{D}(x+1,t)}{\det(1+\mathcal{Q})}=\det(1+R_{1}+R_{2}-R_{3}), \tag{160}\]
\[\frac{\mathcal{D}(x-1,t)}{\det(1+\mathcal{Q})}=\det(1-R_{1}-R_{2}-R_{3}), \tag{161}\]
\[\frac{\mathcal{D}(x,t)}{\det(1+\mathcal{Q})}=\det(1-R_{1}+R_{2}+R_{3}). \tag{162}\]
Further, taking into account that a linear combination of \(R_{1}\) (\(R_{2}\)) and \(R_{3}\) is a rank-one operator, we obtain
\[\frac{\mathcal{D}(x+1,t)}{\det(1+\mathcal{Q})}=1+\mathrm{Tr}(R_{1}+R_{2}-R_{3} )+\mathrm{Tr}(R_{1}-R_{3})\mathrm{Tr}R_{2}-\mathrm{Tr}(R_{1}-R_{3})R_{2}, \tag{163}\]
\[\frac{\mathcal{D}(x-1,t)}{\det(1+\mathcal{Q})}=1-\mathrm{Tr}(R_{1}+R_{2}+R_{3} )+\mathrm{Tr}(R_{1}+R_{3})\mathrm{Tr}R_{2}-\mathrm{Tr}(R_{1}+R_{3})R_{2}, \tag{164}\]
\[\frac{\mathcal{D}(x,t)}{\det(1+\mathcal{Q})}=1+\mathrm{Tr}(R_{2}+R_{3}-R_{1} )+\mathrm{Tr}(R_{3}-R_{1})\mathrm{Tr}R_{2}-\mathrm{Tr}(R_{3}-R_{1})R_{2}. \tag{165}\]
This way,
\[\frac{\mathcal{D}(x+1,t)+\mathcal{D}(x-1,t)+2\mathcal{D}(x,t)}{\det(1+ \mathcal{Q})}=4. \tag{166}\]
Or in other words
\[\det(1+\tilde{\mathcal{U}}(\lambda))-\det(1+\mathcal{Q}(\lambda))=\frac{2 \mathcal{D}(x,t)-\mathcal{D}(x-1,t)-\mathcal{D}(x+1,t)}{4}. \tag{167}\]
This statement is enough to prove the equivalence of our results to those in [56].
|
2310.05963 | CFDBench: A Large-Scale Benchmark for Machine Learning Methods in Fluid
Dynamics | In recent years, applying deep learning to solve physics problems has
attracted much attention. Data-driven deep learning methods produce fast
numerical operators that can learn approximate solutions to the whole system of
partial differential equations (i.e., surrogate modeling). Although these
neural networks may have lower accuracy than traditional numerical methods,
they, once trained, are orders of magnitude faster at inference. Hence, one
crucial feature is that these operators can generalize to unseen PDE parameters
without expensive re-training. In this paper, we construct CFDBench, a benchmark
tailored for evaluating the generalization ability of neural operators after
training in computational fluid dynamics (CFD) problems. It features four
classic CFD problems: lid-driven cavity flow, laminar boundary layer flow in
circular tubes, dam flows through the steps, and periodic Karman vortex street.
The data contains a total of 302K frames of velocity and pressure fields,
involving 739 cases with different operating condition parameters, generated
with numerical methods. We evaluate the effectiveness of popular neural
operators including feed-forward networks, DeepONet, FNO, U-Net, etc. on
CFDBench by predicting flows with non-periodic boundary conditions, fluid
properties, and flow domain shapes that are not seen during training.
Appropriate modifications were made to apply popular deep neural networks to
CFDBench and enable the accommodation of more changing inputs. Empirical
results on CFDBench show many baseline models have errors as high as 300% in
some problems, and severe error accumulation when performing autoregressive
inference. CFDBench facilitates a more comprehensive comparison between
different neural operators for CFD compared to existing benchmarks. | Yining Luo, Yingfa Chen, Zhen Zhang | 2023-09-13T06:30:08Z | http://arxiv.org/abs/2310.05963v2 | # CFDBench: A Comprehensive Benchmark for Machine Learning Methods in Fluid Dynamics
###### Abstract
In recent years, applying deep learning to solve physics problems has attracted much attention. Data-driven deep learning methods produce operators that can learn solutions to the whole system of partial differential equations. However, the existing methods are only evaluated on simple flow equations (e.g., Burgers' equation), and only consider the generalization ability on different initial conditions. In this paper, we construct CFDBench, a benchmark with four classic problems in computational fluid dynamics (CFD): lid-driven cavity flow, laminar boundary layer flow in circular tubes, dam flows through the steps, and periodic Karman vortex street. Each flow problem includes data with different boundary conditions, fluid physical properties, and domain geometry. Compared to existing datasets, the advantages of CFDBench are that it is (1) comprehensive: it contains common physical parameters such as velocity, pressure, and cavity fraction; (2) realistic: it is very suitable for deep-learning solutions of fluid mechanics equations; (3) challenging: it has a certain learning difficulty, prompting the search for models with strong learning ability; and (4) standardized: CFDBench facilitates a comprehensive and fair comparison of different deep learning methods for CFD. We make appropriate modifications to popular deep neural networks to apply them to CFDBench and enable the accommodation of more changing inputs. The evaluation on CFDBench reveals some new shortcomings of existing works and we propose possible directions for solving such problems.1
Footnote 1: The code and datasets can be found at: [https://www.github.com/luo-yining/CFDBench](https://www.github.com/luo-yining/CFDBench)
## 1 Introduction
Recent advances in deep learning have enabled neural networks to approximate highly complex and abstract mappings [28]. As a result, neural networks have been employed to solve partial differential equations (PDEs) and have shown some promising results [27, 29, 32, 3].
One application of PDE solvers is computational fluid dynamics (CFD), which is a well-studied and important field with many practical applications. Therefore, the last few years saw many new attempts at developing better CFD methods with the help of deep neural networks [26]. There are multiple reasons for adopting deep learning methods over traditional numerical methods. One advantage is mesh-independence. Numerical methods operate on meshes, and the mesh construction process is time-consuming and requires much expert knowledge to ensure convergence and good accuracy. Another advantage of deep learning methods is that they can be several orders of magnitude faster than numerical methods [36]. Additionally, some neural models have been able to surpass traditional numerical methods in accuracy in some problems in fluid dynamics [43, 3].
Most existing attempts to use neural networks to solve CFD problems are limited to simple, unrealistic, and artificial dummy problems, rarely study the typical phenomena of real flows, and do not comprehensively test the generalization ability of neural networks in real-world scenarios [32, 33, 27]. It is important that neural models can generalize to unseen PDE parameters (e.g., different BCs, physical properties, domain geometry, etc.) without retraining because retraining the models is prohibitively
expensive and requires recollecting data. However, existing works only evaluate the generalization to unseen initial conditions (ICs).
In this work, we construct CFDBench, a large-scale and comprehensive dataset for better evaluating the generalization ability of data-driven neural networks in CFD. It includes four classic CFD problems: the flow in a lid-driven cavity, the flow in a circle tube, the flow over a dam, and the flow around a cylinder problem. In contrast to existing work, we condition the neural networks on different BCs, fluid physical properties, and fluid domain geometry, and evaluate their generalization effectiveness to unseen conditions.
Our main contributions are as follows.
1. We construct and release the first benchmark for CFD data-driven deep learning, covering four classic CFD problems with different BCs, fluid properties, and domain geometry.
2. Some neural networks cannot be directly applied to CFDBench, and we demonstrate how to modify them to effectively apply to the problems in CFDBench.
3. We evaluate some popular neural networks on CFDBench, and show that it is more challenging than the virtual problems used in previous work, revealing some problems that need to be solved before these operators can replace traditional solvers.
## 2 Related works
**Numerical Methods** Numerical methods have been widely used to solve CFD problems. The basic idea is to divide the original continuous solution domain into grid cells or sub-areas, on which a finite number of discrete points (called nodes) are defined. Then, using different discretization methods, the governing equations (which are typically PDEs) are reduced to algebraic equations called discrete equations. Solving these discrete equations gives the values at the nodes. Some common discretization methods include finite difference methods, finite element methods, spectral methods, and lattice Boltzmann methods (LBMs) [6].
The main idea of the finite difference method (FDM) [44] is to replace differentiation with finite differences. Its advantage is high accuracy, but it is not flexible enough for complex grids. The finite volume method (FVM) [48] divides the calculation area into non-overlapping control volumes, and the physical quantity of each control volume is approximated according to certain rules to form a discrete equation. The finite element method (FEM) [54] is based on the classical variational method (the Ritz method [38] or the Galerkin method [10]), which first establishes elements connected by nodes and then approximates the true solution within each element by a linear combination of the nodal values multiplied by basis functions. The advantages of the FVM and FEM are good conservation and good adaptability to complex grids, while the disadvantages are high computational cost and a strong dependence of convergence on mesh quality. The spectral method [11] uses the characteristics of the Fourier series to transform the nonlinear problem into a linear problem. Its advantages include high accuracy and great applicability to problems with periodic BCs, but it has considerable limitations such as divergence on discontinuous functions. The LBM is a newer method based on a fine (mesoscopic) scale model and the Boltzmann theory of gas molecular motion; its advantage is fast solution speed, while its disadvantage is lower accuracy. However, these numerical methods have very large computational costs. Although there has been much research on reducing such computational costs, development has been relatively slow in recent years.
**Neural Networks** In the last decade, neural networks have demonstrated impressive capabilities in various computer vision and natural language processing tasks [28, 17, 18, 4, 8]. A neural network consists of a large number of neurons. It can approximate arbitrary mappings by minimizing a loss function that is differentiable with respect to the model parameters. By iterating through a large set of input-output pairs, the model parameters are updated by gradient descent. Some common types of neural networks include feed-forward neural networks (FFNs), recurrent neural networks (RNNs) [19], generative adversarial networks (GANs) [15], convolutional neural networks (CNNs) [12], etc.
Regarding CFD problems, we generally want to model a flow field, which can be seen as a kind of conditional generation task. This is a common objective in many applications of deep learning. More concretely, forward propagators such as numerical methods can be regarded as performing a conditional image-to-image translation task [23]. Some notable works include [40, 39, 53]. Of particular relevance are ResNet [18] and U-Net [40]. The former adds a _residual connection_, which makes the model predict the shift from the input instead of the output directly and empirically improves the performance and stability of image processing. U-Net shrinks the hidden representation in the middle of the ResNet, reducing the number of parameters and improving the globality of feature dependencies.
**Neural Operators for Solving PDEs** There have been a great number of research works on applying neural networks to solve PDEs. In summary, they fall into two categories: approximating the solution function and approximating the solution operator.
The former category is pioneered by physics-informed neural networks (PINNs) [36], a deep learning framework for solving PDEs in physics. The framework uses an FFN to approximate the solution to PDEs by learning the distribution of training data while minimizing the loss function that enforces constraints based on physics laws. A series of improvements to PINNs have been proposed. These include dividing the solution domain to speed up the convergence [25, 24, 21], combining the numerical derivative and adaptive derivative reverse propagation to improve accuracy [7]. Some works focus on improving the neural architecture [34], by adopting convolutional layers instead of fully connected layers, such as PhyGeoNet [13], PhyCRNet [37], etc. However, these methods have limited applicability, and only a few of them are evaluated on complex flow equations. Moreover, since PINNs approximate one solution function, they have to be retrained for every new input function or condition.
The second category learns a whole family of solutions by learning the mapping from input functions to output functions. [30] have proved that the neural operator has the ability to solve nonlinear equations. Some notable neural operators include FNO [29], LNO [5], and KNO [51], etc. These operators are forward propagators similar to numerical methods but learn in other domains to achieve mesh-independence. Another series of neural operators is the DeepONet [32], which encodes the query location and the input functions independently and aggregates them to produce the prediction at the query location. Many improvements based on DeepONet have been proposed [30, 31, 47, 52, 16, 50, 49].
## 3 CFDBench
It has been proved that neural networks can be used to solve nonlinear PDEs [20], including the classical Navier-Stokes equations in fluid mechanics, but little work has been done to train and test on real flow problems [45, 22]. CFDBench is designed for training and testing existing neural models on flow problems under different operating conditions, based on solving the incompressible Navier-Stokes (N-S) equations.
We first give a formal definition of the flow problems in this work, and then we list the four flow problems included in our benchmark, the parameters we used, and the considerations we had during dataset construction. For each problem, we generate flows with different _operating parameters_, which is the term we use to refer to the combination of the three kinds of condition: (1) the BC, (2) the fluid physical property (PROP), and (3) the geometry of the field (GEO). Each kind of operating parameter corresponds to one subset. In each subset, the corresponding operating conditions are varied while
Figure 1: Some examples of the velocity field in the four problems in CFDBench. From left to right: cavity flow, tube flow, dam flow, and cylinder flow.
other parameters remain constant. The goal is to evaluate the ability of the data-driven deep learning methods to generalize to unseen operating conditions. Figure 1 shows an example snapshot of each of the problems in our dataset.
### The Definition of Flow Problems
The Navier-Stokes equations can be formalized as follows.
\[\begin{cases}\nabla\cdot(\rho\mathbf{u})=0\\ \frac{\partial}{\partial t}(\rho\mathbf{u})+\nabla\cdot(\rho\mathbf{u}\mathbf{ u})=-\nabla p+\nabla\cdot\mu[\nabla\mathbf{u}+(\nabla\mathbf{u})^{\top}],\end{cases} \tag{1}\]
where \(\rho\) is the density and \(\mu\) is the dynamic viscosity, \(\mathbf{u}=(u,v)^{\top}\) is the velocity field, and \(p\) is the pressure.
Suppose the fluid is incompressible (\(\rho=const\)) and the fluid is a Newtonian fluid (\(\tau=\mu\frac{du}{dy}\)). Combining the continuum hypothesis and Stokes' law, we get the following equations inside the flow domain (when \((x,y,t)\in\mathcal{D}\)).
\[\begin{cases}\frac{\partial u}{\partial x}+\frac{\partial v}{\partial y}=0\\ \frac{\partial u}{\partial t}+u\frac{\partial u}{\partial x}+v\frac{\partial u}{\partial y}=-\frac{1}{\rho}\frac{\partial p}{\partial x}+\frac{\mu}{\rho}\left(\frac{\partial^{2}u}{\partial x^{2}}+\frac{\partial^{2}u}{\partial y^{2}}\right)\\ \frac{\partial v}{\partial t}+u\frac{\partial v}{\partial x}+v\frac{\partial v}{\partial y}=-\frac{1}{\rho}\frac{\partial p}{\partial y}+\frac{\mu}{\rho}\left(\frac{\partial^{2}v}{\partial x^{2}}+\frac{\partial^{2}v}{\partial y^{2}}\right),\end{cases} \tag{2}\]
and \((u,v)\) are constant on the boundaries (\(\partial\mathcal{D}\)).
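As a simple illustration of the continuity constraint in Eq. (2), the sketch below evaluates the discrete divergence \(\partial u/\partial x+\partial v/\partial y\) of a gridded velocity frame with central differences; the analytic test field is a stand-in for an actual CFDBench frame, and the grid resolution matches the \(64\times 64\) interpolation used later in the paper.

```python
import numpy as np

def divergence(u, v, dx, dy):
    """Discrete continuity residual du/dx + dv/dy on a uniform grid
    (central differences via numpy.gradient)."""
    dudx = np.gradient(u, dx, axis=1)
    dvdy = np.gradient(v, dy, axis=0)
    return dudx + dvdy

# Analytic divergence-free test field on a 64x64 grid.
n = 64
x = np.linspace(0, np.pi, n)
y = np.linspace(0, np.pi, n)
X, Y = np.meshgrid(x, y)            # Y varies along axis 0, X along axis 1
u = np.sin(X) * np.cos(Y)
v = -np.cos(X) * np.sin(Y)
res = divergence(u, v, x[1] - x[0], y[1] - y[0])
print(np.abs(res).max())            # small: discretization error only
```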
In this work, we consider four important and representative fluid problems that can comprehensively evaluate the capabilities of different methods. They are (1) the flow in a lid-driven cavity, (2) the flow into a circular tube, (3) the flow over a breaking dam, and (4) the flow around a cylinder. These flow problems cover most of the common flow phenomena. They include both open and closed systems and vary in shape. The system boundaries include both moving/stationary walls and velocity/pressure inlet and outlet boundaries. They include vertical flows subject to gravity and planar flows without gravity. Their flow characteristics include the formation of a viscous boundary layer, the formation and shedding of vortexes, and the formation of jets. They cover both single-phase and two-phase flow, and both laminar and turbulent flow. However, in order to ensure the cleanliness of the data, that is, to ensure that the data fully satisfy the above equations, we treat the fluid as an incompressible Newtonian fluid, ignoring mass transfer at the two-phase interface and energy dissipation during the flow process.
For simplicity, we will refer to the four problems as **(1) cavity flow**, **(2) tube flow**, **(3) dam flow**, and **(4) cylinder flow**. For each problem, we use different operating parameters and generate the flow fields using numerical methods.
### Cavity Flow
Cavity flow refers to a flow in a square container with a moving upper wall (i.e., the lid) and three stationary walls. Due to viscosity, the moving wall drives the nearby fluid to move in the same direction until the stationary wall forms a jet impacting the lower wall, which then forms a secondary vortex. On the one hand, the lid-driven cavity flow has a wide range of industrial applications, such as the transient (short-dwell) coating process [2], ocean flow driven by the wind, and so on. On the other hand, a special feature is that the BC is discontinuous [42] at the junction of the moving wall and the stationary side walls, which makes it a good test of the convergence of numerical methods. Thus, it is widely used to verify the accuracy of computational fluid dynamics software or numerical methods [14]. Therefore, the construction of the lid-driven cavity flow dataset is beneficial for studying the ability of neural network models to solve the flow problem.
In the dataset with the cavity flow, the baseline conditions are \(\rho=1kg/m^{3}\), \(\mu=10^{-5}Pa\cdot s\), \(l=d=0.01m\), \(u_{\mathrm{top}}=10m/s\), where \(\rho\) and \(\mu\) are the density and viscosity of the fluid, \(l\) and \(d\) are the length and width of the cavity, and \(u_{\mathrm{top}}\) is the top wall movement velocity. 50 different cases are generated by varying \(u_{\mathrm{top}}\) from \(1m/s\) to \(50m/s\) with a constant step size. 84 cases are generated
varying the physical properties of the working fluid, with 12 different values of density and 7 values of viscosity. For the cases with different geometries, we choose different combinations of length and width from \(\{0.01,0.02,0.03,0.04,0.05\}\). To have an appropriate scale of difference between the frames, we set the time step size to \(\Delta t=0.1s\).
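To make the construction of the parameter grids concrete, the following minimal Python sketch enumerates the three cavity-flow subsets from the values listed in Table 1 (50 BC cases, 12 densities × 7 viscosities = 84 property cases, and 5 × 5 = 25 geometries). The dictionary keys are illustrative names, not identifiers from the released dataset.

```python
from itertools import product

# Operating-parameter grids of the cavity-flow subsets (values from Table 1).
bc_cases = [{"u_top": u} for u in range(1, 51)]                        # 50 BC cases
densities = [0.1, 0.5] + list(range(1, 11))                            # 12 values
viscosities = [1e-5, 5e-5, 1e-4, 5e-4, 1e-3, 5e-3, 1e-2]               # 7 values
prop_cases = [{"rho": r, "mu": m} for r, m in product(densities, viscosities)]  # 84 cases
sizes = [0.01, 0.02, 0.03, 0.04, 0.05]
geo_cases = [{"l": l, "w": w} for l, w in product(sizes, sizes)]        # 25 GEO cases

print(len(bc_cases), len(prop_cases), len(geo_cases))   # 50 84 25
```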
### Tube Flow
The tube flow refers to a water-air two-phase flow into a circular tube filled with air. The boundary layer in the circular tube is one of the most common flows, in which the viscous resistance of the fluid near the wall is greater than that of the fluid in the bulk flow region. When the water flows into the round tube filled with air, we can clearly see that the flow is slow near the wall and fast in the center. Therefore, the construction of the water-air laminar flow in a circular tube is beneficial for studying the ability of the neural network structure to capture the two-phase interface and to learn the laminar boundary layer theory.
In the dataset of the tube flow, the baseline conditions are \(\rho=100kg/m^{3}\), \(\mu=0.1Pa\cdot s\), \(u_{in}=1m/s\), \(d=0.1m\), \(l=1m\), where \(\rho\) and \(\mu\) are the density and viscosity of the fluid, \(u_{in}\) is the inlet velocity (from the left), and \(d\) and \(l\) are the diameter and the length of the circular tube. 50 cases were generated for different BCs, increasing the inlet velocity from \(0.1m/s\) to \(5m/s\) with increments of \(0.1m/s\). 100 cases with different physical properties of the working fluid are generated, and the two-dimensional space of different densities and dynamic viscosities is shown in Table 2, where the density increases from \(10kg/m^{3}\) to \(1000kg/m^{3}\) with increments of \(110kg/m^{3}\) and the viscosity increases from \(0.01Pa\cdot s\) to \(1Pa\cdot s\) with increments of \(0.11Pa\cdot s\). For different geometries, the diameter of the circular tube is taken from \(\{0.01,0.05,0.1,0.3,0.5\}\), and we choose five different ratios of diameter and length by making sure the length satisfies \(0.1\leq l\leq 10\). This results in 25 different geometries. To have an appropriate scale of difference between the frames, we set the time step size to \(\Delta t=0.01s\).
### Dam Flow
A dam is a barrier across flowing water that obstructs, directs, or slows down the flow. Meanwhile, sudden, rapid, and uncontrolled release of impounded water quickly causes a dam to burst [1]. To further understand the flow of water over the dam, we simplified it to the flow of water over a vertical
\begin{table}
\begin{tabular}{c|l} \hline \hline BC & \(u_{B}\in\{1,2,3,\cdots,50\}\) \\ \hline Property & \(\rho\in\{0.1,0.5,1,2,3,\cdots,10\}\) \\ & \(\mu\in\{10^{-5},5\times 10^{-5},\cdots,5\times 10^{-3},\ 10^{-2}\}\) \\ \hline Geometry & \(l,w\in\{0.01,0.02,0.03,0.04,0.05\}\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Operating parameters of the subset in the cavity flow problem.
\begin{table}
\begin{tabular}{c|l} \hline \hline BC & \(u_{\mathcal{B}}\in\{0.1,0.2,0.3,\cdots,5\}\) \\ \hline Property & \(\rho\in\{10,120,320,\cdots,1000\}\) \\ & \(\mu\in\{0.01,\ 0.12,\ 0.23,\cdots,1\}\) \\ \hline Geometry & \(l\in\{0.01,0.05,0.1,0.3,0.5\}\) \\ & \(d/l\in\{1,2,5,7.5,10,15,20,50,75,100\}\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Operating parameters of the subset in the tube flow problem.
\begin{table}
\begin{tabular}{c|l} \hline \hline BC & \(u_{\mathcal{B}}\in\{0.05,0.1,\cdots,1\}\cup\{1.02,1.04,\cdots 2\}\) \\ \hline Property & \(\rho\in\{0.1,0.5,1,2,3,\cdots,10\}\) \\ & \(\mu\in\{10^{-5},5\times 10^{-5},\cdots,5\times 10^{-3},\ 10^{-2}\}\) \\ \hline Geometry & \(h\in\{0.11,0.12,0.13,0.14,0.15\}\) \\ & \(w\in\{0.01,0.02,\cdots,0.08,0.09\}\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Operating parameters of the subset in the dam flow problem.
obstacle. When the Reynolds number is low, the fluid is dominated by the viscous force and flows vertically down the wall as it passes over the dam [35]. As the speed increases, the fluid is more affected by the inertial force, and a jet is formed. The fluid then falls to the boundary because of gravity, and the collision with the boundary creates a reverse flow, which hits the dam with a velocity greater than the inlet velocity. Therefore, the dam flow dataset is helpful in studying the learning ability of the model for flows subject to different viscous and inertial forces.
In the dataset of dam flow, the baseline conditions are \(\rho=100kg/m^{3}\), \(\mu=0.1Pa\cdot s\), \(u_{in}=1m/s\), \(h=0.1m\), \(w=0.05m\), where \(\rho\) and \(\mu\) are the density and viscosity of the fluid, \(u_{in}\) is the inlet velocity (from the left), and \(h\) and \(w\) are the height and the width of the dam obstacle. The entire fluid domain is \(1.5\)m long and \(0.4\)m high. The inlet velocity boundary is close to the ground, with a total length of \(0.1\)m, and \(0.3\)m above it is the inlet pressure boundary. The barrier is located \(0.5\)m from the entrance. \(70\) cases were generated for different BCs, increasing the inlet velocity from \(0.05m/s\) to \(1m/s\) with increments of \(0.05m/s\) and from \(1m/s\) to \(2m/s\) with increments of \(0.02m/s\). \(100\) cases with different physical properties of the working fluid are generated, and the two-dimensional space of different densities and dynamic viscosities is shown in Table 3, where the density increases from \(10kg/m^{3}\) to \(1000kg/m^{3}\) with increments of \(110kg/m^{3}\) and the viscosity increases from \(0.01Pa\cdot s\) to \(1Pa\cdot s\) with increments of \(0.11Pa\cdot s\). \(50\) cases with different geometries are generated, increasing the height of the dam obstacle from \(0.11m\) to \(0.15m\) with increments of \(0.01m\) and its width from \(0.01m\) to \(0.09m\) with increments of \(0.01m\). To have an appropriate scale of difference between the frames, we set the time step size to \(\Delta t=0.1s\).
### Cylinder Flow
A flow around a cylinder is a typical boundary layer flow, commonly seen in industry where water flows past bridges, wind blows past towers, etc. [41]. When a fluid with a large flow rate passes around the cylinder, the boundary layer fluid separates and forms a recirculation zone due to the combined effect of the adverse pressure gradient and wall viscous retardation. At a specific Reynolds number, the two sides of the cylinder periodically generate a double row of vortexes with opposite rotational directions, arranged in a regular pattern. Through nonlinear interactions, these vortexes form a Karman vortex street. Therefore, the cylinder flow dataset is important for examining the capability of neural networks in modeling periodic flows with obstacles.
In the dataset of the cylinder flow, the baseline conditions are \(\rho=10kg/m^{3}\), \(\mu=0.001Pa\cdot s\), \(u_{in}=1m/s\), \(d=0.02m\), \(x_{1}=y_{1}=y_{2}=0.06m\), \(x_{2}=0.16m\), where \(\rho\) and \(\mu\) are the density and viscosity of the fluid, \(u_{in}\) is the inlet velocity (from the left), \(d\) is the diameter of the cylinder, and \(x_{1},x_{2},y_{1},y_{2}\) are the distances between the center of the cylinder and the left, right, top and bottom boundaries, respectively. \(50\) cases are generated for different BCs, increasing the inlet speed from \(0.1m/s\) to \(5m/s\) with increments of \(0.1m/s\). \(115\) cases are generated for the different physical properties of the fluid so that the Reynolds numbers are in the range of \([20,1000]\). Table 4 shows some values of density and viscosity, but not all combinations are used because that results in Reynolds numbers outside of the target range. For different geometries, the distance from the cylinder to the upper and lower boundaries and the entrance is taken from \(\{0.02,0.04,0.06,0.08,0.1\}\), the distance from the cylinder to the exit boundary is taken from \(\{0.12,0.14,0.16,0.18,0.2\}\), and the diameter of the cylinder is taken from \(\{0.01,0.02,0.03,0.04,0.05\}\). \(20\) cases are generated. To ensure an appropriate scale of difference between the frames, we set the time step size to \(\Delta t=0.001s\).
The problem datasets and the number of cases under different operating conditions are summarized in Table 5.
### Data Generation
All the data in this paper are generated by ANSYS Fluent 2021R1. In order to calculate the viscosity term accurately, the laminar model is used for laminar flow and SST \(k-\omega\) model for turbulent flow. All solvers used are based on pressure. We choose a Coupled Scheme for single-phase flow and SIMPLE for two-phase flow as a pressure-velocity coupling algorithm. The pressure equation uses the second-order interpolation method (the VOF model uses the PRESTO! Interpolation method), and the momentum
\begin{table}
\begin{tabular}{c|c} \hline BC & \(u_{\mathcal{B}}\in\{0.1,0.2,0.3,\cdots,5\}\) \\ \hline \multirow{4}{*}{Property} & \(\rho\in\{0.1,0.2,\cdots,1\}\cup\{1.5,2.5,\cdots,4.5,5\}\cup\{6,7,\cdots,9,10\}\) \\ & \(\cup\{20,30,40,\cdots,250\}\cup\{300,400,500\}\) \\ & \(\mu\in\left\{10^{-4},5\times 10^{-4},10^{-3},5\times 10^{-3},10^{-2}\right\}\) \\ \hline Geometry & \(d\in\{0.01,0.02,0.03,0.04,0.05\}\) \\ & \(x_{1},y_{1},y_{2}\) \(\in\) \\ & \(\{0.02,0.04,0.06,0.08,0.1\}\) \\ & \(x_{2}\in\{0.12,0.14,0.16,0.18,0.2\}\) \\ \hline \end{tabular}
\end{table}
Table 4: Operating parameters of the subset in the cylinder flow problem.
\begin{table}
\begin{tabular}{c|c c c|c|c c c} \hline \hline & \multicolumn{4}{c|}{**Number of cases**} & \\ \hline
**Problem** & **BC** & **PROP** & **GEO** & **Total** & **\# frames** & **File size / frame** & **Gen. Time (s)** \\ \hline Cavity & 50 & 84 & 25 & 159 & 34,582 & 5,169KB & 0.92 \\ Tube & 50 & 100 & 25 & 175 & 39,553 & 4,794KB & 1.08 \\ Dam & 70 & 100 & 50 & 220 & 21,916 & 1,999KB & 3.98 \\ Cylinder & 50 & 115 & 20 & 185 & 205,620 & 4,375KB & 1.18 \\ \hline Sum & 220 & 399 & 120 & 739 & 301,671 & & \\ \hline \hline \end{tabular}
\end{table}
Table 5: Breakdown of the number of cases in each problem (the rows) and the corresponding subsets (the columns) in CFDBench. Each problem contains three subsets, each with one type of operating condition parameter that is varied.
equation adopts the second-order upwind method. The time term adopts a first-order implicit scheme, and interpolation uses the least-squares method. To capture the phenomenon of boundary layer separation at the near-wall surface, the size of the first mesh layer near the wall is refined to \(10^{-5}m\). To ensure the accuracy of the computational models and results, all computational models underwent grid-independence validation.
After discretizing the governing equations, the conservation equation of the universal variable \(\Phi_{P}\) at the grid element \(P\) can be expressed as:
\[a_{P}\Phi_{P}=\sum_{nb}a_{nb}\Phi_{nb}+b \tag{3}\]
in which \(a_{P}\) is the coefficient of the node of element \(P\), \(a_{nb}\) are the coefficients of the neighboring nodes, and \(b\) is the contribution generated by the constant term, the source term, and the boundary conditions. The global scaled residual is defined as:
\[R^{\Phi}=\frac{\sum_{cells}|\sum_{nb}a_{nb}\Phi_{nb}+b-a_{P}\Phi_{P}|}{\sum_{ cells}|a_{P}\Phi_{P}|} \tag{4}\]
The residual represents the relative size of the total imbalance term in the computational domain and is generally used to judge the convergence of the solution. The smaller the residual, the better the convergence. In this paper, the residual convergence criterion for all terms is set to \(10^{-9}\), and the residuals of the final calculation results are shown in Figure 2. The residuals of the velocity terms are all no larger than \(10^{-6}\).
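A direct transcription of Eq. (4) is straightforward. In the sketch below, the coefficient arrays are hypothetical stand-ins for the solver's internal data and are used only to illustrate the computation; `nb_sum` denotes the per-cell quantity \(\sum_{nb}a_{nb}\Phi_{nb}\).

```python
import numpy as np

def scaled_residual(a_P, phi_P, nb_sum, b):
    """Global scaled residual of Eq. (4)."""
    imbalance = np.abs(nb_sum + b - a_P * phi_P)
    return imbalance.sum() / np.abs(a_P * phi_P).sum()

# Tiny synthetic example: a perfectly converged field gives residual 0.
a_P = np.array([4.0, 4.0, 4.0])
phi_P = np.array([1.0, 2.0, 3.0])
nb_sum = a_P * phi_P - 0.5          # pretend neighbour contributions
b = np.full(3, 0.5)
print(scaled_residual(a_P, phi_P, nb_sum, b))   # 0.0 up to round-off
```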
All generations are run with 30 solver processes on an AMD Ryzen Threadripper 3990X CPU. The final generated data was interpolated to a grid size of \(64\times 64\).
#### 3.6.1 Data Splitting
Each subset of data is split into training, validation, and test sets with a ratio of 8:1:1. The splitting unit is a case to ensure that the operating parameters in one set never appear in other sets.
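The case-level split can be reproduced with a few lines of Python; the seed and helper name below are illustrative choices, not the ones used to build the released splits.

```python
import random

def split_cases(case_ids, seed=0):
    """Split a subset's cases 8:1:1 into train/validation/test (split unit = case)."""
    ids = list(case_ids)
    random.Random(seed).shuffle(ids)
    n = len(ids)
    n_train, n_val = int(0.8 * n), int(0.1 * n)
    return (ids[:n_train],
            ids[n_train:n_train + n_val],
            ids[n_train + n_val:])

train, val, test = split_cases(range(50))   # e.g. the 50 cavity-flow BC cases
print(len(train), len(val), len(test))      # 40 5 5
```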
Figure 2: The residuals of each flow problem in this paper: (a) cavity flow, (b) tube flow, (c) dam flow, (d) cylinder flow.
## 4 Experiments
After generating the benchmark data, we use it to train popular data-driven neural networks that can be used for approximating the solutions to PDEs. To keep the number of experiments manageable, in the following discussions, unless stated otherwise, we have the models predict the velocity field. We believe that modeling other properties or components of the flow should not be too different.
We first define the learning objective of the neural network. Then, we give a brief description of the baselines we experimented on. After that, we explain the loss functions and hyperparameters used in the experiments.
### Training Objectives
Most flow problems focus on solving the distribution of flow fields in the domain. Therefore, the objective of the neural networks is to approximate the following mapping within the domain \(\mathcal{D}=\{(x,y,t)\mid x\in[a,b],y\in[c,d],t\in[0,T]\}\):
\[G:(\Sigma,\Omega)\mapsto u \tag{5}\]
where \(\Omega=(u_{\mathcal{B}},\rho,\mu,d,l,w)\) denotes the operating parameters, which include the BC \((u_{\mathcal{B}})\), the physical properties \((\rho,\mu)\), and the geometry \((d,l,w)\). \(\Sigma\) is the input function, which can be either the velocity field at a certain time (in autoregressive generation) or the spatiotemporal coordinate vector \((x,y,t)\) (in the non-autoregressive model). \(u\) is the output function, which is the velocity field.
When using a neural network \(f_{\theta}\) with parameters \(\theta\) to approximate \(G\), there are two approaches: non-autoregressive and autoregressive modeling.
**Non-Autoregressive Modeling.** In non-autoregressive modeling, the input function \(\Sigma\) is a _query location_ \((x,y,t)\) and the model directly outputs the solution at that position:
\[\hat{u}\left(x,y,t\right)=f_{\theta}((x,y,t),\ \Omega)\in\mathbb{R} \tag{6}\]
**Autoregressive Modeling.** Autoregressive modeling, which is similar to traditional numerical methods, learns the mapping of the flow field from the current time step to the next time step. Therefore, it predicts the flow field at each moment in temporal order:
\[\hat{u}\left(t\right)=f_{\theta}\left(u\left(t-\Delta t\right),\Omega\right) \in\mathbb{R}^{n\times m} \tag{7}\]
where \(\hat{u}\) is the predicted value at time \(t\), \(n\) and \(m\) are the height and width of the domain. In other words, the input function is \(\Sigma=u\left(t-\Delta t\right)\).
The learning goal is to find one \(\theta^{*}\) that minimizes the loss function \(\mathcal{L}\) on the training data \(\mathcal{T}\).
\[\theta^{*}=\arg\min_{\theta}\mathcal{L}(u\left(x,y,t\right),\hat{u}(x,y,t))\quad\forall x,y,t\in\mathcal{D},u\in\mathcal{T} \tag{8}\]
### Baselines
We evaluate on CFDBench several popular and performant neural networks that have been applied to solving PDEs in existing works. Although CFDBench can be used to evaluate both data-driven and physics-informed methods, our experiments are limited to the former, because most physics-informed methods enforce operating conditions through loss functions and therefore require retraining on unseen conditions.
We can generally predict the flow in two manners: non-autoregressively or autoregressively. The former directly predicts the output function value at a query location specified in the input. The latter predicts the field at the next time step given the field at the current time step. The two kinds are not directly comparable, so we discuss them separately.
From the perspective of the model architecture, we can categorize them into three types: (1) FFNs, (2) the DeepONet family, and (3) image-to-image models. The first category simply concatenates all inputs into one vector and maps that to the prediction space with an FFN. The second category
includes all variants of DeepONet [32]. The essence of this architecture is that the query location is independently encoded by a _trunk net_. This makes it possible to encode the input functions and other conditions without being limited to the shape or mesh of the output function domain, and to reuse that encoding to query the value of the output function at any location. The third category contains ResNet, U-Net, and FNO. These models accept an \(n\)-dimensional array and output another \(n\)-dimensional array, which is the architecture commonly used for image-to-image tasks; thus, we name this category image-to-image models. Table 6 compares all the baselines that we consider in this paper and Figure 3 schematically illustrates the types and shapes of the input and output of each model.2
Footnote 2: In this paper, we regard all models that accept a query location, have an independent network (i.e., the trunk net) for encoding query locations, and predict values at those locations as members of the DeepONet family. However, the differences between the DeepONet family and image-to-image models are subtle, and the two may be viewed as variants of one another.
### Non-Autoregressive Baselines
In non-autoregressive modeling, we refer to the operating condition \(\Omega\) as the input function.
| **Method** | **Auto.** | **Inp. Shape** | **Outp. Shape** | **Inputs** | **Outputs** |
| --- | --- | --- | --- | --- | --- |
| FFN | No | Any | Any | \((x,y,t),\Omega\) | \(\hat{u}(x,y,t)\) |
| DeepONet | No | Any | Any | \((x,y,t),\Omega\) | \(\hat{u}(x,y,t)\) |
| Auto-FFN | Yes | Any | Any | \(u_{\text{sample}}(t-\Delta t),\Omega,(x,y)\) | \(\hat{u}(x,y,t)\) |
| Auto-DeepONet | Yes | Any | Any | \(u_{\text{sample}}(t-\Delta t),\Omega,(x,y)\) | \(\hat{u}(x,y,t)\) |
| Auto-EDeepONet | Yes | Any | Any | \(u_{\text{sample}}(t-\Delta t),\Omega,(x,y)\) | \(\hat{u}(x,y,t)\) |
| Auto-DeepONetCNN | Yes | Grid | Any | \(u(t-\Delta t),\Omega,(x,y)\) | \(\hat{u}(x,y,t)\) |
| ResNet | Yes | Grid | Grid | \(u(t-\Delta t),\Omega\) | \(\hat{u}(t)\) |
| U-Net | Yes | Grid | Grid | \(u(t-\Delta t),\Omega\) | \(\hat{u}(t)\) |
| FNO | Yes | Grid | Grid | \(u(t-\Delta t),\Omega\) | \(\hat{u}(t)\) |

Table 6: Overview of the different baseline models we consider. **"Auto."** refers to whether the method is autoregressive. \(u_{\text{sample}}\) is a list of points sampled from \(u\).
Figure 3: Overview of the input and output types and shapes of each baseline model.
#### 4.3.1 FFN
FFN is the simplest form of non-autoregressive modeling. The coordinates of the query location and the input function are simply concatenated into one vector and fed to a chain of fully connected layers. Thus, the prediction is
\[\hat{u}(x,y,t)=f_{\theta}(\Omega||(x,y,t)),\]
where \(||\) is the concatenation operator. This model is depicted in Figure 4(a) and can be regarded as a data-driven version of PINN [36].
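A minimal PyTorch sketch of this baseline is given below; the layer widths, depth, and activation are illustrative rather than the tuned values used in our experiments:

```python
import torch
import torch.nn as nn

class FFN(nn.Module):
    """Maps the concatenation of the query location (x, y, t) and the
    operating parameters Omega to the predicted field value u_hat(x, y, t)."""

    def __init__(self, n_params=5, hidden=64, depth=4):
        super().__init__()
        layers, width = [], 3 + n_params   # (x, y, t) plus Omega
        for _ in range(depth):
            layers += [nn.Linear(width, hidden), nn.GELU()]
            width = hidden
        layers.append(nn.Linear(width, 1))
        self.net = nn.Sequential(*layers)

    def forward(self, xyt, omega):
        return self.net(torch.cat([xyt, omega], dim=-1))

# u_hat = FFN()(torch.rand(32, 3), torch.rand(32, 5))   # shape (32, 1)
```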
#### 4.3.2 DeepONet
[32] have shown that separating the encoding of the input function and the query location can reduce error. They are encoded by two separate FFNs, the branch net and the trunk net, and the outputs are aggregated by a dot product to produce the final prediction:
\[\hat{u}(x,y,t)=f_{B}(\Omega)\cdot f_{T}(x,y,t)+b, \tag{9}\]
where \(f_{B}\) and \(f_{T}\) are the branch and trunk net, and \(b\in\mathbb{R}\) is a trainable scalar that acts as the bias term. In other words, DeepONet is a specific case of FFN where each linear layer is cut in half, and each neuron can only see the operating parameters \(\Omega\) or only the query coordinates \((x,y,t)\).
Figure 4: The structure of each baseline model in this paper.
Furthermore, to improve the training speed of DeepONet, we can reuse the output of the branch net within each mini-batch. We sample \(k=1000\) points in each frame as labels. \(f_{B}(\Omega)\) is computed once, and each of the 1000 points \((x,y,t)\) is dotted with \(f_{B}(\Omega)\) before updating the model weights. Figure 4(b) illustrates the structure of DeepONet.
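A minimal PyTorch sketch of the branch-trunk structure is given below; the widths and latent size \(p\) are illustrative, and the branch output is computed once per case so it can be reused for all sampled query points:

```python
import torch
import torch.nn as nn

def mlp(d_in, d_out, hidden=64, depth=3):
    """A small fully connected network used for both branch and trunk nets."""
    layers, width = [], d_in
    for _ in range(depth - 1):
        layers += [nn.Linear(width, hidden), nn.GELU()]
        width = hidden
    layers.append(nn.Linear(width, d_out))
    return nn.Sequential(*layers)

class DeepONet(nn.Module):
    """Branch net encodes Omega, trunk net encodes (x, y, t); the prediction
    is the dot product of the two encodings plus a trainable bias (Eq. (9))."""

    def __init__(self, n_params=5, p=64):
        super().__init__()
        self.branch = mlp(n_params, p)
        self.trunk = mlp(3, p)
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, omega, xyt):
        # omega: (batch, n_params); xyt: (batch, k, 3) query points per case
        b = self.branch(omega)                                 # (batch, p)
        t = self.trunk(xyt)                                    # (batch, k, p)
        return (t * b.unsqueeze(1)).sum(dim=-1) + self.bias    # (batch, k)
```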
### Autoregressive Baselines
Autoregressive modeling is arguably more similar to traditional numerical solvers: the model predicts the flow state at the next time step given the previous time step, i.e., \(f_{\theta}:(u(t-\Delta t),\Omega)\mapsto u(t)\). Image-to-image models directly model \(f_{\theta}\), and different image-to-image models differ only in the implementation of \(f_{\theta}\).
#### 4.4.1 Autoregressive FFN
The autoregressive FFN is similar to the non-autoregressive version. The input field, operating conditions, and the query location are all concatenated and fed to an FFN, which predicts the current field at the query location:
\[\hat{u}(x,y,t)=f_{\theta}\left(u_{\text{sample}}||\Omega||(x,y)\right), \tag{10}\]
where \(u_{\text{sample}}\) refers to a list of field values sampled from \(u(t-\Delta t)\). This can be seen as a completely data-driven version of PINN [36]. Figure 4(a) depicts the structure of Auto-FFN.
#### 4.4.2 Autoregressive DeepONet
We also consider modifying DeepONet to allow it to generate the solution autoregressively, and we name this model **Auto-DeepONet**. The structure is shown in Figure 4(c). The input to the branch net (i.e., the input function) is \((u(t-\Delta t),\Omega)\), where \(u(t-\Delta t)\) is the last predicted velocity field and \(\Omega\) contains the operating condition parameters. The input to the trunk net is the spatial coordinates \((x,y)\) of the query location, while the target output of the model is the value of the velocity field in the next time frame at \((x,y)\), i.e., \(u(x,y,t)\). The model is formulated as follows.
\[\hat{u}(x,y,t)=f_{B}\left(u_{\text{sample}}(t-\Delta t)||\Omega \right)\cdot f_{T}(x,y)+b \tag{11}\]
#### 4.4.3 Autoregressive EDeepONet
EDeepONet (Enhanced DeepONet) [46] extends DeepONet's architecture to consider multiple input functions. EDeepONet has one branch net for encoding each input function independently, and the branch outputs are aggregated by an element-wise product. Since, in autoregression, the model conditions on two inputs, \(u(t-\Delta t)\) and \(\Omega\), we also evaluate an autoregressive version, **Auto-EDeepONet**. The prediction is modeled as follows.
\[\hat{u}(x,y,t)=\left[f_{B1}\left(u_{\text{sample}}(t-\Delta t) \right)\odot f_{B2}\left(\Omega\right)\right]\cdot f_{T}(x,y)+b \tag{12}\]
where \(\odot\) denotes the element-wise product.
In other words, EDeepONet is a specific case of DeepONet, where the branch net is split into two parts, each responsible for one input function, and the neural links between the two pieces are removed (or deactivated by setting them to zero). This structure is illustrated in Figure 4(d).
We do not evaluate the non-autoregressive version of EDeepONet because our preliminary experiments show that splitting \(\Omega\) has no significant impact on the ability of the neural network. However, in autoregression, the input includes \(u_{\text{sample}}(t-\Delta t)\), which is much larger than \(\Omega\), and simply concatenating the two vectors may cause the neural network to fail to learn dependence on \(\Omega\).
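A minimal sketch of the Auto-EDeepONet forward pass in Eq. (12) is given below; it assumes the `mlp` helper from the DeepONet sketch above, and the sample size and widths are illustrative:

```python
import torch
import torch.nn as nn
# Assumes the `mlp` helper defined in the DeepONet sketch above.

class AutoEDeepONet(nn.Module):
    """Two branch nets (sampled field and Omega) are combined by an element-wise
    product, then dotted with the trunk encoding of (x, y) as in Eq. (12)."""

    def __init__(self, n_sample=1000, n_params=5, p=64):
        super().__init__()
        self.branch_u = mlp(n_sample, p)       # encodes u_sample(t - dt)
        self.branch_omega = mlp(n_params, p)   # encodes Omega
        self.trunk = mlp(2, p)                 # encodes the query point (x, y)
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, u_sample, omega, xy):
        # u_sample: (batch, n_sample); omega: (batch, n_params); xy: (batch, k, 2)
        b = self.branch_u(u_sample) * self.branch_omega(omega)   # (batch, p)
        t = self.trunk(xy)                                        # (batch, k, p)
        return (t * b.unsqueeze(1)).sum(dim=-1) + self.bias       # (batch, k)
```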
#### 4.4.4 Autoregressive DeepONetCNN
We also experiment with a CNN as the feature extractor for the input field; we call this model **Auto-DeepONetCNN**. It is almost the same as Auto-DeepONet, but \(f_{B}\) is implemented with a CNN, because a CNN may be better at extracting features from a field defined on a lattice. Since a CNN requires a cuboid input, the
input to the branch net needs to be \(u(t-\Delta t)\) instead of \(u_{\text{sample}}(t-\Delta t)\). Similar to ResNet, U-Net, and FNO, \(\Omega\) is appended to \(u(t-\Delta t)\) as additional channels. The formulation is as follows.
\[\hat{u}(x,y,t)=\text{CNN}(u(t-\Delta t),\Omega)\cdot f_{T}(x,y)+b \tag{13}\]
#### 4.4.5 ResNet
A residual neural network (ResNet) is a CNN with residual connections proposed by [18], and it has shown excellent performance on many computer vision tasks. Residual connectivity can effectively alleviate the degradation problem of the neural network when the depth increases, thus enhancing the learning ability of the model.3 The model can be formalized as follows.4
Footnote 3: Interestingly, a ResNet block adds the input to the output of a CNN block, which is \(x+f(x)\) and is similar to many iterative numerical methods.
Footnote 4: \(f\circ g\) denotes the composite of \(f\) and \(g\).
\[\hat{u}(t)=\text{ResNetBlock}_{l}\circ\cdots\circ\text{ResNetBlock}_{1}(x),\qquad x=f_{in}(u(t-\Delta t),\Omega) \tag{14}\]
\[\text{ResNetBlock}_{i}(x)=x+\text{CNN}_{i}(x),\qquad i=1,\ldots,l \tag{15}\]
where \(\text{CNN}_{i}(\cdot)\) is a convolutional network.
There are many possible ways to arrange the ResNet blocks; this paper uses a chain of residual blocks of the same size.
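A minimal PyTorch sketch of such a residual chain is given below; the channel count, depth, and the assumption that the operating parameters are appended to the input as channels are illustrative:

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """One residual block: x -> x + CNN(x), resolution and channels preserved."""

    def __init__(self, channels=16):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.GELU(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return x + self.cnn(x)

class SimpleResNet(nn.Module):
    """f_in lifts the (field + parameter) channels, a chain of residual blocks
    follows, and f_out maps back to the predicted field at the next step."""

    def __init__(self, in_channels=6, hidden=16, depth=4):
        super().__init__()
        self.f_in = nn.Conv2d(in_channels, hidden, kernel_size=3, padding=1)
        self.blocks = nn.Sequential(*[ResBlock(hidden) for _ in range(depth)])
        self.f_out = nn.Conv2d(hidden, 1, kernel_size=3, padding=1)

    def forward(self, x):                  # x: (batch, in_channels, 64, 64)
        return self.f_out(self.blocks(self.f_in(x)))
```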
#### 4.4.6 U-Net
U-Net [40] is a CNN with an encoder-decoder structure, which performs very well in numerous image segmentation and image-to-image translation tasks. The encoder performs feature extraction and parameter reduction through down-sampling operations such as strided convolution and pooling, and the decoder uses the feature encodings to produce an image through up-sampling and channel concatenation, so as to achieve image generation or segmentation. Compared to ResNet, the down-sampling in U-Net reduces the number of parameters, and the up-sampling improves the globality of the convolutions, because after up-sampling a signal affects a larger region than it would without up-sampling. The structure of the U-Net used in this paper is illustrated in Figure 4f.
\[\hat{u}(t)=\text{UNet}(u(t-\Delta t),\Omega),\qquad\hat{y}=f_{out}(y_{0}) \tag{16}\]
\[x_{0}=f_{\text{in}}(u(t-\Delta t),\Omega),\qquad y_{0}=\text{UpConv}_{1}(y_{1},x_{0}) \tag{17}\]
\[x_{1}=\text{DownConv}_{1}(x_{0}),\qquad y_{1}=\text{UpConv}_{2}(y_{2},x_{1}) \tag{18}\]
\[\vdots\qquad\qquad\vdots \tag{19}\]
\[x_{l}=\text{DownConv}_{l-1}(x_{l-1}),\qquad y_{l}=\text{UpConv}_{l}(y_{l+1},x_{l}) \tag{20}\]
\[x_{l+1}=\text{DownConv}_{l}(x_{l}),\qquad y_{l+1}=x_{l+1} \tag{21}\]
where
\[\text{DownConv}_{i}(x_{i-1})=\text{Conv}(\text{DownSample}(x_{i-1})) \tag{22}\]
\[\text{UpConv}_{i}(y_{i},x_{i-1})=\text{Conv}(\text{UpSample}(y_{i})\,||\,x_{i-1}),\qquad i=1,\cdots,l, \tag{23}\]
\(\text{Conv}(\cdot)\) denotes a CNN, \(\text{UpSample}(\cdot)\) and \(\text{DownSample}(\cdot)\) denote the up-sampling and down-sampling functions, \(l\) denotes the number of U-Net blocks, and \(f_{in}(\cdot)\) and \(f_{out}(\cdot)\) denote two trainable mappings. This architecture is shown in Figure 4f.
#### 4.4.7 FNO
The Fourier neural operator (FNO) [29] is a neural network that parameterizes the convolution kernel in Fourier space. It can learn mappings between high-dimensional function spaces and performs especially well on problems with turbulent fluctuations. FNO first lifts the input function to a high-dimensional channel space through a shallow fully connected network, and then approximates the target mapping through Fourier layers containing the Fourier transform and its inverse. FNO has better globality than an ordinary CNN because any signal in Fourier space affects the output over the entire spatial domain. Figure 4e shows the structure of FNO, and it can be formalized as follows.
\[\hat{u}(t)=Q(h_{l+1}) \tag{24}\]
\[h_{i+1}=\text{FourierBlock}_{i}(h_{i})=\mathcal{F}^{-1}[R_{i}(\mathcal{F}[h_{i}])]+W_{i}(h_{i})+h_{i},\qquad i=1,\dots,l \tag{25}\]
\[h_{1}=P(u(t-\Delta t),\Omega) \tag{26}\]
where \(\mathcal{F}(\cdot)\) denotes the Fourier transform, \(P(\cdot)\), \(Q(\cdot)\), \(W_{i}(\cdot)\) are ordinary convolutional layers, and \(R_{i}(\cdot)\) is a \(1\times 1\) convolutional layer.
It is worth mentioning that, in the original paper of FNO [29], the input includes multiple time steps before the current time step, which provides additional information about the flow's state and may make inference easier. However, this limits the usability of the method. Therefore, in this work, we only consider the scenario where the input contains no more than one frame.
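A minimal PyTorch sketch of one Fourier block (Eq. (25)) is given below; for brevity it truncates to the lowest Fourier modes only, the channel and mode counts are illustrative, and the choice of a pointwise convolution for \(W_{i}\) is an assumption:

```python
import torch
import torch.nn as nn

class SpectralConv2d(nn.Module):
    """R_i: a learned linear transform applied to the lowest Fourier modes."""

    def __init__(self, channels=32, modes=12):
        super().__init__()
        self.modes = modes
        scale = 1.0 / channels
        self.weight = nn.Parameter(
            scale * torch.randn(channels, channels, modes, modes, dtype=torch.cfloat)
        )

    def forward(self, h):                              # h: (batch, c, H, W)
        h_ft = torch.fft.rfft2(h)                      # (batch, c, H, W//2 + 1)
        out_ft = torch.zeros_like(h_ft)
        m = self.modes
        out_ft[..., :m, :m] = torch.einsum(
            "bixy,ioxy->boxy", h_ft[..., :m, :m], self.weight
        )
        return torch.fft.irfft2(out_ft, s=h.shape[-2:])

class FourierBlock(nn.Module):
    """h -> F^{-1}[R(F[h])] + W(h) + h, as in Eq. (25)."""

    def __init__(self, channels=32, modes=12):
        super().__init__()
        self.spectral = SpectralConv2d(channels, modes)
        self.w = nn.Conv2d(channels, channels, kernel_size=1)  # W_i (assumed 1x1)

    def forward(self, h):
        return self.spectral(h) + self.w(h) + h
```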
### Conditioning on Operating Parameters
Most existing works on neural operators keep the operating parameters (\(\Omega\)) constant, and the input function, which is the IC, is the only input to the operator. In contrast, CFDBench considers varying the operating parameters while keeping the IC constant. Consequently, we need to make appropriate modifications to existing neural models for PDEs such that the predictions can be conditioned on the operating parameters.
For the autoregressive models, we treat the problem as a conditional image-to-image translation task, where the velocity field at the previous moment \(u(x,y,t-\Delta t)\) is the input image, the velocity field at the current moment \(u(x,y,t)\) is the target image, and the operating condition parameters \(\Omega\) are the condition. For simplicity, we add \(\Omega\) to the input as additional channels, one channel for each parameter. In this work, there are 5 parameters in \(\Omega\), so the input at position \((x,y)\) is \((u(x,y),u_{\mathcal{B}},\rho,\mu,h,w)\), where \(h,w\) are the height and width of the domain. For the flow around a cylinder, the model also needs to know the location and shape of the obstacle. To this end, we add a _mask_ channel where 0 indicates an obstacle at that position and 1 indicates no obstacle.
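A minimal sketch of this channel-wise conditioning is given below; the tensor shapes and the helper name are illustrative:

```python
import torch

def make_input(u_prev, omega, mask=None):
    """Stack the previous velocity field with the operating parameters
    broadcast as constant channels, plus an optional geometry mask channel."""
    # u_prev: (batch, 1, H, W); omega: (batch, n_params); mask: (batch, H, W)
    b, _, h, w = u_prev.shape
    params = omega.reshape(b, -1, 1, 1).expand(b, omega.shape[1], h, w)
    channels = [u_prev, params]
    if mask is not None:          # 0 = obstacle, 1 = no obstacle
        channels.append(mask.reshape(b, 1, h, w))
    return torch.cat(channels, dim=1)

# Example: a 64 x 64 field with 5 operating parameters -> 6 input channels.
x = make_input(torch.rand(8, 1, 64, 64), torch.rand(8, 5))
```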
### Loss Functions
During training, we use the normalized mean squared error (the NMSE defined below) as the training loss function to ensure that the model would prioritize minimizing the difference for labels with smaller absolute values.5 For evaluating, we also report the following three kinds of error values for comprehensiveness. We denote the label value with \(\mathbf{Y}\) and the predicted value with \(\hat{\mathbf{Y}}\).
Footnote 5: Preliminary experiments show that using a different loss function for training (e.g., using MSE instead of NMSE) does not impact the primary conclusions about the behaviors of the models that are drawn from the results. The only significant behavior change is that the loss function used for training will be smaller on the test data. Thus, this work only trains the baseline models using NMSE.
#### Mean Square Error (MSE)
\[\text{MSE}=\frac{1}{n}\sum_{i=1}^{n}(\mathbf{Y}_{i}-\hat{\mathbf{Y}}_{i})^{2} \tag{27}\]
#### Normalized Mean Square Error (NMSE)
\[\text{NMSE}=\frac{\sum_{i=1}^{n}(\mathbf{Y}_{i}-\hat{\mathbf{Y}}_{i})^{2}}{\sum_ {i=1}^{n}\mathbf{Y}_{i}^{2}} \tag{28}\]
#### Mean Absolute Error (MAE)
\[\text{MAE}=\frac{1}{n}\sum_{i=1}^{n}|\mathbf{Y}_{i}-\hat{\mathbf{Y}}_{i}| \tag{29}\]
As we will show with experiments in Section 5, one method may perform better than another method in terms of one metric, but perform worse in terms of another metric. Therefore, it is important for practitioners to select one or multiple metrics that best reflect their interests.
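For reference, a minimal NumPy sketch of the three metrics is given below:

```python
import numpy as np

def metrics(y_true, y_pred):
    """Return (MSE, NMSE, MAE) as defined in Eqs. (27)-(29)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    err = y_true - y_pred
    mse = np.mean(err ** 2)
    nmse = np.sum(err ** 2) / np.sum(y_true ** 2)
    mae = np.mean(np.abs(err))
    return mse, nmse, mae
```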
### Hyperparameter Search
The performance of the methods is dependent on the hyperparameters such as learning rate, number of training epochs, etc. Because our problem setting is significantly different from existing works, the optimal hyperparameters of each baseline model are likely very different from the ones found by the authors. We perform a hyperparameter search of the baseline models using the PROP subset of the cavity flow problem (84 different flows).
A more detailed description of the hyperparameter search process can be found in Appendix B. In summary, to make the methods comparable, we generally want to keep the number of parameters roughly the same.6 For ResNet, U-Net, and FNO, we try different depths and numbers of hidden channels. We also experiment with new ways to inject operating parameters. For FFN and the variants of DeepONet, we try different widths and depths of the hidden linear layers. Additionally, the learning rate is selected individually for each method based on the validation loss, and we always train until convergence.
Footnote 6: An alternative is to align the computational cost (for training and testing) of the models, which is another important practical concern in the application of CFD modeling methods.
#### 4.7.1 ResNet
For ResNet, we conducted a hyperparameter search on the depth \(d\) (i.e., the number of residual blocks) and hidden dimension \(h\) (i.e., the number of channels of the output of each residual block). We found that ResNet's ability to learn from flow problems is poor, and it quickly becomes unable to converge when \(d\) and \(h\) increase7. The setting with the lowest validation loss is \(d=4\) and \(h=16\), which we used to train on the data of flow in the tube, and the test loss is shown in Table 7. The result shows that ResNet's performance is generally slightly worse than the identity transformation. One plausible explanation for this is that ResNet is poor at modeling global dependencies, i.e., the input signal at any point after one convolution layer with a \(k\times k\) kernel can only spread around the original position within its neighboring \(k\times k\) range. Therefore, we do not consider ResNet in further discussions below.
Footnote 7: We regard situations where the prediction is worse than the identity transformation as failure to converge.
### Other Details
For autoregressive models, we always train the model on a single forward propagation step, while non-autoregressive models are trained on query points randomly sampled over the entire spatiotemporal domain. We tune the learning rate on the cavity PROP subset, and always have it decay by a factor of 0.9 every 20 epochs, which we empirically found to be effective. One may get better performance by tuning more hyperparameters, such as trying different learning rate schedulers and tuning them on the entire dataset; however, that is prohibitively expensive considering the size of the dataset.
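A minimal sketch of this schedule using PyTorch's built-in `StepLR` is given below; the model, loss, and initial learning rate are placeholders:

```python
import torch

# Placeholder model and optimizer; the initial learning rate is tuned per method.
model = torch.nn.Linear(8, 1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=20, gamma=0.9)

for epoch in range(200):
    optimizer.zero_grad()
    loss = model(torch.randn(32, 8)).pow(2).mean()   # placeholder training loss
    loss.backward()
    optimizer.step()
    scheduler.step()   # decay the learning rate by 0.9 every 20 epochs
```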
All methods were implemented using PyTorch deep learning framework, and all experiments were executed on one local computer with one RTX 3060 GPU. Most results are the average of three runs with different random seeds.
Figure 5: Prediction of the velocity field by the baseline models on the **cavity** and **tube** flow problems. This is the result of one forward propagation step.
Figure 6: Prediction of the velocity field by the baseline models on the **dam** and **cylinder** flow problems. This is the result of one forward propagation step.
## 5 Results
In this section, we offer an exposition of the predictive results generated by the baseline models. Our analysis commences with the prediction of the flow field distribution at a singular time step, subsequently progressing to the forecasting of multiple sequential time steps. To evaluate the predictive capabilities, we conduct a comparative assessment of both non-autoregressive and autoregressive models. Additionally, we provide a comparative analysis of the computational power consumption associated with each of these models.
### Single Step Prediction
Figure 5 and Figure 6 show the predicted velocity field of all baseline models on the three subsets of the four flow problems in CFDBench. From top to bottom, the first row is the input, the second row is the label, and the following are the predictions of non-autoregressive and autoregressive models. We find, in general, that the baseline models perform relatively well on cavity flow and dam flow while struggling on tube flow and cylinder flow, especially for non-autoregressive models.
It is important to recognize the difference between autoregressive and non-autoregressive models when analyzing the result. The task of the non-autoregressive model is to directly produce the value of the output function at a designated query location in the entire spatiotemporal domain. This should be significantly more difficult than the autoregressive model, which only needs to learn the mapping from the field at the previous time frame to the field at the current time frame.
Also, the autoregressive models require that the input and output functions be represented with a grid, which limits their flexibility and may result in loss of information on regions where the field value changes sharply for small spatial changes. Furthermore, the non-autoregressive model has better mesh-independence, because the model can output the predicted value of the output function at any location.
\begin{table}
\begin{tabular}{l|c c c c c c c} \hline \hline Method & (1) & (2) & (3) & (4) & (5) & (6) & (7) \\ \hline \multicolumn{8}{c}{NMSE} \\ \hline Identity & 0.100 & **0.108** & **0.076** & **0.108** & 0.097 & **0.112** & 0.111 \\ ResNet & **0.065** & 0.147 & 0.863 & 0.200 & **0.094** & 0.156 & **0.080** \\ \hline \multicolumn{8}{c}{MSE} \\ \hline Identity & 0.343 & **0.031** & **0.029** & 0.149 & 0.028 & **0.347** & 0.164 \\ ResNet & **0.065** & 0.044 & 0.500 & **0.119** & **0.027** & 0.339 & **0.058** \\ \hline \multicolumn{8}{c}{MAE} \\ \hline Identity & 0.166 & **0.076** & **0.057** & **0.120** & **0.067** & **0.167** & **0.119** \\ ResNet & **0.112** & 0.146 & 0.624 & 0.166 & 0.098 & 0.296 & 0.130 \\ \hline \hline \end{tabular}
\end{table}
Table 7: The validation loss of ResNet and the identity transformation for the 7 subsets (see Section 5) in the cavity flow problem. The better result is highlighted in **bold**.
Figure 7: Summary of the performance of autoregressive baseline methods on the four problems (with all cases). FNO’s result on the dam flow problem is removed because the error is too large and including it would make the plot less intelligible. The bar chart of the cylinder flow (d) is in logarithmic scale.
This has great significance for the study of many problems with complex geometries. In addition, non-autoregressive inference may be much more efficient8 because it can predict values at any time frame, while autoregressive models need to propagate one time step at a time. In summary, the autoregressive and non-autoregressive models cannot be directly compared against each other: non-autoregressive inference is generally much faster for long-range prediction, but the corresponding learning task is significantly more difficult.
Footnote 8: Efficient in terms of inference speed, compared with autoregressive models of roughly the same size.
#### 5.1.1 Non-Autoregressive Modeling
Table 8 shows the test results of FFN and DeepONet on the four problems and their corresponding seven subsets in CFDBench. Contrary to the observations by [32], we find that FFN generally has better generalization ability than DeepONet in most cases. In some cases, the error of FFN is several orders of magnitude smaller than that of DeepONet. In contrast, in DeepONet's best cases, it still only has a marginal accuracy gain over FFN. However, we believe this is not surprising because DeepONet is a specific case of FFN.
We also observe that the PROP subset is generally easier than the other subsets. This is likely because physical properties affect the velocity less than other operating parameters, making the train-test domain gap smaller. With varying BCs and geometries, DeepONet suffers from severe overfitting, producing fields with little resemblance to the labels. With varying BCs, it is prone to predicting the steady-state velocity distribution, while with varying geometries it tends to behave as an identity transformation.
\begin{table}
\begin{tabular}{l|l|c c|c c|c c} \hline \hline \multirow{2}{*}{Problem} & \multirow{2}{*}{Subset} & \multicolumn{2}{c|}{NMSE} & \multicolumn{2}{c|}{MSE} & \multicolumn{2}{c}{MAE} \\ & & FFN & DeepONet & FFN & DeepONet & FFN & DeepONet \\ \hline \multirow{8}{*}{Cavity} & **(1) PROP** & **0.0099592** & 0.0291865 & **0.0473762** & 0.1693782 & **0.1111506** & 0.5680576 \\ & **(2) BC** & **0.0023445** & 0.1110036 & **0.2437581** & 13.2787848 & **0.2882956** & 1.7553163 \\ & **(3) GEO** & 0.6239881 & **0.5806319** & **1.8697622** & 1.9919338 & **0.9227801** & 0.9277460 \\ & **(4) P+BC** & **0.0084082** & 0.0799971 & **0.2003079** & 3.3371139 & **0.2580179** & 0.8954092 \\ & **(5) P+G** & 0.1242569 & **0.0892609** & 0.3536154 & **0.3205143** & **0.2056559** & 0.2610952 \\ & **(6) BC+G** & **0.1161350** & 0.2098982 & **1.1426576** & 10.0195319 & **0.5412614** & 1.5829833 \\ & **(7) All** & **0.0221644** & 0.0646872 & **0.7857684** & 4.0079125 & **0.4356092** & 1.0548950 \\ \hline \multirow{8}{*}{Tube} & **(1) PROP** & **0.0004197** & 0.0039994 & **0.0002522** & 0.0026946 & **0.0078708** & 0.0315951 \\ & **(2) BC** & **19.2247428** & 25.8505255 & 1.4869019 & **1.3689488** & 0.7780905 & **0.7297134** \\ & **(3) GEO** & **0.1674582** & 0.1681485 & 0.1735853 & **0.1708268** & **0.2698841** & 0.3242756 \\ & **(4) P+BC** & **3.7321264** & 5.8203221 & 0.5254855 & **0.4897547** & 0.3910940 & **0.3403379** \\ & **(5) P+G** & 0.6573232 & **0.5961287** & **0.112503** & 0.1187899 & **0.1520187** & 0.1924667 \\ & **(6) BC+G** & 0.6412595 & **0.5119403** & **1.7430317** & 2.1646790 & **0.8040325** & 0.867828 \\ & **(7) All** & 3.0935680 & **0.3437079** & 0.5307588 & **0.4553377** & 0.3303886 & **0.3416610** \\ \hline \multirow{8}{*}{Dam} & **(1) PROP** & **0.0004000** & 0.0205820 & **0.0002025** & 0.0104605 & **0.0080647** & 0.0602617 \\ & **(2) BC** & 0.3882083 & **0.0145171** & 0.1656223 & **0.0098575** & 0.2962269 & **0.0512015** \\ & **(3) GEO** & **0.0408200** & 0.0438950 & **0.0101389** & 0.0109773 & **0.0449910** & 0.0517545 \\ \cline{1-1} & **(4) P+BC** & 0.3830672 & **0.0048370** & 0.0522189 & **0.0019914** & 0.1178206 & **0.0223580** \\ \cline{1-1} & **(5) P+G** & **0.0190430** & 0.0567282 & **0.0060118** & 0.0231003 & **0.0319056** & 0.0772180 \\ \cline{1-1} & **(6) BC+G** & **0.0982004** & 0.3650784 & **0.0431119** & 0.1924977 & **0.1442159** & 0.3337956 \\ \cline{1-1} & **(7) All** & 0.1694195 & **0.0686736** & 0.0705092 & **0.0270029** & 0.1362603 & **0.1007961** \\ \hline \multirow{8}{*}{Cylinder} & **(1) PROP** & **0.0007879** & 0.0021776 & **0.0008786** & 0.0024212 & **0.0141193** & 0.0254937 \\ \cline{1-1} & **(2) BC** & **0.0108285** & 9.7361195 & **0.0682347** & 4.0023353 & **0.1358656** & 1.5573399 \\ \cline{1-1} & **(3) GEO** & **0.1405526** & 108.5875535 & **0.1648840** & 119.6764528 & **0.2541922** & 5.7167007 \\ \cline{1-1} & **(4) P+BC** & 0.8656293 & **0.2141155** & 0.9652876 & **0.4134728** & **0.2702630** & 0.4543390 \\ \cline{1-1} & **(5) P+G** & **0.0249946** & 0.1252280 & **0.0290181** & 0.1412260 & **0.0731633** & 0.2759877 \\ \cline{1-1} & **(6) BC+G** & **0.0560368** & 0.0570367 & **0.0899771** & 0.0966049 & 0.1741357 & **0.1707578** \\ \cline{1-1} & **(7) All** & **0.0281058** & 2.3627123 & **0.047288** & 3.0325988 & **0.1155052** & 1.2104118 \\ \hline \hline \end{tabular}
\end{table}
Table 8: Main test results of non-autoregressive methods.
#### 5.1.2 Autoregressive Modeling
Figure 7 shows the test NMSE of the autoregressive models on the four flow problems (with all cases), this serves as a comprehensive summary of the performance of the autoregressive baselines. The complete result of our experiments is listed in Table 6, which contains the test NMSE, MSE, and MAE of each autoregressive model on each of the seven subsets of the four problems in CFDBench.
In general, Auto-FFN and autoregressive models from the DeepONet family are at best slightly better than the identity transformation, which means they often learn to output the input as their predictions.
In cavity flow and tube flow, U-Net demonstrates superior performance due to its encoding-decoding structure in the spatial domain, which enables it to capture sharp changes in the velocity field more effectively. On the other hand, the MSE of U-Net and FNO is small while the MAE is large. This is because the velocities generally have small absolute values (\(u<1\)), and errors smaller than one shrink further when squared.
In dam flow prediction, the DeepONet family generally prevails while the non-convergence phenomenon is observed in FNO (FNO's result is excluded from the bar chart because the error is too large). The presence of gravity as a dominant physical force in dam flow suggests that the DeepONet family may be more effective in handling PDEs with source terms.
Both image-to-image models perform the best in the cylinder flow (MSE \(\sim 10^{-5}\), MAE \(\sim 3\times 10^{-3}\)), and in this dataset, FNO is better than U-Net. We conjecture this is because FNO is endowed with an ability to extract the characteristics of the periodic vortex more effectively by learning in the frequency domain.
For the tube flow problem, U-Net's predictions have horizontal stripe noises while FNO manifests vertical pattern noise at \(t=0\). For the cylinder flow problem, we can see from the prediction that although FNO's test loss is very low, it produces visible noises. This is because, in FNO, high frequencies are discarded to reduce the computational cost, as a result, it struggles to model flat regions and sharp changes. This also implies that the loss functions we have considered (which are also used in many previous works) may not be good at capturing all the artifacts of various methods.
### Multiple Time Step Extrapolation
One important characteristic of traditional numerical methods is that they can extrapolate to any time points through an arbitrary number of forward propagation steps (provided that the iterative process converges). Consequently, it is desirable for data-driven deep learning methods to effectively generalize to time steps beyond those encountered during the training phase. For non-autoregressive models, we can simply query points beyond the temporal range of the training distribution, but for autoregressive models, since predictions depend on the previous predictions, errors may accumulate over multiple forward propagation steps [3].
Figure 8 illustrates the error dynamics of the baseline models concerning the number of propagation steps. As expected, the errors of non-autoregressive models are stable with respect to the time step. In fact, with the exception of DeepONet in tube flow, non-autoregressive models generally have lower errors at later time points. This is because the change in the velocity field in earlier time steps is more drastic.
Concerning autoregressive models, in every problem, certain models exhibit significant error accumulation. One illustrative example is observed in the tube flow problem, where FNO's error increases by 100 times within just 15 forward propagation steps. This adheres to our intuition. Perhaps surprisingly, in some cases, the errors of autoregressive models can decrease over time, which means that the prediction may have a lower error than the input itself. In other words, some autoregressive models are able to utilize the operating conditions to correct themselves.
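A minimal sketch of how such rollout errors can be measured is given below; the model signature `model(u, omega)` and the use of NMSE are illustrative assumptions:

```python
import torch

@torch.no_grad()
def rollout_errors(model, u0, omega, u_true_seq):
    """Roll the model forward autoregressively from the initial field and
    record the NMSE against the reference trajectory at every step."""
    u, errors = u0, []                       # u0: (1, 1, H, W)
    for u_true in u_true_seq:                # list of reference frames
        u = model(u, omega)                  # one forward propagation step
        nmse = ((u - u_true) ** 2).sum() / (u_true ** 2).sum()
        errors.append(nmse.item())
    return errors
```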
To address the issue of error accumulation, as suggested by [3], one approach is to train multiple models with varying step sizes. An alternative mitigation strategy involves imposing physical constraints on the model, effectively rendering it "physics-informed". The details of incorporating physics principles into neural networks fall beyond the scope of this study.
### Computational Cost
For more complex flow problems, traditional numerical methods can be very expensive in terms of computational cost, often requiring days and even months to run simulations. It has been shown that deep learning methods can be multiple orders of magnitude faster than numerical methods [29, 32, 3], which is one of the primary advantages of data-driven methods [36].
Different from traditional numerical methods, deep learning methods also involve a training procedure, which can be very time-consuming, and a set of parameters, which can be very memory-consuming. Thus, we need to consider these two aspects in addition to the inference time. We measure the computational cost of each baseline model in terms of time and memory usage during training and inference. The results are listed in Table 9. The models are implemented with PyTorch and executed on GPU. The statistics for training are measured with a batch size of 32, and for inference we use a batch size of 1. The experiment was conducted on one computer with an i7-12700F CPU and an RTX 3060 GPU.
Figure 8: The error of different baselines as a function of time steps, given only the operating parameters \(\Omega\) and the initial conditions. The \(y\)-axis is on a logarithmic scale. ResNet is not included because its error is too high and including its line would make the figure less intelligible.
From the result, we see that different models have very different computational costs, especially during training. Auto-FFN is around 21 times slower than Auto-DeepONet in training, despite having only double the number of parameters and no significant difference in prediction error. This is intuitive because, as mentioned in Section 4.3.2, by reusing the output of the branch net within one mini-batch, DeepONet can significantly improve the training efficiency. Another important observation from this result is that autoregressive models generally have many more parameters compared to non-autoregressive models, but the two kinds of models have comparable training computational costs. This is because autoregressive baselines predict the entire output function with one forward pass, while non-autoregressive baselines predict each data point of the output function one by one.
On the other hand, during inference, the time needed for the models to perform one forward propagation (or one query for non-autoregressive models) is very similar, all within the range of 5 to 10 ms. This is much faster than the numerical method employed for the generation of this dataset, which takes around 1 second for every frame.
## 6 Conclusions
We have constructed CFDBench, the first benchmark for the training and evaluation of data-driven deep learning methods for CFD. In contrast to existing works, the dataset contains multiple BCs, fluid physical properties, and domain geometries, covering four classical flow problems. We evaluate the ability of popular neural operators to solve PDEs on this benchmark, and the result shows that although these methods perform well on simple dummy problems, they exhibit limited generalization ability on more challenging dimensions, and thus there is still much room for improvement. We are convinced that our dataset provides an important first step towards better designing data-driven operators for CFD.
\begin{table}
\begin{tabular}{l|r|r r|r r} \hline \hline & & \multicolumn{2}{c|}{**Training**} & \multicolumn{2}{c}{**Inference**} \\
**Method** & **\# Param.** & **Time (min)** & **Mem. (MB)** & **Time (ms)** & **Mem. (MB)** \\ \hline \multicolumn{5}{c}{Non-Autoregressive Models} \\ \hline FFN & 72K & 30 & 443 & 6.4 & 7.2 \\ DeepONet & 143K & 28 & 355.1 & 7.2 & 10.3 \\ \hline \multicolumn{5}{c}{Autoregressive Models} \\ \hline Auto-FFN & 1,102K & 313 & 4,127 & 7.5 & 135.9 \\ Auto-DeepONet & 552K & 15 & 151 & 6.0 & 5.6 \\ Auto-EDeepONet & 623K & 16 & 153 & 6.0 & 5.9 \\ Auto-DeepONetCNN & 743K & 100 & 1,367 & 10.3 & 38.9 \\ ResNet & 522K & 105 & 1,073 & 10.5 & 5.5 \\ FNO & 1,189K & 43 & 475 & 9.7 & 13.6 \\ U-Net & 1,095K & 33 & 224 & 11.0 & 46.1 \\ \hline \hline \end{tabular}
\end{table}
Table 9: Computational cost of the different baseline models on the cavity flow data, PROP subset. Training time refers to the time required for training before using the model; inference time refers to the time required for one forward propagation (for autoregressive models) or the prediction at one query point (for non-autoregressive models).
\begin{table}
\begin{tabular}{l|c c c c c c c} \multicolumn{1}{c}{ Problem 1: Cavity Flow} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} \\ \hline
**Method** & **(1) PROP** & **(2) BC** & **(3) GEO** & **(4) P+B** & **(5) P+G** & **(6) B+G** & **(7) All** \\ \hline \multicolumn{10}{c}{Test NMSE} \\ \hline Identity & 0.0008949 & 0.0006532 & 0.0073354 & 0.0026440 & 0.0012782 & 0.0019072 & 0.0014130 \\ Auto-FNN & 0.0008947 & 0.0006536 & 0.0073358 & 0.0026441 & 0.0012785 & 0.0019086 & 0.0014138 \\ Auto-DeepONet & 0.0008465 & 0.0006478 & 0.0071480 & 0.0025767 & 0.0012198 & 0.0019119 & 0.0013954 \\ Auto-CleepONet & 0.0008953 & 0.0006539 & 0.0239769 & 0.0026405 & 0.0050122 & 0.0096511 & 0.0014756 \\ Auto-DeepONetCNN & 0.0007973 & 0.0006152 & **0.0033303** & 0.0016539 & 0.0009240 & 0.0011091 & 0.0010203 \\ FNO & 0.0004622 & 0.0006068 & 0.0097830 & 0.0006725 & 0.0015670 & 0.0019072 & 0.0005058 \\ U-Net & **0.0002815** & **0.0001159** & 0.0056645 & **0.0001383** & **0.0008825** & **0.0009481** & **0.0004166** \\ \hline \multicolumn{10}{c}{Test MSE} \\ \hline Identity & 0.0044942 & 0.0546373 & 0.0180946 & 0.0990002 & 0.0045866 & 0.0714307 & 0.0641640 \\ Auto-FNN & 0.0044936 & 0.0546386 & 0.0180943 & 0.0990127 & 0.0045873 & 0.0714420 & 0.0641692 \\ Auto-DeepONet & 0.0042624 & 0.0536015 & 0.0179865 & 0.0976493 & 0.0044118 & 0.0710621 & 0.0638475 \\ Auto-CleepONet & 0.0044994 & 0.0547824 & 0.0669539 & 0.0989851 & 0.0156476 & 0.0950095 & 0.0644919 \\ Auto-DeepONetCNN & 0.0043076 & 0.0531823 & 0.0125515 & 0.0920075 & 0.0043748 & 0.0709266 & 0.0632696 \\ FNO & **0.0021805** & 0.0144506 & 0.0248771 & 0.0212877 & 0.0039921 & 0.0517444 & 0.0176914 \\ U-Net & 0.0044942 & **0.0083046** & **0.0118261** & **0.0064567** & **0.0017226** & **0.0210355** & **0.0158059** \\ \hline \multicolumn{10}{c}{Test MAE} \\ \hline Identity & 0.0181955 & 0.0506039 & 0.0297850 & 0.0850075 & 0.0181359 & 0.0564395 & 0.0546747 \\ Auto-FNN & 0.0182054 & 0.0507490 & 0.0298784 & 0.0854521 & 0.0183282 & 0.0570768 & 0.0552039 \\ Auto-DeepONet & 0.0192833 & 0.0540527 & 0.0327312 & 0.0814217 & 0.0198544 & 0.0663280 & 0.0566274 \\ Auto-CleepONet & 0.0200762 & 0.0591863 & 0.1748586 & 0.0869751 & 0.0522350 & 0.0814745 & 0.0600705 \\ Auto-DeepONetCNN & 0.0210971 & 0.0542496 & **0.0222548** & 0.0792672 & 0.0201459 & 0.0571482 & 0.0584715 \\ FNO & 0.0164622 & 0.0503310 & 0.0570261 & 0.0512820 & 0.0272561 & 0.0941030 & 0.0569002 \\ U-Net & **0.0103330** & **0.0001159** & 0.0328206 & **0.0422648** & **0.0155698** & **0.0325585** & **0.0319145** \\ \hline \multicolumn{10}{c}{Problem 2: Tube Flow} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} \\ \hline \multicolumn{10}{c}{Test MSE} \\ \hline Identity & 0.0181580 & 0.1001696 & 0.0763603 & 0.1089607 & 0.0976491 & 0.1122125 & 0.1111430 \\ Auto-FNN & 0.0926980 & 0.1363334 & 0.0712057 & 0.1032522 & 0.0912989 & 0.1062881 & 0.1056823 \\ Auto-DeepONet & 0.0579279 & 0.0587133 & 0.0582056 & 0.0627424 & 0.0642253 & 0.0652362 & 0.0647747 \\ Auto-CleepONet & 0.0523948 & 0.0849620 & 0.0577905 & 0.0838474 & 0.0641345 & 0.0860665 & 0.0778912 \\ Auto-DeepONetCNN & 0.0366433 & 0.0588061 & 0.0327204 & 0.0559905 & 0.0394940 & 0.0696541 & 0.0548516 \\ FNO & **0.0003789** & **0.0374976** & 0.0295622 & **0.0053018** & 0.0272909 & 0.0207228 & 0.0053062 \\ U-Net & 0.0018705 & 5.0228938 & **0.0291472** & 0.0111089 & **0.0118453** & **0.0190382** & **0.0031894** \\ \hline \multicolumn{10}{c}{Test MSE} \\ \hline Identity & 0.0317068 & 0.3432079 & 0.0298840 & 0.1495200 & 0.0287833 & 0.3478090 & 0.1642216 \\ Auto-FNN & 0.0279299 & 0.3017316 & 0.02085062 & 0.1298233 & 0.0259540 & 0.3374500 & 0.1554814 \\ 
Auto-DeepONet & 0.0169327 & 0.1229224 & 0.022923 & 0.0635457 & 0.0189828 & 0.1395774 & 0.0723492 \\ Auto-CleepONet & 0.0165697 & 0.2007642 & 0.0209080 & 0.0929376 & 0.0175065 & 0.2476731 & 0.0973665 \\ Auto-DeepONetCNN & 0.0268636 & 0.2177070 & 0.0266211 & 0.1133375 & 0.0248359 & 0.2599603 & 0.1031608 \\ FNO & **0.0001121** & **0.0025932** & **0.0120422** & **0.0007641** & 0.0057142 & **0.0123058** &
## Data Availability
Data will be made available upon request.
|
2307.16485 | Parameter Inference for Degenerate Diffusion Processes | We study parametric inference for ergodic diffusion processes with a
degenerate diffusion matrix. Existing research focuses on a particular class of
hypo-elliptic SDEs, with components split into `rough'/`smooth' and noise from
rough components propagating directly onto smooth ones, but some critical model
classes arising in applications have yet to be explored. We aim to cover this
gap, thus analyse the highly degenerate class of SDEs, where components split
into further sub-groups. Such models include e.g. the notable case of
generalised Langevin equations. We propose a tailored time-discretisation
scheme and provide asymptotic results supporting our scheme in the context of
high-frequency, full observations. The proposed discretisation scheme is
applicable in much more general data regimes and is shown to overcome biases
via simulation studies also in the practical case when only a smooth component
is observed. Joint consideration of our study for highly degenerate SDEs and
existing research provides a general `recipe' for the development of
time-discretisation schemes to be used within statistical methods for general
classes of hypo-elliptic SDEs. | Yuga Iguchi, Alexandros Beskos, Matthew Graham | 2023-07-31T08:30:43Z | http://arxiv.org/abs/2307.16485v3 | # Parameter Inference for Degenerate Diffusion Processes
###### Abstract
We study parametric inference for hypo-elliptic Stochastic Differential Equations (SDEs). Existing research focuses on a particular class of hypo-elliptic SDEs, with components split into 'rough'/'smooth' and noise from rough components propagating directly onto smooth ones, but some critical model classes arising in applications have yet to be explored. We aim to cover this gap, thus analyse the _highly degenerate_ class of SDEs, where components split into further sub-groups. Such models include e.g. the notable case of generalised Langevin equations. We propose a tailored time-discretisation scheme and provide asymptotic results supporting our scheme in the context of high-frequency, full observations. The proposed discretisation scheme is applicable in much more general data regimes and is shown to overcome biases via simulation studies also in the practical case when only a smooth component is observed. Joint consideration of our study for highly degenerate SDEs and existing research provides a general'recipe' for the development of time-discretisation schemes to be used within statistical methods for general classes of hypo-elliptic SDEs.
Stochastic Differential Equation; hypo-elliptic Diffusion; Hormander's Condition; Partial Observations; Generalised Langevin Equation.
## 1 Introduction
This work addresses the statistical calibration of a wide class of hypo-elliptic diffusions. Stochastic Differential Equations (SDEs) are widely used as an effective tool to describe dynamics of the time evolution of phenomena of interest across a multitude of disciplines. Consider SDE models of the following general form:
\[dX_{t}=V_{0}(X_{t},\theta)dt+\sum_{j=1}^{d}V_{j}(X_{t},\theta)dB_{j,t},\qquad X _{0}=x\in\mathbb{R}^{N}, \tag{1}\]
with \(V_{j}(\cdot,\theta):\mathbb{R}^{N}\rightarrow\mathbb{R}^{N}\), \(0\leq j\leq d\), for parameter \(\theta\), driven by the \(d\)-dimensional standard Brownian motion \(B=(B_{1,t},\ldots,B_{d,t})\), \(t\geq 0\), defined upon the filtered probability space \((\Omega,\mathcal{F},\{\mathcal{F}_{t}\}_{t\geq 0},\mathbb{P})\), with \(d,N\geq 1\). Several theoretical results about parameter inference for SDEs have been established under positive definiteness conditions on the diffusion matrix \(a=VV^{\top}\in\mathbb{R}^{N\times N}\), with \(V=[V_{1},\ldots,V_{d}]\). In such a case, the solution of (1) is referred to as an _elliptic_ diffusion. However, many important applications give rise to diffusion processes that allow matrix \(a\) to be degenerate. We give below examples for such classes of SDEs. Under the _Hormander's condition_, discussed later in this work, the process defined via the SDE (1) with degenerate diffusion matrix \(a\) permits the existence of a density with respect to (w.r.t.) the Lebesgue measure for its transition dynamics, and is referred to as a _hypo-elliptic_ diffusion.
#### 1.2. A Motivating Class of Models
The _non-Markovian Langevin equation_ (or _generalised Langevin equation_ (GLE)) is used in a wide range of applications due to its effectiveness in describing complex stochastic systems with memory effects (thus, of non-Markovian structure). Examples include dynamics observed in protein folding (Ayaz et al., 2021), cancer cells (Mitterwallner et al., 2020), flocks of birds (Ferretti et al., 2020), molecules (Ness et al., 2015) and coarse-grained systems (Kalliadasis et al., 2015; Li et al., 2017). For simplicity, we consider here a one-dimensional particle with unit mass, and denote its position and momentum, respectively, by \((q,p)\). Then, a GLE describes the particle dynamics as follows:
\[\dot{q}_{t}=p_{t};\qquad\dot{p}_{t}=-U^{\prime}(q_{t})-\int_{0}^{t}K(t-s)\,p_{s}\,ds+\eta_{t},\tag{GLE}\]
where \(U:\mathbb{R}\rightarrow\mathbb{R}\) is an appropriate potential function, \(K:[0,\infty)\rightarrow\mathbb{R}\) is the _memory kernel_, and \(\eta_{t}\) is a zero-mean stationary Gaussian noise with auto-correlation specified via a fluctuation-dissipation relation in equilibrium, i.e., \(\mathbb{E}[\eta_{s}\eta_{t}]=K(t-s)\), \(s,t>0\), given a unit temperature.
\[K(t)=\alpha\delta(t)-\langle e^{-tA}\lambda,\lambda\rangle,\qquad\alpha>0, \quad\lambda\in\mathbb{R}^{m},\quad A\in\mathbb{R}^{m\times m},\quad m\geq 1,\]
with \(\delta=\delta(\cdot)\) the Dirac function. In this case, the original system in (GLE) can be equivalently re-written as the following Markovian one:
\[\begin{bmatrix}dq_{t}\\ dp_{t}\\ ds_{t}\end{bmatrix} =\begin{bmatrix}p_{t}\\ -\nabla U(q_{t})-\alpha p_{t}-\langle\lambda,s_{t}\rangle\\ -p_{t}\lambda-As_{t}\end{bmatrix}dt+\sum_{j=1}^{m+1}\begin{bmatrix}0\\ \sigma_{j}\end{bmatrix}dB_{j,t},\] (QGLE-I) \[s_{0} \sim\mathscr{N}\left(\mathbf{0}_{m},\,I_{m}\right),\]
with \(s_{t}\in\mathbb{R}^{m}\) an auxiliary component and \(\sigma_{j}\in\mathbb{R}^{m+1}_{+}\), \(1\leq j\leq m+1\). Another typical choice for the memory kernel is the following:
\[K(t)=\langle e^{-tA}\lambda,\lambda\rangle,\]
in which case the equivalent QGLE writes as:
\[\begin{bmatrix}dq_{t}\\ dp_{t}\\ ds_{t}\end{bmatrix} =\begin{bmatrix}p_{t}\\ -\nabla U(q_{t})+\langle\lambda,s_{t}\rangle\\ -p_{t}\lambda-As_{t}\end{bmatrix}dt+\sum_{j=1}^{m}\begin{bmatrix}0\\ 0\\ \sigma_{j}\end{bmatrix}dB_{j,t},\] (QGLE-II) \[s_{0} \sim\mathscr{N}\left(\mathbf{0}_{m},\,I_{m}\right),\]
with \(\sigma_{j}\in\mathbb{R}^{m}_{+}\). Class (QGLE-I) is investigated, e.g., in Ceriotti et al. (2010). Then, class (QGLE-II) is popular, e.g., in thermodynamics modelling, see Pavliotis (2014); Leimkuhler and Matthews (2015). Class
(QGLE-I) belongs in (Hypo-I), with the rough component comprised of \(p_{t}\), \(s_{t}\). For class (QGLE-II), the rough component consists only of \(s_{t}\), with \(q_{t}\) depending on the smooth component \(p_{t}\) and not on \(s_{t}\). Thus, (QGLE-II) lies within class (Hypo-II). Recently, parametric inference for GLEs within the QGLE setting, under discrete-time observations of the smooth component \(q_{t}\), has been of interest for applications, see e.g. Ferretti et al. (2020); Vroylandt et al. (2022).
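To make the structure of (QGLE-II) concrete, the following is a minimal simulation sketch using a plain Euler-Maruyama discretisation; it is not the locally Gaussian scheme developed later in this paper, and the potential \(U(q)=q^{2}/2\) and the parameter values for \(\lambda\), \(A\), \(\sigma_{j}\) are purely illustrative:

```python
import numpy as np

def simulate_qgle2(T=10.0, dt=1e-3, m=2, seed=0):
    """Euler-Maruyama simulation of (QGLE-II) with U(q) = q^2 / 2."""
    rng = np.random.default_rng(seed)
    lam = np.ones(m)                    # lambda in R^m (illustrative)
    A = np.eye(m)                       # A in R^{m x m} (illustrative)
    Sigma = 0.5 * np.eye(m)             # columns are the sigma_j (illustrative)
    n = int(T / dt)
    q, p = 0.0, 1.0
    s = rng.standard_normal(m)          # s_0 ~ N(0, I_m)
    path = np.empty((n + 1, 2 + m))
    path[0] = np.concatenate(([q, p], s))
    for k in range(n):
        dB = np.sqrt(dt) * rng.standard_normal(m)
        q_new = q + p * dt                               # dq = p dt
        p_new = p + (-q + lam @ s) * dt                  # -U'(q) + <lambda, s>
        s_new = s + (-p * lam - A @ s) * dt + Sigma @ dB
        q, p, s = q_new, p_new, s_new
        path[k + 1] = np.concatenate(([q, p], s))
    return path                          # columns: q, p, s_1, ..., s_m

path = simulate_qgle2()
```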
### Related Works and Objectives
In this paper we investigate parameter estimation for the two classes of degenerate diffusion processes, (Hypo-I) and (Hypo-II), given discrete-time observations obtained at instances \(0\leq t_{0}<t_{1}<\cdots<t_{n}\), \(n\in\mathbb{N}\), with equidistant observation intervals \(\Delta_{n}:=t_{i}-t_{i-1}\), \(1\leq i\leq n\). In particular, we consider the following scenarios for the observations:
1. _Complete observation regime_, i.e., with all \(N\) co-ordinates of \(X_{t}\) being observed.
2. _Partial observation regime_, i.e., with a strict subset of co-ordinates being observed. In agreement with applications, in this setting only the upper-most smooth component is assumed to be observed.
Within class (Hypo-I), and for the complete observation regime considered in the produced asymptotics, Ditlevsen and Samson (2019) and Gloter and Yoshida (2020) develop judicious discrete-time (conditionally) Gaussian approximations for the transition distribution of an SDE. Such a proxy provides contrast estimators proven to be asymptotically normal in a high-frequency observation setting, i.e., \(n\to\infty\), \(\Delta_{n}\to 0\) and \(n\Delta_{n}\to\infty\), with a requirement that the step-size scales as \(\Delta_{n}=o(n^{-1/2})\). In Iguchi et al. (2022), such a condition on the step-size to obtain asymptotic normality is weakened to \(\Delta_{n}=o(n^{-1/3})\). In the partial observation regime, with the upper-most smooth component being observed, the missing components must be carefully imputed given the available observations. For SDEs in class (Hypo-I), it is often the case that the dynamics of the smooth component is determined as \(dX_{S,t}=X_{R,t}dt\). Such a remark also applies for class (Hypo-II), with the role of \(X_{S,t}\) taken up by the upper-most smooth component, and the one of \(X_{R,t}\) by the second smooth component. We keep the discussion within class (Hypo-I), as this is the context typically looked at in earlier literature. For the described setting, it is tempting and, indeed, widely used in practice, to recover the hidden rough component via finite-differences, using the observations \(\{X_{S,t_{i+1}}\}_{i}\), i.e. via \(X_{R,t_{i}}=(X_{S,t_{i+1}}-X_{S,t_{i}})/\Delta_{n}\), if the step-size \(\Delta_{n}\) is small enough. However, Pokern et al. (2009) and Samson and Thieullen (2012) show that, in the context of bivariate models within class (Hypo-I), such an approach delivers (asymptotically) biased estimates of diffusion parameters. Pokern et al. (2009); Ditlevsen and Samson (2019) argue against applying finite-differences and, instead, consider appropriate Ito-Taylor schemes leading to non-degenerate conditionally Gaussian approximations for the SDE transition density. Such proxies are then embedded within MCMC Gibbs samplers or Monte-Carlo Expectation-Maximisation (MC-EM) methods to impute the missing components conditionally on observations. Nevertheless, Pokern et al. (2009) illustrate empirically that the scheme they develop then leads to a biased estimation for the drift parameter, \(\beta_{R}\), of the rough component. Subsequent analytical works (Ditlevsen and Samson, 2019; Gloter and Yoshida, 2020; Iguchi et al., 2022) illustrated that the bias occurs due to omission of drift terms of size \(\mathcal{O}(\Delta_{n}^{2})\) from the Ito-Taylor expansion of the smooth component within class (Hypo-I), as such terms are needed to counterbalance the noise terms of size \(\mathcal{O}(\Delta_{n}^{3/2})\) arising in such an expansion.
A main conclusion arising from the above discussion is that a recipe for accurate estimation of both hidden components and parameters is the development of a conditionally Gaussian approximation for the full co-ordinates (as such Gaussianity allows for access to computationally effective inference methodologies) obtained via careful inclusion of higher-order terms from the relevant Ito-Taylor expansion. Such an insight for the design of a 'correct' discretisation scheme for the purposes of statistical inference has, arguably, not been clearly spelled out in the literature.
The objective of this work is to provide a comprehensive study of statistical calibration for a wide class of degenerate diffusion models. For this purpose, we review previous works for class (Hypo-I), and,
then, we establish new analytical results for class (Hypo-II). The main contributions of our work can be summarised as follows:
1. For the highly degenerate class (Hypo-II), we construct a conditionally Gaussian time-discretisation scheme. The corresponding transition density is well-defined (i.e. non-degenerate) under a suitable assumption on the functionals \(\{V_{S_{1},0},V_{S_{2},0},V_{R,0},\ldots,V_{R,d}\}\), motivated both by modelling considerations and by adherence to Hormander's condition. We refer to the new proxy as the 'locally Gaussian scheme', in agreement with the name assigned by Gloter and Yoshida (2021) to a conditionally Gaussian scheme developed for class (Hypo-I).
2. For class (Hypo-II), we define a contrast estimator based on the transition density of the locally Gaussian scheme. Then, we show that the estimator is asymptotically normal in the complete observation regime, under a high-frequency observation setting, i.e., for \(n\to\infty\), \(\Delta_{n}\to 0\), \(n\Delta_{n}\to\infty\), with the additional condition that the step-size must scale as \(\Delta_{n}=o(n^{-1/2})\).
3. Under the partial observation regime often encountered in practical applications, we show via analytical consideration of some case studies that use of a finite-difference method for estimation of hidden components leads to asymptotically biased estimation of the diffusion parameter \(\sigma\) for class (Hypo-II). Thus, we put forward the developed locally Gaussian scheme for (Hypo-II) as an effective tool to impute hidden components and estimate parameters.
4. By reviewing the methodology already produced in the literature for class (Hypo-I) and examining the new one produced in this work for (Hypo-II), we can provide a complete guideline for the development of a discretisation scheme for general degenerate diffusion processes so that the corresponding contrast function does not introduce bias in parameter estimation procedures.
The rest of the paper is organised as follows. Section 2 specifies the class of hypo-elliptic SDEs of relevance for this work, with reference to Hormander's condition. Section 3 revisits the correct (in terms of its statistical properties) discretisation scheme for class (Hypo-I) and introduces the one for (Hypo-II). Section 4 provides our core analytical results of asymptotic consistency and normality for the statistical estimates obtained via the new scheme, in a complete observation setting. All proofs are collected in an Appendix. We present case studies showcasing emergence of bias when standard alternative schemes are called upon or when finite-differences are used to impute unobserved components (a common practice in applications). Under the correct schemes shown here for classes (Hypo-I) and (Hypo-II), we set up a simple Kalman filter for fitting a non-linear sub-class of models commonly arising in applications (we term these _conditional Gaussian non-linear systems_) in the practical partial observation setting. Section 5 presents numerical studies, for the partial observation regime, both for simple models and ones relevant to real applications, within class (Hypo-II). The code used in the numerical studies is available at [https://github.com/Yugalgu/calibration-hypoSDEs](https://github.com/Yugalgu/calibration-hypoSDEs). We finish with some conclusions in Section 6.
**Notation.** For the highly degenerate class (Hypo-II), to establish a common notation with (Hypo-I), we use the argument \(x_{S}=(x_{S_{1}},x_{S_{2}})\) and also set:
\[X_{S,t} =\left[X_{S_{1},t}^{\top},X_{S_{2},t}^{\top}\right]^{\top}\in\mathbb{R}^{N_{S}},\quad N_{S}=N_{S_{1}}+N_{S_{2}};\] \[\beta_{S} =\left[\beta_{S_{1}}^{\top},\beta_{S_{2}}^{\top}\right]^{\top}\in\Theta_{\beta_{S}},\quad\Theta_{\beta_{S}}=\Theta_{\beta_{S_{1}}}\times\Theta_{\beta_{S_{2}}},\quad N_{\beta_{S}}=N_{\beta_{S_{1}}}+N_{\beta_{S_{2}}};\] \[V_{S,0}(x,\beta_{S}) =\left[V_{S_{1},0}(x_{S},\beta_{S_{1}})^{\top},\,V_{S_{2},0}(x,\beta_{S_{2}})^{\top}\right]^{\top}.\]
For \(x\in\mathbb{R}^{N}\) and \(\theta=(\beta_{S},\beta_{R},\sigma)\in\Theta\), we write
\[V_{0}(x,\theta) =\left[V_{S,0}(x,\beta_{S})^{\top},\,V_{R,0}(x,\beta_{R})^{\top}\right]^{\top}; \tag{3}\] \[V_{j}(x,\theta) =\left[\mathbf{0}_{N_{S}}^{\top},\,V_{R,j}(x,\sigma)^{\top}\right]^{\top},\qquad 1\leq j\leq d.\]
For \(\varphi(\cdot,\theta):\mathbb{R}^{N}\to\mathbb{R}\), \(\theta\in\Theta\), bounded up to 2nd order derivatives, we define the differential operators \(\mathcal{L}\) and \(\mathcal{L}_{j}\), \(1\leq j\leq d\), as:
\[\mathcal{L}\varphi(x,\theta)=\sum_{i=1}^{N}V_{0}^{i}(x,\theta)\frac{\partial\varphi}{\partial x_{i}}(x,\theta)+\frac{1}{2}\sum_{i_{1},i_{2}=1}^{N}\sum_{k=1}^{d}V_{k}^{i_{1}}(x,\theta)V_{k}^{i_{2}}(x,\theta)\frac{\partial^{2}\varphi}{\partial x_{i_{1}}\partial x_{i_{2}}}(x,\theta);\] \[\mathcal{L}_{j}\varphi(x,\theta)=\sum_{i=1}^{N}V_{j}^{i}(x,\theta)\frac{\partial\varphi}{\partial x_{i}}(x,\theta),\ \ 1\leq j\leq d,\]
for \((x,\theta)\in\mathbb{R}^{N}\times\Theta\). Application of the above differential operators is extended to vector-valued functions in the apparent way, via separate consideration of each scalar component. Denote by \(C_{0}^{\infty}(\mathbb{R}^{n_{1}},\mathbb{R}^{n_{2}})\), \(n_{1},n_{2}\in\mathbb{N}\), the space of smooth functions \(f:\mathbb{R}^{n_{1}}\to\mathbb{R}^{n_{2}}\) with bounded partial derivatives of every order. We denote the probability law of the process \(\{X_{t}\}_{t\geq 0}\) under a parameter \(\theta\in\Theta\) as \(\mathbb{P}_{\theta}\), and we write
\[\xrightarrow[]{\mathbb{P}_{\theta^{\dagger}}},\ \ \xrightarrow[]{\mathcal{L}_{ \theta^{\dagger}}}\]
for convergence in probability and distribution, respectively, under the true parameter \(\theta^{\dagger}\). We write the expectation under the probability law \(\mathbb{P}_{\theta}\) as \(\mathbb{E}_{\theta}\) to emphasise the dependence on \(\theta\in\Theta\). For \(u\in\mathbb{R}^{n}\), \(n\in\mathbb{N}\) and the multi-index \(\alpha\in\{1,\ldots,n\}^{1}\), \(l\in\mathbb{N}\), we define
\[\partial_{\alpha}^{u}=\frac{\partial^{l}}{\partial u_{\alpha_{1}}\cdots \partial u_{\alpha_{l}}},\]
i.e. an operator acting on maps \(\mathbb{R}^{n}\to\mathbb{R}\), and then extended, by separate application on each co-ordinate, on maps \(\mathbb{R}^{n}\to\mathbb{R}^{m}\), \(m\in\mathbb{N}\).
## 2 Hypo-Elliptic SDEs
We fully specify the classes of SDEs of interest in (Hypo-I) and (Hypo-II), by providing, in each case, appropriate conditions on the collection of functionals \(\{V_{0},V_{1},\ldots,V_{d}\}\), motivated by modelling considerations and the existence of a Lebesgue density for the SDE transition dynamics. We illustrate later on that the imposed conditions suffice so that the locally Gaussian scheme for \(X_{t}\) in (Hypo-II) we put forward in this paper is non-degenerate.
### Hormander's Condition
We quickly review the definition of Hormander's condition. Consider the class of SDEs with the general form in (1). We define
\[\tilde{V}_{0}(x,\theta)=V_{0}(x,\theta)-\tfrac{1}{2}\sum_{k=1}^{d}\mathcal{L} _{k}V_{k}(x,\theta),\quad(x,\theta)\in\mathbb{R}^{N}\times\Theta.\]
From standard properties of Ito's processes, \(\tilde{V}_{0}\) is the drift function of (1) when written as a Stratonovich-type SDE. The functionals \(\{\tilde{V}_{0},V_{1},\ldots,V_{d}\}\) of the Stratonovich SDE can be identified with differential operators, which map functions on \(\mathbb{R}^{N}\to\mathbb{R}^{N}\) to functions on the same spaces. In particular, we have:
\[\tilde{V}_{0}\mapsto\sum_{i=1}^{N}\tilde{V}_{0}^{i}(x)\partial_{x_{i}},\qquad V _{k}\mapsto\sum_{i=1}^{N}V_{k}^{i}(x)\partial_{x_{i}},\quad 1\leq k\leq d.\]
Without confusion, we use the same notation both for the SDE functionals and the corresponding differential operators. Parameter \(\theta\) is removed from the expressions for simplicity. For two functionals
(equivalently, differential operators) as above, \(W=\sum_{i=1}^{N}W^{i}(x)\partial_{x_{i}}\) and \(Z=\sum_{i=1}^{N}Z^{i}(x)\partial_{x_{i}}\), the Lie bracket is defined as
\[[W,Z]=W\,Z-Z\,W,\]
that is, for a given \(x\in\mathbb{R}^{N}\),
\[[W,Z](x)=\sum_{i=1}^{N}\bigl{\{}W^{i}(x)\partial_{x_{i}}Z(x)-Z^{i}(x)\partial_{ x_{i}}W(x)\bigr{\}}\in\mathbb{R}^{N}.\]
We introduce the collections of functionals
\[\mathscr{H}_{0}=\bigl\{V_{1},\ldots,V_{d}\bigr\},\qquad\mathscr{H}_{k}=\Bigl\{\bigl\{\bigl[\widetilde{V}_{0},V\bigr],\bigl[V_{r},V\bigr]\bigr\}:V\in\mathscr{H}_{k-1},\,1\leq r\leq d\Bigr\},\quad k\geq 1,\]
\[\widetilde{\mathscr{H}_{m}}=\bigcup_{k=0}^{m}\mathscr{H}_{k},\quad m\geq 1.\]
Then, Hormander's condition is stated as follows:
**Definition 1** (Hormander's Condition): There exists \(M\geq 1\) such that
\[\operatorname{span}\bigl{\{}V(x):V\in\widetilde{\mathscr{H}_{M}}\bigr{\}}= \mathbb{R}^{N},\]
for all \(x\in\mathbb{R}^{N}\).
Hormander's condition implies that for any \(t>0\) and any initial condition \(X_{0}=x\in\mathbb{R}^{N}\), the law of \(X_{t}\) is absolutely continuous w.r.t. the Lebesgue measure. Also, if the coefficients of the SDE are infinitely-times differentiable, with partial derivatives of all orders being bounded, then the Lebesgue density is smooth, see, e.g., Nualart (2006); Pavliotis (2014).
### Diffusion Classes (Hypo-I) and (Hypo-II)
We now set up separate conditions for the SDEs in classes (Hypo-I) and (Hypo-II). These will make use of the drift function, \(\widetilde{V}_{0}=\widetilde{V}_{0}(x,\theta)\), of the Stratonovich version of the SDEs. For \(1\leq j\leq k\leq N\), we define the projection operator \(\operatorname{proj}_{j,k}:\mathbb{R}^{N}\to\mathbb{R}^{k-j+1}\) as
\[x=\bigl{[}x_{1},\ldots,x_{N}\bigr{]}^{\top}\mapsto\operatorname{proj}_{j,k}( x)=\bigl{[}x_{j},\ldots,x_{k}\bigr{]}^{\top}.\]
For (Hypo-I) and (Hypo-II), we assign the following conditions to fully specify the structure of the corresponding degenerate system of SDEs.
**Condition (H)** (Classes of SDEs) I.: For class (Hypo-I), for any \((x,\theta)\in\mathbb{R}^{N}\times\Theta\), it holds that:
\[\operatorname{span}\Bigl{\{}V_{R,k}(x,\sigma),\,1\leq k\leq d\Bigr{\}}= \mathbb{R}^{N_{R}};\]
\[\operatorname{span}\Bigl{\{}\bigl{\{}\,V_{k}(x,\sigma),\,[\widetilde{V}_{0},V _{k}](x,\theta)\,\bigr{\}},\,1\leq k\leq d\Bigr{\}}=\mathbb{R}^{N}. \tag{4}\]
II.: In the case of class (Hypo-II), for any \((x,\theta)\in\mathbb{R}^{N}\times\Theta\), it holds that:
\[\operatorname{span}\Bigl\{V_{R,k}(x,\sigma),\,1\leq k\leq d\Bigr\}=\mathbb{R}^{N_{R}};\]
\[\operatorname{span}\Bigl\{\operatorname{proj}_{N_{S_{1}}+1,N}\bigl\{V_{k}(x,\sigma)\bigr\},\,\operatorname{proj}_{N_{S_{1}}+1,N}\bigl\{\,[\widetilde{V}_{0},V_{k}](x,\theta)\,\bigr\},\,1\leq k\leq d\Bigr\}=\mathbb{R}^{N_{S_{2}}+N_{R}};\]
\[\operatorname{span}\Bigl\{\,V_{k}(x,\sigma),\,[\widetilde{V}_{0},V_{k}](x,\theta),\,\bigl[\widetilde{V}_{0},[\widetilde{V}_{0},V_{k}]\bigr](x,\theta),\,1\leq k\leq d\Bigr\}=\mathbb{R}^{N}. \tag{5}\]
Note that Hormander's condition holds for (Hypo-I) and (Hypo-II) under (H)-I and (H)-II (for \(M=1\) and \(M=2\)) respectively.
Remark 1: _Conditions (H)-I and (H)-II separate classes (Hypo-I) and (Hypo-II). In particular, the top equation in both (H)-I, (H)-II implies that the diffusion matrix of the rough component is of full rank, thus \(X_{R,t}\) acquires the roughness of an elliptic SDE. The second equation in (H)-I, (H)-II ensures that all co-ordinates of components \(X_{S,t}\) and \(X_{S_{2},t}\), respectively, possess the same smoothness as integrals of elliptic SDEs. Finally, the third equation in (H)-II implies that component \(X_{S_{1},t}\) has the smoothness of second integrals of elliptic SDEs. Note, e.g., that (4) will not hold for (Hypo-II), since, in the highly degenerate case, the drift function of the upper-most component \(X_{S_{1},t}\) does not involve \(X_{R,t}\), so that we have_
\[\mathrm{proj}_{1,N_{S_{1}}}[\tilde{V}_{0},V_{k}]=\mathbf{0}_{N_{S_{1}}},\quad 1\leq k\leq d.\]
_To check this latter equation, notice that for both (Hypo-I) and (Hypo-II) we have: i) \(V_{0}\) and \(\tilde{V}_{0}\) coincide on the smooth co-ordinates; ii) \(\tilde{V}_{0}V_{k}\) is zero on the smooth co-ordinates. Then, for (Hypo-II) we additionally have that \(V_{k}\tilde{V}_{0}\) is zero on the upper-most \(N_{S_{1}}\) co-ordinates due to the particular choice of \(V_{S_{1},0}=V_{S_{1},0}(x_{S_{1}},x_{S_{2}})\)._
We introduced conditions (H)-I & II upon the functionals \(\{V_{0},V_{1},\ldots,V_{d}\}\), so that the two classes of SDEs, (Hypo-I) and (Hypo-II), possess sufficient structure to allow for their intended use in the modelling objectives we have in mind. It turns out that the exact same conditions (H)-I & II play a key role so that the locally Gaussian schemes written down later in Section 3 are well-defined with a positive definite covariance matrix.
#### 2.2.1 An Example for (H)-II
We provide an example for (H)-II via the following three-dimensional hypo-elliptic diffusion motivated from model class (QGLE-II):
\[\begin{split} dq_{t}&=p_{t}dt;\\ dp_{t}&=\big(-\nabla U(q_{t})+\lambda s_{t}\big)dt;\\ ds_{t}&=\big(-\lambda p_{t}-\alpha s_{t}\big)dt+\sigma dB_{1,t},\end{split} \tag{6}\]
where \(U:\mathbb{R}\to\mathbb{R}\), \(\alpha>0\), \(\sigma>0\) and \(\lambda\in\mathbb{R}\setminus\{0\}\). In this case, for \(x=(q,p,s)\in\mathbb{R}^{3}\), \(\theta=(\lambda,\alpha,\sigma)\in\Theta\), we have:
\[\begin{split}\tilde{V}_{0}&=p\,\partial_{q}+\big(-\nabla U(q)+\lambda s\big)\partial_{p}+\big(-\lambda p-\alpha s\big)\partial_{s},\qquad V_{1}=\sigma\partial_{s},\\ [\tilde{V}_{0},V_{1}]&=-\lambda\sigma\partial_{p}+\alpha\sigma\partial_{s},\qquad\big[\tilde{V}_{0},[\tilde{V}_{0},V_{1}]\big]=\lambda\sigma\partial_{q}-\lambda\alpha\sigma\partial_{p}+\sigma(-\lambda^{2}+\alpha^{2})\partial_{s}.\end{split}\]
We obtain:
\[\begin{split}\mathrm{span}\big\{V_{R}(x,\theta)\big\}&=\mathrm{span}\big\{\sigma\big\}=\mathbb{R};\\ \mathrm{span}\Big\{\mathrm{proj}_{2,3}\big\{V_{1}(x,\theta)\big\},\,\mathrm{proj}_{2,3}\big\{[\tilde{V}_{0},V_{1}](x,\theta)\big\}\Big\}&=\mathrm{span}\left\{\begin{bmatrix}0\\ \sigma\end{bmatrix},\begin{bmatrix}-\lambda\sigma\\ \alpha\sigma\end{bmatrix}\right\}=\mathbb{R}^{2};\\ \mathrm{span}\Big\{V_{1}(x,\theta),\,[\tilde{V}_{0},V_{1}](x,\theta),\,\big[\tilde{V}_{0},[\tilde{V}_{0},V_{1}]\big](x,\theta)\Big\}&=\mathrm{span}\left\{\begin{bmatrix}0\\ 0\\ \sigma\end{bmatrix},\begin{bmatrix}0\\ -\lambda\sigma\\ \alpha\sigma\end{bmatrix},\begin{bmatrix}\lambda\sigma\\ -\lambda\alpha\sigma\\ \sigma(-\lambda^{2}+\alpha^{2})\end{bmatrix}\right\}\\ &=\mathbb{R}^{3}.\end{split}\]
Thus, SDE (6) satisfies condition (H)-II and lies within the framework of class (Hypo-II).
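The rank computations above can also be checked mechanically. The following short SymPy sketch (our own verification aid, not part of the paper's accompanying code) builds the drift and diffusion fields of (6), forms the required Lie brackets and confirms the three spanning conditions of (H)-II symbolically.

```python
# Symbolic check of condition (H)-II for the three-dimensional example (6).
import sympy as sp

q, p, s, lam, alpha, sigma = sp.symbols('q p s lambda alpha sigma', real=True)
U = sp.Function('U')(q)
x = sp.Matrix([q, p, s])

V0_tilde = sp.Matrix([p, -sp.diff(U, q) + lam * s, -lam * p - alpha * s])  # drift (Ito = Stratonovich here)
V1 = sp.Matrix([0, 0, sigma])                                              # diffusion vector field

def lie_bracket(W, Z, x):
    """[W, Z] = (Jacobian of Z) W - (Jacobian of W) Z."""
    return Z.jacobian(x) * W - W.jacobian(x) * Z

b1 = lie_bracket(V0_tilde, V1, x)   # [V0~, V1]
b2 = lie_bracket(V0_tilde, b1, x)   # [V0~, [V0~, V1]]

# (H)-II rank checks: rough block, lower (S2, R) block, full space.
assert sp.Matrix([[V1[2]]]).rank() == 1
assert sp.Matrix.hstack(V1[1:, :], b1[1:, :]).rank() == 2
assert sp.Matrix.hstack(V1, b1, b2).rank() == 3
print(b1.T, b2.T)
```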
## 3 Time-Discretisation of Hypo-Elliptic SDEs
We discuss time-discretisation schemes for hypo-elliptic diffusions within classes (Hypo-I) and (Hypo-II), with the focus being on the performance of the schemes for the purposes of parametric inference. We set up the context by reviewing schemes proposed in literature for class (Hypo-I). We then propose a new scheme for the highly degenerate diffusion class (Hypo-II) that will later be proven to possess desirable statistical properties. Hereafter, to distinguish among the two classes of SDEs, we use the notation \(X_{t}^{\text{(I)}}\) and \(X_{t}^{\text{(II)}}\) for processes in (Hypo-I) and (Hypo-II), respectively.
### Time-Discretisation of (Hypo-I) - Brief Review
We review relevant schemes for the hypo-elliptic class (Hypo-I) used in the literature. First, the classical _Euler-Maruyama_ scheme is defined as follows, for \(0\leq i\leq n\),
\[X_{S,i+1}^{\text{EM,(I)}} =X_{S,i}^{\text{EM,(I)}}+V_{S,0}\big{(}X_{i}^{\text{EM,(I)}}, \beta_{S}\big{)}\Delta_{n};\] \[X_{R,i+1}^{\text{EM,(I)}} =X_{R,i}^{\text{EM,(I)}}+V_{R,0}\big{(}X_{i}^{\text{EM,(I)}}, \beta_{R}\big{)}\Delta_{n}+\sum_{j=1}^{d}V_{R,j}\big{(}X_{i}^{\text{EM,(I)}}, \sigma\big{)}\times\big{(}B_{j,i+1}-B_{j,i}\big{)},\]
with subscript \(i\) in \(X_{i}^{\text{EM,(I)}}\) and \(B_{j,i}\) indicating the time instance \(t_{i}=i\Delta_{n}\). The approximation of the smooth component does not involve noise, thus the Euler-Maruyama scheme is degenerate. Pokern et al. (2009) studied some example bivariate SDEs with drift function \(V_{S,0}(x,\beta_{S})=x_{R}\), and showed that use of the Euler-Maruyama scheme in the high-frequency partial observation regime, where only the smooth component \(X_{S,t}\) is observed, induces bias in parameter estimates. Note that in this setting the unobserved component \(X_{R,t}\) is estimated via a finite-difference approach, i.e.,
\[X_{R,i}^{\text{(I)}}\approx\hat{X}_{R,i}^{\text{(I)}}=\frac{X_{S,i+1}^{\text{ (I)}}-X_{S,i}^{\text{(I)}}}{\Delta_{n}},\]
and such an imputation is a main cause for the presence of bias in the estimation of parameter \(\sigma\). To sidestep the above issue, Pokern et al. (2009) proposed the following conditionally Gaussian scheme, for \(0\leq i\leq n\):
\[\begin{split}\widetilde{X}_{S,i+1}^{\text{(I)}}&= \widetilde{X}_{S,i}^{\text{(I)}}+V_{S,0}\big{(}\widetilde{X}_{i}^{\text{(I)}}, \beta_{S}\big{)}\Delta_{n}+\sum_{j=1}^{d}\mathcal{L}_{j}V_{S,0}(\widetilde{X}_ {i}^{\text{(I)}},\theta)\int_{t_{i}}^{t_{i+1}}\int_{t_{i}}^{u}dB_{j,v}du;\\ \widetilde{X}_{R,i+1}^{\text{(I)}}&=\widetilde{X}_{ R,i}^{\text{(I)}}+V_{R,0}\big{(}\widetilde{X}_{i}^{\text{(I)}},\beta_{R} \big{)}\Delta_{n}+\sum_{j=1}^{d}V_{R,j}\big{(}\widetilde{X}_{i}^{\text{(I)}}, \sigma\big{)}\times\big{(}B_{j,i+1}-B_{j,i}\big{)}.\end{split} \tag{7}\]
Note that now the smooth component \(\widetilde{X}_{S,i+1}^{\text{(I)}}\) involves Gaussian noise after application of an Ito-Taylor expansion for \(V_{S,0}(X_{t},\beta_{S})\). Under condition (H)-I, \((\widetilde{X}_{S,i+1}^{\text{(I)}},\widetilde{X}_{R,i+1}^{\text{(I)}})\) is conditionally Gaussian with an invertible covariance matrix. Then, Pokern et al. (2009) utilised the well-posed likelihood of scheme (7) to estimate both the hidden paths of the rough components and the parameters via a Bayesian approach, namely Gibbs sampling. Under a high-frequency observation setting, they empirically showed that the estimate of parameter \(\sigma\) is asymptotically unbiased, but the estimator of the drift parameter \(\beta_{R}\) based on scheme (7) suffers from bias even in the complete observation regime.
Gloter and Yoshida (2020) introduced the 'local Gaussian' scheme, where, for \(0\leq i\leq n\),
\[\begin{split}\bar{X}_{S,i+1}^{\text{(I)}}&=\bar{X}_ {S,i}^{\text{(I)}}+V_{S,0}\big{(}\bar{X}_{i}^{\text{(I)}},\beta_{S}\big{)} \Delta_{n}+\frac{\Delta_{n}^{2}}{2}\mathcal{L}V_{S,0}\big{(}\bar{X}_{i}^{ \text{(I)}},\theta\big{)}\\ &\qquad\qquad\qquad\qquad\qquad+\sum_{j=1}^{d}\mathcal{L}_{j}V_{S,0}(\bar{X}_{i}^{\text{(I)}},\theta)\int_{t_{i}}^{t_{i+1}}\int_{t_{i}}^{u}dB_{ j,v}du;\\ \bar{X}_{R,i+1}^{\text{(I)}}&=\bar{X}_{R,i}^{\text{(I )}}+V_{R,0}\big{(}\bar{X}_{i}^{\text{(I)}},\beta_{R}\big{)}\Delta_{n}+\sum_{j= 1}^{d}V_{R,j}\big{(}\bar{X}_{i}^{\text{(I)}},\sigma\big{)}\times\big{(}B_{j,i+ 1}-B_{j,i}\big{)}.\end{split}\] (LG-I)
Compared to (7), scheme (LG-I) includes term \(\Delta_{n}^{2}(\mathcal{L}V_{S,0})(\bar{X}_{i}^{(\rm I)},\theta)/2\) in the smooth component. Gloter and Yoshida (2020) illustrate the significance of this term for the purposes of parameter inference, by proving asymptotic consistency and normality for the contrast estimator derived from the likelihood of the discretisation scheme (LG-I), in the high-frequency, complete observation regime, namely, \(n\to\infty\), \(\Delta_{n}\to 0\), \(n\Delta_{n}\to\infty\), under the step-size condition \(\Delta_{n}=o(n^{-1/2})\).
Remark 3: _Ditlevsen and Samson (2019) applied a strong order-1.5 scheme (Kloeden and Platen, 1992) to construct contrast estimators for class (Hypo-I) with \(N_{S}=1\), under strong conditions for the diffusion matrix so that the scheme becomes conditionally Gaussian. Then, they provided two separate contrast functions for estimating \(\beta_{S}\) and \((\beta_{R},\sigma)\) from the approximate Gaussian density of \(X_{S}\) and \(X_{R}\), respectively, rather than from the joint density. As noted in Remark 4.6 of Gloter and Yoshida (2020), the use of separate contrast functions results in a larger asymptotic variance for the estimation of \(\beta_{S}\) compared with the single contrast estimator defined via the joint density of rough and smooth components._
### Time-Discretisation of (Hypo-II)
We propose a time-discretisation scheme for the second hypo-elliptic class (Hypo-II), with desirable properties for the purposes of parameter inference. The brief review of schemes for (Hypo-I) in the previous section suggests that the discretisation scheme for (Hypo-II) should satisfy the following two key criteria:
1. The scheme should be conditionally non-degenerate, i.e., the law of \(X_{t_{i+1}}\) given \(X_{t_{i}}\) should admit a Lebesgue transition density for the full co-ordinates. This will allow to impute unobserved paths conditionally on observations without making use of bias-inducing finite-difference approximations.
2. The scheme should involve deterministic terms obtained from careful truncation of the stochastic Taylor expansion for the drift of the smooth component, \(V_{S,0}(X_{t}^{(\rm II)},\beta_{S})\), so that the contrast estimator corresponding to the scheme is asymptotically unbiased under the high-frequency, complete observation regime.
As for Criterion I, we will explain later in Section 4.2.1 that, indeed, use of a degenerate discretisation scheme or of finite-differences to estimate hidden components induces a bias in the estimation of parameters. Based upon the above key criteria, we propose the following discretisation scheme for (Hypo-II):
\[\bar{X}_{S_{1},i+1}^{(\rm II)}=\mu_{S_{1}}\big(\Delta_{n},\bar{X}_{i}^{(\rm II)},\theta\big)+\sum_{j=1}^{d}\mathcal{L}_{j}\mathcal{L}V_{S_{1},0}\big(\bar{X}_{i}^{(\rm II)},\theta\big)\int_{t_{i}}^{t_{i+1}}\int_{t_{i}}^{u}\int_{t_{i}}^{v}dB_{j,w}\,dv\,du;\]
\[\bar{X}_{S_{2},i+1}^{(\rm II)}=\mu_{S_{2}}\big(\Delta_{n},\bar{X}_{i}^{(\rm II)},\theta\big)+\sum_{j=1}^{d}\mathcal{L}_{j}V_{S_{2},0}\big(\bar{X}_{i}^{(\rm II)},\theta\big)\int_{t_{i}}^{t_{i+1}}\int_{t_{i}}^{u}dB_{j,v}\,du;\] (LG-II)
\[\bar{X}_{R,i+1}^{(\rm II)}=\mu_{R}\big(\Delta_{n},\bar{X}_{i}^{(\rm II)},\theta\big)+\sum_{j=1}^{d}V_{R,j}\big(\bar{X}_{i}^{(\rm II)},\sigma\big)\times\big(B_{j,i+1}-B_{j,i}\big),\]
where we have set, for \(\big{(}\Delta,x,\theta\big{)}\in(0,\infty)\times\mathbb{R}^{N}\times\Theta\),
\[\begin{bmatrix}\mu_{S_{1}}(\Delta,x,\theta)\\ \mu_{S_{2}}(\Delta,x,\theta)\\ \mu_{R}(\Delta,x,\theta)\end{bmatrix}=\begin{bmatrix}x_{S_{1}}+V_{S_{1},0}(x_{S},\beta_{S_{1}})\Delta+\mathcal{L}V_{S_{1},0}(x,\theta)\frac{\Delta^{2}}{2}+\mathcal{L}^{2}V_{S_{1},0}(x,\theta)\frac{\Delta^{3}}{6}\\ x_{S_{2}}+V_{S_{2},0}(x,\beta_{S_{2}})\Delta+\mathcal{L}V_{S_{2},0}(x,\theta)\frac{\Delta^{2}}{2}\\ x_{R}+V_{R,0}(x,\beta_{R})\Delta\end{bmatrix}.\]
Notice that the scheme involves \(3d\) Gaussian random variables:
\[B_{j,i+1}-B_{j,i}, \int_{t_{i}}^{t_{i+1}}\int_{t_{i}}^{u}dB_{j,v}\,du, \int_{t_{i}}^{t_{i+1}}\int_{t_{i}}^{u}\int_{t_{i}}^{v}dB_{j,w}\,dv\,du, \qquad 1\leq j\leq d.\]
The latter of the above integrals appears due to the application of a third-order stochastic Taylor expansion on \(V_{S_{1},0}(x_{S},\beta_{S_{1}})\), in the smoothest component \(\bar{X}^{(\mathrm{II})}_{S_{1},i+1}\). As we will show in Section 4, the log-likelihood based on the local Gaussian scheme (LG-II) produces a contrast estimator that is asymptotically unbiased in the high-frequency, complete observation regime. In order for the deduced contrast function to provide desirable asymptotic properties, it is required to include terms up to \(\mathcal{O}(\Delta_{n}^{3})\) in the definition of \(\mu_{S_{1}}\), otherwise estimation of the parameter \(\beta_{R}\) in the model (Hypo-II) can be asymptotically biased, as Pokern et al. (2009) observed for some bivariate hypo-elliptic diffusions in the framework of (Hypo-I).
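In practice, implementing one step of scheme (LG-II) requires sampling, jointly and per Brownian coordinate, the increment together with the double and triple iterated integrals listed above. A minimal NumPy sketch (our own, written for a single Brownian coordinate) is given below; it uses the fact that the triple is jointly Gaussian with covariance entries \(\Delta_{n}\), \(\Delta_{n}^{2}/2\), \(\Delta_{n}^{3}/3\), \(\Delta_{n}^{3}/6\), \(\Delta_{n}^{4}/8\), \(\Delta_{n}^{5}/20\).

```python
# Joint sampling of (J1, J2, J3) = (increment, double integral, triple integral).
import numpy as np

def sample_iterated_integrals(delta, size, rng=None):
    """Return an array of shape (size, 3) with columns (J1, J2, J3)."""
    rng = np.random.default_rng(rng)
    cov = np.array([
        [delta,         delta**2 / 2,  delta**3 / 6],
        [delta**2 / 2,  delta**3 / 3,  delta**4 / 8],
        [delta**3 / 6,  delta**4 / 8,  delta**5 / 20],
    ])
    L = np.linalg.cholesky(cov)
    return rng.standard_normal((size, 3)) @ L.T

# Example: the empirical covariance should match the analytical entries above.
draws = sample_iterated_integrals(delta=0.01, size=200_000, rng=0)
print(np.cov(draws, rowvar=False))
```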
We denote by \(\Sigma(\Delta,x,\theta)\) the covariance matrix for one-step implementation of scheme \(\bar{X}^{(\mathrm{II})}\), with step-size \(\Delta>0\), current state \(x\in\mathbb{R}^{N}\) and parameter \(\theta\in\Theta\). The covariance matrix is given as:
\[\Sigma(\Delta,x,\theta)=\begin{bmatrix}\Sigma_{S_{1}S_{1}}(\Delta,x,\theta)&\Sigma_{S_{1}S_{2}}(\Delta,x,\theta)&\Sigma_{S_{1}R}(\Delta,x,\theta)\\ \Sigma_{S_{2}S_{1}}(\Delta,x,\theta)&\Sigma_{S_{2}S_{2}}(\Delta,x,\theta)&\Sigma_{S_{2}R}(\Delta,x,\theta)\\ \Sigma_{RS_{1}}(\Delta,x,\theta)&\Sigma_{RS_{2}}(\Delta,x,\theta)&\Sigma_{RR}(\Delta,x,\theta)\end{bmatrix}, \tag{8}\]
where each block matrix is specified as: for \(x=(x_{S_{1}},x_{S_{2}},x_{R})\in\mathbb{R}^{N}\), \(\theta=(\beta_{S_{1}},\beta_{S_{2}},\beta_{R},\sigma)\in\Theta\),
\[\Sigma_{S_{1}S_{1}}(\Delta,x,\theta)\equiv\tfrac{\Delta^{5}}{20}a_{S_{1}}(x,\theta),\quad\Sigma_{S_{1}S_{2}}(\Delta,x,\theta)\equiv\tfrac{\Delta^{4}}{8}\partial_{x_{S_{2}}}^{\top}V_{S_{1},0}(x_{S},\beta_{S_{1}})\,a_{S_{2}}(x,\theta);\]
\[\Sigma_{S_{1}R}(\Delta,x,\theta)\equiv\tfrac{\Delta^{3}}{6}\partial_{x_{S_{2}}}^{\top}V_{S_{1},0}(x_{S},\beta_{S_{1}})\,\partial_{x_{R}}^{\top}V_{S_{2},0}(x,\beta_{S_{2}})\,a_{R}(x,\sigma);\]
\[\Sigma_{S_{2}S_{1}}(\Delta,x,\theta)\equiv\Sigma_{S_{1}S_{2}}(\Delta,x,\theta)^{\top},\quad\Sigma_{S_{2}S_{2}}(\Delta,x,\theta)\equiv\tfrac{\Delta^{3}}{3}a_{S_{2}}(x,\theta);\]
\[\Sigma_{S_{2}R}(\Delta,x,\theta)\equiv\tfrac{\Delta^{2}}{2}\partial_{x_{R}}^{\top}V_{S_{2},0}(x,\beta_{S_{2}})\,a_{R}(x,\sigma),\quad\Sigma_{RS_{1}}(\Delta,x,\theta)\equiv\Sigma_{S_{1}R}(\Delta,x,\theta)^{\top};\]
\[\Sigma_{RS_{2}}(\Delta,x,\theta)\equiv\Sigma_{S_{2}R}(\Delta,x,\theta)^{\top},\quad\Sigma_{RR}(\Delta,x,\theta)\equiv\Delta\,a_{R}(x,\sigma).\]
In the above, we have set
\[a_{R}(x,\sigma) =\sum_{k=1}^{d}V_{R,k}(x,\sigma)V_{R,k}(x,\sigma)^{\top};\]
\[a_{S_{2}}(x,\theta) =\partial_{x_{R}}^{\top}V_{S_{2},0}(x,\beta_{S_{2}})\,a_{R}(x,\sigma)\big(\partial_{x_{R}}^{\top}V_{S_{2},0}(x,\beta_{S_{2}})\big)^{\top};\]
\[a_{S_{1}}(x,\theta) =\partial_{x_{S_{2}}}^{\top}V_{S_{1},0}(x_{S},\beta_{S_{1}})\,a_{S_{2}}(x,\theta)\big(\partial_{x_{S_{2}}}^{\top}V_{S_{1},0}(x_{S},\beta_{S_{1}})\big)^{\top}.\]
**Proposition 1**: _Under condition (H)-II, the covariance matrix \(\Sigma(\Delta,x,\theta)\) is positive-definite for any \((\Delta,x,\theta)\in(0,\infty)\times\mathbb{R}^{N}\times\Theta\)._
The proof is given in Appendix B. Due to Proposition 1, the covariance \(\Sigma(\Delta,x,\theta)\) is invertible, thus the approximate log-likelihood based on the local Gaussian discretisation scheme (LG-II) is well-defined for the highly degenerate class (Hypo-II). We note that, in brief, the above result follows from the positive definiteness of \(a_{R}(x,\sigma)\), \(a_{S_{2}}(x,\theta)\) and \(a_{S_{1}}(x,\theta)\) under condition (H)-II.
## 4 Parameter Inference for Class (Hypo-II)
We explore analytically parameter inference procedures for hypo-elliptic diffusions in class (Hypo-II). We prove in Section 4.1 that a contrast estimator constructed from the conditionally Gaussian discretisation scheme (LG-II) is asymptotically unbiased under the high-frequency, complete observation regime. We illustrate the precise impact of the drift terms involved in scheme (LG-II) on the asymptotic results. In
Section 4.2, we consider the partial observation regime. As observed for the case of class (Hypo-I) in the literature, we show via analytical case studies that use of finite-differences for the estimation of hidden paths leads to biased parameter estimates within (Hypo-II). Also, we explain that the local Gaussian scheme can be put into effective use within computational approaches for filtering hidden components and estimating parameters.
### Complete Observation Regime
#### 4.1.1 Contrast Estimator
Based on the proposed scheme (LG-II) and the corresponding tractable transition density, we construct a contrast estimator for the hypo-elliptic class (Hypo-II). We write the transition density of the local Gaussian scheme (LG-II), for given \(\Delta>0\), current position \(x\in\mathbb{R}^{N}\) and parameter \(\theta\in\Theta\) as:
\[y \mapsto\bar{p}_{\Delta}(x,y;\theta)=\frac{1}{\sqrt{(2\pi)^{N}\Delta^{5N_{S_{1}}+3N_{S_{2}}+N_{R}}\,\big|\Sigma(x,\theta)\big|}}\exp\Big(-\tfrac{1}{2}m(\Delta,x,y,\theta)^{\top}\Sigma^{-1}(x,\theta)\,m(\Delta,x,y,\theta)\Big),\]
where we have set, for \(y=(y_{S_{1}},y_{S_{2}},y_{R})\in\mathbb{R}^{N_{S_{1}}}\times\mathbb{R}^{N_{S_{2}}}\times\mathbb{R}^{N_{R}}\),
\[m(\Delta,x,y,\theta)=\begin{bmatrix}\frac{1}{\sqrt{\Delta^{5}}}(y_{S_{1}}-\mu_{S_{1}}(\Delta,x,\theta))\\ \frac{1}{\sqrt{\Delta^{3}}}(y_{S_{2}}-\mu_{S_{2}}(\Delta,x,\theta))\\ \frac{1}{\sqrt{\Delta}}(y_{R}-\mu_{R}(\Delta,x,\theta))\end{bmatrix},\qquad\Sigma(x,\theta)=\Sigma(1,x,\theta).\]
Note that from Proposition 1, under Condition (H)-II the covariance matrix \(\Sigma(x,\theta)\) is invertible for any \((x,\theta)\in\mathbb{R}^{N}\times\Theta\). We denote by \(X_{i}^{\text{(II)}}\) the (complete) observation of a diffusion within class (Hypo-II) at time \(t_{i}\), \(0\leq i\leq n\). Then, after removing some constant terms from \(-2\sum_{i=1}^{n}\log\bar{p}_{\Delta_{n}}(X_{i-1}^{(\text{II})},X_{i}^{(\text{II})};\theta)\), we define the following contrast function:
\[\begin{split}\ell_{n}(\theta)=\sum_{i=1}^{n}m&(\Delta_{n},X_{i-1}^{(\text{II})},X_{i}^{(\text{II})},\theta)^{\top}\,\Sigma^{-1}(X_{i-1}^{(\text{II})},\theta)\,m(\Delta_{n},X_{i-1}^{(\text{II})},X_{i}^{(\text{II})},\theta)\\ &+\sum_{i=1}^{n}\log\big|\Sigma(X_{i-1}^{(\text{II})},\theta)\big|.\end{split} \tag{9}\]
Thus, the contrast estimator for the hypo-elliptic class (Hypo-II) is defined as:
\[\hat{\theta}_{n}=\big{(}\hat{\beta}_{S_{1},n},\,\hat{\beta}_{S_{2},n},\,\hat{ \beta}_{R,n},\,\hat{\sigma}_{n}\big{)}=\operatorname*{argmin}_{\theta\in \Theta}\ell_{n}\big{(}\theta\big{)}. \tag{10}\]
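For concreteness, the sketch below (our own illustration, not the paper's code) evaluates the criterion (9) specialised to the three-dimensional linear model \(dq_{t}=p_{t}dt\), \(dp_{t}=s_{t}dt\), \(ds_{t}=-\beta s_{t}dt+\sigma dB_{t}\) studied in Section 4.1.3 below, so that the estimator (10) can then be obtained with a standard numerical optimiser; the optimiser call in the trailing comment is only indicative.

```python
# Contrast function (9) for the linear example, under the (LG-II) discretisation.
import numpy as np
from scipy.optimize import minimize

S0 = np.array([[1/20, 1/8, 1/6], [1/8, 1/3, 1/2], [1/6, 1/2, 1.0]])
S0_inv = np.linalg.inv(S0)

def contrast(params, X, Dn):
    """X: array (n+1, 3) of complete observations (q, p, s); params = (beta, log sigma)."""
    beta, log_sigma = params
    F = np.array([[1, Dn, Dn**2/2 - beta*Dn**3/6],
                  [0, 1,  Dn - beta*Dn**2/2],
                  [0, 0,  1 - beta*Dn]])
    scale = np.array([Dn**2.5, Dn**1.5, Dn**0.5])
    m = (X[1:] - X[:-1] @ F.T) / scale            # rows are m(Dn, X_{i-1}, X_i, theta)
    quad = np.einsum('ij,jk,ik->', m, S0_inv, m)  # sum_i m_i^T S0^{-1} m_i
    n = len(X) - 1
    return 6 * n * log_sigma + np.exp(-2 * log_sigma) * quad

# Indicative usage, for given data X and step Dn:
# res = minimize(contrast, x0=[1.0, 0.0], args=(X, Dn), method='Nelder-Mead')
# beta_hat, sigma_hat = res.x[0], np.exp(res.x[1])
```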
#### 4.1.2 Asymptotic Results
Before we state our main results, we introduce some conditions for class (Hypo-II).
1. For each \(\theta\in\Theta\), \(V_{j}(\cdot,\theta)\in C_{0}^{\infty}(\mathbb{R}^{N};\mathbb{R}^{N})\), \(0\leq j\leq d\).
2. For any \(\alpha\in\{1,\ldots,N\}^{l}\), \(0\leq l\leq 2\), and \(1\leq i\leq N\), \(0\leq j\leq d\), the following function \[\theta\mapsto\partial_{x}^{\alpha}V_{j}^{i}(x,\theta)\] is three times differentiable for all \(x\in\mathbb{R}^{N}\). Furthermore, derivatives of the above map up to the third order are of polynomial growth in \(x\in\mathbb{R}^{N}\) uniformly in \(\theta\in\Theta\).
3. The diffusion process \(\{X_{t}\}_{t\geq 0}\) defined via (Hypo-II) is ergodic under \(\theta=\theta^{\dagger}\), with invariant distribution \(\nu_{\theta^{\dagger}}\) on \(\mathbb{R}^{N}\). Furthermore, all moments of \(\nu_{\theta^{\dagger}}\) are finite.
4. It holds that for all \(p\geq 1\), \(\sup_{t>0}\mathbb{E}_{\theta^{\dagger}}[|X_{t}|^{p}]<\infty\).
5. If it holds that \[V_{S,0}(x,\beta_{S})=V_{S,0}(x,\beta_{S}^{\dagger}),\ \ V_{R,0}(x,\beta_{R})=V_{R,0}(x,\beta_{R}^{\dagger}),\ \ V_{R}(x,\sigma)=V_{R}(x,\sigma^{\dagger}),\] for \(x\) in a set of probability \(1\) under \(\nu_{\theta^{\dagger}}\), then \(\beta_{S}=\beta_{S}^{\dagger}\), \(\beta_{R}=\beta_{R}^{\dagger}\), \(\sigma=\sigma^{\dagger}\).
Note that under conditions (C1) and (H)-II, the law of the solution to the degenerate SDE (Hypo-II) admits a smooth Lebesgue density, as we explained in Section 2.1. We write the true value of the parameter for a model in (Hypo-II) as \(\theta^{\dagger}=(\beta_{S_{1}}^{\dagger},\beta_{S_{2}}^{\dagger},\beta_{R}^{\dagger},\sigma^{\dagger})\in\Theta\). The latter, \(\theta^{\dagger}\), is assumed to lie in the interior of \(\Theta\). Recall the definition of function \(V_{0}:\mathbb{R}^{N}\times\Theta\to\mathbb{R}^{N}\) in (3). Then, the contrast estimator defined in (10) has the following asymptotic properties in the high-frequency observation setting.
**Theorem 1** (Consistency): _Assume that conditions (H)-II, (C1)-(C5) hold. If \(n\to\infty\), \(\Delta_{n}\to 0\) and \(n\Delta_{n}\to\infty\), then_
\[\hat{\theta}_{n}\xrightarrow{\mathbb{P}_{\theta^{\dagger}}}\theta^{\dagger}.\]
**Theorem 2** (Asymptotic Normality): _Assume that conditions (H)-II, (C1)-(C5) hold. If \(n\to\infty\), \(\Delta_{n}\to 0\) and \(n\Delta_{n}\to\infty\) with \(\Delta_{n}=o(n^{-1/2})\), then_
\[\begin{bmatrix}\sqrt{\tfrac{n}{\Delta_{n}^{3}}}\big(\hat{\beta}_{S_{1},n}-\beta_{S_{1}}^{\dagger}\big)\\ \sqrt{\tfrac{n}{\Delta_{n}}}\big(\hat{\beta}_{S_{2},n}-\beta_{S_{2}}^{\dagger}\big)\\ \sqrt{n\Delta_{n}}\big(\hat{\beta}_{R,n}-\beta_{R}^{\dagger}\big)\\ \sqrt{n}\big(\hat{\sigma}_{n}-\sigma^{\dagger}\big)\end{bmatrix}\xrightarrow{\mathcal{L}_{\theta^{\dagger}}}\mathscr{N}\big(\mathbf{0}_{N_{\theta}},\Gamma^{-1}(\theta^{\dagger})\big),\]
_where the asymptotic precision matrix \(\Gamma(\theta^{\dagger})\) is given as:_
\[\Gamma(\theta^{\dagger})=\mathrm{diag}\Big{(}\Gamma^{\beta_{S_{1}}}(\theta^{ \dagger}),\,\Gamma^{\beta_{S_{2}}}(\theta^{\dagger}),\,\Gamma^{\beta_{R}}( \theta^{\dagger}),\,\Gamma^{\sigma}(\theta^{\dagger})\Big{)}, \tag{11}\]
_with the involved block matrices \(\Gamma^{\beta_{S_{1}}}(\theta^{\dagger})\in\mathbb{R}^{N_{\beta_{S_{1}}}\times N_{\beta_{S_{1}}}}\), \(\Gamma^{\beta_{S_{2}}}(\theta^{\dagger})\in\mathbb{R}^{N_{\beta_{S_{2}}}\times N_{\beta_{S_{2}}}}\), \(\Gamma^{\beta_{R}}(\theta^{\dagger})\in\mathbb{R}^{N_{\beta_{R}}\times N_{\beta_{R}}}\), \(\Gamma^{\sigma}(\theta^{\dagger})\in\mathbb{R}^{N_{\sigma}\times N_{\sigma}}\) specified as:_
\[\Gamma^{\beta_{S_{1}}}_{ij}(\theta^{\dagger})=720\int\partial_{\beta_{S_{1}}^{i}}V_{S_{1},0}(x_{S},\beta_{S_{1}}^{\dagger})^{\top}\,a_{S_{1}}^{-1}(x,\theta^{\dagger})\,\partial_{\beta_{S_{1}}^{j}}V_{S_{1},0}(x_{S},\beta_{S_{1}}^{\dagger})\,\nu_{\theta^{\dagger}}(dx);\]
\[\Gamma^{\beta_{S_{2}}}_{ij}(\theta^{\dagger})=12\int\partial_{\beta_{S_{2}}^{i}}V_{S_{2},0}(x,\beta_{S_{2}}^{\dagger})^{\top}\,a_{S_{2}}^{-1}(x,\theta^{\dagger})\,\partial_{\beta_{S_{2}}^{j}}V_{S_{2},0}(x,\beta_{S_{2}}^{\dagger})\,\nu_{\theta^{\dagger}}(dx);\]
\[\Gamma^{\beta_{R}}_{ij}(\theta^{\dagger})=\int\partial_{\beta_{R}^{i}}V_{R,0}(x,\beta_{R}^{\dagger})^{\top}\,a_{R}^{-1}(x,\sigma^{\dagger})\,\partial_{\beta_{R}^{j}}V_{R,0}(x,\beta_{R}^{\dagger})\,\nu_{\theta^{\dagger}}(dx);\]
\[\Gamma^{\sigma}_{ij}(\theta^{\dagger})=\tfrac{1}{2}\int\mathrm{tr}\big(\partial_{\sigma^{i}}\Sigma(x,\theta^{\dagger})\,\Sigma^{-1}(x,\theta^{\dagger})\,\partial_{\sigma^{j}}\Sigma(x,\theta^{\dagger})\,\Sigma^{-1}(x,\theta^{\dagger})\big)\,\nu_{\theta^{\dagger}}(dx).\]
The proofs of Theorems 1 & 2 are given in Appendix C.
**Remark 4**: _Gloter and Yoshida (2020) prove consistency and asymptotic normality of a contrast estimator constructed via a locally Gaussian scheme for class (Hypo-I). Our proofs follow a different approach from the one in the above work. Indicatively, a condition on the step-size of \(\Delta_{n}=o(n^{-1/2})\) is not required for our proof of consistency, whereas it is needed in Gloter and Yoshida (2020). Our proofs avoid such a
condition by making use of preliminary convergence rates for the estimators \(\hat{\beta}_{S_{1},n}\), \(\hat{\beta}_{S_{2},n}\) and some key identities in the involved matrix calculations to control terms arising in the expansion of the logarithm of the contrast function; this method then provides the consistency for \(\hat{\beta}_{R,n}\). We believe that the proofs derived in this work also provide a direction for further results to be obtained on the analysis of contrast functions and the estimates they deliver for general classes of hypo-elliptic SDEs, see the discussion in Section 6._
#### 4.1.3 Case Study - Bias due to Incorrect Drift Expansion
We have proven that the contrast estimator based on the proposed scheme (LG-II) is asymptotically unbiased under the high-frequency, complete observation regime. The inclusion of an appropriate number of terms from the stochastic Taylor expansion of \(V_{S_{1},0}(X_{S_{1},t},X_{S_{2},t},\beta_{S_{1}})\) and \(V_{S_{2},0}(X_{t},\beta_{S_{2}})\) in scheme (LG-II) is critical for obtaining desirable asymptotic properties. Omission of such terms will typically give rise to an asymptotic bias. In this subsection, we briefly highlight the effect of the 'drift correction' via a simple three-dimensional hypo-elliptic model from (Hypo-II). We consider the following SDE:
\[dq_{t} =p_{t}dt; \tag{12}\] \[dp_{t} =s_{t}dt;\] \[ds_{t} =-\beta s_{t}dt+\sigma dB_{t},\]
where \(\theta=(\beta,\sigma)\) is the parameter vector. We assume that all components of the system are observed, and consider the following discretisation scheme for SDE (12):
\[\bar{x}_{i+1}=\begin{bmatrix}\bar{q}_{i+1}\\ \bar{p}_{i+1}\\ \bar{s}_{i+1}\end{bmatrix}=\begin{bmatrix}\bar{q}_{i}+\bar{p}_{i}\Delta_{n}+\bar{s}_{i}\frac{\Delta_{n}^{2}}{2}\\ \bar{p}_{i}+\bar{s}_{i}\Delta_{n}-\beta\bar{s}_{i}\frac{\Delta_{n}^{2}}{2}\\ \bar{s}_{i}-\beta\bar{s}_{i}\Delta_{n}\end{bmatrix}+\begin{bmatrix}\sigma\times\int_{t_{i}}^{t_{i+1}}\int_{t_{i}}^{u}\int_{t_{i}}^{v}dB_{w}\,dv\,du\\ \sigma\times\int_{t_{i}}^{t_{i+1}}\int_{t_{i}}^{u}dB_{v}\,du\\ \sigma\times(B_{t_{i+1}}-B_{t_{i}})\end{bmatrix}. \tag{13}\]
Thus, terms of size \(\mathcal{O}(\Delta_{n}^{3})\) are not included in the deterministic part of the approximation \(\bar{q}_{i+1}\) of the smoothest component. Based on the conditionally Gaussian scheme (13), we define a contrast function as \((-2)\times\) (complete log-likelihood), that is, after some constants are removed:
\[\ell_{n}(\theta;x_{0:n})=6n\log\sigma+\frac{1}{\sigma^{2}}\sum_{i=1}^{n}m( \Delta_{n},x_{i-1},x_{i},\theta)^{\top}\,\Sigma^{-1}\,m(\Delta_{n},x_{i-1},x _{i},\theta),\]
where we have set \(x_{i}=[q_{i},p_{i},s_{i}]^{\top}\), \(0\leq i\leq n\), and
\[m(\Delta_{n},x_{i-1},x_{i},\theta)=\begin{bmatrix}\frac{1}{\sqrt{\Delta_{n}^{5}}}\big(q_{i}-q_{i-1}-p_{i-1}\Delta_{n}-s_{i-1}\frac{\Delta_{n}^{2}}{2}\big)\\ \frac{1}{\sqrt{\Delta_{n}^{3}}}\big(p_{i}-p_{i-1}-s_{i-1}\Delta_{n}+\beta s_{i-1}\frac{\Delta_{n}^{2}}{2}\big)\\ \frac{1}{\sqrt{\Delta_{n}}}\big(s_{i}-s_{i-1}+\beta s_{i-1}\Delta_{n}\big)\end{bmatrix},\quad\Sigma=\begin{bmatrix}\frac{1}{20}&\frac{1}{8}&\frac{1}{6}\\ \frac{1}{8}&\frac{1}{3}&\frac{1}{2}\\ \frac{1}{6}&\frac{1}{2}&1\end{bmatrix}.\]
Solving \(\partial_{\beta}\,\ell_{n}(\theta;x_{0:n})=0\), we obtain the contrast estimator for \(\beta\) as \(\widetilde{\beta}_{n}=\widetilde{g}_{n}/\widetilde{f}_{n}\), where we have defined:
\[\widetilde{f}_{n} =\left\{\frac{1}{2}\Sigma_{32}^{-1}+\Sigma_{33}^{-1}+\frac{1}{4}\Sigma_{22}^{-1}+\frac{1}{2}\Sigma_{23}^{-1}\right\}\times\frac{1}{n}\sum_{i=1}^{n}s_{i-1}^{2};\]
\[\widetilde{g}_{n} =-\frac{1}{n\sqrt{\Delta_{n}}}\sum_{i=1}^{n}s_{i-1}\times\left\{\Sigma_{31}^{-1}\times\frac{q_{i}-q_{i-1}-p_{i-1}\Delta_{n}-s_{i-1}\frac{\Delta_{n}^{2}}{2}}{\sqrt{\Delta_{n}^{5}}}\,+\,\Sigma_{32}^{-1}\times\frac{p_{i}-p_{i-1}-s_{i-1}\Delta_{n}}{\sqrt{\Delta_{n}^{3}}}+\Sigma_{33}^{-1}\times\frac{s_{i}-s_{i-1}}{\sqrt{\Delta_{n}}}\right.\]
\[\qquad\left.+\frac{1}{2}\Sigma_{21}^{-1}\times\frac{q_{i}-q_{i-1}-p_{i-1}\Delta_{n}-s_{i-1}\frac{\Delta_{n}^{2}}{2}}{\sqrt{\Delta_{n}^{5}}}+\frac{1}{2}\Sigma_{22}^{-1}\times\frac{p_{i}-p_{i-1}-s_{i-1}\Delta_{n}}{\sqrt{\Delta_{n}^{3}}}+\frac{1}{2}\Sigma_{23}^{-1}\times\frac{s_{i}-s_{i-1}}{\sqrt{\Delta_{n}}}\right\}.\]
From the ergodicity of the process \(\{s_{t}\}\) and Lemma 2 in the Appendix, we have that as \(n\to\infty\), \(\Delta_{n}\to 0\) and \(n\Delta_{n}\to\infty\),
\[\widetilde{f}_{n}\xrightarrow{\mathbb{P}_{\theta^{\dagger}}}c_{1}\times\int s^{2}\,\nu_{\theta^{\dagger}}(ds),\]
for a non-zero constant \(c_{1}=\frac{1}{2}\Sigma_{32}^{-1}+\Sigma_{33}^{-1}+\frac{1}{4}\Sigma_{22}^{-1}+\frac{1}{2}\Sigma_{23}^{-1}\), where \(\nu_{\theta^{\dagger}}(ds)\) is the invariant distribution of \(\{s_{t}\}\) under the true parameter \(\theta^{\dagger}\). For the numerator \(\widetilde{g}_{n}\), we apply Lemmas 2 & 3 in the Appendix to obtain that
\[\widetilde{g}_{n}\xrightarrow{\mathbb{P}_{\theta^{\dagger}}}(c_{1}+c_{2})\times\beta^{\dagger}\times\int s^{2}\,\nu_{\theta^{\dagger}}(ds),\]
for a non-zero constant \(c_{2}\equiv\frac{1}{6}\Sigma_{31}^{-1}+\frac{1}{12}\Sigma_{21}^{-1}\). Hence, it holds that, if \(n\to\infty\), \(\Delta_{n}\to 0\) and \(n\Delta_{n}\to\infty\), then
\[\widetilde{\beta}_{n}\xrightarrow{\mathbb{P}_{\theta^{\dagger}}}\left(1+\frac{c_{2}}{c_{1}}\right)\times\beta^{\dagger}.\]
Thus, the drift estimation based on the discretisation scheme (13) with inappropriate drift expansion is, in general, asymptotically biased. One can check that the above bias is removed upon use of our locally Gaussian scheme (LG-II) instead of (13).
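The constants \(c_{1},c_{2}\) above are explicit functions of the entries of \(\Sigma^{-1}\), so the size of the multiplicative factor \(1+c_{2}/c_{1}\) can be checked directly. The snippet below (our own quick check, not part of the paper's code) computes it numerically and confirms that it differs from one.

```python
# Numerical evaluation of the asymptotic factor 1 + c2/c1 for scheme (13).
import numpy as np

Sigma = np.array([
    [1/20, 1/8, 1/6],
    [1/8,  1/3, 1/2],
    [1/6,  1/2, 1.0],
])
P = np.linalg.inv(Sigma)   # P = Sigma^{-1}

c1 = 0.5 * P[2, 1] + P[2, 2] + 0.25 * P[1, 1] + 0.5 * P[1, 2]
c2 = P[2, 0] / 6 + P[1, 0] / 12
print(c1, c2, 1 + c2 / c1)   # the estimator converges to (1 + c2/c1) * beta_true
```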
### Partial Observation Regime
#### 4.2.1 Case Study - Bias due to Finite-Differences
To motivate the 'appropriateness' of the proposed locally Gaussian scheme (LG-II) in the context of parameter inference in a partial observation setting, we illustrate that naive use of finite-differences to impute hidden components (a practice quite common in applications) induces a bias in the estimation of the SDE parameters. To observe this, consider the model (12) again but with the drift parameter \(\beta\) fixed to \(1\):
\[dq_{t} =p_{t}dt;\] \[dp_{t} =s_{t}dt; \tag{14}\] \[ds_{t} =-s_{t}dt+\sigma dB_{t},\]
for \(\sigma>0\). We then apply the Euler-Maruyama scheme for the first equation of (14) and the locally Gaussian scheme (LG-I) for the remaining dynamics, i.e.,
\[\begin{bmatrix}\bar{q}_{i+1}\\ \bar{p}_{i+1}\\ \bar{s}_{i+1}\end{bmatrix}=\begin{bmatrix}\bar{q}_{i}+\bar{p}_{i}\Delta_{n}\\ \bar{p}_{i}+\bar{s}_{i}\Delta_{n}-\bar{s}_{i}\frac{\Delta_{n}^{2}}{2}\\ \bar{s}_{i}-\bar{s}_{i}\Delta_{n}\end{bmatrix}+\begin{bmatrix}0\\ \sigma\times\int_{t_{i}}^{t_{i+1}}\int_{t_{i}}^{u}dB_{v}du\\ \sigma\times\left(B_{i+1}-B_{i}\right)\end{bmatrix}. \tag{15}\]
Scheme (15) is degenerate since the upper-most equation does not involve noise. We now consider the estimator based on the likelihood provided by (15), given the discrete-time observations \(\{q_{0:n},s_{0:n}\}\), and with the hidden paths \(p_{0:n}\) imputed via the first equation of (15) using the observations \(q_{0:n}\).
Remark 5: _In practice, the rough component \(s_{t}\) is often not observed, so one must impute the missing components \(s_{0:n}\) conditionally on the observations \(q_{0:n}\) by making use of the transition density (or some approximation of it) for both co-ordinates \((p,s)\) of (14). One would reasonably expect that presence of bias will be typical in such a practical scenario, if it is found to be present in the simpler case when \(s_{t}\) is directly observed._
The complete likelihood of the discretisation scheme (15) is given as:
\[\prod_{i=1}^{n}\Bigl\{\frac{1}{\sqrt{(2\pi)^{2}\sigma^{4}\Delta_{n}^{4}\,|\Sigma|}}\exp\Bigl(-\frac{1}{2\sigma^{2}}\,m(\Delta_{n},y_{i-1},y_{i})^{\top}\,\Sigma^{-1}\,m(\Delta_{n},y_{i-1},y_{i})\Bigr)\times\delta\bigl(q_{i}-q_{i-1}-p_{i-1}\Delta_{n}\bigr)\Bigr\},\]
where we have defined \(y_{i}=[p_{i},s_{i}]^{\top}\), \(0\leq i\leq n\), and
\[m(\Delta_{n},y_{i-1},y_{i})=\begin{bmatrix}\frac{1}{\sqrt{\Delta_{n}^{3}}}\big(p_{i}-p_{i-1}-s_{i-1}\Delta_{n}+s_{i-1}\frac{\Delta_{n}^{2}}{2}\big)\\ \frac{1}{\sqrt{\Delta_{n}}}\big(s_{i}-s_{i-1}+s_{i-1}\Delta_{n}\big)\end{bmatrix},\quad\Sigma=\begin{bmatrix}\frac{1}{3}&\frac{1}{2}\\ \frac{1}{2}&1\end{bmatrix}.\]
Integrating out \(p_{0:n}\), we obtain the marginal likelihood \(f_{n}(\sigma\,;q_{0:n+1},s_{0:n})\) as:
\[f_{n}(\sigma\,;q_{0:n+1},s_{0:n})=\prod_{i=1}^{n}\Bigl\{\frac{1}{\sqrt{(2\pi)^{2}\sigma^{4}\Delta_{n}^{4}\,|\Sigma|}}\exp\Bigl(-\frac{1}{2\sigma^{2}}\,m(\Delta_{n},\hat{y}_{i-1},\hat{y}_{i})^{\top}\,\Sigma^{-1}\,m(\Delta_{n},\hat{y}_{i-1},\hat{y}_{i})\Bigr)\Bigr\},\]
where \(\hat{y}_{i}=[\hat{p}_{i},s_{i}]^{\top}\), with \(\hat{p}_{i}=\bigl(q_{i+1}-q_{i}\bigr)/\Delta_{n}\). Then, we obtain the following contrast function for \(\sigma\), after removing constant terms from \((-2)\times\log f_{n}(\sigma;q_{0:n+1},s_{0:n})\):
\[\ell_{n}(\sigma;q_{0:n+1},s_{0:n})=4n\log\sigma+\frac{1}{\sigma^{2}}\sum_{i=1}^{n}m(\Delta_{n},\hat{y}_{i-1},\hat{y}_{i})^{\top}\,\Sigma^{-1}\,m(\Delta_{n},\hat{y}_{i-1},\hat{y}_{i}). \tag{16}\]
Solving \(\partial_{\sigma}\ell_{n}(\sigma;q_{0:n+1},s_{0:n})=0\), we obtain the estimator \(\hat{\sigma}_{n}\), such that:
\[(\hat{\sigma}_{n})^{2}=\frac{1}{2n}\sum_{i=1}^{n}m(\Delta_{n},\hat{y}_{i-1}, \hat{y}_{i})^{\top}\,\Sigma^{-1}\,m(\Delta_{n},\hat{y}_{i-1},\hat{y}_{i}).\]
It holds that, if \(n\to\infty\), \(\Delta_{n}\to 0\) and \(n\Delta_{n}\to\infty\), then
\[(\hat{\sigma}_{n})^{2}\xrightarrow{\mathbb{P}_{\theta^{\dagger}}}c\times(\sigma^{\dagger})^{2},\]
for a constant \(c\neq 1\) determined by the covariance structure of the Brownian integrals appearing in (15). That is, imputation of the hidden component \(p_{0:n}\) via finite-differences leads to asymptotically biased estimation of the diffusion parameter \(\sigma\) for class (Hypo-II), a conclusion in line with the ones reached in earlier works (Ditlevsen
and Samson, 2019; Pokern et al., 2009) that applied some conditionally Gaussian schemes for inference of specific hypo-elliptic models within the class (Hypo-I).
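The effect is easy to reproduce numerically. The sketch below is our own illustration (the simulation settings are chosen here for convenience and are not those of the paper's experiments): it generates data from the (LG-II) transition of (14), which is essentially exact for this linear model at a small step, imputes \(p\) by finite-differences and evaluates \(\hat{\sigma}_{n}\) from (16); the reported value settles away from the true \(\sigma\) even as the sample grows.

```python
# Empirical illustration of the finite-difference bias for model (14).
import numpy as np

rng = np.random.default_rng(1)
sigma_true, Dn, n = 2.0, 1e-3, 200_000

# (LG-II) transition for (12) with beta = 1, used here as the data-generating proxy.
F = np.array([[1, Dn, Dn**2/2 - Dn**3/6],
              [0, 1,  Dn - Dn**2/2],
              [0, 0,  1 - Dn]])
Q = sigma_true**2 * np.array([[Dn**5/20, Dn**4/8, Dn**3/6],
                              [Dn**4/8,  Dn**3/3, Dn**2/2],
                              [Dn**3/6,  Dn**2/2, Dn]])
L = np.linalg.cholesky(Q)
noise = rng.standard_normal((n, 3)) @ L.T
path = np.zeros((n + 1, 3))
for i in range(n):
    path[i + 1] = F @ path[i] + noise[i]

q, s = path[:, 0], path[:, 2]
p_hat = (q[1:] - q[:-1]) / Dn                               # finite-difference imputation
m1 = (p_hat[1:] - p_hat[:-1] - s[:-2]*Dn + s[:-2]*Dn**2/2) / Dn**1.5
m2 = (s[1:-1] - s[:-2] + s[:-2]*Dn) / Dn**0.5
P = np.linalg.inv(np.array([[1/3, 1/2], [1/2, 1.0]]))       # Sigma^{-1} in (16)
quad = P[0, 0]*m1**2 + 2*P[0, 1]*m1*m2 + P[1, 1]*m2**2
print("sigma_hat =", np.sqrt(quad.mean() / 2), " true =", sigma_true)
```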
#### 4.2.2 Conditional Gaussian Non-Linear Systems

We now highlight the use of a relatively straightforward Kalman filter recursion for carrying out statistical inference once the locally Gaussian scheme is adopted, for a rich sub-class of hypo-elliptic models, referred to here as _conditional Gaussian non-linear systems_. That is, the system is originally specified as a non-linear SDE but can be treated as a linear system given components that correspond to observations. For elliptic diffusions with such a structure, continuous-time filtering and smoothing have been investigated in engineering, see e.g. Chapter 8 of Chen (2023). Several important hypo-elliptic models used in applications fall within this sub-class, e.g., standard Langevin equations, Quasi-Markovian generalised Langevin Equations (QGLE-I, QGLE-II). Here, our interest lies in the sub-class derived via the general model (Hypo-II) once the constituent coefficients are specified as:
\[V_{S_{1},0}(x_{S},\beta_{S_{1}})=C_{\beta_{S_{1}}}x_{S_{1}}+\hat{C}_{\beta_{S_{1}}}x_{S_{2}},\quad V_{S_{2},0}(x,\beta_{S_{2}})=f_{S_{2}}(x_{S_{1}},\beta_{S_{2}})+C_{\beta_{S_{2}}}x_{H};\]
\[V_{R,0}(x,\beta_{R})=f_{R}(x_{S_{1}},\beta_{R})+C_{\beta_{R}}x_{H},\quad V_{R,j}(x,\sigma)=f_{R,j}(\sigma),\quad 1\leq j\leq d,\]
for \(x=(x_{S_{1}},x_{S_{2}},x_{R})=(x_{S_{1}},x_{H})\in\mathbb{R}^{N_{S_{1}}}\times \mathbb{R}^{N_{S_{2}}+N_{R}}\) and \(\theta=(\beta_{S_{1}},\beta_{S_{2}},\beta_{R},\sigma)\in\Theta\), where: (i) \(f_{S_{2}}\), \(f_{R}\), \(f_{R,j}\) are vector-valued functions, allowed to be non-linear w.r.t. the state \(x_{S_{1}}\); (ii) matrices
\[C_{\beta_{S_{1}}}\in\mathbb{R}^{N_{S_{1}}\times N_{S_{1}}},\qquad\hat{C}_{\beta_{S_{1}}}\in\mathbb{R}^{N_{S_{1}}\times N_{S_{2}}},\]
\[C_{\beta_{S_{2}}}\in\mathbb{R}^{N_{S_{2}}\times(N_{S_{2}}+N_{R})},\qquad C_{\beta_{R}}\in\mathbb{R}^{N_{R}\times(N_{S_{2}}+N_{R})}\]
are independent of the state \(x\). Critically, given the observable component \(x_{S_{1}}\), the drift functions are linear functions of the hidden component \(x_{H}\). For the model with the above choice of coefficients, the locally Gaussian scheme (LG-II) writes as:
\[\bar{X}_{i+1}=\begin{bmatrix}\bar{X}_{S_{1},i+1}\\ \bar{X}_{S_{2},i+1}\\ \bar{X}_{R,i+1}\end{bmatrix}=b(\Delta_{n},\bar{X}_{S_{1},i},\theta)+A(\Delta_ {n},\bar{X}_{S_{1},i},\theta)\begin{bmatrix}\bar{X}_{S_{2},i}\\ \bar{X}_{R,i}\end{bmatrix}+w_{i}(\Delta_{n},\theta), \tag{18}\]
for functions \(b:(0,\infty)\times\mathbb{R}^{N_{S_{1}}}\times\Theta\to\mathbb{R}^{N}\), \(A:(0,\infty)\times\mathbb{R}^{N_{S_{1}}}\times\Theta\to\mathbb{R}^{N\times(N_{S_{2}}+N_{R})}\) and an \(N\)-dimensional Gaussian variate \(w_{i}(\Delta_{n},\theta)\). Since the right-hand-side of scheme (18) is linear w.r.t. the hidden components \(\bar{X}_{S_{2},i}\), \(\bar{X}_{R,i}\) given the observed component \(\bar{X}_{S_{1},i}\), one can obtain Kalman filtering and smoothing recursions, and calculate the marginal likelihood for the observations \(\bar{X}_{S_{1},0:n}\). We provide the closed-form filtering and marginal likelihood calculations in Appendix F. We use these tools in the numerical experiments of parameter inference under the partial observation regime in Section 5 that follows.
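To make the recursion concrete, the sketch below (our own self-contained illustration, with our own initialisation choices, rather than a transcription of the recursions in Appendix F) implements the Kalman-filter marginal log-likelihood for the linear model (12) discretised with (LG-II), when only the smoothest component \(q\) is observed without noise.

```python
# Kalman-filter marginal log-likelihood for the (LG-II) discretisation of
# dq = p dt, dp = s dt, ds = -beta s dt + sigma dB, observing q exactly.
import numpy as np

def lg2_log_likelihood(q_obs, delta, beta, sigma):
    F = np.array([
        [1.0, delta, delta**2 / 2 - beta * delta**3 / 6],
        [0.0, 1.0,   delta - beta * delta**2 / 2],
        [0.0, 0.0,   1.0 - beta * delta],
    ])
    Q = sigma**2 * np.array([
        [delta**5 / 20, delta**4 / 8, delta**3 / 6],
        [delta**4 / 8,  delta**3 / 3, delta**2 / 2],
        [delta**3 / 6,  delta**2 / 2, delta],
    ])
    H = np.array([[1.0, 0.0, 0.0]])

    # Prior: q_0 pinned at the first observation, diffuse prior on (p_0, s_0) (our choice).
    m = np.array([q_obs[0], 0.0, 0.0])
    P = np.diag([1e-10, 1e4, 1e4])
    loglik = 0.0
    for y in q_obs[1:]:
        m = F @ m                      # predict mean
        P = F @ P @ F.T + Q            # predict covariance
        S = (H @ P @ H.T).item()       # innovation variance (no observation noise)
        v = y - (H @ m).item()         # innovation
        K = (P @ H.T) / S              # Kalman gain
        m = m + (K * v).ravel()
        P = P - K @ H @ P
        loglik += -0.5 * (np.log(2 * np.pi * S) + v**2 / S)
    return loglik
```

Maximising this function over \((\beta,\sigma)\), e.g. with a Nelder-Mead search as in Section 5.1, then yields an estimator in the spirit of \(\hat{\theta}_{n,1}\).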
Remark 6: _Vroylandt et al. (2022) studied parameter inference for the QGLE of first-type in (QGLE-I), where they applied an Euler-Maruyama scheme to construct Kalman filtering and smoothing for the rough components \((p_{t},s_{t})\) given the velocity \(p_{t}\), with values of the latter obtained (via finite-differences) from discrete observations of the position \(q_{t}\). Then, they used Kalman filtering and smoothing within an Expectation-Maximisation (EM) algorithm to estimate the parameters. However, as we have seen, such a finite-differences approach induces bias in the estimation of the diffusion parameters._
## 5 Numerical Studies
### Linear SDE in a Partial Observation Regime
We illustrate empirically, for an example SDE model, that parameter estimation via the proposed locally Gaussian scheme (LG-II) leads to asymptotically unbiased estimation under the partial observation regime.
We also highlight the effect of the drift correction in the properties of the estimators. We again consider the model studied in Section 4.1.3, that is,
\[dq_{t} =p_{t}dt;\] \[dp_{t} =s_{t}dt;\] \[ds_{t} =-\beta s_{t}dt+\sigma dB_{t},\]
where \(\theta=(\beta,\sigma)\in\Theta=(0,\infty)\times(0,\infty)\) is the parameter vector. In agreement with practice, we assume that only discrete observations of the smoothest component, \(q_{0:n}\), are available, with an equidistant step-size \(\Delta_{n}\). We compute the following two estimators based on two different discretisation schemes:
\[\hat{\theta}_{n,j}=(\hat{\beta}_{n,j},\hat{\sigma}_{n,j})=\operatorname{ argmax}_{\theta\in\Theta}\log p_{j}(\theta;q_{0:n}),\quad j=1,2,\]
where \(p_{1}(\theta;q_{0:n})\) is the approximate likelihood of the observations as obtained by use of Kalman filter in the setting of our locally Gaussian scheme (LG-II), and \(p_{2}(\theta;q_{0:n})\) is a different approximate likelihood obtained in the setting of the following conditionally Gaussian scheme that omits correction terms of order \(\mathcal{O}(\Delta_{n}^{2})\) (resp. \(\mathcal{O}(\Delta_{n}^{3})\)) from the stochastic Taylor expansion of the drift function of component \(p\) (resp. \(q\)):
\[\begin{bmatrix}\widetilde{q}_{i+1}\\ \widetilde{p}_{i+1}\\ \widetilde{s}_{i+1}\end{bmatrix}=\begin{bmatrix}\widetilde{q}_{i}+\widetilde{p}_{i}\Delta_{n}\\ \widetilde{p}_{i}+\widetilde{s}_{i}\Delta_{n}\\ \widetilde{s}_{i}-\beta\widetilde{s}_{i}\Delta_{n}\end{bmatrix}+\begin{bmatrix}\sigma\times\int_{t_{i}}^{t_{i+1}}\int_{t_{i}}^{u}\int_{t_{i}}^{v}dB_{w}\,dv\,du\\ \sigma\times\int_{t_{i}}^{t_{i+1}}\int_{t_{i}}^{u}dB_{v}\,du\\ \sigma\times\big(B_{i+1}-B_{i}\big)\end{bmatrix}.\]
We generate 100 independent realisations of the dataset \(q_{0:n}\) by sub-sampling trajectories obtained from scheme (LG-II) with a small step-size \(10^{-4}\). We have chosen this scheme because it is expected to have better accuracy than other classical schemes (such as the Euler-Maruyama scheme) due to the higher-order stochastic Taylor expansion of the drift functions. We consider the following three high-frequency scenarios for the data:
**Set I.**\(n=5\cdot 10^{5}\), \(\Delta_{n}=10^{-3}\), \(T(=n\Delta_{n})=500\).
**Set II.**\(n=2\cdot 10^{6}\), \(\Delta_{n}=5\cdot 10^{-4}\), \(T=10^{3}\).
**Set III.**\(n=10^{7}\), \(\Delta_{n}=10^{-3}\), \(T=10^{4}\).
The true parameter value is set to \(\theta^{\dagger}=(\beta^{\dagger},\sigma^{\dagger})=(2.0,4.0)\), and the Nelder-Mead method is applied to optimise the marginal likelihoods. In Figure 1, we plot the 100 realisations of the two different estimators. Table 1 summarises the mean and standard error of \(\hat{\theta}_{n}-\theta^{\dagger}\), i.e., (estimate) - (true value), from the 100 repetitions. First, we observe that the estimates of \(\hat{\theta}_{n,1}\) (using our scheme (LG-II)) are centred at the true value in all scenarios, thus in this case we have an empirical illustration of asymptotically unbiased estimation in the partial observation setting. Secondly, it is clear from the figures and the table that the mean of the estimates of \(\hat{\theta}_{n,2}\) (estimator based on the conditionally Gaussian scheme without appropriate drift correction) is shifted from the true value, and seems to be centred at \((\beta,\sigma)=(2.10,3.960)\). Thus \(\hat{\theta}_{n,2}\) induces an asymptotic bias in the partial observation regime, in agreement with the case study of Section 4.1.3 for the complete observation case. Notably, there is a clear separation between the two estimators of \(\sigma\). We stress here that the bias in \(\hat{\theta}_{n,2}\) is not removed with increasing \(n\) or decreasing \(\Delta_{n}\). Also, one can still observe the bias even if the datasets are obtained with other numerical schemes, e.g., the Euler-Maruyama scheme, rather than scheme (LG-II).
### Quasi-Markovian Generalised Langevin Equations
#### 5.2.1 Scalar Extended State
We consider the QGLE describing a one-dimensional positional domain:
\[\begin{split}& dq_{t}=p_{t}dt;\\ & dp_{t}=\big(-\nabla U(q_{t})+\lambda s_{t}\big)dt;\\ & ds_{t}=(-\lambda p_{t}-\alpha s_{t})dt+\sigma dB_{t},\quad(q_{0},p_{0},s_{0})\in\mathbb{R}^{3},\end{split} \tag{19}\]
where \(\alpha>0\), \(\sigma>0\), \(\lambda\in\mathbb{R}\setminus\{0\}\) and \(U:\mathbb{R}\rightarrow\mathbb{R}\). In this experiment, we consider the following two choices of potential \(U\):
\[q\mapsto U_{\text{HO}}(q)=D\times\tfrac{q^{2}}{2},\qquad q\mapsto U_{\text{DW }}(q)=D\times\tfrac{q^{2}}{2}+\sin\Big{(}\tfrac{1}{4}+2q\Big{)},\]
where \(D>0\) is a parameter. The function \(U_{\text{DW}}\) (used in experiments in the work of Leimkuhler and Sachs (2022)) represents an uneven double well potential, under which model (19) is non-linear. We generate 30 independent datasets by sub-sampling trajectories produced by the discretisation scheme (LG-II) with a small step-size \(10^{-4}\) so that the obtained observations correspond to \(n=2\times 10^{5}\), \(\Delta_{n}=10^{-3}\), \(T=n\Delta_{n}=200\). For the complete observation regime, we compute the contrast estimator (10) for each given dataset. For the experiments with partial observations, we use the trajectories of the position \(q_{t}\) only, extracted from the complete observations, and compute the MLE by maximising the marginal likelihood obtained from the Kalman recursion formula under the locally Gaussian scheme (LG-II), as shown in Section 4.2.2. To minimise the relevant target functions we use the adaptive moments (Adam) optimiser with the following algorithmic specifications: (step-size) = 0.1, (exponential decay rate for the first moment estimates) = 0.9, (exponential decay rate for the second moment estimates) = 0.999, (additive term for numerical stability) = \(10^{-8}\). The true parameters are set to: \((D^{\dagger},\lambda^{\dagger},\alpha^{\dagger},\sigma^{\dagger})=(1.0,2.0,4.0,1.0)\) and \((D^{\dagger},\lambda^{\dagger},\alpha^{\dagger},\sigma^{\dagger})=(1.0,2.0,4.0,4.0)\) for the QGLE with
\begin{table}
\begin{tabular}{l l l l l} \hline \hline Set & \multicolumn{2}{c}{\(\hat{\beta}_{n}-\beta^{\dagger}\)} & \multicolumn{2}{c}{\(\hat{\sigma}_{n}-\sigma^{\dagger}\)} \\ \cline{2-5} & Proposed scheme & Incorrect scheme & Proposed scheme & Incorrect scheme \\ \hline I. & 0.0037 (0.0091) & 0.1016 (0.0939) & -0.0024 (0.0039) & -0.0415 (0.0040) \\ II. & 0.0125 (0.0699) & 0.1109 (0.0752) & -0.0011 (0.0019) & -0.0381 (0.0020) \\ III. & 0.0028 (0.0296) & 0.0984 (0.0252) & -0.0026 (0.0015) & -0.0418 (0.0013) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Mean and standard error (in parentheses) of (maximum likelihood estimate) – (true value) from \(100\) trajectories of partial observations.
harmonic potential \(U_{\rm HO}\) and the double well potential \(U_{\rm DW}\), respectively. Also, the initial guesses for the parameter are set to: \((D_{0},\lambda_{0},\alpha_{0},\sigma_{0})=(2.0,2.0,2.0,2.0)\) and \((D_{0},\lambda_{0},\alpha_{0},\sigma_{0})=(3.0,3.0,3.0,3.0)\) for the case of \(U_{\rm HO}\) and \(U_{\rm DW}\), respectively. We summarise the mean and standard error of the estimates from the 30 independent trajectories in Table 2. We notice that the results for the complete observation regime are in agreement with the analytical results in Theorem 2. For instance, convergence to the true values appears to be faster for the parameters \((D,\lambda)\) in the smooth component \(p_{t}\) (recall the convergence rate \(\sqrt{\Delta_{n}/n}\) for such parameters in the CLT of Theorem 2). Moreover, under the partial observation regime, the estimates also appear to be centred around the true parameter, with standard errors that are larger than the ones in the case of complete observations (as expected). Thus, parameter inference carried out via the proposed locally Gaussian scheme (LG-II) appears in this case to provide unbiased estimates in the partial observation regime.
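For reference, the sketch below implements the Adam update with the algorithmic specifications quoted above (step-size 0.1, first-moment decay 0.9, second-moment decay 0.999, stability term \(10^{-8}\)). The finite-difference gradient and the toy quadratic objective are illustrative assumptions; in the experiments the target would be the contrast function or the marginal likelihood.

```python
# Hedged sketch: generic Adam minimiser with the hyperparameters stated in the text.
import numpy as np

def fd_grad(loss, theta, h=1e-5):
    """Central finite-difference gradient (illustrative stand-in for exact gradients)."""
    g = np.zeros_like(theta)
    for i in range(theta.size):
        e = np.zeros_like(theta); e[i] = h
        g[i] = (loss(theta + e) - loss(theta - e)) / (2 * h)
    return g

def adam_minimise(loss, theta0, n_iter=500,
                  lr=0.1, beta1=0.9, beta2=0.999, eps=1e-8):
    theta = np.asarray(theta0, dtype=float)
    m = np.zeros_like(theta)   # first-moment estimate
    v = np.zeros_like(theta)   # second-moment estimate
    for t in range(1, n_iter + 1):
        g = fd_grad(loss, theta)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g**2
        m_hat = m / (1 - beta1**t)          # bias correction
        v_hat = v / (1 - beta2**t)
        theta -= lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta

# e.g. adam_minimise(lambda th: np.sum((th - 1.0)**2), np.array([3.0, 3.0, 3.0, 3.0]))
```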
#### 5.2.2 Multivariate Extended State
We consider a QGLE with a one-dimensional coordinate and a multivariate extended variable, motivated by the work of Ayaz et al. (2021), which studies protein-folding kinetics via a Quasi-Markovian GLE (QGLE-II) and showcases that a QGLE accurately reproduces simulations of molecular dynamics (MD) that involve memory effects in the friction. In their investigation, a one-dimensional reaction coordinate, \(q_{t}\), given as the sum of the separations between native contacts, is modelled via the following QGLE:
\[dq_{t}=\frac{1}{m}\times p_{t}dt;\] \[dp_{t}=-\nabla U(q_{t})dt+\sum_{l=1}^{L}s_{l,t}dt; \tag{20}\] \[ds_{l,t}=-\frac{1}{\tau_{l}}\times s_{l,t}\,dt-\frac{c_{l}}{\tau_{l}}\times p_{t}\,dt+\frac{\sqrt{2\beta^{-1}\tau_{l}}}{\tau_{l}}\,dB_{l,t},\quad s_{l,0}\sim\mathcal{N}(0,\beta^{-1}),\quad 1\leq l\leq L,\]
where \(m,\beta>0\) denote the mass and the inverse thermal energy respectively, \(\{c_{l},\tau_{l}\}_{1\leq l\leq L}\) are the unknown parameters taking positive values, for \(L\geq 1\), and \(U:\mathbb{R}\rightarrow\mathbb{R}\), the folding free energy landscape for proteins, is specified as \(q\mapsto U(q)=-\beta^{-1}\log\nu(q)\) with \(\nu(\cdot)\) being the equilibrium probability density function. QGLE (20) corresponds to the non-Markovian GLE (GLE) with the memory kernel given as a so-called _Prony series_:
\[K(t)=\sum_{l=1}^{L}\frac{c_{l}}{\tau_{l}}\times\exp\Bigl{(}-\frac{t}{\tau_{l} }\Bigr{)},\quad t\geq 0. \tag{21}\]
\begin{table}
\begin{tabular}{c c c c c} \hline \hline \multirow{2}{*}{Potential} & \multirow{2}{*}{Parameter} & \multirow{2}{*}{True value} & \multicolumn{2}{c}{Mean (standard error) of estimates} \\ \cline{4-5} & & & Complete observation & Partial observation \\ \hline \multirow{4}{*}{\(U_{\rm HO}\)} & \(D\) & 1.0 & 1.0010 (0.0054) & 1.0128 (0.1069) \\ & \(\lambda\) & 2.0 & 2.0011 (0.0052) & 1.9732 (0.1131) \\ & \(\alpha\) & 4.0 & 4.0263 (0.2019) & 4.0412 (0.2070) \\ & \(\sigma\) & 1.0 & 1.0017 (0.0096) & 1.0157 (0.0572) \\ \hline \multirow{4}{*}{\(U_{\rm DW}\)} & \(D\) & 1.0 & 1.0000 (0.0001) & 1.0130 (0.1166) \\ & \(\lambda\) & 2.0 & 2.0000 (0.0001) & 2.0094 (0.1240) \\ & \(\alpha\) & 4.0 & 4.0146 (0.1937) & 4.0310 (0.1969) \\ & \(\sigma\) & 4.0 & 3.9982 (0.0030) & 3.9880 (0.2417) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Parameter estimation of the QGLE (19). Mean and standard error (in brackets) of maximum likelihood estimates from 30 trajectories of observations.
Ayaz et al. (2021) constructed QGLE (20) with \(L=5\) by determining the parameters via a least squares method so that the memory kernel (21) fits the one extracted numerically from the observed time-series of \(q\).
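As an illustration of the kernel computations involved, the sketch below evaluates the Prony-series kernel (21) on a grid and measures the relative error between a reference kernel and a kernel built from perturbed parameters. The reference values follow the true parameters quoted later in this section; the "estimated" values are placeholders, not the fitted values of Ayaz et al. (2021).

```python
# Hedged sketch: evaluating the Prony-series memory kernel (21).
import numpy as np

def prony_kernel(t, c, tau):
    """K(t) = sum_l (c_l / tau_l) * exp(-t / tau_l), for t >= 0."""
    t = np.atleast_1d(t)[:, None]
    c, tau = np.asarray(c), np.asarray(tau)
    return np.sum((c / tau) * np.exp(-t / tau), axis=1)

t_grid = np.logspace(-3, 1, 200)                                   # t in [0.001, 10]
K_true = prony_kernel(t_grid, c=[0.22, 1.2], tau=[0.007, 4.6])      # reference values
K_est  = prony_kernel(t_grid, c=[0.21, 1.25], tau=[0.0072, 4.5])    # placeholder "estimates"
rel_err = np.abs(K_est - K_true) / np.abs(K_true)
print(rel_err.max())
```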
In our experiment, we estimate the unknown parameters by maximising the marginal likelihood given the partial observations \(q_{0:n}\). For simplicity, we set \(m=1\), \(L=2\), and specify the free energy function as \(q\mapsto U(q)=a(q-q_{\min})^{2}(q-q_{\max})^{2}+bq^{3}\) with constants \((q_{\min},q_{\max},a,b)=(0.30,0.90,1200,0.001)\). We set \(\beta^{-1}=2.949\) and select the true parameters as \(\theta^{\dagger}=(c_{1}^{\dagger},\tau_{1}^{\dagger},c_{2}^{\dagger},\tau_{2}^{\dagger})=(0.22,0.007,1.2,4.6)\), as such a choice closely reproduces the shape of the memory kernel estimated in Ayaz et al. (2021). We generate 50 independent trajectories of \(q\) on the time interval \([0,1500]\) by applying scheme (LG-II) to QGLE (20) with step-size \(10^{-4}\). We discard the observations up to time 500 and sub-sample the datasets \(q_{0:n}\) in equilibrium, so that \(n=10^{6}\), \(\Delta_{n}=10^{-3}\), \(T=1000\). In Figure 2, we plot the shape of the free energy \(U\) and one trajectory of the component \(q\) from the QGLE (20) in the chosen setting. Notice that QGLE (20) is a conditionally Gaussian non-linear system (given the component \(q\)), thus upon adoption of the locally Gaussian discretisation (LG-II), the marginal likelihood can be calculated via the Kalman filter shown in Section 4.2.2. We use the Nelder-Mead method to optimise the marginal likelihoods with the initial value \(\theta_{0}=(0.1,0.01,1.0,10.0)\). Figure 3 summarises the results from estimation of \(\theta=(c_{1},\tau_{1},c_{2},\tau_{2})\) given the partial observations. The boxplots of (approximate) maximum likelihood estimates in Figure (3a) indicate that parameter estimation carried out via the locally Gaussian scheme (LG-II) delivers consistent estimates of the parameters. We typically observe that the standard error of the estimates for \((c_{1},\tau_{1})\) is much smaller than that for \((c_{2},\tau_{2})\). In Figure (3b), we plot the memory kernel (21) with the parameters set equal to the mean value of the 50 MLEs, to assess the agreement between the estimated memory kernel and the reference kernel, the latter computed with the true parameter values; the relative absolute errors between the true and estimated memory kernels are within \(0.1\) over times \(t\in[0.001,10]\).
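A minimal sketch of the burn-in removal and sub-sampling step described above (fine step \(10^{-4}\), burn-in up to time 500, retained grid \(\Delta_{n}=10^{-3}\)). The fine trajectory `q_fine` below is a placeholder for the output of the simulation scheme.

```python
# Hedged sketch: burn-in removal and sub-sampling of a finely simulated path.
import numpy as np

dt_fine, burn_in_time, Delta_n = 1e-4, 500.0, 1e-3
q_fine = np.zeros(round(1500.0 / dt_fine) + 1)   # placeholder trajectory on [0, 1500]

start = round(burn_in_time / dt_fine)            # discard observations up to t = 500
stride = round(Delta_n / dt_fine)                # thin from step 1e-4 to 1e-3 (keep every 10th)
q_obs = q_fine[start::stride]
n = q_obs.size - 1                               # about 1e6 retained increments on [500, 1500]
```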
## 6 Conclusions and Future Directions
We have studied parameter inference procedures for the highly degenerate class of SDEs that includes a wide range of practical models, e.g., quasi-Markovian generalised Langevin equations (QGLEs), epidemiological models with time-varying parameters (Spannaus et al., 2022; Dureau et al., 2013), non-linear continuous-time autoregressive models (Tsai and Chan, 2000) and the classical Lorenz system upon consideration of noise effects (Coti Zelati and Hairer, 2021). We have introduced the locally Gaussian time-discretisation scheme (LG-II) and provided analytical/numerical results showcasing that parameter estimation based upon such a scheme sidesteps biases that would arise under alternative schemes. The approach followed in this
Figure 2: Left panel (2a): The free energy used in the experiment. Right panel (2b): A trajectory of the observable coordinate \(q_{t}\) from the QGLE (20).
work for establishing our results for class (Hypo-II) is expected to also guide extensions to more general classes of degenerate diffusions, for which iterated Lie brackets of _any_ order, i.e., \([\tilde{V}_{0},[\tilde{V}_{0},\ldots,[\tilde{V}_{0},V_{k}]]]\), \(1\leq k\leq d\), are required for Hormander's condition to hold. Here, we draw upon the understanding obtained via the study of classes (Hypo-I) and (Hypo-II) to summarise key arguments for carrying out unbiased parameter estimation for general hypo-elliptic systems. First, in a partial observation regime, use of a degenerate discretisation (e.g. Euler-Maruyama), or equivalently of finite differences to impute latent components, will induce bias in the estimates of diffusion coefficient parameters (recall the case study in Section 4.2.1). To avoid use of finite differences, the development of a non-degenerate conditionally Gaussian scheme for the full coordinates is essential, with the Gaussian noise obtained via high-order stochastic Taylor expansion of the drift functions. A lot of care should be given to the deterministic terms of the expansion to be included in the scheme, to avoid the emergence of biases in estimates of drift parameters (recall the analytical study in Section 4.1.3 and the numerical results in Section 5.1). We summarise below the above-designated roadmap for the construction of 'correct' time-discretisation schemes for general classes of hypo-elliptic diffusions.
1. For the rough component, \(X_{R}\), the Euler-Maruyama scheme is applied.
2. For the smooth coordinates in the model, one recursively applies stochastic Taylor expansion to the drift functions so that Gaussian variates, in the form of iterated integrals involving Brownian motions, e.g. of the form \(\int_{t_{i}}^{t_{i+1}}B_{s}ds,\int_{t_{i}}^{t_{i+1}}\int_{t_{i}}^{u}B_{s}dsdu\), appear in all smooth coordinates. This process is completed once the variance-covariance matrix of the Gaussian approximation is positive definite (a sampling sketch for these variates is given after Table 3 below).
3. For a smooth component containing Gaussian noise of size \(\mathcal{O}(\Delta_{n}^{(2k-1)/2})\), \(k\geq 2\), the scheme should include all deterministic terms from the stochastic Taylor expansion up to size \(\mathcal{O}(\Delta_{n}^{k})\).
Indicatively, Table 3 summarises the size of the deterministic and noisy parts of the locally Gaussian scheme (LG-II) for class (Hypo-II).
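As an illustration of step 2 above, the sketch below jointly samples, for a scalar Brownian motion over one step of length \(dt\), the increment and the two iterated time-integrals that drive the rough, smooth and most-smooth components in Table 3; their joint covariance is available in closed form, so exact Gaussian sampling is possible. The function name and seeding are assumptions made for the sketch.

```python
# Hedged sketch: exact joint sampling of
#   xi1 = B_{t+dt} - B_t,
#   xi2 = int_t^{t+dt} (B_s - B_t) ds,
#   xi3 = int_t^{t+dt} int_t^u (B_s - B_t) ds du,
# whose joint Gaussian covariance is known in closed form.
import numpy as np

def iterated_bm_integrals(dt, size, seed=0):
    rng = np.random.default_rng(seed)
    cov = np.array([
        [dt,           dt**2 / 2.0, dt**3 / 6.0],
        [dt**2 / 2.0,  dt**3 / 3.0, dt**4 / 8.0],
        [dt**3 / 6.0,  dt**4 / 8.0, dt**5 / 20.0],
    ])
    L = np.linalg.cholesky(cov)
    return rng.standard_normal((size, 3)) @ L.T   # rows: (xi1, xi2, xi3)

samples = iterated_bm_integrals(dt=1e-3, size=10_000)
# Standard deviations approx (dt^{1/2}, dt^{3/2}/sqrt(3), dt^{5/2}/sqrt(20)),
# matching the O(dt^{1/2}), O(dt^{3/2}), O(dt^{5/2}) sizes in Table 3.
print(samples.std(axis=0))
```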
Figure 3: Parameter estimation for QGLE (20) given 50 independent observed trajectories of \(q_{t}\). Left panel (3a): Boxplots of the maximum likelihood estimates. The true values are \(\theta^{\dagger}=(c_{1}^{\dagger},\tau_{1}^{\dagger},c_{2}^{\dagger},\tau_{2} ^{\dagger})=(0.22,0.007,1.2,4.6)\). Right panel (3b): Memory kernel (21) computed with the true parameter (true kernel) and with the mean value of MLEs (estimated kernel).
Our work in this paper leads to further research in several directions. In the CLT of the main analytical result for the parameter estimator (Theorem 2), the step-size \(\Delta_{n}\) is required to satisfy \(\Delta_{n}=o(n^{-1/2})\). An open problem for hypo-elliptic diffusions is the construction of estimators giving a CLT under a weaker condition \(\Delta_{n}=o(n^{-1/p})\), \(p\geq 3\). We expect that such a general estimator for degenerate diffusion models can be produced, with accompanying theory then following the strategy used in our proofs in this work, as we discussed in Remark 4. In a different direction, the effectiveness of the developed locally Gaussian scheme is yet to be studied under a low-frequency observation setting, i.e. with the step-size \(\Delta\) assumed fixed and not small enough, in which case a number, say \(M\), of inner sub-steps are introduced by the user. Under such a setting, the discretisation error of the true (intractable) density over the period of size \(\Delta\) typically diminishes as \(M\) increases. In the case of elliptic diffusions, explicit rates of convergence to zero are provided in Gobet and Labart (2008); Iguchi and Yamada (2021). Finally, in this work, in the practical scenario of partial observations, we have investigated the behaviour of discretisation schemes via case studies and numerical experiments. Analytical theory would be quite instructive in this setting. Techniques used in the context of hidden Markov models (see e.g. Douc et al. (2014)) are expected to be valuable in such a pursuit.
## Acknowledgements
YI acknowledges support from the Additional Funding Programme for Mathematical Sciences, delivered by EPSRC (EP/V521917/1) and the Heilbronn Institute for Mathematical Research.
## Appendix A Preliminaries
In Section A.1 we present some notation used in the Appendix. In Section A.2 we introduce three auxiliary results needed in the proof of our main theorems (Theorems 1, 2) in Section 4.
### Notation
For \(0=t_{0}<\cdots<t_{n}\), with equi-distant step-size \(\Delta_{n}\), we write \(X_{i}\) for the observation at time \(t_{i}\) of the solution of the hypo-elliptic SDE (Hypo-II) under the true parameter value \(\theta^{\dagger}\), defined upon the filtered probability space \((\Omega,\mathcal{F},\{\mathcal{F}_{i}\}_{i\geq 0},\mathbb{P})\). We denote by \(\nu_{\theta^{\dagger}}\) the invariant distribution of process (Hypo-II) under \(\theta^{\dagger}\). In agreement with the structure of class (Hypo-II), we often represent \(x\in\mathbb{R}^{N}\) and \(\theta\in\Theta\subset\mathbb{R}^{N_{\theta}}\) as
\[x=(x_{S_{1}},x_{S_{2}},x_{R})\in\mathbb{R}^{N_{S_{1}}}\times\mathbb{R}^{N_{S_{2}}}\times\mathbb{R}^{N_{R}},\quad x_{S}\equiv(x_{S_{1}},x_{S_{2}});\] \[\theta=(\beta_{S_{1}},\beta_{S_{2}},\beta_{R},\sigma)\in\Theta_{\beta_{S_{1}}}\times\Theta_{\beta_{S_{2}}}\times\Theta_{\beta_{R}}\times\Theta_{\sigma},\quad\beta_{S}\equiv(\beta_{S_{1}},\beta_{S_{2}}).\]
For \(\varphi(\cdot,\theta):\mathbb{R}^{N}\to\mathbb{R}\), \(\theta\in\Theta\), bounded up to second derivatives, we define the differential operators \(\mathcal{L}\) and \(\mathcal{L}_{j}\), \(1\leq j\leq d\):
\[\mathcal{L}\varphi(x,\theta)=\sum_{i=1}^{N}V_{0}^{i}(x,\theta)\frac{\partial\varphi}{\partial x_{i}}(x,\theta)+\tfrac{1}{2}\sum_{i_{1},i_{2}=1}^{N}\sum_{k=1}^{d}V_{k}^{i_{1}}(x,\theta)V_{k}^{i_{2}}(x,\theta)\frac{\partial^{2}\varphi}{\partial x_{i_{1}}\partial x_{i_{2}}}(x,\theta);\]
\begin{table}
\begin{tabular}{l c c c} \hline \hline Component & \multicolumn{2}{c}{Gaussian part (size, driving variate)} & Deterministic part \\ \hline \(\bar{X}_{S_{1},i+1}^{(\mathrm{II})}\) & \(\mathcal{O}(\Delta_{n}^{5/2})\), & \(\left(\int_{t_{i}}^{t_{i+1}}\int_{t_{i}}^{u}B_{s}\,ds\,du\right)\) & \(\mathcal{O}(\Delta_{n}^{3})\) \\ \(\bar{X}_{S_{2},i+1}^{(\mathrm{II})}\) & \(\mathcal{O}(\Delta_{n}^{3/2})\), & \(\left(\int_{t_{i}}^{t_{i+1}}B_{s}\,ds\right)\) & \(\mathcal{O}(\Delta_{n}^{2})\) \\ \(\bar{X}_{R,i+1}^{(\mathrm{II})}\) & \(\mathcal{O}(\Delta_{n}^{1/2})\), & \(\left(B_{t_{i+1}}-B_{t_{i}}\right)\) & \(\mathcal{O}(\Delta_{n})\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Size (in \(\Delta_{n}\)) of the terms appearing in the locally Gaussian scheme (LG-II).
\[\mathcal{L}_{j}\varphi(x,\theta)=\sum_{i=1}^{N}V_{j}^{i}(x,\theta)\frac{\partial \varphi}{\partial x_{i}}(x,\theta),\quad 1\leq j\leq d.\]
Application of the above differential operators is extended to vector-valued functions in the apparent way, via separate consideration of each scalar component. We recall some notation used in the definition of the contrast function \(\ell_{n}(\theta)\) in (9). We have that
\[\mu(\Delta,x,\theta)=\left[\mu_{S_{1}}(\Delta,x,\theta)^{\top},\,\mu_{S_{2}}( \Delta,x,\theta)^{\top},\,\mu_{R}(\Delta,x,\theta)^{\top}\right]^{\top},\]
where
\[\begin{bmatrix}\mu_{S_{1}}(\Delta,x,\theta)\\ \mu_{S_{2}}(\Delta,x,\theta)\\ \mu_{R}(\Delta,x,\theta)\end{bmatrix}=\begin{bmatrix}x_{S_{1}}+V_{S_{1},0}(x_{S},\beta_{S_{1}})\Delta+\mathcal{L}V_{S_{1},0}(x,\theta)\frac{\Delta^{2}}{2}+\mathcal{L}^{2}V_{S_{1},0}(x,\theta)\frac{\Delta^{3}}{6}\\ x_{S_{2}}+V_{S_{2},0}(x,\beta_{S_{2}})\Delta+\mathcal{L}V_{S_{2},0}(x,\theta)\frac{\Delta^{2}}{2}\\ x_{R}+V_{R,0}(x,\beta_{R})\Delta\end{bmatrix}.\]
When \(\Delta=1\), we simply write
\[\mu(x,\theta)\equiv\mu(1,x,\theta).\]
For \(x=(x_{S_{1}},x_{S_{2}},x_{R})\in\mathbb{R}^{N}\equiv\mathbb{R}^{N_{S_{1}}} \times\mathbb{R}^{N_{S_{2}}}\times\mathbb{R}^{N_{R}}\), \(y=(y_{S_{1}},y_{S_{2}},y_{R})\in\mathbb{R}^{N}\), \(\Delta>0\) and \(\theta\in\Theta\), we define
\[m(\Delta,x,y,\theta)=\left[\frac{\big{(}y_{S_{1}}-\mu_{S_{1}}(\Delta,x;\theta)\big{)}^{\top}}{\sqrt{\Delta^{5}}},\ \ \frac{\big{(}y_{S_{2}}-\mu_{S_{2}}(\Delta,x;\theta)\big{)}^{\top}}{\sqrt{\Delta^{3}}},\ \ \frac{\big{(}y_{R}-\mu_{R}(\Delta,x;\theta)\big{)}^{\top}}{\sqrt{\Delta}}\,\right]^{\top}. \tag{22}\]
We write, for \(1\leq i\leq n\),
\[m_{i}(\Delta,\theta)\equiv m(\Delta,X_{i-1},X_{i},\theta). \tag{23}\]
We use \(\Sigma(\Delta,x,\theta)\) to represent the covariance of one step of the local Gaussian scheme (LG-II) for the hypo-elliptic SDE (Hypo-II), given step-size \(\Delta>0\), initial point \(x\in\mathbb{R}^{N}\) and parameter \(\theta\). We often write
\[\Sigma(x,\theta)\equiv\Sigma(1,x,\theta).\]
We express the inverse of \(\Sigma(x,\theta)\) as:
\[\Sigma^{-1}(x,\theta)=\Lambda(x,\theta)=\begin{bmatrix}\Lambda_{S_{1}S_{1}}(x,\theta)&\Lambda_{S_{1}S_{2}}(x,\theta)&\Lambda_{S_{1}R}(x,\theta)\\ \Lambda_{S_{2}S_{1}}(x,\theta)&\Lambda_{S_{2}S_{2}}(x,\theta)&\Lambda_{S_{2}R}(x,\theta)\\ \Lambda_{RS_{1}}(x,\theta)&\Lambda_{RS_{2}}(x,\theta)&\Lambda_{RR}(x,\theta)\end{bmatrix}, \tag{24}\]
where each block matrix is specified as
\[\Lambda_{\iota_{1}\iota_{2}}(x,\theta)\in\mathbb{R}^{N_{\iota_{1}}\times N_{\iota_{2}}},\quad\iota_{1},\iota_{2}\in\{S_{1},S_{2},R\}.\]
Each block element of the matrix \(\Sigma(x,\theta)\) is given later in Section B, and we emphasise here that \(\Sigma(x,\theta)\) and its inverse \(\Lambda(x,\theta)\) depend on \(x\) and \((\beta_{S},\sigma)\) but not on the drift parameter \(\beta_{R}\) in the rough component, and this is critical in the proof of consistency of \(\hat{\beta}_{R,n}\). Thus, we sometimes write \(\Sigma(x,(\beta_{S},\sigma))\) and \(\Lambda(x,(\beta_{S},\sigma))\)
to highlight the parameter dependency. We define the mappings
\[\eta_{S_{1}}:\mathbb{R}^{N}\times\Theta_{\beta_{S_{1}}}\to\mathbb{R}^{N_{S_{1}}}, \quad\eta_{S_{2}}:\mathbb{R}^{N}\times\Theta_{\beta_{S_{2}}}\to\mathbb{R}^{N_{S_ {2}}},\quad\eta_{R}:\mathbb{R}^{N}\times\Theta_{\beta_{R}}\to\mathbb{R}^{N_{R}}\]
as
\[\eta_{S_{1}}(x,\beta_{S_{1}})=V_{S_{1},0}(x_{S},\beta_{S_{1}}^{\dagger})-V_{S_{1 },0}(x_{S},\beta_{S_{1}}),\quad\eta_{S_{2}}(x,\beta_{S_{2}})=V_{S_{2},0}(x, \beta_{S_{2}}^{\dagger})-V_{S_{2},0}(x,\beta_{S_{2}});\]
\[\eta_{R}(x,\beta_{R})=V_{R,0}(x,\beta_{R}^{\dagger})-V_{R,0}(x,\beta_{R}).\]
We write, with a slight abuse of notation, for \(0\leq i\leq n\),
\[\eta_{S_{1},i}(\Delta,\beta_{S_{1}})\equiv\tfrac{\eta_{S_{1}}(X_{i},\beta_{S_{1}})}{\Delta},\quad\eta_{S_{2},i}(\Delta,\beta_{S_{2}})\equiv\tfrac{\eta_{S_{2}}(X_{i},\beta_{S_{2}})}{\Delta}.\]
We denote by \(\mathcal{S}\) the space of functions \(f:[0,\infty)\times\mathbb{R}^{N}\times\Theta\to\mathbb{R}\) so that there are constants \(C,q>0\) such that \(|f(\Delta,x,\theta)|\leq C\Delta\left(1+|x|^{q}\right)\) for any \((\Delta,x,\theta)\in[0,\infty)\times\mathbb{R}^{N}\times\Theta\). For an \(M_{1}\times M_{2}\) matrix \(A\), with \(M_{1},M_{2}\geq 1\), we write each matrix entry as \([A]_{ij}\) for \(1\leq i\leq M_{1}\), \(1\leq j\leq M_{2}\). We recall the probability law of the process \(\{X_{t}\}_{t\geq 0}\) under a parameter \(\theta\in\Theta\) is written as \(\mathbb{P}_{\theta}\), and
\[\xrightarrow{\ \mathbb{P}_{\theta^{\dagger}}\ },\quad\xrightarrow{\ \mathcal{L}_{\theta^{\dagger}}\ },\]
indicate convergence in probability and distribution, respectively, under the true parameter \(\theta^{\dagger}\). An expectation under the probability law \(\mathbb{P}_{\theta}\) is written as \(\mathbb{E}_{\theta}\). We write
\[\partial_{u}=[\tfrac{\partial}{\partial u_{1}},\dots,\tfrac{\partial}{\partial u_{n}}]^{\top},\qquad\partial_{u}^{2}=\partial_{u}\partial_{u}^{\top}\equiv(\tfrac{\partial^{2}}{\partial u_{i}\partial u_{j}})_{i,j=1}^{n}\]
for the standard differential operators acting upon maps \(\mathbb{R}^{n}\to\mathbb{R}\), \(n\geq 1\). We also write \(\partial_{\alpha}^{u}=\tfrac{\partial^{l}}{\partial u_{\alpha_{1}}\cdots\partial u_{\alpha_{l}}}\) for a multi-index \(\alpha\in\{1,\dots,n\}^{l}\), \(l\in\mathbb{N}\). For a function \(g:\mathbb{R}^{n}\to\mathbb{R}^{m}\), \(n,m\in\mathbb{N}\), we write:
\[\partial_{u}g(u)^{\top}=\big{[}\tfrac{\partial}{\partial u_{1}}g^{j}(u)\big{]} _{1\leq i\leq n,1\leq j\leq m},\quad\partial_{u}^{\top}g(u)=\big{(}\partial_ {u}g(u)^{\top}\big{)}^{\top};\]
\[\partial_{u}^{\alpha}g(u)^{\top}=\big{[}\partial_{\alpha}^{\alpha}g^{1}(u), \dots,\partial_{\alpha}^{\alpha}g^{m}(u)\big{]},\quad\partial_{\alpha}^{ \alpha}g(u)=\big{(}\partial_{\alpha}^{\alpha}g(u)^{\top}\big{)}^{\top}.\]
### Auxiliary Results
We prepare some auxiliary results to be used in the proofs of Theorems 1 and 2.
Let \(Y_{i,n}\), \(U\) be random variables, with \(Y_{i,n}\) being \(\mathcal{F}_{i}\)-measurable. If
\[\sum_{i=1}^{n}\mathbb{E}_{\theta^{\dagger}}\big{[}\,Y_{i,n}\,|\,\mathcal{F}_{i-1}\big{]}\xrightarrow{\mathbb{P}_{\theta^{\dagger}}}U,\quad\sum_{i=1}^{n}\mathbb{E}_{\theta^{\dagger}}\big{[}\,\big{(}Y_{i,n}\big{)}^{2}\,|\,\mathcal{F}_{i-1}\big{]}\xrightarrow{\mathbb{P}_{\theta^{\dagger}}}0,\]
then \(\sum_{i=1}^{n}Y_{i,n}\xrightarrow{\mathbb{P}_{\theta^{\dagger}}}U\).
Proof.: See Lemma 2.1 in Genon-Catalot and Jacod (1993).
Let \(f:\mathbb{R}^{N}\times\Theta\to\mathbb{R}\) be differentiable w.r.t. \((x,\theta)\in\mathbb{R}^{N}\times\Theta\) with derivatives of polynomial growth in \(x\) uniformly in \(\theta\). Under conditions (C1)-(C4), it holds that, if \(n\to\infty\), \(\Delta_{n}\to 0\) and \(n\Delta_{n}\to\infty\), then
\[\tfrac{1}{n}\sum_{i=1}^{n}f(X_{i-1},\theta)\xrightarrow{\mathbb{P}_{\theta^{\dagger}}}\int f(x,\theta)\,\nu_{\theta^{\dagger}}(dx),\]
uniformly in \(\theta\in\Theta\).
Proof.: This is a multivariate version of Lemma 8 in Kessler (1997), so we omit the proof.
Let \(1\leq j_{1},j_{2}\leq N\) and assume that \(f:\mathbb{R}^{N}\times\Theta\to\mathbb{R}\) is as in Lemma 2. Under conditions (C1)-(C4), it holds that, if \(n\to\infty\), \(\Delta_{n}\to 0\) and \(n\Delta_{n}\to\infty\), then
\[\frac{1}{n}\sum_{i=1}^{n}f(X_{i-1},\theta)\,m_{i}^{j_{1}}(\Delta_{n},\theta^{\dagger})\,m_{i}^{j_{2}}(\Delta_{n},\theta^{\dagger})\xrightarrow{\mathbb{P}_{\theta^{\dagger}}}\int f(x,\theta)\,\big{[}\Sigma(x,\theta^{\dagger})\big{]}_{j_{1}j_{2}}\,\nu_{\theta^{\dagger}}(dx),\quad\text{uniformly in }\theta\in\Theta.\]
second equations of condition (H)-II yield:
\[\mathrm{span}\Big{\{}\mathrm{proj}_{N_{S_{1}}+1,N_{S}}\{[\tilde{V}_{0},V_{k}](x, \theta)\}:1\leq k\leq d\Big{\}}\]
\[=\mathrm{span}\Big{\{}\partial_{x_{n}}^{\top}V_{S_{1},0}(x,\beta_{S})V_{R,k}(x, \sigma):1\leq k\leq d\Big{\}}=\mathbb{R}^{N_{S_{2}}},\]
for each \((x,\theta)\in\mathbb{R}^{N}\times\Theta\), thus the matrix \(a_{S}(x,\theta)\) is positive definite. Similarly, due to condition (H)-II,
\[\mathrm{span}\Big{\{}\mathrm{proj}_{1,N_{S_{1}}}\{\big{[}\tilde{V}_{0},V_{k}] \big{[}x,\theta\big{]}\}:1\leq k\leq d\Big{\}}\]
\[=\mathrm{span}\big{\{}\partial_{x_{n}}^{\top}V_{S_{1},0}(x_{S},\beta_{S}) \partial_{x_{n}}^{\top}V_{S_{1},0}(x,\beta_{S})V_{R,k}(x,\sigma):1\leq k\leq d \big{\}}=\mathbb{R}^{N_{S_{1}}},\]
for each \((x,\theta)\in\mathbb{R}^{N}\times\Theta\). This implies the positive definiteness of \(a_{S_{1}}(x,\theta)\). The proof is complete.
### Proof of Main Results
In this section we prove the main results, i.e. Theorem 1, 2 in Section 4 of the main text. The proofs make use of some technical results from Appendix D.
#### Proof of Theorem 1 - Consistency
To show consistency, we study the limit of the contrast function \(\ell_{n}(\theta)\), defined in (9), that involves terms such as
\[\frac{X_{S_{1},i+1}-\mu_{S_{1}}(\Delta_{n},X_{i},\theta)}{\sqrt{\Delta_{n}^{5}}},\ \ \frac{X_{S_{2},i+1}-\mu_{S_{2}}(\Delta_{n},X_{i},\theta)}{\sqrt{\Delta_{n}^{3}}},\ \ 1\leq i\leq n-1,\]
where \(\{X_{i}\}_{i=0,...,n}\) are discrete-time observations under the true model (Hypo-II) with parameter \(\theta^{\dagger}\). Then, the stochastic Taylor expansion for \(X_{S,i+1}\) yields
\[\begin{split}\frac{X_{S_{1},i+1}-\mu_{S_{1}}(\Delta_{n},X_{i},\theta)}{\sqrt{\Delta_{n}^{5}}}&=\frac{V_{S_{1},0}(X_{S,i},\beta_{S_{1}}^{\dagger})-V_{S_{1},0}(X_{S,i},\beta_{S_{1}})}{\sqrt{\Delta_{n}^{3}}}+R_{S_{1}}(\Delta_{n},X_{i},\theta);\\ \frac{X_{S_{2},i+1}-\mu_{S_{2}}(\Delta_{n},X_{i},\theta)}{\sqrt{\Delta_{n}^{3}}}&=\frac{V_{S_{2},0}(X_{i},\beta_{S_{2}}^{\dagger})-V_{S_{2},0}(X_{i},\beta_{S_{2}})}{\sqrt{\Delta_{n}}}+R_{S_{2}}(\Delta_{n},X_{i},\theta),\end{split} \tag{29}\]
where \(R_{S_{1}}\), \(R_{S_{2}}\in\mathbb{S}\). Careful steps are needed to control the first terms in the right-hand-sides of (29) as \(\Delta_{n}\to 0\), within the proof of consistency. Our proof proceeds with the following strategy which extends arguments used in Iguchi et al. (2022):
1. We prove consistency, along with a convergence rate, for the estimator \(\hat{\beta}_{S_{1},n}\). That is, if \(n\to\infty\), \(\Delta_{n}\to 0\) and \(n\Delta_{n}\to\infty\), then \[\hat{\beta}_{S_{1},n}\xrightarrow{\mathbb{P}_{\theta^{\dagger}}}\beta_{S_{1}}^{\dagger}.\] In particular, we obtain the rate: \[\frac{1}{\sqrt{\Delta_{n}^{3}}}\big{(}\hat{\beta}_{S_{1},n}-\beta_{S_{1}}^{\dagger}\big{)}\xrightarrow{\mathbb{P}_{\theta^{\dagger}}}0.\] (30)
2. Making use of the convergence rate in (30), we prove consistency, along with a convergence rate, for the estimator \(\hat{\beta}_{S_{2},n}\). That is, if \(n\to\infty\), \(\Delta_{n}\to 0\) and \(n\Delta_{n}\to\infty\), then \[\hat{\beta}_{S_{2},n}\xrightarrow{\mathbb{P}_{\theta^{\dagger}}}\beta_{S_{2}}^{\dagger}.\] In particular, we obtain the rate: \[\frac{1}{\sqrt{\Delta_{n}}}\big{(}\hat{\beta}_{S_{2},n}-\beta_{S_{2}}^{\dagger}\big{)}\xrightarrow{\mathbb{P}_{\theta^{\dagger}}}0.\] (31)
3. Making use of the rates in (30) and (31), we prove consistency for the estimators \((\hat{\beta}_{R,n},\hat{\sigma}_{n})\). That is, if \(n\to\infty\), \(\Delta_{n}\to 0\) and \(n\Delta_{n}\to\infty\), then \[(\hat{\beta}_{R,n},\hat{\sigma}_{n})\xrightarrow{\mathbb{P}_{\theta^{\dagger}}}(\beta_{R}^{\dagger},\sigma^{\dagger}).\] We emphasise that in our proof of consistency, the condition \(\Delta_{n}=o(n^{-1/2})\) is not required, while Gloter and Yoshida (2020) assumed this condition throughout the proof of consistency in the case of the degenerate diffusion class (Hypo-I). Typically, in order to show the consistency of \(\hat{\beta}_{R,n}\), Gloter and Yoshida (2020) exploited the rates of convergence \[\sqrt{\tfrac{n}{\Delta_{n}}}(\hat{\beta}_{S,n}-\beta_{S}^{\dagger})\xrightarrow{\mathbb{P}_{\theta^{\dagger}}}0,\quad\sqrt{n}(\hat{\sigma}_{n}-\sigma^{\dagger})\xrightarrow{\mathbb{P}_{\theta^{\dagger}}}0\] (32) that are derived under the condition \(\Delta_{n}=o(n^{-1/2})\). In contrast, in our strategy, the rates of convergence (30) and (31) are obtained without requiring \(\Delta_{n}=o(n^{-1/2})\), and are put into effective use to avoid explosion of terms such as \[\tfrac{V_{S_{1},0}(X_{S,i},\beta_{S_{1}}^{\dagger})-V_{S_{1},0}(X_{S,i},\hat{\beta}_{S_{1},n})}{\sqrt{\Delta_{n}^{3}}},\quad\tfrac{V_{S_{2},0}(X_{i},\beta_{S_{2}}^{\dagger})-V_{S_{2},0}(X_{i},\hat{\beta}_{S_{2},n})}{\sqrt{\Delta_{n}}}\] as \(\Delta_{n}\to 0\) only, with the help of some results derived from straightforward matrix calculations (Lemmas 6, 13 and 14 shown later).
#### c.1.1 Step 1
Consistency of the estimator \(\hat{\beta}_{S_{1},n}\) is deduced from the following result.
**Lemma 4**: _Assume that conditions (H)-II and (C1)-(C4) hold. If \(n\to\infty\), \(\Delta_{n}\to 0\) and \(n\Delta_{n}\to\infty\), then,_
\[\tfrac{\Delta_{n}^{3}}{n}\ell_{n}(\theta)\xrightarrow{\mathbb{P}_{\theta^{\dagger}}}\int\eta_{S_{1}}(x,\beta_{S_{1}})^{\top}\Lambda_{S_{1}S_{1}}(x,\theta)\,\eta_{S_{1}}(x,\beta_{S_{1}})\,\nu_{\theta^{\dagger}}(dx),\]
_uniformly in \(\theta\in\Theta\)._
The proof is given in Appendix D.1. Lemma 4 implies the consistency of \(\hat{\beta}_{S_{1},n}\) via the following discussion. From the definition of estimator, we have that for every \(\varepsilon>0\),
\[\mathbb{P}_{\theta^{\prime}}\big{(}|\hat{\beta}_{S_{1},n}-\beta_{S_{1}}^{ \dagger}|>\varepsilon\big{)}\leq\mathbb{P}_{\theta^{\prime}}\big{(}\tfrac{ \Delta_{n}^{3}}{n}\ell_{n}\big{(}\hat{\theta}_{n}\big{)}<\tfrac{\Delta_{n}^{3} }{n}\ell_{n}\big{(}\beta_{S_{1}}^{\dagger},\hat{\beta}_{S_{2},n},\hat{\beta}_{ R,n},\hat{\sigma}_{n}\big{)}\big{)}.\]
From the compactness of \(\Theta\), the identifiability condition (C5) and the positive definiteness of \(\Lambda_{S_{1}S_{1}}(x,\theta)\) for any \((x,\theta)\in\mathbb{R}^{N}\times\Theta\), Lemma 4 gives
\[\mathbb{P}_{\theta^{\prime}}\big{(}\tfrac{\Delta_{n}^{3}}{n}\ell_{n}\big{(} \hat{\theta}_{n}\big{)}<\tfrac{\Delta_{n}^{3}}{n}\ell_{n}\big{(}\beta_{S_{1}}^ {\dagger},\hat{\beta}_{S_{2},n},\hat{\beta}_{R,n},\hat{\sigma}_{n}\big{)} \big{)}\to 0,\]
as \(n\to\infty\), \(\Delta_{n}\to 0\) and \(n\Delta_{n}\to\infty\), which leads to the consistency of \(\hat{\beta}_{S_{1},n}\).
We now prove the rate of convergence in (30). Considering the Taylor expansion of \(\partial_{\beta_{S_{1}}}\ell_{n}(\hat{\theta}_{n})\) around \(\partial_{\beta_{S_{1}}}\ell_{n}(\beta_{S_{1}}^{\dagger},\hat{\beta}_{S_{2},n}, \hat{\beta}_{R,n},\hat{\sigma}_{n})\) with an appropriate scaling factor, we obtain
\[\mathscr{A}_{S_{1},n}(\beta_{S_{1}}^{\dagger},\hat{\beta}_{S_{2},n},\hat{\beta }_{R,n},\hat{\sigma}_{n})=\mathscr{B}_{S_{1},n}(\hat{\theta}_{n})\,\times \tfrac{1}{\sqrt{\Delta_{n}^{3}}}\big{(}\hat{\beta}_{S_{1},n}-\beta_{S_{1}}^{ \dagger}),\]
where we have set, for \(\theta=(\beta_{S_{1}},\theta^{-\beta_{S_{1}}})\in\Theta\) with \(\theta^{-\beta_{S_{1}}}\equiv(\beta_{S_{2}},\beta_{R},\sigma)\),
\[\mathscr{A}_{S_{1},n}(\theta)=-\tfrac{\sqrt{\Delta_{n}^{3}}}{n}\partial_{\beta_ {S_{1}}}\ell_{n}(\theta),\quad\mathscr{B}_{S_{1},n}(\theta)=\tfrac{\Delta_{n}^ {3}}{n}\int_{0}^{1}\partial_{\beta_{S_{1}}}^{2}\ell_{n}\big{(}\beta_{S_{1}}^{ \dagger}+\lambda(\beta_{S_{1}}-\beta_{S_{1}}^{\dagger}),\theta^{-\beta_{S_{1}}} \big{)}\,d\lambda.\]
For the matrix \(\mathscr{B}_{S_{1},n}(\theta)\), we have the following result:
**Lemma 5**: _Assume that conditions (H)-II and (C1)-(C4) hold. If \(n\to\infty\), \(\Delta_{n}\to 0\) and \(n\Delta_{n}\to\infty\), then_
\[\mathscr{B}_{S_{1},n}(\hat{\beta}_{S_{1},n},\theta^{-\beta_{S_{1}}})\] \[\qquad\qquad\overset{\mathbb{P}_{\mathcal{E}_{1}}}{\longrightarrow} \int\partial_{\beta_{S_{1}}}\big{(}V_{S_{1},0}(x_{S},\beta_{S_{1}}^{\dagger}) \big{)}^{\top}\Lambda_{S_{1}S_{1}}\big{(}x,(\beta_{S_{1}}^{\dagger},\theta^{- \beta_{S_{1}}})\big{)}\partial_{\beta_{S_{1}}}^{\top}V_{S_{1},0}(x_{S},\beta_{ S_{1}}^{\dagger})\nu_{\theta^{\dagger}}(dx),\]
_uniformly in \(\theta^{-\beta_{S_{1}}}\equiv(\beta_{S_{1}},\beta_{R},\sigma)\in\Theta_{\beta _{S_{2}}}\times\Theta_{\beta_{R}}\times\Theta_{\sigma}\)._
We give the proof in Appendix D.2. We now check that \(\mathscr{A}_{S_{1},n}(\beta_{S_{1}}^{\dagger},\theta^{-\beta_{S_{1}}})\to 0\), as \(n\to\infty\), \(\Delta_{n}\to 0\) and \(n\Delta_{n}\to\infty\), uniformly in \(\theta^{-\beta_{S_{1}}}=(\beta_{S_{2}},\beta_{R},\sigma)\). We have:
\[\mathscr{A}_{S_{1},n}(\beta_{S_{1}}^{\dagger},\theta^{-\beta_{S_{1}}}) =\widetilde{\mathscr{A}_{S_{1},n}}(\beta_{S_{1}}^{\dagger},\theta^{-\beta_{S_ {1}}})+\overset{n}{\underset{i=1}{\sum}}R\big{(}\sqrt{\Delta_{n}},X_{i-1},( \beta_{S_{1}}^{\dagger},\theta^{-\beta_{S_{1}}})\big{)},\]
for \(R\in\mathcal{S}\), where we have set, for \(\theta=(\beta_{S_{1}},\beta_{S_{1}},\beta_{R},\sigma)\in\Theta\),
\[\widetilde{\mathscr{A}_{S_{1},n}}(\theta)=\tfrac{1}{n\sqrt{\Delta_{n}}}\sum_{ i=1}^{n}\partial_{\beta_{S_{1}}}\big{(}V_{S_{1},0}(X_{S_{i}-1},\beta_{S_{1}}) \big{)}^{\top}\,\Phi(X_{i-1},\theta)\,\eta_{S_{1}}(X_{i-1},\beta_{S_{2}})\]
with \(\Phi:\mathbb{R}^{N}\times\Theta\to\mathbb{R}^{N_{S_{1}}\times N_{S_{2}}}\) defined as:
\[\Phi(x,\theta)=\Lambda_{S_{1}S_{1}}(x,\theta)\,\partial_{x_{S_{2}}}^{\top}V_{S_{1},0}(x_{S},\beta_{S_{1}})+2\Lambda_{S_{1}S_{2}}(x,\theta). \tag{33}\]
From Lemmas 2 and 3 in Appendix A, we immediately have that if \(n\to\infty\), \(\Delta_{n}\to 0\) and \(n\Delta_{n}\to\infty\), then
\[\overset{n}{\underset{i=1}{\sum}}R\big{(}\sqrt{\Delta_{n}},X_{i-1},(\beta_{S_ {1}}^{\dagger},\,\theta^{-\beta_{S_{1}}})\big{)}\overset{\mathbb{P}_{\mathcal{ E}_{1}}}{\longrightarrow}0,\]
uniformly in \(\theta^{-\beta_{S_{1}}}\). Furthermore, we have that, for any \(\theta\in\Theta\), \(\widetilde{\mathscr{A}_{S_{1},n}}(\theta)=0\) with probability 1, from the following result:
**Lemma 6**: _Assume that condition (H)-II holds. We have that for any \((x,\theta)\in\mathbb{R}^{N}\times\Theta\):_
\[\Phi(x,\theta)=\mathfrak{0}_{N_{S_{1}}\times N_{S_{2}}}.\]
We give the proof in Appendix D.3. Hence, we obtain \(\mathscr{A}_{S_{1},n}(\beta_{S_{1}}^{\dagger},\theta^{-\beta_{S_{1}}}) \overset{\mathbb{P}_{\mathcal{E}_{1}}}{\longrightarrow}0\), and now convergence (30) holds.
#### c.1.2 Step 2
Making use of convergence (30), we obtain the following result whose proof is postponed to Appendix D.4.
**Lemma 7**: _Assume that conditions (H)-II and (C1)-(C4) hold. If \(n\to\infty\), \(\Delta_{n}\to 0\) and \(n\Delta_{n}\to\infty\), then_
\[\tfrac{\Delta_{n}}{n}\ell_{n}\big{(}\beta_{S_{1},n},\beta_{S_{2}},\beta_{R}, \sigma)\overset{\mathbb{P}_{\mathcal{E}_{1}}}{\longrightarrow}\int\eta_{S_{ 2}}(x,\beta_{S_{2}})^{\top}\Lambda_{S_{2}S_{2}}\big{(}x,(\beta_{S_{1}}^{\dagger },\beta_{S_{2}},\sigma)\big{)}\eta_{S_{2}}(x,\beta_{S_{2}})\nu_{\theta^{\dagger}}(dx ),\]
_uniformly in \((\beta_{S_{2}},\beta_{R},\sigma)\in\Theta_{\beta_{S_{2}}}\times\Theta_{\beta_{R }}\times\Theta_{\sigma}\)._
This result leads to the consistency of \(\hat{\beta}_{S_{2},n}\) following an argument similar to the one used in **Step 1** to show consistency of \(\hat{\beta}_{S_{1},n}\).
To prove convergence (31), we apply a Taylor expansion on the contrast function to get:
\[\mathscr{A}_{S,n}(\beta_{S_{1}}^{\dagger},\beta_{S_{2}}^{\dagger},\hat{\beta}_{R,n},\hat{\sigma}_{n})=\mathscr{B}_{S,n}(\hat{\theta}_{n})\,\times\begin{bmatrix}\frac{1}{\sqrt{\Delta_{n}^{3}}}(\hat{\beta}_{S_{1},n}-\beta_{S_{1}}^{\dagger})\\ \frac{1}{\sqrt{\Delta_{n}}}(\hat{\beta}_{S_{2},n}-\beta_{S_{2}}^{\dagger})\end{bmatrix},\]
where we have set, for \(\theta=(\beta_{S},\beta_{R},\sigma)\in\Theta\) with \(\beta_{S}\equiv(\beta_{S_{1}},\beta_{S_{2}})\),
\[\mathscr{A}_{S,n}(\theta) =\begin{bmatrix}-\frac{\sqrt{\Delta_{n}^{3}}}{n}\partial_{\beta_{S_{1}}}\ell_{n}(\theta)\\ -\frac{\sqrt{\Delta_{n}}}{n}\partial_{\beta_{S_{2}}}\ell_{n}(\theta)\end{bmatrix};\] \[\mathscr{B}_{S,n}(\theta) =\int_{0}^{1}M_{\beta_{S},n}\,\partial_{\beta_{S}}^{2}\ell_{n}(\beta_{S}^{\dagger}+\lambda(\beta_{S}-\beta_{S}^{\dagger}),\beta_{R},\sigma)\,M_{\beta_{S},n}\,d\lambda,\]
and \(M_{\beta_{S},n}\in\mathbb{R}^{N_{\beta_{S}}\times N_{\beta_{S}}}\) is defined as:
\[M_{\beta_{S},n}=\text{diag}\Big{(}\Big{[}\underbrace{\sqrt{\tfrac{\Delta_{n}^{3}}{n}},\ldots,\sqrt{\tfrac{\Delta_{n}^{3}}{n}}}_{N_{\beta_{S_{1}}}},\ \ \underbrace{\sqrt{\tfrac{\Delta_{n}}{n}},\ldots,\sqrt{\tfrac{\Delta_{n}}{n}}}_{N_{\beta_{S_{2}}}}\Big{]}^{\top}\Big{)}.\]
Convergence (31) is immediately deduced from the following result.
**Lemma 8**: _Assume that conditions (H)-II and (C1)-(C4) hold. If \(n\to\infty\), \(\Delta_{n}\to 0\) and \(n\Delta_{n}\to\infty\), then_
\[\mathscr{A}_{S,n}(\beta_{S}^{\dagger},\beta_{R},\sigma) \xrightarrow{\mathbb{P}_{\sigma,\uparrow}}\mathbf{0}_{N_{\beta_{S}}}; \tag{34}\] \[\mathscr{B}_{S,n}\big{(}\hat{\beta}_{S,n},\beta_{R},\sigma) \xrightarrow{\mathbb{P}_{\sigma,\uparrow}}2\times\text{diag}\Big{(}\mathscr{ B}_{S_{1}S_{1}}(\beta_{S}^{\dagger},\beta_{R},\sigma),\,\mathscr{B}_{S_{2}S_{2}}( \beta_{S}^{\dagger},\beta_{R},\sigma)\Big{)}, \tag{35}\]
_uniformly in \((\beta_{R},\sigma)\in\Theta_{\beta_{R}}\times\Theta_{\sigma}\), where \(\beta_{S}^{\dagger}\equiv(\beta_{S_{1}}^{\dagger},\beta_{S_{2}}^{\dagger})\), \(\hat{\beta}_{S,n}\equiv(\hat{\beta}_{S_{1},n},\hat{\beta}_{S_{2},n})\) and we have set:_
\[\mathscr{B}_{S_{1}S_{1}}(\theta) =720\int\partial_{\beta_{S_{1}}}\big{(}V_{S_{1},0}(x_{S},\beta_{S_{1}})\big{)}^{\top}a_{S_{1}}^{-1}(x,\theta)\,\partial_{\beta_{S_{1}}}^{\top}V_{S_{1},0}(x_{S},\beta_{S_{1}})\,\nu_{\theta^{\dagger}}(dx);\] \[\mathscr{B}_{S_{2}S_{2}}(\theta) =12\int\partial_{\beta_{S_{2}}}\big{(}V_{S_{2},0}(x,\beta_{S_{2}})\big{)}^{\top}a_{S_{2}}^{-1}(x,\theta)\,\partial_{\beta_{S_{2}}}^{\top}V_{S_{2},0}(x,\beta_{S_{2}})\,\nu_{\theta^{\dagger}}(dx),\]
_for \(x\in\mathbb{R}^{N}\), \(\theta=(\beta_{S},\beta_{R},\sigma)\in\Theta\), where \(\beta_{S}=(\beta_{S_{1}},\beta_{S_{2}})\)._
We give the proof in Appendix D.5.
#### c.1.3 Step 3
Finally, we prove the consistency of estimators \((\hat{\beta}_{R,n},\hat{\sigma}_{n})\). Working with the rates of convergence (30) and (31), we obtain the following result leading to the consistency of \(\hat{\sigma}_{n}\):
**Lemma 9**: _Assume that conditions (H)-II and (C1)-(C4) hold. If \(n\to\infty\), \(\Delta_{n}\to 0\) and \(n\Delta_{n}\to\infty\), then_
\[\tfrac{1}{n}\,\ell_{n}\big{(}\hat{\beta}_{S,n},\beta_{R},\sigma\big{)}\xrightarrow{\mathbb{P}_{\theta^{\dagger}}}\int\Big{\{}\mathrm{tr}\big{(}\Lambda(x,(\beta_{S}^{\dagger},\sigma))\Sigma(x,(\beta_{S}^{\dagger},\sigma^{\dagger}))\big{)}+\log\big{|}\Sigma(x,(\beta_{S}^{\dagger},\sigma))\big{|}\Big{\}}\,\nu_{\theta^{\dagger}}(dx),\]
_uniformly in \((\beta_{R},\sigma)\in\Theta_{\beta_{R}}\times\Theta_{\sigma}\)._
We provide the proof in Appendix D.6. To show the consistency of \(\hat{\beta}_{R,n}\), we consider, for \(\beta_{R}\in\Theta_{\beta_{n}}\),
\[\mathscr{L}(\beta_{R}):=\tfrac{1}{n\Delta_{n}}\ell_{n}\big{(}\hat{\beta}_{S_{1},n},\hat{\beta}_{S_{2},n},\beta_{R},\hat{\sigma}_{n}\big{)}-\tfrac{1}{n\Delta_{n}}\ell_{n}\big{(}\hat{\beta}_{S_{1},n},\hat{\beta}_{S_{2},n},\beta_{R}^{\dagger},\hat{\sigma}_{n}\big{)}.\]
The consistency of estimator \(\hat{\beta}_{R,n}\) is obtained via the following result whose proof is given in Appendix D.7.
Assume that conditions (H)-II and (C1)-(C4) hold. If \(n\to\infty\), \(\Delta_{n}\to 0\) and \(n\Delta_{n}\to\infty\), then
\[\mathscr{L}(\beta_{R})\xrightarrow{\mathbb{P}_{\theta^{\dagger}}}\int\eta_{R}(x,\beta_{R})^{\top}a_{R}^{-1}(x,\sigma^{\dagger})\,\eta_{R}(x,\beta_{R})\,\nu_{\theta^{\dagger}}(dx),\]
uniformly in \(\beta_{R}\in\Theta_{\beta_{R}}\).
The proof of consistency for the contrast estimator \(\hat{\theta}_{n}\) is now complete.
Proof of Theorem 2 - Asymptotic NormalityWe consider the Taylor expansion of the contrast function \(\ell_{n}(\theta)\):
\[\mathscr{C}_{n}(\theta^{\dagger})=\int_{0}^{1}\mathscr{I}_{n}\big{(}\theta^{ \dagger}+\lambda(\hat{\theta}_{n}-\theta^{\dagger})\big{)}d\lambda\times M_{n }(\hat{\theta}_{n}-\theta^{\dagger})\]
where we have set, for \(\theta\in\Theta\),
\[\mathscr{C}_{n}(\theta)=-M_{n}^{-1}\,\partial_{\theta}\ell_{n}(\theta),\quad \mathscr{I}_{n}(\theta)=M_{n}^{-1}\,\partial_{\theta}^{2}\ell_{n}(\theta)\,M _{n}^{-1},\quad M_{n}=\text{diag}(v_{n}),\]
with the \(N_{\theta}\)-dimensional vector \(v_{n}\) defined as:
\[v_{n}=\Big{[}\underbrace{\sqrt{\tfrac{n}{\Delta_{n}^{3}}},\dots,\sqrt{\tfrac{n}{\Delta_{n}^{3}}}}_{N_{\beta_{S_{1}}}},\ \ \underbrace{\sqrt{\tfrac{n}{\Delta_{n}}},\dots,\sqrt{\tfrac{n}{\Delta_{n}}}}_{N_{\beta_{S_{2}}}},\ \ \underbrace{\sqrt{n\Delta_{n}},\dots,\sqrt{n\Delta_{n}}}_{N_{\beta_{R}}},\ \ \underbrace{\sqrt{n},\dots,\sqrt{n}}_{N_{\sigma}}\Big{]}^{\top}.\]
The asymptotic normality immediately holds from the following two results - their proofs are shown in Appendices D.9 and D.10.
Assume that conditions (H)-II and (C1)-(C5) hold. If \(n\to\infty\), \(\Delta_{n}\to 0\) and \(n\Delta_{n}\to\infty\), then
\[\mathscr{I}_{n}\big{(}\theta^{\dagger}+\lambda(\hat{\theta}_{n}-\theta^{\dagger})\big{)}\xrightarrow{\mathbb{P}_{\theta^{\dagger}}}2\,\Gamma(\theta^{\dagger}),\]
uniformly in \(\lambda\in[0,1]\), where the matrix \(\Gamma(\theta^{\dagger})\) is defined as in (11) in the main text.

Assume that conditions (H)-II and (C1)-(C5) hold. If \(n\to\infty\), \(\Delta_{n}\to 0\) and \(n\Delta_{n}\to\infty\), with \(\Delta_{n}=o(n^{-1/2})\), then
\[\mathscr{C}_{n}(\theta^{\dagger})\xrightarrow{\mathcal{L}_{\theta^{\dagger}}}\mathcal{N}\big{(}0,\,4\,\Gamma(\theta^{\dagger})\big{)}.\]
### Proof of Technical Results
#### d.1.1 Proof of Lemma 4
We have that \(\frac{\Delta_{n}^{3}}{n}\ell_{n}(\theta)=\sum_{1\leq i\leq 4}\mathscr{E}_{i}(\theta)\), \(\theta=(\beta_{S_{1}},\beta_{S_{2}},\beta_{R},\sigma)\in\Theta\), where we have set:
\[\mathscr{E}_{1}(\theta) =\frac{1}{n}\sum_{i=1}^{n}\eta_{S_{1}}(X_{i-1},\beta_{S_{1}})^{\top}\Lambda_{S_{1}S_{1}}(X_{i-1},\theta)\eta_{S_{1}}(X_{i-1},\beta_{S_{1}});\] \[\mathscr{E}_{2}(\theta) =\frac{1}{n}\sum_{i=1}^{n}\sum_{1\leq j_{1},j_{2}\leq N}R_{1}^{j_{1}j_{2}}(\Delta_{n}^{q_{1}},X_{i-1},\theta)\,m_{i}^{j_{1}}(\Delta_{n},\theta^{\dagger})\,m_{i}^{j_{2}}(\Delta_{n},\theta^{\dagger});\] \[\mathscr{E}_{3}(\theta) =\frac{1}{n}\sum_{i=1}^{n}\sum_{1\leq j\leq N}R_{2}^{j}(\Delta_{n}^{q_{2}},X_{i-1},\theta)\,m_{i}^{j}(\Delta_{n},\theta^{\dagger});\] \[\mathscr{E}_{4}(\theta) =\frac{1}{n}\sum_{i=1}^{n}R_{3}(\Delta_{n}^{q_{3}},X_{i-1},\theta),\]
for some functions \(R_{1}^{j_{1}j_{2}},R_{2}^{j},R_{3}\in\mathcal{S}\) and constants \(q_{1},q_{2},q_{3}\geq 1\). From Lemmas 2, 3, we immediately have that as \(n\to\infty\), \(\Delta_{n}\to 0\) and \(n\Delta_{n}\to\infty\),
\[\mathscr{E}_{1}(\theta) \xrightarrow{\mathbb{P}_{\theta^{\dagger}}}\int\eta_{S_{1}}(x,\beta_{S_{1}})^{\top}\Lambda_{S_{1}S_{1}}(x,\theta)\eta_{S_{1}}(x,\beta_{S_{1}})\,\nu_{\theta^{\dagger}}(dx);\] \[\mathscr{E}_{k}(\theta) \xrightarrow{\mathbb{P}_{\theta^{\dagger}}}0,\ \ 2\leq k\leq 4,\]
uniformly in \(\theta=(\beta_{S_{1}},\beta_{S_{2}},\beta_{R},\sigma)\in\Theta\), and the proof is now complete.
#### d.2.2 Proof of Lemma 5
We define \(\mathscr{F}:\Theta\to\mathbb{R}^{N_{\beta_{S_{1}}}\times N_{\beta_{S_{1}}}}\) as:
\[\mathscr{F}(\theta)=\frac{\Delta_{n}^{3}}{n}\partial_{\beta_{S_{1}}}^{2}\ell_ {n}\left(\theta\right),\ \ \ \theta\in\Theta.\]
\(\mathscr{F}(\theta)\) can be expressed as \(\mathscr{F}(\theta)=\sum_{1\leq k\leq 6}\mathscr{F}_{k}(\theta)\), where we have set, for \(1\leq j_{1},j_{2}\leq N_{\beta_{S_{1}}}\) with multi-index \(\mathbf{j}=(j_{1},j_{2})\),
\[[\mathscr{F}_{1}(\theta)]_{j_{1}j_{2}} =\frac{1}{n}\sum_{i=1}^{n}(\partial_{\beta_{S_{1}}}^{\beta_{S_{1 }}}V_{S_{1},0}(X_{S,i-1},\beta_{S_{1}}))^{\top}\Lambda_{S_{1}S_{1}}(X_{i-1}, \theta)\partial_{j_{2}}^{\beta_{S_{1}}}V_{S_{1},0}(X_{S,i-1},\beta_{S_{1}});\] \[[\mathscr{F}_{2}(\theta)]_{j_{1}j_{2}} =\frac{1}{n}\sum_{i=1}^{n}\eta_{S_{1}}(X_{i-1},\beta_{S_{1}})^{ \top}\partial_{\beta_{S_{1}}}^{\beta_{S_{1}}}\Lambda_{S_{1}S_{1}}(X_{i-1}, \theta)\,\eta_{S_{1}}(X_{i-1},\beta_{S_{1}});\] \[[\mathscr{F}_{3}(\theta)]_{j_{1}j_{2}} =-\frac{2}{n}\sum_{i=1}^{n}\eta_{S_{1}}(X_{i-1},\beta_{S_{1}})^{ \top}\partial_{\lambda}^{\beta_{S_{1}}}\big{\{}\Lambda_{S_{1}S_{1}}(X_{i-1}, \theta)V_{S_{1},0}(X_{S,i-1},\beta_{S_{1}})\big{\}};\] \[[\mathscr{F}_{4}(\theta)]_{j_{1}j_{2}} =\frac{1}{n}\sum_{i=1}^{n}\sum_{1\leq k_{1},k_{2}\leq N}R_{k}^{j_ {1}j_{2}}(\Delta_{n}^{n},X_{i-1},\theta)\,m_{i}^{k_{1}}(\Delta_{n},\theta^{ \dagger})\,m_{i}^{k_{2}}(\Delta_{n},\theta^{\dagger});\] \[[\mathscr{F}_{5}(\theta)]_{j_{1}j_{2}} =\frac{1}{n}\sum_{i=1}^{n}\sum_{1\leq k\leq N}R_{k}^{j_{1}j_{2}}( \Delta_{n}^{q_{2}},X_{i-1},\theta)\,m_{i}^{k}(\Delta_{n},\theta^{\dagger});\] \[[\mathscr{F}_{6}(\theta)]_{j_{1}j_{2}} =\frac{1}{n}\sum_{i=1}^{n}R^{j_{1}j_{2}}(\Delta_{n}^{q_{3}},X_{i- 1},\theta),\]
for some functions \(R^{i,j_{1}}_{k_{1},k_{2}},R^{i,j_{2}}_{k},R^{i,j_{2}}\in\mathcal{S}\) and constants \(q_{1},q_{2},q_{3}>0\). It follows from Lemmas 2, 3 and the consistency of estimator \(\hat{\beta}_{S_{1},n}\) that if \(n\to\infty\), \(\Delta_{n}\to 0\) and \(n\Delta_{n}\to\infty\),
\[\Big{[}\mathscr{F}_{1}\big{(}\beta^{\dagger}_{S_{1}}-\lambda( \hat{\beta}_{S_{1},n}-\beta^{\dagger}_{S_{1}}),\beta_{S_{2}},\beta_{R},\sigma \big{)}\Big{]}_{j_{1}j_{2}}\] \[\qquad\xrightarrow{p_{\sigma}}2\int\big{(}\partial^{\beta_{S_{1 }}}_{j_{1}}V_{S_{1},0}(x_{S},\beta^{\dagger}_{S_{1}})\big{)}^{\top}\,\bar{ \Lambda}_{S_{1}S_{1}}\big{(}x,(\beta^{\dagger}_{S_{1}},\beta_{S_{2}},\sigma) \big{)}\partial^{\beta_{S_{1}}}_{j_{2}}V_{S_{1},0}(x_{S},\beta^{\dagger}_{S_{1 }})\nu_{\theta^{\prime}}(dx);\] \[\Big{[}\mathscr{F}_{k}\big{(}\beta^{\dagger}_{S_{1}}-\lambda( \hat{\beta}_{S_{1},n}-\beta^{\dagger}_{S_{1}}),\beta_{S_{2}},\beta_{R}, \sigma\big{)}\Big{]}_{j_{1}j_{2}}\xrightarrow{p_{\sigma}}0,\qquad 2\leq k\leq 6,\]
uniformly in \((\beta_{S_{1}},\beta_{R},\sigma)\in\Theta_{\beta_{S_{1}}}\times\Theta_{\beta_ {R}}\times\Theta_{\sigma}\) and \(\lambda\in[0,1]\). The proof is now complete.
### Proof of Lemma 6
We write \(\Sigma(x,\theta)\), \((x,\theta)\in\mathbb{R}^{N}\times\Theta\), in the form of the block matrix:
\[\Sigma(x,\theta)=\begin{bmatrix}\Sigma_{S_{1}S_{1}}(x,\theta)&\widetilde{ \Sigma}(x,\theta)\\ \widetilde{\Sigma}(x,\theta)^{\top}&\widetilde{\Sigma}(x,\theta)\end{bmatrix}, \tag{36}\]
where we have set:
\[\widetilde{\Sigma}(x,\theta)=\begin{bmatrix}\Sigma_{S_{1}S_{1}}(x,\theta), \,\Sigma_{S_{1}R}(x,\theta)\end{bmatrix},\quad\hat{\Sigma}(x,\theta)= \begin{bmatrix}\Sigma_{S_{1}S_{2}}(x,\theta)&\Sigma_{S_{2}R}(x,\theta)\\ \Sigma_{RS_{2}}(x,\theta)&\Sigma_{RR}(x,\theta)\end{bmatrix}. \tag{37}\]
Notice that under condition (H)-II, matrix \(\hat{\Sigma}(x,\theta)\) is invertible for any \((x,\theta)\in\mathbb{R}^{N}\times\Theta\). We write the inverse of \(\hat{\Sigma}(x,\theta)\) as:
\[\hat{\Sigma}^{-1}(x,\theta)=\hat{\Lambda}(x,\theta)=\begin{bmatrix}\hat{ \Lambda}_{S_{1}S_{2}}(x,\theta)&\hat{\Lambda}_{S_{2}R}(x,\theta)\\ \hat{\Lambda}_{RS_{2}}(x,\theta)&\hat{\Lambda}_{RR}(x,\theta)\end{bmatrix}.\]
Recall the notation for the inverse of \(\Sigma(x,\theta)\) in (24). Using the inverse formula for a block matrix, we obtain:
\[\Lambda_{S_{1}S_{2}}(x,\theta)=-\Lambda_{S_{1}S_{1}}(x,\theta)\Xi(x,\theta), \tag{38}\]
where we have set
\[\Xi(x,\theta)=\Sigma_{S_{1}S_{2}}(x,\theta)\hat{\Lambda}_{S_{2}S_{2}}(x,\theta )+\Sigma_{S_{1}R}(x,\theta)\hat{\Lambda}_{RS_{2}}(x,\theta). \tag{39}\]
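For completeness, we recall the standard block-inversion identity behind (38): for an invertible symmetric matrix partitioned as in (36)-(37),

\[\begin{bmatrix}A&B\\ B^{\top}&D\end{bmatrix}^{-1}=\begin{bmatrix}S^{-1}&-S^{-1}BD^{-1}\\ -D^{-1}B^{\top}S^{-1}&D^{-1}+D^{-1}B^{\top}S^{-1}BD^{-1}\end{bmatrix},\qquad S:=A-BD^{-1}B^{\top}.\]

Applied with \(A=\Sigma_{S_{1}S_{1}}(x,\theta)\), \(B=\widetilde{\Sigma}(x,\theta)\), \(D=\hat{\Sigma}(x,\theta)\), the \((1,1)\)-block gives \(\Lambda_{S_{1}S_{1}}(x,\theta)=S^{-1}\), while the \((1,2)\)-block gives \(\big{[}\Lambda_{S_{1}S_{2}}(x,\theta),\,\Lambda_{S_{1}R}(x,\theta)\big{]}=-\Lambda_{S_{1}S_{1}}(x,\theta)\,\widetilde{\Sigma}(x,\theta)\,\hat{\Lambda}(x,\theta)\); its \(S_{2}\)-block is precisely \(-\Lambda_{S_{1}S_{1}}(x,\theta)\,\Xi(x,\theta)\), which is (38).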
From the block matrix representation of \(\Sigma(\Delta,x,\theta)\) in (8), we obtain
\[\Sigma_{S_{1}S_{2}}(x,\theta)=\tfrac{3}{8}\partial^{\top}_{x_{S_{2}}}V_{S_{1},0}(x_{S},\beta_{S_{1}})\Sigma_{S_{2}S_{2}}(x,\theta),\]
\[\Sigma_{S_{1}R}(x,\theta)=\tfrac{1}{3}\partial^{\top}_{x_{S_{2}}}V_{S_{1},0}(x_{S},\beta_{S_{1}})\Sigma_{S_{2}R}(x,\theta).\]
We then have
\[\begin{split}\Xi(x,\theta) &=\tfrac{3}{8}\partial^{\top}_{x_{S_{2}}}V_{S_{1},0}(x_{S},\beta_{S_{1}})\Sigma_{S_{2}S_{2}}(x,\theta)\hat{\Lambda}_{S_{2}S_{2}}(x,\theta)\\ &\qquad+\tfrac{1}{3}\partial^{\top}_{x_{S_{2}}}V_{S_{1},0}(x_{S},\beta_{S_{1}})\Sigma_{S_{2}R}(x,\theta)\hat{\Lambda}_{RS_{2}}(x,\theta)\\ &=\tfrac{1}{24}\partial^{\top}_{x_{S_{2}}}V_{S_{1},0}(x_{S},\beta_{S_{1}})\Sigma_{S_{2}S_{2}}(x,\theta)\hat{\Lambda}_{S_{2}S_{2}}(x,\theta)+\tfrac{1}{3}\partial^{\top}_{x_{S_{2}}}V_{S_{1},0}(x_{S},\beta_{S_{1}})\\ &=\tfrac{1}{2}\partial^{\top}_{x_{S_{2}}}V_{S_{1},0}(x_{S},\beta_{S_{1}}). \end{split} \tag{40}\]
In the above calculation we have used:
\[\Sigma_{S_{2}S_{2}}(x,\theta)\hat{\Lambda}_{S_{2}S_{2}}(x,\theta)+\Sigma_{S_{2}R} (x,\theta)\hat{\Lambda}_{RS_{2}}(x,\theta)=I_{N_{S_{2}}\times N_{S_{2}}};\]
\[\hat{\Lambda}_{S_{2}S_{2}}(x,\theta)=\left(\Sigma_{S_{2}S_{2}}(x,\theta)-\Sigma_ {S_{2}R}(x,\theta)\Sigma_{RR}^{-1}(x,\theta)\Sigma_{RS_{2}}(x,\theta)\right)^{ -1}=4\Sigma_{S_{2}S_{2}}^{-1}(x,\theta),\]
where matrix \(\Sigma_{S_{2}S_{2}}(x,\theta)\) is invertible under condition (H)-II as we have seen in the proof of Proposition 1 in Appendix B. Thus, from (38) and (40), we obtain
\[\Lambda_{S_{1}S_{2}}(x,\theta)=-\frac{1}{2}\Lambda_{S_{1}S_{1}}(x,\theta)\partial_{x_{S_{2}}}^{\top}V_{S_{1},0}(x_{S},\beta_{S_{1}}) \tag{41}\]
and the proof is now complete.
### Proof of Lemma 7
We write \(\theta^{-S_{1}}\equiv\big{(}\beta_{S_{2}},\beta_{R},\sigma\big{)}\in\Theta_{\beta_{S_{2}}}\times\Theta_{\beta_{R}}\times\Theta_{\sigma}\). It holds that
\[\frac{\Delta_{n}}{n}\,\ell_{n}(\hat{\beta}_{S_{1},n},\,\theta^{-S_{1}})=\sum_{1\leq k\leq 6}\mathscr{Y}_{k}(\hat{\beta}_{S_{1},n},\,\theta^{-S_{1}}),\]
where we have set, for \(\theta\in\Theta\),
\[\mathscr{Y}_{1}(\theta)=\frac{\Delta_{n}}{n}\sum_{i=1}^{n}\eta_{S_{1},i-1}( \sqrt{\Delta_{n}^{3}},\beta_{S_{1}})^{\top}\Lambda_{S_{1}S_{1}}(X_{i-1},\theta )\eta_{S_{1},i-1}(\sqrt{\Delta_{n}^{3}},\beta_{S_{1}});\]
\[\mathscr{Y}_{2}(\theta)=\frac{1}{n}\sum_{i=1}^{n}\sum_{1\leq j\leq N}R_{j}( \sqrt{\Delta_{n}},X_{i-1},\theta)\eta_{S_{1},i-1}^{j}(\sqrt{\Delta_{n}^{3}}, \beta_{S_{1}});\]
\[\mathscr{Y}_{3}(\theta)=\frac{1}{n}\sum_{i=1}^{n}\sum_{1\leq j\leq N}R_{j,j_{2 }}(\Delta_{n},X_{i-1},\theta)\eta_{S_{1},i-1}^{j_{1}}(\sqrt{\Delta_{n}^{3}}, \beta_{S_{1}})m_{i}^{j_{1}}(\Delta_{n},\theta^{t});\]
\[\mathscr{Y}_{4}(\theta)=\frac{1}{n}\sum_{i=1}^{n}\eta_{S_{2}}(X_{i-1},\beta_{ S_{2}})^{\top}\Lambda_{S_{2}S_{2}}(X_{i-1},\theta)\eta_{S_{2}}(X_{i-1},\beta_{S_{2}});\]
\[\mathscr{Y}_{5}(\theta)=\frac{1}{n}\sum_{i=1}^{n}\sum_{1\leq j\leq N}\widetilde {R}_{j}(\Delta_{n},X_{i-1},\theta)\,m_{i}^{j}(\Delta_{n},\theta^{t});\]
\[\mathscr{Y}_{6}(\theta)=\frac{\Delta_{n}}{n}\sum_{i=1}^{n}\Bigl{\{}m_{i}( \Delta_{n},\theta^{t})^{\top}\Lambda(X_{i-1},\theta)\,m_{i}(\Delta_{n},\theta ^{t})+\log\lvert\Sigma(X_{i-1},\theta)\rvert\Bigr{\}},\]
for some functions \(R_{j},R_{j,j_{2}},\,\widetilde{R}_{j}\in\mathcal{S}\). Note that \(\eta_{S_{1},i-1}^{j}(\sqrt{\Delta_{n}^{3}},\beta_{S_{1},n})\) can be expressed as:
\[\eta_{S_{1},i-1}^{j}(\sqrt{\Delta_{n}^{3}},\hat{\beta}_{S_{1},n})=\sum_{1\leq k\leq N_{\beta_{S_{1}}}}\frac{\eta_{S_{1}}^{j[k]}(X_{S,i-1})}{|\beta_{S_{1}}^{\dagger,k}-\hat{\beta}_{S_{1},n}^{k}|}\times\frac{|\beta_{S_{1}}^{\dagger,k}-\hat{\beta}_{S_{1},n}^{k}|}{\sqrt{\Delta_{n}^{3}}},\]
where we have set:
\[\eta_{S_{1}}^{j[k]}(X_{S,i-1})\equiv V_{S_{1},0}^{j}(X_{S,i-1},\bar{\beta}_{S_{1},n}^{[k-1]})-V_{S_{1},0}^{j}(X_{S,i-1},\bar{\beta}_{S_{1},n}^{[k]}),\quad 1\leq k\leq N_{\beta_{S_{1}}},\]
with the notation:
\[\bar{\beta}_{S_{1},n}^{[\ell]}=(\hat{\beta}_{S_{1},n}^{1},\ldots,\hat{\beta}_{S_{1},n}^{\ell},\beta_{S_{1}}^{\dagger,\ell+1},\ldots,\beta_{S_{1}}^{\dagger,N_{\beta_{S_{1}}}}),\quad 1\leq\ell\leq N_{\beta_{S_{1}}}-1;\]
\[\bar{\beta}_{S_{1},n}^{[0]}=\beta_{S_{1}}^{\dagger},\quad\bar{\beta}_{S_{1},n}^{[N_{\beta_{S_{1}}}]}=\hat{\beta}_{S_{1},n}.\]
From convergence (30), Lemma 2 and condition (C2) it follows that if \(n\to\infty\), \(\Delta_{n}\to 0\) and \(n\Delta_{n}\to\infty\), then
\[\begin{split}\frac{1}{n}\sum_{i=1}^{n}f(X_{i-1},\theta)\eta_{\!S_{ i},i-1}^{j}(\sqrt{\Delta_{n}^{3}},\beta_{\!S_{i},n})\\ \xrightarrow[]{\xrightarrow[]{p_{\!el}}}\sum_{1\leq k\leq N_{ \partial S_{i}}}\left(\int f(x,\theta)\,\mathcal{G}_{k}^{\partial S_{i}}V_{S_{1 },0}(x_{S},\beta_{\!S_{i}}^{\dagger})\,\nu_{\theta^{\dagger}}(dx)\right)\times 0=0, \end{split} \tag{42}\]
uniformly in \(\theta\in\Theta\) for any \(f:\mathbb{R}^{N}\times\Theta\to\mathbb{R}\) satisfying the same property in Lemma 2. Thus, we obtain
\[\mathcal{G}_{k}(\hat{\beta}_{\!S_{1},n},\theta^{-S_{1}})\xrightarrow[]{p_{\!el }}0,\qquad 1\leq k\leq 3,\]
uniformly in \(\theta^{-S_{1}}\). For the other terms, we immediately have from Lemmas 2, 3 that
\[\mathcal{G}_{k}(\hat{\beta}_{\!S_{i},n},\theta^{-S_{1}})\xrightarrow[]{p_{\!el }}\int\eta_{\!S_{i}}(x,\beta_{\!S_{i}})\,\xrightarrow[]{p_{\!el}}0,\qquad k=5,6,\]
as \(n\to\infty\), \(\Delta_{n}\to 0\) and \(n\Delta_{n}\to\infty\), and the proof is now complete.
### Proof of Lemma 8
(_Proof of (34)_). Since the proof of \(\frac{\sqrt{\Delta_{n}^{3}}}{n}\partial_{\beta_{\!S_{i}}}\ell_{n}(\beta_{\!S_ {i}}^{\dagger},\beta_{\!R},\sigma)\xrightarrow[]{p_{\!el}}0\) is identical with that in Section C.1.1, we will only show that, if \(n\to\infty\), \(\Delta_{n}\to 0\) and \(n\Delta_{n}\to\infty\), then
\[\mathscr{K}(\beta_{\!S}^{\dagger},\beta_{\!R},\sigma):=\tfrac{\sqrt{\Delta_{n }}}{n}\partial_{\beta_{\!S_{i}}}\ell_{n}(\beta_{\!S}^{\dagger},\beta_{\!R}, \sigma)\xrightarrow[]{p_{\!el}}0, \tag{43}\]
uniformly in \((\beta_{R},\sigma)\in\Theta_{\beta_{R}}\times\Theta_{\sigma}\). It holds that \(\mathscr{K}(\beta_{\!S}^{\dagger},\beta_{\!R},\sigma)=\sum_{1\leq l\leq 3} \mathscr{K}(\beta_{\!S}^{\dagger},\beta_{\!R},\sigma)\), where we have set, for \(\theta\in\Theta\),
\[\mathscr{K}_{1}(\theta) =\tfrac{1}{n}\sum_{i=1}^{n}\sum_{1\leq j\leq N}R_{j}(1,X_{i-1}, \theta)\,m_{i}^{j}(\Delta_{n},\theta^{\dagger});\] \[\mathscr{K}_{2}(\theta) =\tfrac{1}{n}\sum_{i=1}^{n}\sum_{1\leq j_{i},j_{i}\leq N}R_{j_{1} j_{2}}(\sqrt{\Delta_{n}},X_{i-1},\theta)\,m_{i}^{j_{1}}(\Delta_{n},\theta^{ \dagger})\,m_{i}^{j_{1}}(\Delta_{n},\theta^{\dagger});\] \[\mathscr{K}_{3}(\theta) =\tfrac{1}{n}\sum_{i=1}^{n}R(\sqrt{\Delta_{n}},X_{i-1},\theta),\]
for \(R,R_{j},R_{j_{1}j_{2}}\in\mathcal{S}\). From Lemmas 2 and 3, we immediately have that \(\mathscr{K}_{i}(\beta_{\!S}^{\dagger},\beta_{\!R},\sigma)\xrightarrow[]{p_{\!el }}0,\ 1\leq i\leq 3\), as \(n\to\infty\), \(\Delta_{n}\to 0\) and \(n\Delta_{n}\to\infty\), thus (43) holds.
(_Proof of (35)_). We define
\[\mathcal{Q}(\theta)=M_{\beta_{\!S},n}\,\partial_{\beta_{\!S}}^{2}\ell_{n}( \theta)\,M_{\beta_{\!S},n}=\begin{bmatrix}\mathcal{Q}_{\!S_{i}S_{i}}(\theta)& \mathcal{Q}_{\!S_{i}S_{i}}(\theta)\\ \mathcal{Q}_{\!S_{i}S_{i}}(\theta)&\mathcal{Q}_{\!S_{i}S_{i}}(\theta)\end{bmatrix},\]
where we have set:
\[\mathcal{Q}_{S_{1}S_{1}}(\theta)=\frac{\Delta_{n}^{3}}{n}\partial_{\beta_{S_{1}}}^{2}\ell_{n}(\theta),\quad\mathcal{Q}_{S_{1}S_{2}}(\theta)=\frac{\Delta_{n}^{2}}{n}\partial_{\beta_{S_{1}}}\partial_{\beta_{S_{2}}}^{\top}\ell_{n}(\theta);\]
\[\mathcal{Q}_{S_{2}S_{1}}(\theta)=\mathcal{Q}_{S_{1}S_{2}}(\theta)^{\top},\quad\mathcal{Q}_{S_{2}S_{2}}(\theta)=\frac{\Delta_{n}}{n}\partial_{\beta_{S_{2}}}^{2}\ell_{n}(\theta),\]
for \(\theta=(\beta_{S},\beta_{R},\sigma)\in\Theta\). From the proof of Lemma 5 in Appendix D.2 and the consistency of the estimator \(\beta_{S,n}\), we have that if \(n\to\infty\), \(\Delta_{n}\to 0\) and \(n\Delta_{n}\to\infty\), then
\[\mathcal{Q}_{S_{1}S_{1}}(\beta_{\nu}^{\dagger}+\lambda(\hat{\beta}_{S,n}- \beta_{S}^{\dagger}),\beta_{R},\sigma)\]
\[\xrightarrow{\mathbb{P}_{\theta^{\dagger}}}2\int\partial_{\beta_{S_{1}}}\big{(}V_{S_{1},0}(x_{S},\beta_{S_{1}}^{\dagger})\big{)}^{\top}\Lambda_{S_{1}S_{1}}\big{(}x,(\beta_{S}^{\dagger},\sigma)\big{)}\partial_{\beta_{S_{1}}}^{\top}V_{S_{1},0}(x_{S},\beta_{S_{1}}^{\dagger})\,\nu_{\theta^{\dagger}}(dx), \tag{44}\]
uniformly in \((\beta_{R},\sigma)\in\Theta_{\beta_{R}}\times\Theta_{\sigma}\) and \(\lambda\in[0,1]\). We will check the convergence of the two matrices \(\mathcal{Q}_{S_{1}S_{2}}(\theta)\) and \(\mathcal{Q}_{S_{2}S_{2}}(\theta)\).
We have \(\mathcal{Q}_{S_{1}S_{2}}(\theta)=\sum_{1\leq k\leq 4}\mathcal{Q}_{S_{1}S_{2} }(\theta)\), where we set for \(1\leq j_{1}\leq N_{\beta_{S_{1}}}\), \(1\leq j_{2}\leq N_{\beta_{S_{2}}}\),
\[\big{[}\mathcal{Q}_{S_{1}S_{2},1}(\theta)\big{]}_{j_{1}j_{2}}= \frac{2}{n}\sum_{i=1}^{n}\bigl{(}\partial_{\beta_{1}}^{\beta_{\beta_{1}}}V_{ S_{i},0}(X_{S_{i}-1},\beta_{S_{1}})\bigr{)}^{\top}\times\biggl{\{}\Lambda_{S_{1}S_{2} }\big{(}X_{i-1},\theta\big{)}\partial_{\beta_{2}}^{\beta_{\beta_{2}}}V_{S_{2},0}(X_{i-1},\beta_{S_{2}})\] \[\qquad\qquad\qquad\qquad\qquad\qquad+\frac{1}{2}\Lambda_{S_{1} S_{1}}\big{(}X_{i-1},\theta\big{)}\partial_{\beta_{2}}^{\beta_{\beta_{2}}} \mathcal{L}V_{S_{1},0}(X_{i-1},\beta_{S})\biggr{\}};\]
\[\big{[}\mathcal{Q}_{S_{1}S_{2},2}(\theta)\big{]}_{j_{1}j_{2}}= \frac{1}{n}\sum_{i=1}^{n}\sum_{1\leq k_{1},k_{2}\leq N_{\beta_{1}}}R_{j_{1 }j_{2}}^{k_{1}k_{2}}(1,X_{i-1},\theta)\eta_{S_{1},i-1}^{k_{1}}(\Delta_{n}, \beta_{S_{1}})\eta_{S_{1}}^{k_{2}}(X_{i-1},\beta_{S_{1}});\] \[\big{[}\mathcal{Q}_{S_{1}S_{2},3}(\theta)\big{]}_{j_{1}j_{2}}= \frac{1}{n}\sum_{i=1}^{n}\sum_{\begin{subarray}{c}1\leq k_{1}S_{1} \leq N_{1}\\ 1\leq k_{2}\leq N_{2}\end{subarray}}\widetilde{R}_{j_{1}j_{2}}^{k_{1}k_{2}}(1, X_{i-1},\theta)\eta_{S_{1}}^{k_{1}}(X_{i-1},\beta_{S_{1}})b^{k_{2}}(X_{i-1},\beta_{S});\]
\[\big{[}\mathcal{Q}_{S_{1}S_{2},4}(\theta)\big{]}_{j_{1}j_{2}}= \frac{1}{n}\sum_{i=1}^{n}\biggl{\{}\sum_{1\leq k_{1},k_{2}\leq N}\widetilde{R }_{j_{1}j_{2}}^{k_{1}k_{2}}(\Delta_{n},X_{i-1},\theta)m_{i}^{k_{1}}(\Delta_{n}, \theta^{\dagger})m_{i}^{k_{2}}(\Delta_{n},\theta^{\dagger})\] \[\qquad\qquad\qquad\qquad+\sum_{1\leq k\leq N}\widetilde{R}_{j_{1 }j_{2}}^{k}(\Delta_{n},X_{i-1},\theta)m_{i}^{k}(\Delta_{n},\theta^{\dagger})+ \widetilde{R}(\Delta_{n},X_{i-1},\theta)\biggr{\}},\]
for some functions \(R_{j_{1}j_{2}}^{k_{1}k_{2}}\), \(\widetilde{R}_{j_{1}j_{2}}^{k_{1}k_{2}}\), \(\widetilde{R}_{j_{1}j_{2}}^{k_{1}k_{2}}\), \(\widetilde{R}_{j_{1}j_{2}}^{k_{2}}\), \(\widetilde{R}_{j_{1}j_{2}}^{k}\), \(\widetilde{R}\in\mathcal{S}\), and the function \(b:\mathbb{R}^{N}\times\Theta_{\beta_{S}}\to\mathbb{R}^{N_{S}}\) is defined as:
\[b(x,\beta_{S})=\frac{1}{2}\mathcal{L}V_{S_{1},0}(x,\beta_{S})-\frac{1}{2}\mathcal{L}V_{S_{1},0}(x,\beta_{S}^{\dagger}).\]
Notice that for any \(\theta\in\Theta\),
\[[\mathcal{Q}_{S_{1}S_{2},1}(\theta)]_{j_{1}j_{2}}=0,\quad 1\leq j_{1}\leq N_{\beta_{S_{1}}}, \quad 1\leq j_{2}\leq N_{\beta_{S_{2}}}\]
because it follows from Lemma 6 that
\[\Lambda_{S_{1}S_{2}}\big{(}x,\theta\big{)}\partial_{j_{2}}^{\beta_{S_{2}}}V_{S_{2},0}(x,\beta_{S_{2}})+\frac{1}{2}\Lambda_{S_{1}S_{1}}\big{(}x,\theta\big{)}\partial_{j_{2}}^{\beta_{S_{2}}}\mathcal{L}V_{S_{1},0}(x,\beta_{S})=\Phi(x,\theta)\,\partial_{j_{2}}^{\beta_{S_{2}}}V_{S_{2},0}(x,\beta_{S_{2}})=0,\]
for any \(x\in\mathbb{R}^{N}\), \(1\leq j_{2}\leq N_{\beta_{2}}\), where \(\Phi(x,\theta)\) is defined as in (33). From Lemmas 2, 3 and the consistency of \(\hat{\beta}_{S,n}\) with convergence (30), we obtain:
\[\big{[}\mathcal{Q}_{S_{1}S_{2},k}(\beta_{S}^{\dagger}+\lambda(\hat{\beta}_{S,n}-\beta_{S}^{\dagger}),\beta_{R},\sigma)\big{]}_{j_{1}j_{2}}\stackrel{{\mathbb{P}_{\theta^{\dagger}}}}{{\longrightarrow}}0,\qquad 2\leq k\leq 4,\]
uniformly in \((\beta_{R},\sigma)\in\Theta_{\beta_{R}}\times\Theta_{\sigma}\) and \(\lambda\in[0,1]\).
Finally, we consider the term \(\mathcal{Q}_{S_{2}S_{2}}(\theta)\). It holds that \(\mathcal{Q}_{S_{2}S_{2}}(\theta)=\sum_{1\leq k\leq 4}\mathcal{Q}_{S_{2}S_{2},k}(\theta)\), where we set, for \(1\leq j_{1},j_{2}\leq N_{\beta_{S_{2}}}\),
\[\big{[}\mathcal{Q}_{S_{1}S_{2},1}(\theta)\big{]}_{j_{1}j_{2}} =\frac{1}{n}\sum_{i=1}^{n}\Bigl{\{}\sum_{1\leq k_{1}S_{1}}R_{j_{1 }j_{2}}^{k}(1,X_{i-1},\theta)\eta_{S_{1},i-1}^{k}(\Delta_{n},\beta_{S_{1}})\] \[\qquad\qquad+\sum_{1\leq k_{1},k_{1}\leq N_{\beta_{1}}}R_{j_{1}j_ {2}}^{k_{1}k_{2}}(1,X_{i-1},\theta)\eta_{S_{1},i-1}^{k_{1}}(\Delta_{n},\beta_{ S_{1}})\eta_{S_{1},i-1}^{k_{1}}(\Delta_{n},\beta_{S_{1}})\] \[\qquad\qquad+\sum_{\begin{subarray}{c}1\leq k_{1}\leq N_{\beta _{1}}\\ 1\leq k_{1}\leq N\end{subarray}}\widetilde{R}_{j_{1}j_{2}}^{k_{1}k_{2}}(1,X_{i- 1},\theta)\eta_{S_{1},i-1}^{k_{1}}(\sqrt{\Delta_{n}},\beta_{S_{1}})m_{i}^{k_{ 2}}(\Delta_{n},\theta^{\dagger})\Bigr{\}};\] \[\big{[}\mathcal{Q}_{S_{1}S_{2},2}(\theta)\big{]}_{j_{1}j_{2}} =\frac{2}{n}\sum_{i=1}^{n}\Bigl{\{}\frac{1}{4}\bigl{(}\beta_{j_{1}}^{ \beta_{2}}\mathcal{L}V_{S_{1},0}(X_{i-1},\theta)\bigr{)}^{\top}\Lambda_{S_{1}S _{1}}(X_{i-1},\theta)\partial_{j_{2}}^{\beta_{2}}\mathcal{L}V_{S_{1},0}(X_{i- 1},\theta)\] \[\qquad\qquad+\frac{1}{2}\bigl{(}\beta_{j_{1}}^{\beta_{2}}\mathcal{ L}V_{S_{1},0}(X_{i-1},\theta)\bigr{)}^{\top}\Lambda_{S_{1}S_{2}}(X_{i-1},\theta) \,\partial_{j_{2}}^{\beta_{2}}V_{S_{2},0}(X_{i-1},\beta_{S_{2}})\] \[\qquad\qquad+\frac{1}{2}\bigl{(}\beta_{j_{1}}^{\beta_{2}}V_{S_{1},0}(X_{i-1},\beta_{S_{2}})\bigr{)}^{\top}\Lambda_{S_{1}S_{2}}(X_{i-1},\theta) \,\partial_{j_{2}}^{\beta_{2}}\mathcal{L}V_{S_{1},0}(X_{i-1},\theta)\] \[\qquad\qquad+\bigl{(}\beta_{j_{1}}^{\beta_{2}}V_{S_{2},0}(X_{i- 1},\beta_{S_{2}})\bigr{)}^{\top}\Lambda_{S_{1}S_{2}}\bigl{(}X_{i-1},\theta) \,\partial_{j_{2}}^{\beta_{2}}V_{S_{2},0}(X_{i-1},\beta_{S_{2}})\Bigr{\}};\] \[\big{[}\mathcal{Q}_{S_{1}S_{2},3}(\theta)\big{]}_{j_{1}j_{2}} =\frac{1}{n}\sum_{i=1}^{n}\Bigl{\{}\sum_{1\leq k_{1},k_{2}\leq N_{2 }}\widetilde{R}_{j_{1}j_{2}}^{k_{1}k_{2}}(1,X_{i-1},\theta)\eta_{S_{2}}^{k_{1}} (X_{i-1},\beta_{S_{2}})\eta_{S_{2}}^{k_{2}}(X_{i-1},\beta_{S_{2}})\] \[\qquad\qquad+\sum_{1\leq k\leq N_{2}}\widetilde{R}_{j_{1}j_{2}}^{ k_{2}}(1,X_{i-1},\theta)\eta_{S_{2}}^{k_{1}}(X_{i-1},\beta_{S_{2}})\Bigr{\}};\] \[\big{[}\mathcal{Q}_{S_{1}S_{2},4}(\theta)\big{]}_{j_{1}j_{2}} =\frac{1}{n}\sum_{i=1}^{n}\Bigl{\{}\sum_{1\leq k_{1},k_{2}\leq N} \widetilde{R}_{j_{1}j_{2}}^{k_{1}k_{2}}(\Delta_{n},X_{i-1},\theta)m_{i}^{k_{ 1}}(\Delta_{n},\theta^{\dagger})m_{i}^{k_{1}}(\Delta_{n},\theta^{\dagger})\] \[\qquad\qquad+\sum_{1\leq k\leq N}\overline{R}_{j_{1}j_{2}}^{k}( \sqrt{\Delta_{n}},X_{i-1},\theta)m_{i}^{k}(\Delta_{n},\theta^{\dagger})+R_{j_{1}j _{2}}(\sqrt{\Delta_{n}},X_{i-1},\theta)\Bigr{\}},\] for some functions \(R_{j_{1}j_{2}}^{k}\), \(R_{j_{1}j_{2}}^{k_{1}k_{2}}\), \(\widetilde{R}_{j_{1}j_{2}}^{k_{1}k_{2}}\), \(\overline{R}_{j_{1}j_{2}}^{k_{1}k_{2}}\), \(\widetilde{R}_{j_{1}j_{2}}^{k}\), \(\widetilde{R}_{j_{1}j_{2}}^{k_{1}k_{2}}\), \(\overline{R}_{j_{1}j_{2}}^{k}\), \(\overline{R}_{j_{1}j_{2}}^{k}\), \(R_{j_{1}j_{2}}^{k}\in\mathcal{S}\). Note that due to Lemma 6,
\[[Q_{S_{1}S_{2},2}(\theta)]_{j_{1}j_{2}} =\frac{2}{n}\sum_{i=1}^{n}\bigl{\{}\partial_{j_{1}}^{\beta_{S_{1}}}V _{S_{2},0}(X_{i-1},\beta_{S_{2}})\bigr{)}^{\top}\Lambda_{S_{1}S_{2}}(X_{i-1}, \theta)\,\partial_{j_{2}}^{\beta_{2}}\mathcal{L}V_{S_{1},0}(X_{i-1},\theta)\] \[\qquad+\frac{2}{n}\sum_{i=1}^{n}\bigl{(}\partial_{j_{1}}^{\beta_ {S_{1}}}V_{S_{2},0}(X_{i-1},\beta_{S_{2}})\bigr{)}^{\top}\Lambda_{S_{1}S_{2}} \bigl{(}X_{i-1},\theta\bigr{)}\,\partial_{j_{2}}^{\beta_{2}}V_{S_{2},0}(X_{i-1}, \beta_{S_{2}}).\]
We obtain from Lemma 2, (42) and consistency of \(\hat{\beta}_{S,n}\) that if \(n\to\infty\), \(\Delta_{n}\to 0\) and \(n\Delta_{n}\to\infty\), then
\[\left[\mathcal{Q}_{S_{2}S_{2},2}(\beta_{S}^{\dagger}+\lambda(\hat{\beta}_{S,n}-\beta_{S}^{\dagger}),\beta_{R},\sigma)\right]_{j_{1}j_{2}}\xrightarrow{\mathbb{P}_{\theta^{\dagger}}}2\int\frac{1}{2}\big{(}\partial_{j_{1}}^{\beta_{S_{2}}}V_{S_{2},0}(x,\beta_{S_{2}}^{\dagger})\big{)}^{\top}\Lambda_{S_{2}S_{1}}\big{(}x,(\beta_{S}^{\dagger},\beta_{R},\sigma)\big{)}\partial_{j_{2}}^{\beta_{S_{2}}}\mathcal{L}V_{S_{1},0}(x,(\beta_{S}^{\dagger},\beta_{R},\sigma))\,\nu_{\theta^{\dagger}}(dx)\]
\[\qquad\qquad+2\int\big{(}\partial_{j_{1}}^{\beta_{S_{2}}}V_{S_{2},0}(x,\beta_{S_{2}}^{\dagger})\big{)}^{\top}\Lambda_{S_{2}S_{2}}\big{(}x,(\beta_{S}^{\dagger},\beta_{R},\sigma)\big{)}\partial_{j_{2}}^{\beta_{S_{2}}}V_{S_{2},0}(x,\beta_{S_{2}}^{\dagger})\,\nu_{\theta^{\dagger}}(dx);\]
\[\left[\mathcal{Q}_{S_{2}S_{2},k}(\beta_{S}^{\dagger}+\lambda(\hat{\beta}_{S,n}-\beta_{S}^{\dagger}),\beta_{R},\sigma)\right]_{j_{1}j_{2}}\xrightarrow{\mathbb{P}_{\theta^{\dagger}}}0,\qquad k=1,3,4. \tag{45}\]
The proof is complete by applying the following result to (44) and (45).
**Lemma 13**: _Assume that condition (H)-II holds. We have that, for any \((x,\theta)\in\mathbb{R}^{N}\times\Theta\):_
\[\Lambda_{S_{1}S_{1}}(x,\theta)=720\,a_{S_{1}}^{-1}(x,\theta); \tag{46}\] \[\Lambda_{S_{2}S_{2}}(x,\theta)=12a_{S_{2}}^{-1}(x,\theta)-\tfrac{ 1}{2}\Lambda_{S_{2}S_{1}}(x,\theta)\partial_{S_{2}}^{\top}V_{S_{1},0}(x_{S}, \beta_{S_{1}}). \tag{47}\]
#### d.5.1 Proof of Lemma 13
(_Proof of (46)_). First, we note that the matrices \(\Sigma_{S_{1}S_{1}}(x,\theta)\), \(\Sigma_{S_{2}S_{2}}(x,\theta)\), \(\Sigma_{RR}(x,\theta)\) are invertible for any \((x,\theta)\in\mathbb{R}^{N}\times\Theta\) under condition (H)-II as we have seen in proof of Proposition 1. Due to the block expression of matrix \(\Sigma(x,\theta)\) in (36), we have:
\[\Lambda_{S_{1}S_{1}}(x,\theta)=\Big{(}\Sigma_{S_{1}S_{1}}(x,\theta)-\widetilde {\Sigma}(x,\theta)\hat{\Lambda}(x,\theta)\widetilde{\Sigma}(x,\theta)^{\top} \Big{)}^{-1}, \tag{48}\]
where \(\hat{\Lambda}(x,\theta)\), the inverse of matrix \(\hat{\Sigma}(x,\theta)\) given in (37), has the following block expression:
\[\hat{\Lambda}(x,\theta)=\begin{bmatrix}\hat{\Lambda}_{S_{2}S_{2}}(x,\theta)& \hat{\Lambda}_{S_{2}R}(x,\theta)\\ \hat{\Lambda}_{S_{2}R}(x,\theta)^{\top}&\hat{\Lambda}_{RR}(x,\theta)\end{bmatrix} \tag{49}\]
where we have set:
\[\hat{\Lambda}_{S_{2}S_{2}}(x,\theta)=4\Sigma_{S_{2}S_{2}}^{-1}(x,\theta),\quad\hat{\Lambda}_{S_{2}R}(x,\theta)=-4\Sigma_{S_{2}S_{2}}^{-1}(x,\theta)\Sigma_{S_{2}R}(x,\theta)\Sigma_{RR}^{-1}(x,\sigma);\]
\[\hat{\Lambda}_{RR}(x,\theta)=\Sigma_{RR}^{-1}(x,\sigma)+4\Sigma_{RR}^{-1}(x,\sigma)\Sigma_{RS_{2}}(x,\theta)\Sigma_{S_{2}S_{2}}^{-1}(x,\theta)\Sigma_{S_{2}R}(x,\theta)\Sigma_{RR}^{-1}(x,\sigma).\]
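For the reader's convenience, the computations in this subsection repeatedly use the standard two-by-two block-inversion (Schur complement) identity; we recall it here in generic notation (a standard linear-algebra fact, not specific to this paper):
\[\begin{bmatrix}P&B\\ B^{\top}&D\end{bmatrix}^{-1}=\begin{bmatrix}S^{-1}&-S^{-1}BD^{-1}\\ -D^{-1}B^{\top}S^{-1}&D^{-1}+D^{-1}B^{\top}S^{-1}BD^{-1}\end{bmatrix},\qquad S=P-BD^{-1}B^{\top}.\]
It is applied once with \(P=\Sigma_{S_{1}S_{1}}(x,\theta)\), \(B=\widetilde{\Sigma}(x,\theta)\), \(D=\hat{\Sigma}(x,\theta)\), which yields (48), and once inside \(\hat{\Sigma}(x,\theta)\) with \(P=\Sigma_{S_{2}S_{2}}(x,\theta)\), \(B=\Sigma_{S_{2}R}(x,\theta)\), \(D=\Sigma_{RR}(x,\sigma)\), which yields the block expressions for \(\hat{\Lambda}(x,\theta)\) just given (using that the corresponding Schur complement equals \(\tfrac{1}{4}\Sigma_{S_{2}S_{2}}(x,\theta)\)).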
We then have:
\[\hat{\Lambda}(x,\theta)\widetilde{\Sigma}(x,\theta)^{\top}=\begin{bmatrix}\frac{1}{2}\big{(}\partial_{x_{S_{2}}}^{\top}V_{S_{1},0}(x_{S},\beta_{S_{1}})\big{)}^{\top}\\ -\frac{1}{12}\big{(}\partial_{x_{S_{2}}}^{\top}V_{S_{1},0}(x_{S},\beta_{S_{1}})\partial_{x_{R}}^{\top}V_{S_{2},0}(x,\beta_{S_{2}})\big{)}^{\top}\end{bmatrix},\]
where we used (40) for the upper block matrix, while the lower one is obtained via:
\[\hat{\Lambda}_{S_{2}R}(x,\theta)^{\top}\Sigma_{S_{2}S_{2}}(x,\theta)+ \hat{\Lambda}_{RR}(x,\theta)\Sigma_{RS_{1}}(x,\theta)\] \[=-4\Sigma_{RR}^{-1}(x,\sigma)\Sigma_{RS_{1}}(x,\theta)\Sigma_{S_{ 2}S_{2}}^{-1}(x,\theta)\Sigma_{S_{2}S_{1}}(x,\theta)+\Sigma_{RR}^{-1}(x, \sigma)\Sigma_{RS_{1}}(x,\theta)\] \[\qquad+4\Sigma_{RR}^{-1}(x,\sigma)\Sigma_{RS_{1}}(x,\theta) \Sigma_{S_{2}S_{2}}^{-1}(x,\theta)\Sigma_{S_{2}R}(x,\theta)\Sigma_{SR_{2}}^{-1 }(x,\sigma)\Sigma_{RS_{1}}(x,\theta)\] \[=-\tfrac{3}{4}\big{(}\partial_{x_{2}}^{\top}V_{S_{2},0}(x, \beta_{S_{1}})\big{)}^{\top}\big{(}\partial_{x_{2}}^{\top}V_{S_{1},0}(x_{S}, \beta_{S_{1}})\big{)}^{\top}\] \[\qquad+\tfrac{1}{6}\big{(}\partial_{x_{2}}^{\top}V_{S_{2},0}(x, \beta_{S_{2}})\big{)}^{\top}\big{(}\partial_{x_{2}}^{\top}V_{S_{1},0}(x_{S}, \beta_{S_{1}})\big{)}^{\top}\] \[\qquad\qquad+\tfrac{1}{6}\big{(}\partial_{x_{2}}^{\top}V_{S_{2},0 }(x,\beta_{S_{1}})\big{)}^{\top}\Sigma_{S_{2}S_{2}}^{-1}(x,\theta)\Sigma_{S_{2 }S_{2}}(x,\theta)\big{(}\partial_{x_{2}}^{\top}V_{S_{1},0}(x_{S},\beta_{S_{1} })\big{)}^{\top}\] \[=-\tfrac{1}{12}\big{(}\partial_{x_{2}}^{\top}V_{S_{1},0}(x_{S}, \beta_{S_{1}})\partial_{x_{2}}^{\top}V_{S_{2},0}(x,\beta_{S_{2}})\big{)}^{\top}.\]
Thus, we obtain:
\[\Big{(}\Sigma_{S_{1}S_{1}}(x,\theta)-\widetilde{\Sigma}(x,\theta)\hat{\Lambda }(x,\theta)\widetilde{\Sigma}(x,\theta)^{\top}\Big{)}^{-1}=\big{(}\tfrac{1}{3 6}\Sigma_{S_{1}S_{1}}(x,\theta)\big{)}^{-1}=720\,a_{S_{1}}^{-1}(x,\theta),\]
and now the proof of (46) is complete.
(Proof of (47)).: From the block expression of the matrix \(\Sigma(x,\theta)\) in (36), we obtain:
\[\begin{bmatrix}\Lambda_{S_{2}S_{2}}(x,\theta)&\Lambda_{S_{2}R}(x,\theta)\\ \Lambda_{RS_{2}}(x,\theta)&\Lambda_{RR}(x,\theta)\end{bmatrix}=\hat{\Lambda}(x,\theta)+\hat{\Lambda}(x,\theta)\widetilde{\Sigma}(x,\theta)^{\top}\Lambda_{S_ {1}S_{1}}(x,\theta)\widetilde{\Sigma}(x,\theta)\hat{\Lambda}(x,\theta).\]
Thus, we have:
\[\Lambda_{S_{2}S_{2}}(x,\theta) =\hat{\Lambda}_{S_{2}S_{2}}(x,\theta)+\Xi(x,\theta)^{\top} \Lambda_{S_{1}S_{1}}(x,\theta)\Xi(x,\theta)\] \[=4\Sigma_{S_{2}S_{2}}^{-1}(x,\theta)+\tfrac{1}{4}\big{(} \partial_{x_{2}}^{\top}V_{S_{1},0}(x_{S},\beta_{S_{1}})\big{)}^{\top}\Lambda_{S _{1}S_{1}}(x,\theta)\partial_{x_{2}}^{\top}V_{S_{1},0}(x_{S},\beta_{S_{1}})\] \[=12a_{S_{2}}^{-1}(x,\theta)-\tfrac{1}{2}\Lambda_{S_{2}S_{1}}(x, \theta)\partial_{x_{2}}^{\top}V_{S_{1},0}(x_{S},\beta_{S_{1}}),\]
where \(\Xi(x,\theta)\) is defined as in (39), and we made use of (40), (41). The proof of (47) is now complete.
### Proof of Lemma 9
It holds that \(\frac{1}{n}\ell_{n}(\theta)=\sum_{1\leq k\leq 4}\mathcal{P}_{k}(\theta)\), \(\theta\in\Theta\), where we have set:
\[\mathcal{P}_{1}(\theta) =\tfrac{1}{n}\sum_{i=1}^{n}\biggl{\{}\sum_{1\leq j_{1},j_{2}\leq N _{3}}R_{j_{1}j_{2}}^{S_{1}S_{1}}(1,X_{i-1},\theta)\eta_{S_{1},i-1}^{j_{1}}( \sqrt{\Delta_{n}^{3}},\beta_{S_{1}})\eta_{S_{1},i-1}^{j_{1}}(\sqrt{\Delta_{n}^ {3}},\beta_{S_{1}})\] \[\qquad+\sum_{\begin{subarray}{c}1\leq j_{1}\leq N_{3}\\ 1\leq j_{2}\leq N_{2}\end{subarray}}R_{j_{1}j_{2}}^{S_{1}S_{1}}(1,X_{i-1}, \theta)\eta_{S_{1},i-1}^{j_{1}}(\sqrt{\Delta_{n}^{3}},\beta_{S_{1}})\eta_{S_{ 2},i-1}^{j_{2}}(\sqrt{\Delta_{n}},\beta_{S_{2}})\] \[\qquad+\sum_{1\leq j_{1},j_{2}\leq N_{3}}R_{j_{1}j_{2}}^{S_{1}S_{2 }}(1,X_{i-1},\theta)\eta_{S_{2},i-1}^{j_{1}}(\sqrt{\Delta_{n}},\beta_{S_{2}}) \eta_{S_{2},i-1}^{j_{2}}(\sqrt{\Delta_{n}},\beta_{S_{2}})\biggr{\}};\] \[\mathcal{P}_{2}(\theta) =\tfrac{1}{n}\sum_{i=1}^{n}\biggl{\{}\sum_{\begin{subarray}{c}1\leq j _{1}\leq N_{3}\\ 1\leq j_{2}\leq N^{\prime}\end{subarray}}R_{j_{1}j_{2}}^{S_{1}}(1,X_{i-1}, \theta)\eta_{S_{1},i-1}^{j_{1}}(\sqrt{\Delta_{n}^{3}},\beta_{S_{1}})m_{1}^{j_{2}} (\Delta_{n},\theta^{\dagger})\]
\[+\sum_{\begin{subarray}{c}1\leq j_{1}\leq N_{\beta}\\ 1\leq j_{2}\leq N\end{subarray}}R_{j_{1}j_{2}}^{S_{1}}(1,X_{i-1},\theta)\eta_{S _{2},i-1}^{j_{1}}(\sqrt{\Delta_{n}},\beta_{S_{2}})m_{1}^{j_{2}}(\Delta_{n}, \theta^{\dagger})\] \[+\sum_{1\leq j\leq N_{\beta_{1}}}R_{j}^{S_{1}}(\sqrt{\Delta_{n}},X _{i-1},\theta)\eta_{S_{1},i-1}^{j}(\sqrt{\Delta_{n}^{3}},\beta_{S_{1}})\] \[+\sum_{1\leq j\leq N_{\beta_{2}}}R_{j}^{S_{2}}(\sqrt{\Delta_{n}},X _{i-1},\theta)\eta_{S_{2},i-1}^{j}(\sqrt{\Delta_{n}},\beta_{S_{2}})\bigg{\}};\] \[\mathcal{F}_{3}(\theta)=\frac{1}{n}\sum_{i=1}^{n}\biggl{\{}m_{i}( \Delta_{n},\theta^{\dagger})^{\top}\Lambda\bigl{(}X_{i-1},(\beta_{S},\sigma) \bigr{)}m_{i}(\Delta_{n},\theta^{\dagger})+\log\bigl{|}\Sigma\bigl{(}X_{i-1}, (\beta_{S},\sigma)\bigr{)}\bigr{|}\biggr{\}};\] \[\mathcal{F}_{4}(\theta)=\frac{1}{n}\sum_{i=1}^{n}\biggl{\{}\sum_{1 \leq j\leq N}R_{j}(\sqrt{\Delta_{n}},X_{i-1},\theta)m_{i}^{j}(\Delta_{n}, \theta^{\dagger})+R(\Delta_{n},X_{i-1},\theta)\biggr{\}},\]
where \(R_{i,j_{1}}^{S_{1},S_{1}}\), \(R_{i,j_{2}}^{S_{2},S_{2}}\), \(R_{j,j_{2}}^{S_{1}}\), \(R_{j,j_{2}}^{S_{2}}\), \(R_{j,j_{2}}^{S_{1}}\), \(R_{j,j_{2}}^{S_{2}}\), \(R_{j,j_{2}}\), \(R_{j,j_{2}}\), \(R\in\mathcal{S}\). From Lemmas 2, 3 and the convergence (30) and (31), we have:
\[\mathcal{F}_{k}\bigl{(}\hat{\beta}_{S,n},\beta_{R},\sigma\bigr{)}\stackrel{{\mathbb{P}_{\theta^{\dagger}}}}{{\longrightarrow}}0,\qquad k=1,2,4;\]
\[\mathcal{F}_{3}\bigl{(}\hat{\beta}_{S,n},\beta_{R},\sigma\bigr{)}\stackrel{{\mathbb{P}_{\theta^{\dagger}}}}{{\longrightarrow}}\int\Bigl{\{}\mathrm{tr}\bigl{(}\Lambda\bigl{(}x,(\beta_{S}^{\dagger},\sigma)\bigr{)}\Sigma\bigl{(}x,(\beta_{S}^{\dagger},\sigma^{\dagger})\bigr{)}\bigr{)}+\log\bigl{|}\Sigma\bigl{(}x,(\beta_{S}^{\dagger},\sigma)\bigr{)}\bigr{|}\Bigr{\}}\,\nu_{\theta^{\dagger}}(dx),\]
as \(n\to\infty\), \(\Delta_{n}\to 0\) and \(n\Delta_{n}\to\infty\), uniformly in \((\beta_{R},\sigma)\in\Theta_{\beta_{R}}\times\Theta_{\sigma}\). The proof is now complete.
### Proof of Lemma 10
The matrix-valued function \(\Sigma(x,\theta)\) and its inverse \(\Lambda(x,\theta)\) depend on \(\beta_{S}=(\beta_{S_{1}},\beta_{S_{2}})\in\Theta_{\beta_{S}}\) and \(\sigma\in\Theta_{\sigma}\) but not on \(\beta_{R}\in\Theta_{\beta_{R}}\) in terms of the parameter \(\theta\in\Theta\). We then define
\[\mathscr{L}(\theta)=\tfrac{1}{n\Delta_{n}}\ell_{n}(\theta)-\tfrac{1}{n\Delta_{n }}\ell_{n}\bigl{(}\beta_{S},\beta_{R}^{\dagger},\sigma\bigr{)},\quad\theta=( \beta_{S},\beta_{R},\sigma)\in\Theta,\]
and \(\mathscr{L}(\theta)\) is expressed as \(\mathscr{L}(\theta)=\sum_{1\leq k\leq 3}\mathscr{U}_{k}(\theta)\), where we have set:
\[\mathscr{U}_{1}(\theta)=\tfrac{1}{n\Delta_{n}}\sum_{i=1}^{n} \biggl{\{}\bigl{(}m_{i}(\Delta_{n},(\beta_{S},\beta_{R}^{\dagger},\sigma))-m_{i }(\Delta_{n},\theta^{\dagger})\bigr{)}^{\top}\Lambda(X_{i-1},(\beta_{S}, \sigma)\bigr{)}\] \[\qquad\qquad\qquad\qquad\times\bigl{(}m_{i}(\Delta_{n},\theta)-m_{ i}(\Delta_{n},(\beta_{S},\beta_{R}^{\dagger},\sigma))\bigr{)}\biggr{\}};\] \[\mathscr{U}_{2}(\theta)=\tfrac{1}{n\Delta_{n}}\sum_{i=1}^{n} \biggl{\{}\bigl{(}m_{i}(\Delta_{n},\theta)-m_{i}(\Delta_{n},\theta^{\dagger}) \bigr{)}^{\top}\Lambda(X_{i-1},(\beta_{S},\sigma)\bigr{)}\] \[\qquad\qquad\qquad\qquad\qquad\times\bigl{(}m_{i}(\Delta_{n}, \theta)-m_{i}(\Delta_{n},(\beta_{S},\beta_{R}^{\dagger},\sigma))\bigr{)} \biggr{\}};\] \[\mathscr{U}_{3}(\theta)=\tfrac{2}{n\Delta_{n}}\sum_{i=1}^{n}m_{i} (\Delta_{n},\theta^{\dagger})^{\top}\Lambda(X_{i-1},(\beta_{S},\sigma))\bigl{(} m_{i}(\Delta_{n},\theta)-m_{i}(\Delta_{n},(\beta_{S},\beta_{R}^{\dagger}, \sigma))\bigr{)}.\]
We will derive the limit of the terms \(\mathscr{U}_{k}(\theta)\), \(1\leq k\leq 3\), evaluated at \(\theta=(\hat{\beta}_{S,n},\,\beta_{R},\,\hat{\sigma}_{n})\) by utilising the following result, whose proof is given in Section D.8:
**Lemma 14**: _Assume that condition (H)-II holds. For any \(\theta=(\beta_{S},\beta_{R},\sigma)\in\Theta\) and \(1\leq i\leq N\), we have that_
\[\Lambda(X_{i-1},(\beta_{S},\sigma))\big{(}m_{i}(\Delta_{n},\theta)-m_{i}(\Delta _{n},(\beta_{S},\beta_{R}^{\dagger},\sigma))\big{)}=\sqrt{\Delta_{n}}\begin{bmatrix} \mathbf{0}_{N_{S_{i}}}\\ \mathbf{0}_{N_{S_{i}}}\\ b_{R}\end{bmatrix}+R(\sqrt{\Delta_{n}^{3}},X_{i-1},\theta),\]
_for \(\mathbb{R}^{N}\)-valued function \(R\) with \(R^{j}\in\mathcal{S}\), \(1\leq j\leq N\), where the \(N_{R}\)-dimensional vector \(b_{R}\) is specified as:_
\[b_{R}\equiv a_{R}^{-1}(X_{i-1},\sigma)\,\eta_{R}(X_{i-1},\beta_{R}).\]
We first consider the term \(\mathscr{U}_{1}(\theta)\). Making use of Lemma 14, we have
\[\mathscr{U}_{1}(\theta) =\tfrac{1}{n}\bigg{\{}\sum_{i=1}^{n}\bigg{\{}\sum_{1\leq j\leq N_ {S_{i}}}\eta_{S_{i},i-1}^{j}(\sqrt{\Delta_{n}},\beta_{S_{i}})R_{S_{i}}^{j}( \sqrt{\Delta_{n}},X_{i-1},\theta) \tag{50}\] \[\qquad+\sum_{1\leq j\leq N_{S_{i}}}\eta_{S_{i},i-1}^{j}(\sqrt{ \Delta_{n}},\beta_{S_{i}})R_{S_{i}}^{j}(\sqrt{\Delta_{n}},X_{i-1},\theta)+R( \sqrt{\Delta_{n}},X_{i-1},\theta)\bigg{\}},\]
where \(R_{S_{i}}^{j}\), \(R_{S_{i}}^{j}\), \(R\in\mathcal{S}\). From Lemma 2 and the limits (30), (31), we obtain that if \(n\to\infty\), \(\Delta_{n}\to 0\) and \(n\Delta_{n}\to\infty\),
\[\mathscr{U}_{1}\big{(}\hat{\beta}_{S,n},\,\beta_{R},\,\hat{\sigma}_{n}\big{)}\stackrel{{\mathbb{P}_{\theta^{\dagger}}}}{{\longrightarrow}}0,\]
uniformly in \(\beta_{R}\in\Theta_{\beta_{R}}\). For term \(\mathscr{U}_{2}(\theta)\), again Lemma 14 yields
\[\mathscr{U}_{2}(\theta)=\tfrac{1}{n}\sum_{i=1}^{n}\eta_{R}(X_{i-1},\beta_{R}) ^{\top}a_{R}^{-1}(X_{i-1},\sigma)\eta_{R}(X_{i-1},\beta_{R})+\widetilde{ \mathscr{U}_{2}}(\theta),\]
where \(\widetilde{\mathscr{U}_{2}}(\theta)\) is given in the form of the right-hand-side of formula (50). We then obtain
\[\mathscr{U}_{2}\big{(}\hat{\beta}_{S,n},\,\beta_{R},\,\hat{\sigma}_{n}\big{)}\stackrel{{\mathbb{P}_{\theta^{\dagger}}}}{{\longrightarrow}}\int\eta_{R}(x,\beta_{R})^{\top}\,a_{R}^{-1}(x,\sigma^{\dagger})\,\eta_{R}(x,\beta_{R})\,\nu_{\theta^{\dagger}}(dx),\]
as \(n\to\infty\), \(\Delta_{n}\to 0\) and \(n\Delta_{n}\to\infty\), uniformly in \(\beta_{R}\in\Theta_{\beta_{R}}\). For the third term \(\mathscr{U}_{3}(\theta)\), it follows from Lemma 14 that
\[\mathscr{U}_{3}(\theta)= \tfrac{1}{n\sqrt{\Delta_{n}}}\sum_{i=1}^{n}\sum_{N_{S}+1\leq j\leq N }m_{i}^{j}(\Delta_{n},\theta^{\dagger})R^{j}(1,X_{i-1},\theta)\] \[+\tfrac{1}{n}\sum_{i=1}^{n}\sum_{1\leq j\leq N}m_{i}^{j}(\Delta_{ n},\theta^{\dagger})\widetilde{R}^{j}(\sqrt{\Delta_{n}},X_{i-1},\theta),\]
where \(R^{j}\), \(\widetilde{R}^{j}\in\mathcal{S}\). From Lemma 3, we have that, if \(n\to\infty\), \(\Delta_{n}\to 0\) and \(n\Delta_{n}\to\infty\), then
\[\mathscr{U}_{3}(\hat{\beta}_{S,n},\beta_{R},\hat{\sigma}_{n})\stackrel{{\mathbb{P}_{\theta^{\dagger}}}}{{\longrightarrow}}0,\]
uniformly in \(\beta_{R}\in\Theta_{\beta_{R}}\). The proof is now complete.
### Proof of Lemma 14
We have
\[m_{i}(\Delta_{n},\theta)-m_{i}(\Delta_{n},(\beta_{S},\beta_{R}^{ \dagger},\sigma))\] \[=\begin{bmatrix}\frac{\sqrt{\Delta_{n}}}{6}\,\partial_{x_{\beta_{ 2}}}^{\top}V_{S_{1},0}(X_{S,i-1},\beta_{S_{1}})\partial_{x_{R}}^{\top}V_{S_{2},0 }(X_{i-1},\beta_{S_{2}})\,\eta_{R}(X_{i-1},\beta_{R})\\ \frac{\sqrt{\Delta_{n}}}{2}\,\partial_{x_{\beta_{2}}}^{\top}V_{S_{1},0}(x, \beta_{S_{1}})\eta_{R}(X_{i-1},\beta_{R})\\ \sqrt{\Delta_{n}}\,\eta_{R}(X_{i-1},\beta_{R})\end{bmatrix}+R(\sqrt{\Delta_{n}^ {\overline{\alpha}}},X_{i-1},\theta)\] \[=\sqrt{\Delta_{n}}\begin{bmatrix}\Sigma_{S_{i}R}(X_{i-1},\theta) \\ \Sigma_{S_{i}R}(X_{i-1},\theta)\\ \Sigma_{RR}\big{(}X_{i-1},\sigma\big{)}\end{bmatrix}a_{R}^{-1}(X_{i-1},\sigma) \eta_{R}(X_{i-1},\beta_{R})+R(\sqrt{\Delta_{n}^{\overline{\alpha}}},X_{i-1}, \theta),\]
for \(\mathbb{R}^{N}\)-valued function \(R\) with \(R^{j}\in\mathcal{S}\), \(1\leq j\leq N\), where we used for \((x,\theta)\in\mathbb{R}^{N}\times\Theta\),
\[\mathcal{L}V_{S_{2},0}(x,\theta)=\partial_{x_{R}}^{\top}V_{S_{2},0}(x,\beta_{S_{2}})V_{R,0}(x,\beta_{R})+v_{S_{2}}(x,\beta_{S},\sigma);\]
\[\mathcal{L}^{2}V_{S_{1},0}(x,\theta)=\partial_{x_{S_{2}}}^{\top}V_{S_{1},0}(x_{S},\beta_{S_{1}})\,\partial_{x_{R}}^{\top}V_{S_{2},0}(x,\beta_{S_{2}})V_{R,0}(x,\beta_{R})+v_{S_{1}}(x,\beta_{S},\sigma),\]
with some functions \(v_{S_{2}}:\mathbb{R}^{N}\times\Theta_{\beta_{S}}\times\Theta_{\sigma}\to\mathbb{R}^{N_{S_{2}}}\) and \(v_{S_{1}}:\mathbb{R}^{N}\times\Theta_{\beta_{S}}\times\Theta_{\sigma}\to\mathbb{R}^{N_{S_{1}}}\) that are independent of \(\beta_{R}\in\Theta_{\beta_{R}}\). Thus, it follows that
\[\Lambda\big{(}X_{i-1},(\beta_{S},\sigma)\big{)}\big{(}m_{i}(\Delta_{n},\theta )-m_{i}(\Delta_{n},(\beta_{S},\beta_{R}^{\dagger},\sigma))\big{)}\]
\[=\sqrt{\Delta_{n}}\begin{bmatrix}b_{S_{1}}\\ b_{S_{2}}\\ b_{R}\end{bmatrix}a_{R}^{-1}(X_{i-1},\sigma)\eta_{R}(X_{i-1},\beta_{R})+ \widetilde{R}(\sqrt{\Delta_{n}^{\overline{\alpha}}},X_{i-1},\theta),\]
for \(\widetilde{R}^{j}\in\mathcal{S}\), \(1\leq j\leq N\), where we have set:
\[\begin{bmatrix}b_{S_{1}}\\ b_{S_{2}}\\ b_{R}\end{bmatrix}=\Lambda\big{(}X_{i-1},\theta\big{)}\begin{bmatrix}\Sigma_{S_{1}R}(X_{i-1},\theta)\\ \Sigma_{S_{2}R}(X_{i-1},\theta)\\ \Sigma_{RR}\big{(}X_{i-1},\sigma\big{)}\end{bmatrix}=\begin{bmatrix}\mathbf{0}_{N_{S_{1}}\times N_{R}}\\ \mathbf{0}_{N_{S_{2}}\times N_{R}}\\ I_{N_{R}\times N_{R}}\end{bmatrix},\]
since it holds \(\Lambda(x,\theta)\Sigma(x,\theta)=I_{N\times N}\) for each \((x,\theta)\in\mathbb{R}^{N}\times\Theta\). The proof is now complete.
### Proof of Lemma 11
Recall \(N_{\theta}=N_{\beta}+N_{\sigma}\), where \(N_{\beta}=N_{\beta_{S}}+N_{\beta_{R}}\) with \(N_{\beta_{S}}=N_{\beta_{S_{1}}}+N_{\beta_{S_{2}}}\). In this section we make use of the notation \(\beta=(\beta_{S},\beta_{R})\in\Theta_{\beta_{S}}\times\Theta_{\beta_{R}}\) with \(\beta_{S}=(\beta_{S_{1}},\beta_{S_{2}})\). We note again that the matrices \(\Sigma(x,\theta)\) and \(\Lambda(x,\theta)\) do not depend on the parameter \(\beta_{R}\). Since we have seen the convergences of the matrices \(\mathcal{Q}_{S_{1}S_{1}}(\theta)\), \(\mathcal{Q}_{S_{1}S_{2}}(\theta)\), \(\mathcal{Q}_{S_{2}S_{1}}(\theta)\), \(\mathcal{Q}_{S_{2}S_{2}}(\theta)\) evaluated at \(\theta=\theta^{\dagger}+\lambda(\hat{\theta}_{n}-\theta^{\dagger})\), \(\lambda\in[0,1]\), in Appendix D.5, we prove that as \(n\to\infty\), \(\Delta_{n}\to 0\) and \(n\Delta_{n}\to\infty\), the following convergences hold uniformly in \(\lambda\in[0,1]\):
* For \(1\leq i\leq N_{\beta}\), \(N_{\beta_{S}}+1\leq j\leq N_{\beta}\), \[\big{[}\mathscr{I}_{n}\big{(}\theta^{\dagger}+\lambda(\hat{\theta}_{n}-\theta^{\dagger})\big{)}\big{]}_{ij}\xrightarrow{\mathbb{P}_{\theta^{\dagger}}}\begin{cases}2\int\left(\partial_{i}^{\beta}V_{R,0}(x,\beta_{R}^{\dagger})\right)^{\top}a_{R}^{-1}(x,\sigma^{\dagger})\,\partial_{j}^{\beta}V_{R,0}(x,\beta_{R}^{\dagger})\,\nu_{\theta^{\dagger}}(dx),&N_{\beta_{S}}+1\leq i,j\leq N_{\beta};\\ 0,&(\text{otherwise}).\end{cases} \tag{51}\]
* For \(1\leq i\leq N_{\beta}\), \(N_{\beta}+1\leq j\leq N_{\theta}\), \[\left[\mathscr{I}_{n}\big{(}\theta^{\dagger}+\lambda(\hat{\theta}_{n}-\theta^{\dagger})\big{)}\right]_{ij}\stackrel{{\mathbb{P}_{\theta^{\dagger}}}}{{\longrightarrow}}0. \tag{52}\]
\(\text{(c)}\): For \(N_{\beta}+1\leq i,j\leq N_{\theta}\),
\[\left[\mathscr{I}_{n}\big{(}\theta^{\dagger}+\lambda(\hat{\theta}_{n}-\theta^{\dagger})\big{)}\right]_{ij}\stackrel{{\mathbb{P}_{\theta^{\dagger}}}}{{\longrightarrow}}\int\mathrm{tr}\big{(}\partial^{\sigma}_{i}\Sigma(x,\theta^{\dagger})\,\Lambda(x,\theta^{\dagger})\,\partial^{\sigma}_{j}\Sigma(x,\theta^{\dagger})\,\Lambda(x,\theta^{\dagger})\big{)}\,\nu_{\theta^{\dagger}}(dx). \tag{53}\]
#### d.9.1 Proof of (51)
We consider the following three cases separately:
\(\text{(a1)}\): \(1\leq i\leq N_{\beta_{1}},\ N_{\beta_{2}}+1\leq j\leq N_{\beta}\); \(\text{(a2)}\): \(N_{\beta_{1}}+1\leq i\leq N_{\beta_{2}}\), \(N_{\beta_{2}}+1\leq j\leq N_{\beta}\); \(\text{(a3)}\): \(N_{\beta_{2}}+1\leq i,j\leq N_{\beta}\).
In case (a1), we have for \(\theta=(\beta_{S_{1}},\beta_{S_{2}},\beta_{R},\sigma)\in\Theta\),
\[\left[\mathscr{I}_{n}\big{(}\theta\big{)}\right]_{ij} =\tfrac{1}{n}\sum_{m=1}^{n}H_{ij}(X_{m-1},\theta)+\tfrac{1}{n}\sum _{m=1}^{n}R(\Delta_{n},X_{m-1},\theta) \tag{54}\] \[+\tfrac{1}{n}\sum_{m=1}^{n}\sum_{1\leq k\leq N_{\beta_{1}}}R_{k}( 1,X_{m-1},\theta)\big{(}V^{k}_{S_{1}},0(X_{m-1},\beta^{\dagger}_{S_{1}})-V^{k} _{S_{1},0}(X_{m-1},\beta_{S_{1}})\big{)},\]
for some \(R_{k},R\in\mathcal{S}\), \(1\leq k\leq N_{S_{1}}\), where we have defined \(H_{ij}(x,\theta)\), \((x,\theta)\in\mathbb{R}^{N}\times\Theta\) as:
\[H_{ij}(x,\theta) =\tfrac{1}{3}\big{(}\partial^{s}_{1}V_{S_{1},0}(x_{S_{1}},\beta_{ S_{1}})\big{)}^{\top}\Lambda_{S_{1}S_{1}}(x,\theta)\,\partial^{s}_{j}\mathcal{L}^{ 2}V_{S_{1},0}(x,\theta)\] \[\quad+\big{(}\partial^{s}_{1}V_{S_{1},0}(x_{S_{1}},\beta_{S_{1}}) \big{)}^{\top}\Lambda_{S_{1}S_{2}}(x,\theta)\,\partial^{s}_{j}\mathcal{L}V_{S_ {2},0}(x,\theta)\] \[\quad+2\big{(}\partial^{s}_{1}V_{S_{1},0}(x_{S_{1}},\beta_{S_{1}} )\big{)}^{\top}\Lambda_{S_{1}R}(x,\theta)\,\partial^{s}_{j}V_{R,0}(x,\beta_{R}).\]
Noticing that for \(N_{\beta_{S}}+1\leq j\leq N_{\beta}\),
\[\partial^{\beta}_{j}\mathcal{L}V_{S_{2},0}(x,\theta)=\partial^{\top}_{x_{R}}V_{S_{2},0}(x,\beta_{S_{2}})\,\partial^{\beta}_{j}V_{R,0}(x,\beta_{R});\]
\[\partial^{\beta}_{j}\mathcal{L}^{2}V_{S_{1},0}(x,\theta)=\partial^{\top}_{x_{S_{2}}}V_{S_{1},0}(x_{S},\beta_{S_{1}})\,\partial^{\top}_{x_{R}}V_{S_{2},0}(x,\beta_{S_{2}})\,\partial^{\beta}_{j}V_{R,0}(x,\beta_{R}),\]
we have \(H_{ij}(x,\theta)=0\) for any \((x,\theta)\in\mathbb{R}^{N}\times\Theta\) since
\[H_{ij}(x,\theta)=2\big{(}\partial^{\beta}_{i}V_{S_{1},0}(x,\beta_{S_{1}})\big{)}^{\top}\widetilde{H}_{ij}(x,\theta)\,\partial^{\beta}_{j}V_{R,0}(x,\beta_{R}),\]
where
\[\widetilde{H}_{ij}(x,\theta)=\big{\{}\Lambda_{S_{1}S_{1}}(x,\theta)\Sigma_{S_{1}R}(x,\theta)+\Lambda_{S_{1}S_{2}}(x,\theta)\Sigma_{S_{2}R}(x,\theta)+\Lambda_{S_{1}R}(x,\theta)\Sigma_{RR}(x,\sigma)\big{\}}a^{-1}_{R}(x,\sigma)=\mathbf{0}_{N_{S_{1}}\times N_{R}}.\]
Thus, due to the consistency of the estimator and Lemma 2, we immediately obtain from (54) that if \(n\to\infty\), \(\Delta_{n}\to 0\) and \(n\Delta_{n}\to\infty\), then
\[[\mathscr{I}_{n}\big{(}\theta^{\dagger}+\lambda(\hat{\theta}_{n}-\theta^{\dagger})\big{)}]_{ij}\stackrel{{\mathbb{P}_{\theta^{\dagger}}}}{{\longrightarrow}}0,\]
uniformly in \(\lambda\in[0,1]\) for \(1\leq i\leq N_{\beta_{1}}\), \(N_{\beta_{\beta}}+1\leq j\leq N_{\beta}\).
Subsequently, we consider the case (a2). We have
\[\left[\mathscr{I}_{n}\big{(}\theta\big{)}\right]_{ij} =\tfrac{1}{n}\sum_{m=1}^{n}\widetilde{H}_{ij,1}(X_{m-1},\theta)+ \tfrac{1}{n}\sum_{m=1}^{n}\widetilde{H}_{ij,2}(X_{m-1},\theta)\] \[\quad+\tfrac{1}{n}\sum_{m=1}^{n}\sum_{1\leq k\leq N_{S_{1}}}R_{k}( 1,X_{m-1},\theta)\eta^{k}_{S_{1},m-1}(\Delta_{n},\theta)\] \[\quad+\tfrac{1}{n}\sum_{m=1}^{n}\sum_{N_{S_{1}}+1\leq k\leq N_{S_{ 1}}}R_{k}(1,X_{m-1},\theta)\big{(}V^{k}_{S,0}(X_{m-1},\beta^{t}_{S})-V^{k}_{S, 0}(X_{m-1},\beta_{S})\big{)}\] \[\quad+\tfrac{1}{n}\sum_{m=1}^{n}R(\Delta_{n},X_{m-1},\theta),\]
for some \(R_{k},R\in\mathcal{S}\), \(1\leq k\leq N_{S}\), where we have defined \(\widetilde{H}_{ij,k}(x,\theta)\), for \((x,\theta)\in\mathbb{R}^{N}\times\Theta\), \(k=1,2\), as:
\[\widetilde{H}_{ij,1}(x,\theta) \equiv\tfrac{1}{6}\big{(}\partial_{i}^{3}\mathcal{C}V_{S_{1},0}(x,\theta)\big{)}^{\top}\Lambda_{S_{1}S_{1}}(x,\theta)\,\partial_{j}^{3} \mathcal{L}^{2}V_{S_{1},0}(x,\theta)\] \[\quad+\tfrac{1}{2}\big{(}\partial_{i}^{3}\mathcal{C}V_{S_{1},0}( x,\theta)\big{)}^{\top}\Lambda_{S_{1}S_{1}}(x,\theta)\,\partial_{j}^{3}\mathcal{C}V_{S_{ 2},0}(x,\theta)\] \[\quad+\big{(}\partial_{i}^{3}\mathcal{C}V_{S_{1},0}(x,\theta) \big{)}^{\top}\Lambda_{S_{1}R}(x,\theta)\,\partial_{j}^{3}V_{R,0}(x,\beta_{R});\] \[\widetilde{H}_{ij,2}(x,\theta) \equiv\big{(}\tfrac{1}{3}\partial_{i}^{3}V_{S_{1},0}(x,\beta_{S_{ 1}})\big{)}^{\top}\Lambda_{S_{1}S_{1}}(x,\theta)\,\partial_{j}^{3}\mathcal{L }^{2}V_{S_{1},0}(x,\theta)\] \[\quad+\big{(}\partial_{i}^{3}V_{S_{1},0}(x,\beta_{S_{1}})\big{)} ^{\top}\Lambda_{S_{1}S_{1}}(x,\theta)\,\partial_{j}^{3}\mathcal{C}V_{S_{2},0} (x,\theta)\] \[\quad+2\big{(}\partial_{i}^{3}V_{S_{1},0}(x,\beta_{S_{1}})\big{)} ^{\top}\Lambda_{S_{1}R}(x,\theta)\,\partial_{j}^{3}V_{R,0}(x,\beta_{R}).\]
We then have \(\widetilde{H}_{ij,k}(x,\theta)=0\) for any \((x,\theta)\in\mathbb{R}^{N}\times\Theta\), \(k=1,2\), from the same argument as in case (a1). Thus, making use of Lemma 2, convergence (30) with condition (C2) and the consistency of the estimator, we obtain (51) in case (a2).
Finally, we consider the case (a3). We have
\[\left[\mathscr{I}_{n}\big{(}\theta\big{)}\right]_{ij} =\tfrac{1}{n}\sum_{k=1}^{n}\bar{H}_{ij,1}(X_{k-1},\theta)+\tfrac{1 }{n}\sum_{k=1}^{n}\bar{H}_{ij,2}(X_{k-1},\theta)+\tfrac{1}{n}\sum_{k=1}^{n} \bar{H}_{ij,3}(X_{k-1},\theta)\] \[\quad+\tfrac{1}{n\sqrt{\Delta_{n}}}\sum_{k=1}^{n}R(1,X_{k-1}, \theta)\,m_{k}(\Delta_{n},\theta^{t})\] \[\quad+\tfrac{1}{n\sqrt{\Delta_{n}}}\sum_{k=1}^{n}\big{(}m_{k}( \Delta_{n},\theta)-m_{k}(\Delta_{n},\theta^{t})\big{)}^{\top}\Lambda(X_{k-1}, \theta)\,\partial_{(i,j)}^{3}v(X_{k-1},\theta)\] \[\quad+\tfrac{1}{n}\sum_{k=1}^{n}\widetilde{R}(\sqrt{\Delta_{n}},X_{k-1},\theta),\]
for \(R\), \(\widetilde{R}\in\mathcal{S}\), where we have set, for \((x,\theta)\in\mathbb{R}^{N}\times\Theta\),
\[\begin{split}\bar{H}_{ij,1}(x,\theta)&\equiv\tfrac{1}{ 18}\big{(}\partial_{i}^{\beta}\mathcal{L}^{2}V_{S_{1},0}(x,\theta)\big{)}^{ \top}\Lambda_{S_{1}S_{1}}(x,\theta)\,\partial_{j}^{\beta}\mathcal{L}^{2}V_{S_{ 1},0}(x,\theta)\\ &\quad+\tfrac{1}{6}\big{(}\partial_{i}^{\beta}\mathcal{L}^{2}V_{S_ {1},0}(x,\theta)\big{)}^{\top}\Lambda_{S_{1}S_{2}}(x,\theta)\,\partial_{j}^{ \beta}\mathcal{L}V_{S_{2},0}(x,\theta)\\ &\quad+\tfrac{1}{4}\big{(}\partial_{i}^{\beta}\mathcal{L}^{2}V_{S _{1},0}(x,\theta)\big{)}^{\top}\Lambda_{S_{1}R}(x,\theta)\,\partial_{j}^{ \beta}V_{R,0}(x,\beta_{R});\end{split} \tag{55}\]
\[\begin{split}\bar{H}_{ij,2}(x,\theta)&\equiv\tfrac{1} {6}\big{(}\partial_{i}^{\beta}\mathcal{L}V_{S_{1},0}(x,\theta)\big{)}^{\top} \Lambda_{S_{2}S_{2}}(x,\theta)\,\partial_{j}^{\beta}\mathcal{L}^{2}V_{S_{1},0 }(x,\theta)\\ &\quad+\tfrac{1}{2}\big{(}\partial_{i}^{\beta}\mathcal{L}V_{S_{ 2},0}(x,\theta)\big{)}^{\top}\Lambda_{S_{2}S_{2}}(x,\theta)\,\partial_{j}^{ \beta}\mathcal{L}V_{S_{2},0}(x,\theta)\\ &\quad+\big{(}\partial_{i}^{\beta}\mathcal{L}V_{S_{2},0}(x, \theta)\big{)}^{\top}\Lambda_{S_{2}R}(x,\theta)\,\partial_{j}^{\beta}V_{R,0}( x,\beta_{R});\end{split} \tag{56}\]
\[\begin{split}\bar{H}_{ij,3}(x,\theta)&\equiv\tfrac{1} {3}\big{(}\partial_{i}^{\beta}V_{R,0}(x,\beta_{R})\big{)}^{\top}\Lambda_{RS_{ 1}}(x,\theta)\,\partial_{j}^{\beta}\mathcal{L}^{2}V_{S_{1},0}(x,\theta)\\ &\quad+\big{(}\partial_{i}^{\beta}V_{R,0}(x,\beta_{R})\big{)}^{ \top}\Lambda_{RS_{2}}(x,\theta)\,\partial_{j}^{\beta}\mathcal{L}V_{S_{2},0}( x,\theta)\\ &\quad+2\big{(}\partial_{i}^{\beta}V_{R,0}(x,\beta_{R})\big{)}^{ \top}\Lambda_{RR}(x,\theta)\,\partial_{j}^{\beta}V_{R,0}(x,\beta_{R}),\end{split} \tag{57}\]
and
\[v(x,\theta)\equiv\Big{[}\tfrac{1}{6}\mathcal{L}^{2}V_{S_{1},0}(x,\theta)^{\top},\,\tfrac{1}{2}\mathcal{L}V_{S_{2},0}(x,\theta)^{\top},\,V_{R,0}(x,\beta_{R})^{\top}\Big{]}^{\top}.\]
Notice that for any \((x,\theta)\in\mathbb{R}^{N}\times\Theta\),
\[\begin{split}\bar{H}_{ij,1}(x,\theta)=0,\qquad\bar{H}_{ij,2}(x, \theta)=0;\\ \bar{H}_{ij,3}(x,\theta)&=2\big{(}\partial_{i}^{ \beta}V_{R,0}(x,\beta_{R})\big{)}^{\top}a_{R}^{-1}(x,\sigma)\,\partial_{j}^{ \beta}V_{R,0}(x,\beta_{R}).\end{split}\]
Furthermore, it follows that
\[\begin{split}&\tfrac{1}{n\sqrt{\Delta_{n}}}\sum_{k=1}^{n}\big{(}m _{k}(\Delta_{n},\theta)-m_{k}(\Delta_{n},\theta^{\dagger})\big{)}^{\top} \Lambda(X_{k-1},\theta)\partial_{(i,j)}^{\beta}v(X_{k-1},\theta)\\ &=\tfrac{1}{n}\sum_{k=1}^{n}\big{(}V_{R,0}(X_{k-1},\beta_{R}^{ \dagger})-V_{R,0}(X_{k-1},\beta_{R})\big{)}^{\top}a_{R}^{-1}(X_{k-1},\sigma) \partial_{(i,j)}^{\beta}V_{R,0}(X_{k-1},\beta_{R}),\end{split}\]
where we made use of similar arguments in the proof of Lemma 14 (Section D.8) for the term \(\Lambda(X_{k-1},\theta)\partial_{(i,j)}^{\beta}v(X_{k-1},\theta)\). Hence, exploiting Lemmas 2, 3 and the consistency of estimator \(\bar{\theta}_{n}\), we obtain that:
\[\Big{[}\mathscr{I}_{n}\big{(}\theta^{\dagger}+\lambda(\hat{\theta}_{n}-\theta^{\dagger})\big{)}\Big{]}_{ij}\xrightarrow{\mathbb{P}_{\theta^{\dagger}}}2\int\big{(}\partial_{i}^{\beta}V_{R,0}(x,\beta_{R}^{\dagger})\big{)}^{\top}a_{R}^{-1}(x,\sigma^{\dagger})\,\partial_{j}^{\beta}V_{R,0}(x,\beta_{R}^{\dagger})\,\nu_{\theta^{\dagger}}(dx),\]
as \(n\to\infty\), \(\Delta_{n}\to 0\) and \(n\Delta_{n}\to\infty\), for \(N_{\beta_{S}}+1\leq i,j\leq N_{\beta}\). The proof of (51) is now complete.
#### d.9.2 Proof of (52)
We show (52) when \(1\leq i\leq N_{\beta_{2i}}\) and \(N_{\beta}+1\leq j\leq N_{\theta}\). The convergence for the other cases can be deduced from a similar argument used in the proof of (51) so we omit the proof. We have
\[\big{[}\mathscr{I}_{n}\big{(}\theta\big{)}\big{]}_{ij} =\frac{\sqrt{\lambda_{1}^{2}}}{n}\sum_{k=1}^{n}\sum_{1\leq k_{1},k _{2}\leq N}R_{k_{i}k_{2}}(1,X_{k-1},\theta)m_{k}^{k_{1}}(\Delta_{n},\theta^{ \dagger})m_{k}^{k_{2}}(\Delta_{n},\theta^{\dagger})\] \[+\frac{\sqrt{\lambda_{1}^{2}}}{n}\sum_{k=1}^{n}\sum_{1\leq k_{1}, k_{2}\leq N}\bigg{\{}\widetilde{R}_{k_{1}k_{2}}(1,X_{k-1},\theta)\big{(}m_{k}^{ k_{1}}(\Delta_{n},\theta)-m_{k}^{k_{1}}(\Delta_{n},\theta^{\dagger})\big{)}\] \[\qquad\qquad\qquad\qquad\times\big{(}m_{k}^{k_{2}}(\Delta_{n}, \theta)-m_{k}^{k_{2}}(\Delta_{n},\theta^{\dagger})\big{)}\bigg{\}}\] \[+\frac{1}{n}\sum_{k=1}^{n}\sum_{1\leq k_{1}\leq N}R_{k_{1}}(1,X_{ k-1},\theta)\big{(}m_{k}^{k_{1}}(\Delta_{n},\theta)-m_{k}^{k_{1}}(\Delta_{n}, \theta^{\dagger})\big{)}\] \[+\frac{1}{n}\sum_{k=1}^{n}R(\sqrt{\Delta_{n}},X_{k-1},\theta),\]
for some \(R_{k_{1}k_{2}},\widetilde{R}_{k_{1}k_{2}},R_{k_{1}},R\in\mathcal{S}\). Thus, we immediately obtain (52) for \(1\leq i\leq N_{\beta_{2i}}\), \(N_{\beta}+1\leq j\leq N_{\theta}\) from Lemma 2, 3 and (30)-(31).
#### d.9.3 Proof of (53)
It holds that for \(N_{\beta}+1\leq i,j\leq N_{\theta}\), \(\theta=(\beta_{S_{1}},\beta_{S_{2}},\beta_{R},\sigma)\in\Theta\),
\[\big{[}\mathscr{I}_{n}\big{(}\theta\big{)}\big{]}_{ij} =\frac{1}{n}\sum_{k=1}^{n}m_{k}(\Delta_{n},\theta^{\dagger})^{ \top}\,\partial_{(i,j)}^{\prime}\Lambda(X_{k-1},\theta)\,m_{k}(\Delta_{n}, \theta^{\dagger})\] \[\quad+\frac{1}{n}\sum_{k=1}^{n}\big{\{}\partial_{(i,j)}^{\prime} \log\lvert\Sigma(X_{k-1},\theta)\rvert\] \[\quad+\frac{1}{n}\sum_{k=1}^{n}\sum_{1\leq k_{1},k_{2}\leq N}R_{k _{1}k_{2}}(1,X_{k-1},\theta)m_{k}^{k_{1}}(\Delta_{n},\theta^{\dagger})\big{(}m _{k}^{k_{2}}(\Delta_{n},\theta^{\dagger})-m_{k}^{k_{2}}(\Delta_{n},\theta) \big{)}\] \[\qquad+\frac{1}{n}\sum_{k=1}^{n}\sum_{1\leq k_{1},k_{2}\leq N}\bigg{\{} \widetilde{R}_{k_{1}k_{2}}(1,X_{k-1},\theta)\] \[\qquad\qquad\qquad\times\big{(}m_{k}^{k_{1}}(\Delta_{n},\theta^{ \dagger})-m_{k}^{k_{1}}(\Delta_{n},\theta)\big{(}m_{k}^{k_{2}}(\Delta_{n}, \theta^{\dagger})-m_{k}^{k_{2}}(\Delta_{n},\theta)\big{)}\bigg{\}}\] \[\quad+\frac{1}{n}\sum_{k=1}^{n}\sum_{1\leq k_{1}\leq N}R_{k_{1}}( \sqrt{\Delta_{n}},X_{k-1},\theta)\big{(}m_{k}^{k_{1}}(\Delta_{n},\theta^{ \dagger})-m_{k}^{k_{1}}(\Delta_{n},\theta)\big{)}\] \[\quad+\frac{1}{n}\sum_{k=1}^{n}R(\Delta_{n},X_{k-1},\theta),\]
for some \(R_{k_{1}k_{2}},\widetilde{R}_{k_{1}k_{2}},R_{k_{1}},R\in\mathcal{S}\). Making use of Lemmas 2-3, (30), (31) and the consistency of the estimator, we obtain as \(n\to\infty\), \(\Delta_{n}\to 0\) and \(n\Delta_{n}\to\infty\),
\[\big{[}\mathscr{I}_{n}\big{(}\theta^{\dagger}+\lambda(\hat{\theta}_{n}-\theta^{ \dagger})\big{)}\big{]}_{ij} \stackrel{{\mathbb{P}_{d\uparrow}}}{{\longrightarrow}}\int \Big{\{}\mathrm{tr}\big{(}\partial_{(i,j)}^{\sigma}\Lambda(x,\theta^{\dagger}) \,\Sigma(x,\theta^{\dagger})\big{)}+\partial_{(i,j)}^{\sigma}\log\lvert\Sigma(x, \theta^{\dagger})\rvert\Big{\}}\nu_{\theta^{\dagger}}(dx)\]
\[=\int\mathrm{tr}\big{(}\partial_{t}^{\sigma}\Sigma(x,\theta^{\dagger})\, \Lambda(x,\theta^{\dagger})\partial_{f}^{\sigma}\Sigma(x,\theta^{\dagger})\, \Lambda(x,\theta^{\dagger})\big{)}\nu_{\theta^{\dagger}}(dx),\]
uniformly in \(\lambda\in[0,1]\), where we applied the following two formulae to the above equation:
\[\partial_{(i,j)}^{\sigma}\log[\Sigma(x,\theta^{\dagger})]=- \mathrm{tr}\big{(}\partial_{(i,j)}^{\sigma}\Lambda(x,\theta^{\dagger})\,\Sigma (x,\theta^{\dagger})\big{)}-\mathrm{tr}\big{(}\partial_{t}^{\sigma}\Sigma(x, \theta^{\dagger})\partial_{f}^{\sigma}\Lambda(x,\theta^{\dagger})\big{)};\] \[\mathrm{tr}\big{(}\partial_{t}^{\sigma}\Sigma(x,\theta^{\dagger} )\partial_{f}^{\sigma}\Lambda(x,\theta^{\dagger})\big{)}=-\mathrm{tr}\big{(} \partial_{t}^{\sigma}\Sigma(x,\theta^{\dagger})\,\Lambda(x,\theta^{\dagger}) \partial_{f}^{\sigma}\Sigma(x,\theta^{\dagger})\,\Lambda(x,\theta^{\dagger}) \big{)}.\]
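Both formulae are standard consequences of differentiating the identity \(\Lambda(x,\theta)\Sigma(x,\theta)=I_{N\times N}\) together with Jacobi's formula; for the reader's convenience we record the two elementary identities they rest on (standard matrix calculus, stated here in our own notation):
\[\partial^{\sigma}_{j}\Lambda(x,\theta^{\dagger})=-\Lambda(x,\theta^{\dagger})\,\partial^{\sigma}_{j}\Sigma(x,\theta^{\dagger})\,\Lambda(x,\theta^{\dagger}),\qquad\partial^{\sigma}_{j}\log|\Sigma(x,\theta^{\dagger})|=\operatorname{tr}\big{(}\Lambda(x,\theta^{\dagger})\,\partial^{\sigma}_{j}\Sigma(x,\theta^{\dagger})\big{)}.\]
Substituting the first identity into \(\operatorname{tr}\big{(}\partial_{i}^{\sigma}\Sigma(x,\theta^{\dagger})\partial_{j}^{\sigma}\Lambda(x,\theta^{\dagger})\big{)}\) gives the second displayed formula, and differentiating the log-determinant identity once more gives the first.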
The proof is now complete.
### Proof of Lemma 12
We write \(\xi_{n}^{k}(\theta)=\sum_{i=1}^{n}\zeta_{i}^{k}(\theta),\,\theta\in\Theta,\; 1\leq k\leq N_{\theta}\), where we have set:
\[\zeta_{i}^{k}(\theta)\equiv\big{[}M_{n}^{-1}\big{]}_{kk}\times \partial_{k}^{\theta}\big{\{}m_{i}(\Delta_{n},\theta)^{\top}\Lambda(X_{i-1}, \theta)m_{i}(\Delta_{n},\theta)+\log|\Sigma(X_{i-1},\theta)|\big{\}}.\]
To prove the assertion, it suffices to show, from Theorems 3.2 and 3.4 in Hall and Heyde (1980), that:
* If \(n\to\infty\), \(\Delta_{n}\to 0\) and \(n\Delta_{n}\to\infty\) with \(\Delta_{n}=o(n^{-1/2})\), then \[\sum_{i=1}^{n}\mathbb{E}_{\theta^{\dagger}}\big{[}\zeta_{i}^{k}( \theta^{\dagger})|\mathcal{F}_{t_{i-1}}\big{]}\stackrel{{ \mathbb{P}_{\theta^{\dagger}}}}{{\longrightarrow}}0,\quad 1\leq k\leq N_{\theta}.\] (58)
* If \(n\to\infty\), \(\Delta_{n}\to 0\) and \(n\Delta_{n}\to\infty\), then \[\sum_{i=1}^{n}\mathbb{E}_{\theta^{\dagger}}\big{[}\zeta_{i}^{k_{1}}(\theta^{\dagger})\zeta_{i}^{k_{2}}(\theta^{\dagger})|\mathcal{F}_{t_{i-1}}\big{]}\stackrel{{\mathbb{P}_{\theta^{\dagger}}}}{{\longrightarrow}}4\big{[}\Gamma(\theta^{\dagger})\big{]}_{k_{1}k_{2}},\quad 1\leq k_{1},k_{2}\leq N_{\theta}.\] (59)
* If \(n\to\infty\), \(\Delta_{n}\to 0\) and \(n\Delta_{n}\to\infty\), then \[\sum_{i=1}^{n}\mathbb{E}_{\theta^{\dagger}}\big{[}\zeta_{i}^{k_{1}}(\theta^{ \dagger})\zeta_{i}^{k_{2}}(\theta^{\dagger})\zeta_{i}^{k_{3}}(\theta^{\dagger} )\zeta_{i}^{k_{4}}(\theta^{\dagger})|\mathcal{F}_{t_{i-1}}\big{]}\stackrel{{ \mathbb{P}_{\theta^{\dagger}}}}{{\longrightarrow}}0,\quad 1\leq k_{1},k_{2},k_{3},k_{4}\leq N _{\theta}.\] (60)
In what follows, we will check convergences (58) and (59). One can prove (60) following similar arguments and by noticing that the left-hand-side of (60) involves \(1/n^{2}\).
#### d.10.1 Proof of (58)
We recall (27) and (28) that are immediately obtained from the definition of \(m_{i}(\Delta_{n},\theta^{\dagger})\) in (23), that is, for \(1\leq k_{1},k_{2}\leq N\),
\[\mathbb{E}_{\theta^{\dagger}}\big{[}m_{i}^{k_{1}}(\Delta_{n},\theta^{\dagger})| \mathcal{F}_{t_{i-1}}\big{]}=R_{1}(\sqrt{\Delta_{n}^{3}},X_{i-1},\theta^{ \dagger}); \tag{61}\]
\[\mathbb{E}_{\theta^{\dagger}}\big{[}m_{i}^{k_{1}}(\Delta_{n},\theta^{\dagger})m _{i}^{k_{2}}(\Delta_{n},\theta^{\dagger})|\mathcal{F}_{t_{i-1}}\big{]}=[ \Sigma(X_{i-1},\theta^{\dagger})]_{k_{1}k_{2}}+R_{2}(\Delta_{n},X_{i-1},\theta^ {\dagger}) \tag{62}\]
for \(R_{1},R_{2}\in\mathcal{S}\). We then write \(\zeta_{i}^{k}(\theta),\,\theta\in\Theta,\,1\leq i\leq n,\,1\leq k\leq N_{\theta}\) as \(\zeta_{i}^{k}(\theta)=\zeta_{i,1}^{k}(\theta)+\zeta_{i,2}^{k}(\theta)\), where we have set:
\[\zeta_{i,1}^{k}(\theta)=2\big{[}M_{n}^{-1}\big{]}_{kk}\times \big{\{}(\partial_{k}^{\theta}m_{i}(\Delta_{n},\theta))^{\top}\Lambda(X_{i-1}, \theta)\,m_{i}(\Delta_{n},\theta)\big{\}};\] \[\zeta_{i,2}^{k}(\theta)=\big{[}M_{n}^{-1}\big{]}_{kk}\times \big{\{}\partial_{k}^{\theta}\log|\Sigma(X_{i-1},\theta)|+m_{i}(\Delta_{n}, \theta)^{\top}\partial_{k}^{\theta}\Lambda(X_{i-1},\theta)\,m_{i}(\Delta_{n}, \theta)\big{\}}.\]
Exploiting (61), we obtain
\[\mathbb{E}_{\mathfrak{\mu}}\left[\zeta_{i,1}^{k}(\theta^{t})|\mathcal{ F}_{t_{i-1}}\right] =\tfrac{1}{\sqrt{n}}\sum_{j=1}^{N}R_{k}^{j}(1,X_{i-1},\theta^{t})\, \mathbb{E}_{\mathfrak{\mu}}\big{[}m_{i}^{j}(\Delta_{n},\theta^{t})|\mathcal{F}_ {t_{i-1}}\big{]}\] \[=\tfrac{1}{n}\widetilde{R}_{k}^{j}(\sqrt{n\Delta_{n}^{3}},X_{i-1},\theta^{t})\]
for \(R_{k}^{j}\), \(\widetilde{R}_{k}^{j}\in\mathcal{S}\). Thus, from Lemma 2, we obtain
\[\sum_{i=1}^{n}\mathbb{E}_{\theta^{\dagger}}\left[\zeta_{i,1}^{k}(\theta^{\dagger})|\mathcal{F}_{t_{i-1}}\right]=\tfrac{1}{n}\sum_{i=1}^{n}\widetilde{R}_{k}^{j}(\sqrt{n\Delta_{n}^{3}},X_{i-1},\theta^{\dagger})\xrightarrow{\mathbb{P}_{\theta^{\dagger}}}0,\]
if \(n\to\infty\), \(\Delta_{n}\to 0\) and \(n\Delta_{n}\to\infty\) with \(\Delta_{n}=o(n^{-1/2})\). Next, we consider the second term \(\zeta_{i,2}^{k}(\theta)\). First, notice that for \(N_{\beta_{S}}+1\leq k\leq N_{\beta}\), \(\zeta_{i,2}^{k}(\theta^{\dagger})=0\), since \(\Sigma(X_{i-1},\theta)\) and \(\Lambda(X_{i-1},\theta)\) are independent of \(\beta_{R}\in\Theta_{\beta_{R}}\). For \(1\leq k\leq N_{\beta_{S}}\) and \(N_{\beta}+1\leq k\leq N_{\theta}\), we apply (62) to obtain
\[\mathbb{E}_{\mathfrak{\mu}}[\zeta_{i,2}^{k}(\theta^{t})|\mathcal{ F}_{t_{i-1}}] =[M_{n}^{-1}]_{\mathrm{k}k}\times\left\{\partial_{k}^{\theta} \log\left[\Sigma(X_{i-1},\theta^{t})\right]+\mathrm{tr}\big{(}\partial_{k}^{ \theta}\Lambda(X_{i-1},\theta^{t})\Sigma(X_{i-1},\theta^{t})\big{)}\right\}\] \[\qquad+\tfrac{1}{n}R_{k}(\sqrt{n\Delta_{n}^{2}},X_{i-1},\theta^{ t})\] \[=\tfrac{1}{n}R_{k}(\sqrt{n\Delta_{n}^{2}},X_{i-1},\theta^{t})\]
for \(R_{k}\in\mathcal{S}\), where we used:
\[\partial_{k}^{\theta}\log|\Sigma(x,\theta)|=-\mathrm{tr}\big{(} \partial_{k}^{\theta}\Lambda(x,\theta)\Sigma(x,\theta)\big{)},\quad(x,\theta) \in\mathbb{R}^{N}\times\Theta. \tag{63}\]
Thus, we have from Lemma 2 that if \(n\to\infty\), \(\Delta_{n}\to 0\) and \(n\Delta_{n}\to\infty\) with \(\Delta_{n}=o(n^{-1/2})\), then
\[\sum_{i=1}^{n}\mathbb{E}_{\theta^{\dagger}}\left[\zeta_{i,2}^{k}(\theta^{\dagger})|\mathcal{F}_{t_{i-1}}\right]=\tfrac{1}{n}\sum_{i=1}^{n}R_{k}(\sqrt{n\Delta_{n}^{2}},X_{i-1},\theta^{\dagger})\xrightarrow{\mathbb{P}_{\theta^{\dagger}}}0,\]
and the proof of (58) is now complete.
#### d.10.2 Proof of (59)
For simplicity, we write
\[\mathscr{Y}_{k_{1}k_{2}}(\theta^{\dagger})=\sum_{i=1}^{n}\mathbb{E}_{\theta^{\dagger}}\left[\zeta_{i}^{k_{1}}(\theta^{\dagger})\zeta_{i}^{k_{2}}(\theta^{\dagger})|\mathcal{F}_{t_{i-1}}\right].\]
We have that for \(1\leq k_{1}\leq N_{\beta_{\beta}}\), \(N_{\beta_{\beta}}+1\leq k_{2}\leq N_{\beta}\), \(N_{\beta}+1\leq k_{3}\leq N_{\theta}\),
\[\sqrt{n}\zeta_{i}^{k_{1}}(\theta^{t}) =-2\,\mu_{k_{1}}(X_{i-1},\theta^{t})^{\top}\Lambda(X_{i-1},\theta^ {t})\,m_{i}(\Delta_{n},\theta^{t})\] \[\qquad+\sum_{1\leq i_{1},j_{1}\leq N}R_{k_{1}}^{j_{1}j_{2}}(\sqrt {\Delta_{n}},X_{i-1},\theta^{t})m_{i}^{j_{1}}(\Delta_{n},\theta^{t})m_{i}^{j_{ 1}}(\Delta_{n},\theta^{t}) \tag{64}\] \[\qquad+\sum_{1\leq j_{1}\leq N}R_{k_{1}}^{j_{1}}(\sqrt{\Delta_{n} },X_{i-1},\theta^{t})m_{i}^{j_{1}}(\Delta_{n},\theta^{t});\] \[\sqrt{n}\zeta_{i}^{k_{2}}(\theta^{t}) =-2\,\mu_{k_{2}}(X_{i-1},\theta^{t})^{\top}\Lambda(X_{i-1},\theta^ {t})m_{i}(\Delta_{n},\theta^{t}); \tag{65}\]
\[\sqrt{n}\zeta_{i}^{k_{\beta}}(\theta^{\dagger})=m_{i}(\Delta_{n},\theta^{ \dagger})^{\top}\left(\theta_{k_{\beta}}^{\beta}\Lambda(X_{i-1},\theta^{\dagger}) \right)m_{i}(\Delta_{n},\theta^{\dagger})+\partial_{k_{\beta}}^{\theta}\log| \Sigma(X_{i-1}\theta^{\dagger})|\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad
if \(n\to\infty\), \(\Delta_{n}\to 0\) and \(n\Delta_{n}\to\infty\). For \(N_{\beta}+1\leq k_{1},k_{2}\leq N_{\theta}\), it follows that if \(n\to\infty\), \(\Delta_{n}\to 0\) and \(n\Delta_{n}\to\infty\), then
\[\mathscr{Y}_{k_{1}k_{2}}(\theta^{\dagger})\stackrel{{\mathbb{P}_ {\theta^{\dagger}}}}{{\longrightarrow}}\mathscr{Y}_{k_{1}k_{2},1}(\theta^{ \dagger})+\mathscr{Y}_{k_{1}k_{2},2}(\theta^{\dagger})+\mathscr{Y}_{k_{1}k_ {2},3}(\theta^{\dagger}),\]
where we have set:
\[\mathscr{Y}_{k_{1}k_{2},1}(\theta^{\dagger}) \equiv\sum_{1\leq j_{1},j_{2},j_{3},j_{4}\leq N}\int\{[\theta_{k _{1}}^{\theta}\Lambda(x,\theta^{\dagger})]_{j_{1}j_{2}}[\partial_{k_{2}}^{ \theta}\Lambda(x,\theta^{\dagger})]_{j_{3}j_{4}}\mathscr{W}_{j_{1}j_{3}j_{4}}( x,\theta^{\dagger})\,\nu_{\theta^{\dagger}}(dx);\] \[\mathscr{Y}_{k_{1}k_{2},2}(\theta^{\dagger}) \equiv\sum_{1\leq j_{1},j_{2}\leq N}\int\{[\partial_{k_{1}}^{ \theta}\Lambda(x,\theta^{\dagger})]_{j_{1}j_{2}}[\Sigma(x,\theta^{\dagger})]_{ j_{1}j_{2}}\partial_{k_{2}}^{\theta}\log|\Sigma(x,\theta^{\dagger})|\] \[\qquad\qquad\qquad\qquad+[\partial_{k_{2}}^{\theta}\Lambda(x, \theta^{\dagger})]_{j_{1}j_{2}}[\Sigma(x,\theta^{\dagger})]_{j_{1}j_{2}} \partial_{k_{1}}^{\theta}\log|\Sigma(x,\theta^{\dagger})|\}\nu_{\theta^{ \dagger}}(dx)\] \[=-\sum_{1\leq j_{1},j_{2},j_{3},j_{4}\leq N}\int\{[\partial_{k_{1} }^{\theta}\Lambda(x,\theta^{\dagger})]_{j_{1}j_{2}}[\Sigma(x,\theta^{\dagger})] _{j_{1}j_{2}}[\partial_{k_{2}}^{\theta}\Lambda(x,\theta^{\dagger})]_{j_{3}j_ {4}}[\Sigma(x,\theta^{\dagger})]_{j_{3}j_{4}}\] \[\qquad\qquad\qquad+[\partial_{k_{2}}^{\theta}\Lambda(x,\theta^{ \dagger})]_{j_{1}j_{2}}[\Sigma(x,\theta^{\dagger})]_{j_{1}j_{2}}[\partial_{k_{ 1}}^{\theta}\Lambda(x,\theta^{\dagger})]_{j_{3}j_{4}}[\Sigma(x,\theta^{\dagger })]_{j_{3}j_{4}}\}\nu_{\theta^{\dagger}}(dx);\] \[\mathscr{Y}_{k_{1}k_{2},3}(\theta^{\dagger}) \equiv\int\partial_{k_{1}}^{\theta}\log|\Sigma(x,\theta^{\dagger })|\partial_{k_{2}}^{\theta}\log|\Sigma(x,\theta^{\dagger})|\nu_{\theta^{ \dagger}}(dx)\] \[=\sum_{1\leq j_{1},j_{2},j_{3},j_{4}\leq N}\int[\partial_{k_{1}}^ {\theta}\Lambda(x,\theta^{\dagger})]_{j_{1}j_{2}}[\Sigma(x,\theta^{\dagger})]_{ j_{1}j_{2}}[\partial_{k_{2}}^{\theta}\Lambda(x,\theta^{\dagger})]_{j_{3}j_{4}} [\Sigma(x,\theta^{\dagger})]_{j_{3}j_{4}}\nu_{\theta^{\dagger}}(dx),\]
with \(\mathscr{W}_{j_{1}j_{2}j_{3}j_{4}}:\mathbb{R}^{N}\times\Theta\to\mathbb{R}\) defined as follows, for \((x,\theta)\in\mathbb{R}^{N}\times\Theta\),
\[\mathscr{W}_{j_{1}j_{2}j_{3}j_{4}}(x,\theta) =[\Sigma(x,\theta)]_{j_{1}j_{2}}[\Sigma(x,\theta)]_{j_{3}j_{4}}+[ \Sigma(x,\theta)]_{j_{1}j_{3}}[\Sigma(x,\theta)]_{j_{2}j_{4}}\] \[\qquad\qquad+[\Sigma(x,\theta)]_{j_{1}j_{4}}[\Sigma(x,\theta)]_{ j_{2}j_{3}}.\]
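The combination \(\mathscr{W}_{j_{1}j_{2}j_{3}j_{4}}\) is the standard Gaussian fourth-moment (Isserlis/Wick) expression: for a centred Gaussian vector \(\xi\) with covariance \(\Sigma(x,\theta)\) one has
\[\mathbb{E}\big{[}\xi^{j_{1}}\xi^{j_{2}}\xi^{j_{3}}\xi^{j_{4}}\big{]}=[\Sigma(x,\theta)]_{j_{1}j_{2}}[\Sigma(x,\theta)]_{j_{3}j_{4}}+[\Sigma(x,\theta)]_{j_{1}j_{3}}[\Sigma(x,\theta)]_{j_{2}j_{4}}+[\Sigma(x,\theta)]_{j_{1}j_{4}}[\Sigma(x,\theta)]_{j_{2}j_{3}}=\mathscr{W}_{j_{1}j_{2}j_{3}j_{4}}(x,\theta),\]
which is the combination that the conditional fourth moments of \(m_{i}(\Delta_{n},\theta^{\dagger})\), Gaussian at leading order, produce in the limit above.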
Notice that we have used (63) in the computation of \(\mathscr{Y}_{k_{1}k_{2},2}(\theta^{\dagger})\) and \(\mathscr{Y}_{k_{1}k_{2},3}(\theta^{\dagger})\). Thus, we have
\[\sum_{m=1}^{3}\mathscr{Y}_{k_{1}k_{2},m}(\theta^{\dagger})=\sum_{1\leq j_{1},j_{2},j_{3},j_{4}\leq N}\int\Bigl{\{}[\partial_{k_{1}}^{\theta}\Lambda(x,\theta^{\dagger})]_{j_{1}j_{2}}[\partial_{k_{2}}^{\theta}\Lambda(x,\theta^{\dagger})]_{j_{3}j_{4}}\]
\[\qquad\qquad\times\bigl{(}[\Sigma(x,\theta^{\dagger})]_{j_{1}j_{3}}[\Sigma(x,\theta^{\dagger})]_{j_{2}j_{4}}+[\Sigma(x,\theta^{\dagger})]_{j_{1}j_{4}}[\Sigma(x,\theta^{\dagger})]_{j_{2}j_{3}}\bigr{)}\Bigr{\}}\nu_{\theta^{\dagger}}(dx)\]
\[=2\int\operatorname{tr}\bigl{(}\partial_{k_{1}}^{\theta}\Sigma(x,\theta^{\dagger})\,\Lambda(x,\theta^{\dagger})\,\partial_{k_{2}}^{\theta}\Sigma(x,\theta^{\dagger})\,\Lambda(x,\theta^{\dagger})\bigr{)}\,\nu_{\theta^{\dagger}}(dx),\]
where in the second equality, we have used the following formula:
\[[\partial_{k}^{\theta}\Lambda(x,\theta^{\dagger})]_{j_{1}j_{2}}=-\sum_{1\leq j_{3},j_{4}\leq N}[\Lambda(x,\theta^{\dagger})]_{j_{1}j_{3}}[\partial_{k}^{\theta}\Sigma(x,\theta^{\dagger})]_{j_{3}j_{4}}[\Lambda(x,\theta^{\dagger})]_{j_{4}j_{2}},\qquad N_{\beta}+1\leq k\leq N_{\theta}.\]
Furthermore, for other cases of \(1\leq k_{1},k_{2}\leq N_{\theta}\), it holds that:
\[\mathscr{Y}_{k_{1}k_{2}}(\theta^{\dagger})=0,\]
where we have used Lemma 6 and that for \(N_{\beta_{s}}+1\leq k\leq N_{\beta}\),
\[\Lambda(x,\theta)\,\mu_{k}(x,\theta) =\Lambda(x,\theta)\begin{bmatrix}\Sigma_{S_{1}R}(x,\theta)\\ \Sigma_{S_{2}R}(x,\theta)\\ \Sigma_{RR}(x,\theta)\end{bmatrix}a_{R}^{-1}(x,\sigma)\partial_{k}^{\theta}V_{R,0}(x,\beta_{R})\] \[=\begin{bmatrix}\mathbf{0}_{N_{\beta_{1}}}\\ \mathbf{0}_{N_{\beta_{2}}}\\ a_{R}^{-1}(x,\sigma)\partial_{k}^{\beta}V_{R,0}(x,\beta_{R})\end{bmatrix}, \quad x\in\mathbb{R}^{N},\,\theta=(\beta_{S},\beta_{R},\sigma)\in\Theta.\]
The proof is now complete.
### Proof for Case Study in Section 4.2.1
From (16), we have \((\hat{\sigma}_{n})^{2}=F_{1,n}+F_{2,n}+F_{3,n}\), where
\[F_{1,n}\equiv\frac{6}{\Delta_{n}^{3}}\times\frac{1}{n}\sum_{i=0}^{n-1}\bigl{(}\hat{p}_{i+1}-\hat{p}_{i}-s_{i}\Delta_{n}+s_{i}\tfrac{\Delta_{n}^{2}}{2}\bigr{)}^{2};\]
\[F_{2,n}\equiv-\frac{6}{\Delta_{n}^{2}}\times\frac{1}{n}\sum_{i=0}^{n-1}\bigl{(}\hat{p}_{i+1}-\hat{p}_{i}-s_{i}\Delta_{n}+s_{i}\tfrac{\Delta_{n}^{2}}{2}\bigr{)}\bigl{(}s_{i+1}-s_{i}-s_{i}\Delta_{n}\bigr{)};\]
\[F_{3,n}\equiv\frac{2}{\Delta_{n}}\times\frac{1}{n}\sum_{i=0}^{n-1}\bigl{(}s_{i+1}-s_{i}-s_{i}\Delta_{n}\bigr{)}^{2}\]
with the hidden components \(\hat{p}_{i}\) estimated by numerical differentiation:
\[\hat{p}_{i}=\frac{q_{i+1}-q_{i}}{\Delta_{n}},\quad 0\leq i\leq n.\]
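For concreteness, the estimator can be assembled in a few lines of code from discrete samples of \((q_{i},s_{i})\). The following sketch is ours and not taken from the original source; it is written in Python with generic variable names, forms \(\hat{p}_{i}\) by the numerical differentiation above, and evaluates \(F_{1,n}\), \(F_{2,n}\), \(F_{3,n}\) exactly as displayed.

```python
import numpy as np

def sigma_hat_squared(q, s, dt):
    """Assemble (sigma_hat_n)^2 = F_{1,n} + F_{2,n} + F_{3,n}.

    q  : samples of the observed component (one extra point so that
         p_hat can be formed by forward differences at indices 0..n).
    s  : samples of the rough component, length at least n+1.
    dt : the step size Delta_n.
    Illustrative sketch only; it follows the displayed formulas literally.
    """
    # hidden component p estimated by numerical differentiation
    p_hat = (q[1:] - q[:-1]) / dt            # p_hat_i = (q_{i+1} - q_i) / dt
    n = len(p_hat) - 1

    # residuals entering F_{1,n}, F_{2,n}, F_{3,n}
    r_p = p_hat[1:n + 1] - p_hat[:n] - s[:n] * dt + s[:n] * dt**2 / 2
    r_s = s[1:n + 1] - s[:n] - s[:n] * dt

    F1 = (6.0 / dt**3) * np.mean(r_p**2)
    F2 = -(6.0 / dt**2) * np.mean(r_p * r_s)
    F3 = (2.0 / dt) * np.mean(r_s**2)
    return F1 + F2 + F3
```

On paths simulated from the model, the output concentrates around \(\big{(}\tfrac{23}{5}-5+2\big{)}(\sigma^{\dagger})^{2}=\tfrac{8}{5}(\sigma^{\dagger})^{2}\), in line with the limits (72)-(74) derived below.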
Since the rough component \(s_{t}\) follows the linear SDE, the solution is explicitly given as: for \(u\in[t_{i},t_{i+1})\)
\[s_{u}=s_{t_{i}}e^{-(u-t_{i})}+\sigma^{\dagger}\int_{t_{i}}^{u}e^{-(u-v)}dB_{v}\]
under the true parameter \(\sigma^{\dagger}\). Thus, we have
\[\hat{p}_{i}=p_{i}+\frac{s_{t_{i}}}{\Delta_{n}}\int_{t_{i}}^{t_{i+1}}\int_{t_{i}}^{u}e^{-(v-t_{i})}dv\,du+\frac{\sigma^{\dagger}}{\Delta_{n}}\int_{t_{i}}^{t_{i+1}}\int_{t_{i}}^{u}\int_{t_{i}}^{v}e^{-(v-w)}dB_{w}\,dv\,du,\]
and then
\[\hat{p}_{i+1}-\hat{p}_{i}=p_{i+1}-p_{i}+\frac{1}{\Delta_{n}}\bigl{(}\Delta_{n }-(1-e^{-\Delta_{n}})\bigr{)}(s_{i+1}-s_{i})\]
\[+\frac{\sigma^{\dagger}}{\Delta_{n}}\int_{t_{i+1}}^{t_{i+2}}\int_{t_{i+1}}^{u}\int_{t_{i+1}}^{v}e^{-(v-w)}dB_{w}\,dv\,du-\frac{\sigma^{\dagger}}{\Delta_{n}}\int_{t_{i}}^{t_{i+1}}\int_{t_{i}}^{u}\int_{t_{i}}^{v}e^{-(v-w)}dB_{w}\,dv\,du,\]
where we used:
\[\int_{t_{i}}^{t_{i+1}}\int_{t_{i}}^{u}e^{-(v-t_{i})}dvdu=\Delta_{n}-(1-e^{- \Delta_{n}}).\]
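Indeed, the inner integral is elementary, so this identity is a one-line check:
\[\int_{t_{i}}^{t_{i+1}}\int_{t_{i}}^{u}e^{-(v-t_{i})}dv\,du=\int_{t_{i}}^{t_{i+1}}\big{(}1-e^{-(u-t_{i})}\big{)}du=\Delta_{n}-\big{(}1-e^{-\Delta_{n}}\big{)}.\]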
Since \(\Delta_{n}\in[0,1)\) is assumed to be small, we use the Taylor expansion for the terms \(e^{-\Delta_{n}}\), \(e^{-(v-w)}\) and the stochastic Taylor expansion of \((p_{i+1},s_{i+1})\) around \((p_{i},s_{i})\) under the true parameter \(\sigma^{\dagger}\) to obtain
\[\begin{split}\hat{p}_{i+1}-\hat{p}_{i}&=s_{i}\Delta_{n}+\sigma^{\dagger}B_{t_{i+1}-t_{i}}\frac{\Delta_{n}}{2}+\sigma^{\dagger}\int_{t_{i}}^{t_{i+1}}\int_{t_{i}}^{u}dB_{v}\,du\\ &\quad+\frac{\sigma^{\dagger}}{\Delta_{n}}\int_{t_{i+1}}^{t_{i+2}}\int_{t_{i+1}}^{u}\int_{t_{i+1}}^{v}dB_{w}\,dv\,du-\frac{\sigma^{\dagger}}{\Delta_{n}}\int_{t_{i}}^{t_{i+1}}\int_{t_{i}}^{u}\int_{t_{i}}^{v}dB_{w}\,dv\,du+\Delta_{n}^{2}\xi_{i},\end{split} \tag{70}\]
where the random variables \(\{\xi_{i}\}_{i}\) appearing in the remainder term satisfy \(\mathbb{E}|\xi_{i}|^{2}\leq C\) for some constant \(C>0\). We express the Gaussian random variables as:
\[B_{t_{i+1}-t_{i}}=\sqrt{\Delta_{n}}\times z_{i}^{(1)},\quad\int_{t_{i}}^{t_{i+1 }}\int_{t_{i}}^{u}dB_{v}du=\sqrt{\Delta_{n}^{3}}\times\Big{(}\frac{z_{i}^{(1)} }{2}+\frac{z_{i}^{(2)}}{2\sqrt{3}}\Big{)};\]
\[\int_{t_{i}}^{t_{i+1}}\int_{t_{i}}^{u}\int_{t_{i}}^{v}dB_{w}\,dv\,du=\sqrt{\Delta_{n}^{5}}\times\Big{(}\frac{z_{i}^{(1)}}{6}+\frac{z_{i}^{(2)}}{4\sqrt{3}}+\frac{z_{i}^{(3)}}{12\sqrt{5}}\Big{)},\]
where \(\{z_{i}^{(j)}\}_{0\leq i\leq n+1,j=1,2,3}\) is an i.i.d. sequence of standard normal random variables so that it holds
\[\mathbb{E}\big{[}(B_{t_{i+1}-t_{i}})^{2}\big{]}=\Delta_{n},\quad\mathbb{E}\Big{[}B_{t_{i+1}-t_{i}}\times\Big{(}\int_{t_{i}}^{t_{i+1}}\int_{t_{i}}^{u}dB_{v}\,du\Big{)}\Big{]}=\frac{\Delta_{n}^{2}}{2};\]
\[\mathbb{E}\Big{[}B_{t_{i+1}-t_{i}}\times\Big{(}\int_{t_{i}}^{t_{i+1}}\int_{t_{i}}^{u}\int_{t_{i}}^{v}dB_{w}\,dv\,du\Big{)}\Big{]}=\frac{\Delta_{n}^{3}}{6},\quad\mathbb{E}\Big{[}\Big{(}\int_{t_{i}}^{t_{i+1}}\int_{t_{i}}^{u}dB_{v}\,du\Big{)}^{2}\Big{]}=\frac{\Delta_{n}^{3}}{3};\]
\[\mathbb{E}\Big{[}\Big{(}\int_{t_{i}}^{t_{i+1}}\int_{t_{i}}^{u}dB_{v}\,du\Big{)}\times\Big{(}\int_{t_{i}}^{t_{i+1}}\int_{t_{i}}^{u}\int_{t_{i}}^{v}dB_{w}\,dv\,du\Big{)}\Big{]}=\frac{\Delta_{n}^{4}}{8};\]
\[\mathbb{E}\Big{[}\Big{(}\int_{t_{i}}^{t_{i+1}}\int_{t_{i}}^{u}\int_{t_{i}}^{v}dB_{w}\,dv\,du\Big{)}^{2}\Big{]}=\frac{\Delta_{n}^{5}}{20}.\]
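These moments are classical, and they are easy to confirm numerically by approximating the three iterated integrals on a fine grid. The following Monte Carlo sketch is ours and purely illustrative; it compares sample moments with the values listed above.

```python
import numpy as np

rng = np.random.default_rng(0)
Delta, m, reps = 0.5, 1000, 5000        # interval length, grid points, MC paths
dt = Delta / m

B_end = np.empty(reps); I2 = np.empty(reps); I3 = np.empty(reps)
for r in range(reps):
    dB = rng.normal(0.0, np.sqrt(dt), m)
    B = np.cumsum(dB)                   # B_u on the grid
    A = np.cumsum(B) * dt               # A_u = int_0^u B_v dv
    B_end[r] = B[-1]                    # B_Delta
    I2[r] = np.sum(B) * dt              # int_0^Delta int_0^u dB_v du = int_0^Delta B_u du
    I3[r] = np.sum(A) * dt              # int_0^Delta int_0^u int_0^v dB_w dv du

print("E[B^2]   ", np.mean(B_end**2),  "vs", Delta)
print("E[B*I2]  ", np.mean(B_end*I2),  "vs", Delta**2 / 2)
print("E[B*I3]  ", np.mean(B_end*I3),  "vs", Delta**3 / 6)
print("E[I2^2]  ", np.mean(I2**2),     "vs", Delta**3 / 3)
print("E[I2*I3] ", np.mean(I2*I3),     "vs", Delta**4 / 8)
print("E[I3^2]  ", np.mean(I3**2),     "vs", Delta**5 / 20)
```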
Then, (70) is written as:
\[\begin{split}\hat{p}_{i+1}-\hat{p}_{i}&=s_{i}\Delta_{n}+\sigma^{\dagger}\sqrt{\Delta_{n}^{3}}\,\frac{z_{i}^{(1)}}{2}+\sigma^{\dagger}\sqrt{\Delta_{n}^{3}}\Big{(}\frac{z_{i}^{(1)}}{2}+\frac{z_{i}^{(2)}}{2\sqrt{3}}\Big{)}\\ &\quad+\sigma^{\dagger}\sqrt{\Delta_{n}^{3}}\Big{(}\frac{z_{i+1}^{(1)}}{6}+\frac{z_{i+1}^{(2)}}{4\sqrt{3}}+\frac{z_{i+1}^{(3)}}{12\sqrt{5}}\Big{)}-\sigma^{\dagger}\sqrt{\Delta_{n}^{3}}\Big{(}\frac{z_{i}^{(1)}}{6}+\frac{z_{i}^{(2)}}{4\sqrt{3}}+\frac{z_{i}^{(3)}}{12\sqrt{5}}\Big{)}+\Delta_{n}^{2}\xi_{i}\\ &=s_{i}\Delta_{n}+\sigma^{\dagger}\sqrt{\Delta_{n}^{3}}\Big{\{}\Big{(}\frac{z_{i+1}^{(1)}}{6}+\frac{z_{i+1}^{(2)}}{4\sqrt{3}}+\frac{z_{i+1}^{(3)}}{12\sqrt{5}}\Big{)}+\Big{(}\frac{5z_{i}^{(1)}}{6}+\frac{z_{i}^{(2)}}{4\sqrt{3}}-\frac{z_{i}^{(3)}}{12\sqrt{5}}\Big{)}\Big{\}}+\Delta_{n}^{2}\xi_{i}.\end{split} \tag{71}\]
From the ergodicity of the process \(\{s_{t}\}_{t}\) and (71), we have that, as \(n\to\infty\), \(\Delta_{n}\to 0\) and \(n\Delta_{n}\to\infty\),
\[F_{1,n}=\frac{6(\sigma^{\dagger})^{2}}{n}\sum_{i=0}^{n-1}\Big{(}\frac{z_{i+1}^{(1)}}{6}+\frac{z_{i+1}^{(2)}}{4\sqrt{3}}+\frac{z_{i+1}^{(3)}}{12\sqrt{5}}+\frac{5z_{i}^{(1)}}{6}+\frac{z_{i}^{(2)}}{4\sqrt{3}}-\frac{z_{i}^{(3)}}{12\sqrt{5}}\Big{)}^{2}+\frac{1}{n}\sum_{i=0}^{n}R_{i}^{(1)}(\Delta_{n})\stackrel{{\mathbb{P}_{\sigma^{\dagger}}}}{{\longrightarrow}}\frac{23}{5}(\sigma^{\dagger})^{2}; \tag{72}\]
\[F_{2,n}=-\frac{6(\sigma^{\dagger})^{2}}{n}\sum_{i=0}^{n-1}\Big{(}\frac{z_{i+1}^{(1)}}{6}+\frac{z_{i+1}^{(2)}}{4\sqrt{3}}+\frac{z_{i+1}^{(3)}}{12\sqrt{5}}+\frac{5z_{i}^{(1)}}{6}+\frac{z_{i}^{(2)}}{4\sqrt{3}}-\frac{z_{i}^{(3)}}{12\sqrt{5}}\Big{)}z_{i}^{(1)}+\frac{1}{n}\sum_{i=0}^{n}R_{i}^{(2)}(\Delta_{n})\stackrel{{\mathbb{P}_{\sigma^{\dagger}}}}{{\longrightarrow}}-5(\sigma^{\dagger})^{2}, \tag{73}\]
where each of \(\{R_{i}^{(1)}(\Delta_{n})\}_{i}\) and \(\{R_{i}^{(2)}(\Delta_{n})\}_{i}\) is a sequence of random variables such that for \(0\leq i\leq n\), \(j=1,2\),
\[\mathbb{E}\big{[}|R_{i}^{(j)}(\Delta_{n})|^{2}\big{]}\leq C\Delta_{n}\]
for some constant \(C>0\). Similarly, we have that
\[F_{3,n}\xrightarrow{\mathbb{P}_{\sigma^{\dagger}}}2(\sigma^{\dagger})^{2}. \tag{74}\]
From (72), (73) and (74), we immediately obtain the convergence (17).
### Kalman Filter for Sub-Class of (Hypo-II)
For simplicity, we write \(x_{i}=(x_{S_{1},i},x_{S_{2},i},x_{R,i})\in\mathbb{R}^{N}=\mathbb{R}^{N_{S_{1}}}\times\mathbb{R}^{N_{S_{2}}}\times\mathbb{R}^{N_{R}}\) for the state of scheme (18) at time \(t_{i}\). The component \(x_{S_{1},i}\) is observable and \(h_{i}=(x_{S_{2},i},x_{R,i})\in\mathbb{R}^{N_{H}}\), \(N_{H}=N_{S_{2}}+N_{R}\), is the hidden component, in agreement with applications. Thus, scheme (18) is now expressed as
\[x_{i+1}=b(\Delta_{n},x_{S_{1},i},\theta)+A(\Delta_{n},x_{S_{1},i},\theta)h_{i} +w(\Delta_{n},\theta). \tag{75}\]
We set \(\Sigma(\Delta_{n},\theta)=\mathbb{E}\left[\,w(\Delta_{n},\theta)w(\Delta_{n}, \theta)^{\top}\right]\) and assume that \(h_{0}|x_{S_{i},0}\sim\mathscr{N}(m_{0},Q_{0})\) for some \(m_{0}\in\mathbb{R}^{N_{H}}\) and \(Q_{0}\in\mathbb{R}^{N_{H}\times N_{H}}\). Then, the filtering formula and the marginal likelihood are obtained as follows.
* _Filtering Recursion_: We have that \[h_{k}|x_{S_{1},0:k}\sim\mathscr{N}(m_{k},Q_{k}),\quad 0\leq k\leq n,\] (76) with the filter mean \(m_{k}\) and covariance \(Q_{k}\) given as: \[m_{k} =\mu_{H,k-1}+\Lambda_{HS_{1},k-1}\Lambda_{S_{1},k-1}^{-1}\left(x_{S_{1},k}- \mu_{S_{1},k-1}\right);\] \[Q_{k} =\Lambda_{HH,k-1}-\Lambda_{HS_{1},k-1}\Lambda_{S_{1},k-1}^{-1} \Lambda_{S_{1},H,k-1},\] where \(\mu_{H,k-1}\in\mathbb{R}^{N_{H}}\), \(\mu_{S_{1},k-1}\in\mathbb{R}^{N_{S_{1}}}\), \(\Lambda_{S_{1},k-1}\in\mathbb{R}^{N_{S_{1}}\times N_{S_{1}}}\), \(\Lambda_{S_{1},H,k-1}\in\mathbb{R}^{N_{S_{1}}\times N_{H}}\), \(\Lambda_{HS_{1},k-1}\in\mathbb{R}^{N_{H}\times N_{S_{1}}}\), \(\Lambda_{HH,k-1}\in\mathbb{R}^{N_{H}\times N_{S_{1}}}\), \(\Lambda_{HH,k-1}\in\mathbb{R}^{N_{H}\times N_{S_{1}}}\) are found via the following equations: \[\mu_{k-1} =\begin{bmatrix}\mu_{S_{1},k-1}\\ \mu_{H,k-1}\end{bmatrix}=b(\Delta_{n},x_{S_{1},k-1},\theta)+A(\Delta_{n},x_{S_ {1},k-1},\theta)m_{k-1};\] \[\Lambda_{k-1} =\begin{bmatrix}\Lambda_{S_{1},k-1}&\Lambda_{S_{1},H,k-1}\\ \Lambda_{HS_{1},k-1}&\Lambda_{HH,k-1}\end{bmatrix}\] \[=\Sigma(\Delta_{n},\theta)+A(\Delta_{n},x_{S_{1},k-1},\theta)\,Q_{k-1}\,A (\Delta_{n},x_{S_{1},k-1},\theta)^{\top}.\]
* _Marginal likelihood_: For a given initial distribution \(p_{\theta}(x_{S_{1},0})\), we have that \[p_{\theta}(x_{S_{1},0:n})=p_{\theta}(x_{S_{1},0})\times\prod_{k=1}^{n}p_{ \theta}(x_{S_{1},k}|x_{S_{1},0:k-1}),\] (77) where \(p_{\theta}(x_{S_{1},k}|x_{S_{1},0:k-1})\) is the density of \(x_{S_{1},k}\) given \(x_{S_{1},0:k-1}\) whose conditional distribution is given by: \[x_{S_{1},k}\,|\,x_{S_{1},0:k-1}\sim\mathscr{N}(\mu_{S_{1},k-1},\,\Lambda_{S_{1},k-1}).\]
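To make the recursion above concrete, the following is a minimal numerical sketch of the conditionally Gaussian Kalman filter (76) together with the accumulation of the marginal log-likelihood (77). The callables `b_fn` and `A_fn` and the covariance `Sigma` are user-supplied placeholders standing for \(b(\Delta_{n},\cdot,\theta)\), \(A(\Delta_{n},\cdot,\theta)\) and \(\Sigma(\Delta_{n},\theta)\); the observed block is assumed to be stacked first in the state vector. This is an illustration only, not part of the original text.

```python
import numpy as np

def kalman_filter(xs_obs, b_fn, A_fn, Sigma, m0, Q0):
    """Conditionally Gaussian Kalman filter for x_{i+1} = b + A h_i + w_i.

    xs_obs : array of shape (n+1, d_o) holding the observed component x_{S_1,0:n};
    b_fn, A_fn : callables returning b and A evaluated at the current observation
                 (observed block stacked first, hidden block second);
    Sigma : (d_o+d_h, d_o+d_h) noise covariance;  m0, Q0 : initial filter moments.
    Returns filter means, covariances and the observation log-likelihood.
    """
    d_o = xs_obs.shape[1]
    m, Q, loglik = m0.copy(), Q0.copy(), 0.0
    means, covs = [m.copy()], [Q.copy()]
    for i in range(len(xs_obs) - 1):
        A = A_fn(xs_obs[i])
        mu = b_fn(xs_obs[i]) + A @ m          # predictive mean of x_{i+1}
        Lam = Sigma + A @ Q @ A.T             # predictive covariance of x_{i+1}
        mu_o, mu_h = mu[:d_o], mu[d_o:]
        L_oo, L_oh = Lam[:d_o, :d_o], Lam[:d_o, d_o:]
        L_ho, L_hh = Lam[d_o:, :d_o], Lam[d_o:, d_o:]
        innov = xs_obs[i + 1] - mu_o
        K = L_ho @ np.linalg.inv(L_oo)
        m, Q = mu_h + K @ innov, L_hh - K @ L_oh      # filter update, as in (76)
        _, logdet = np.linalg.slogdet(L_oo)
        loglik -= 0.5 * (logdet + innov @ np.linalg.solve(L_oo, innov)
                         + d_o * np.log(2 * np.pi))   # Gaussian term of (77)
        means.append(m.copy()); covs.append(Q.copy())
    return means, covs, loglik
```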
### Derivation of Filter (76)
We assume that the filter in the previous time step is obtained as:
\[h_{k-1}|x_{S_{1},0:k-1}\sim\mathscr{N}(m_{k-1},Q_{k-1}). \tag{78}\]
It follows that
\[p_{\theta}(h_{k}|x_{S_{1},0:k})=\frac{p_{\theta}(x_{k}|x_{S_{1},0:k-1})}{p_{ \theta}(x_{S_{1},k}|x_{S_{1},0:k-1})},\]
and
\[p_{\theta}(x_{k}|x_{S_{1},0:k-1}) =\int p_{\theta}(x_{k},h_{k-1}|x_{S_{1},0:k-1})dh_{k-1}\] \[=\int p_{\theta}(x_{k}|x_{k-1})p_{\theta}(h_{k-1}|x_{S_{1},0:k-1}) dh_{k-1}. \tag{79}\]
From the definition of scheme (75), we have that
\[x_{k}|x_{k-1}\sim\mathscr{N}\big{(}b(\Delta_{n},x_{S_{1},k-1},\theta)+A( \Delta_{n},x_{S_{1},k-1},\theta)h_{k-1},\Sigma(\Delta_{n},\theta)\big{)}. \tag{80}\]
From (78), (79) and (80), we obtain
\[x_{k}|x_{S_{1},0:k-1}\sim\mathscr{N}\big{(}\mu_{k-1},\Lambda_{k-1}\big{)}, \tag{81}\]
where
\[\mu_{k-1}=b(\Delta_{n},x_{S_{1},k-1},\theta)+A(\Delta_{n},x_{S_{1},k-1}, \theta)m_{k-1};\]
\[\Lambda_{k-1}=\Sigma(\Delta_{n},\theta)+A(\Delta_{n},x_{S_{1},k-1},\theta)\, \mathcal{Q}_{k-1}\,A(\Delta_{n},x_{S_{1},k-1},\theta)^{\top}.\]
Finally, applying the conditional Gaussian distribution formula, we obtain (76).
### Derivation of the Marginal Likelihood (77)
From the marginal of the Gaussian distribution (81) for \(x_{k}|x_{S_{1},0:k-1}\), we immediately obtain:
\[x_{S_{1},k}|x_{S_{1},0:k-1}\sim\mathscr{N}\big{(}\mu_{S_{1},k-1},\,\Lambda_{S _{1}S_{1},k-1}\big{)},\]
where \(\mu_{S_{1},k-1}=\text{proj}_{1,N_{S_{1}}}(\mu_{k-1})\), and \(\Lambda_{S_{1}S_{1},k-1}=\big{[}\Lambda_{k-1}^{ij}\big{]}_{1\leq i,j\leq N_{S _{1}}}\).
|
2309.17237 | On a Rankin-Selberg integral of three Hermitian cusp forms | Let $K = \mathbb{Q}(i)$. In this work, we study the Petersson inner product
of a Hermitian Eisenstein series of Siegel type on the unitary group
$U_{5}(K)$, diagonally-restricted on $U_2(K)\times U_2(K)\times U_1(K)$,
against two Hermitian cuspidal eigenforms $F, G$ of degree $2$ and an elliptic
cuspidal eigenform $h$ (seen as a Hermitian modular form of degree 1), all
having weight $k \equiv 0 \pmod 4$. This consideration gives an integral
representation of a certain Dirichlet series, which will then have an analytic
continuation and functional equation, due to the one of the Eisenstein series.
By taking $F$ to belong in the Maass space, we are able to show that the
Dirichlet series possesses an Euler product. Moreover, its $p-$factor for an
inert prime $p$ can be essentially identified with the twist by $h$ of a degree
six Euler factor attached to $G$ by Gritsenko. The question of whether the same
holds for the primes that split remains unanswered here, even though we make
considerable steps in that direction too. Our paper is inspired by work of
Heim, who considered a similar question in the case of Siegel modular forms. | Thanasis Bouganis, Rafail Psyroukis | 2023-09-29T13:38:02Z | http://arxiv.org/abs/2309.17237v1 | # On a Rankin-Selberg Integral of Three Hermitian Cusp Forms
###### Abstract.
Let \(K=\mathbb{Q}(i)\). In this work, we study the Petersson inner product of a Hermitian Eisenstein series of Siegel type on the unitary group \(U_{5}(K)\), diagonally-restricted on \(U_{2}(K)\times U_{2}(K)\times U_{1}(K)\), against two Hermitian cuspidal eigenforms \(F,G\) of degree \(2\) and an elliptic cuspidal eigenform \(h\) (seen as a Hermitian modular form of degree \(1\)), all having weight \(k\equiv 0\pmod{4}\). This consideration gives an integral representation of a certain Dirichlet series, which will then have an analytic continuation and functional equation, due to the one of the Eisenstein series. By taking \(F\) to belong in the Maass space, we are able to show that the Dirichlet series possesses an Euler product. Moreover, its \(p-\)factor for an inert prime \(p\) can be essentially identified with the twist by \(h\) of a degree six Euler factor attached to \(G\) by Gritsenko. The question of whether the same holds for the primes that split remains unanswered here, even though we make considerable steps in that direction too. Our paper is inspired by work of Heim, who considered a similar question in the case of Siegel modular forms.
###### Contents
* 1 Introduction
* 2 Preliminaries
* 3 Hecke algebras and \(L-\)functions
* 4 Integral Representation and Dirichlet Series
* 5 Inert primes
* 5.1 Hecke operators and weak rationality theorems
* 5.2 Calculation of the Dirichlet Series
* 6 Split Primes
* 6.1 Relations between operators in \(\mathrm{GL}_{4}\) and factorisation
* 6.2 Hecke Operators and weak rationality theorems
* 6.3 Calculation of the Dirichlet series - First Part
* 6.4 Calculation of the Dirichlet Series - Second Part
* 6.5 Calculation of the Dirichlet Series - Third Part
* 6.6 Final expression for the Dirichlet series
* 7 Euler Product
## 1. Introduction
The spinor \(L-\)function attached to a cuspidal (holomorphic) Siegel eigenform \(G\) of degree two has been an object of intense study in the literature. It was Andrianov who, in his seminal work [1], first obtained an integral representation of such an \(L-\)function and from there derived a functional equation and established its analytic properties. However, the integral representation obtained by Andrianov does not allow one (or at least it is not known how) to obtain algebraicity properties of the critical values. The difficulty seems to be related to the fact that the integral representation involves Eisenstein series which are defined over symmetric spaces that do not have the structure of a Shimura variety. Later work, such as the one of Kohnen and Skoruppa in [15], obtained other integral representations using Eisenstein series over the Siegel upper half space, but of weight zero and hence not holomorphic (or nearly holomorphic). One should also mention here that if, on the other hand, \(G\) is taken non-holomorphic, there is recent work of Loeffler,
Pilloni, Skinner and Zerbes in [18], where they obtain algebraicity results and construct \(p-\)adic \(L-\)functions.
It is perhaps surprising that a seemingly more complicated object, the twist of the spinor \(L-\)function of \(G\) (from now on always assumed holomorphic) by an elliptic cusp form \(h\) (that is \(\operatorname{Sp}_{4}\times\operatorname{GL}_{2}\)) does afford an integral representation, which allows one to not only study analytic properties but also algebraic (and \(p-\)adic). Actually, there are (at least) two different such integral representations. The one obtained by Furusawa in [4], uses an Eisenstein series over a unitary group and its restriction to \(\operatorname{Sp}_{4}\). There is a series of works based on this idea, most notably by Saha in [19], generalising the work of Furusawa.
The other integral representation of the twist was obtained by Heim in [13] in his effort to answer a question posed by Garrett on the possibility of extending the doubling method to more copies of the group (here the symplectic). Indeed, Heim managed to show that the twisted spinor \(L-\)function can be obtained by integrating the restriction of a Siegel type Eisenstein series of degree \(5\) to \(\mathbb{H}_{2}\times\mathbb{H}_{2}\times\mathbb{H}_{1}\) against the (degree two) cusp form \(G\), another degree two cusp form \(F\) in the Maass space and an elliptic cusp form \(h\), all being Hecke eigenforms for their corresponding Hecke algebras. This integral expression was later exploited systematically by Bocherer and Heim in [3], in order to establish various algebraicity properties and lift various restrictions on the weights of the Siegel and elliptic modular forms by the use of differential operators.
Shortly after the work of Andrianov on the spinor \(L-\)function in [1], Gritsenko, in a series of papers, extended Andrianov's approach of the use of parabolic Hecke algebras to the study of a degree \(6\) \(L-\)function attached to a cuspidal Hermitian eigenform of degree two, where the underlying imaginary quadratic field is taken to be the field of Gaussian numbers \(K:=\mathbb{Q}(i)\). Indeed, in [11], Gritsenko first defined such an \(L-\)function and in the later work [8], he obtained the analogue of the construction of Kohnen and Skoruppa, but with a quite different approach. Both integral representations allowed him to obtain a functional equation and study the analytic properties. However, as in the case of the symplectic group, neither of the above integral representations could be used to derive algebraicity properties, due to the Eisenstein series involved (which are only real analytic).
In this paper, we ask whether the phenomenon observed in the case of the twisted Siegel spinor \(L-\)function carries over to the unitary one, namely whether the degree \(6\)\(L-\)function considered by Gritsenko, twisted by an elliptic cusp form \(h\), affords an integral representation which allows one to study algebraic properties of the twisted \(L-\)function. Here the elliptic cusp form \(h\) is seen as a Hermitian form of degree one (i.e. of \(U_{1}\)). Given the similarities between \(\operatorname{Sp}_{4}\) and \(U_{2}\), we investigate the possibility of extending the idea of Heim to the unitary setting. That is, we study the Petersson inner product of a Hermitian Eisenstein series of Siegel type on the unitary group \(U_{5}(K)\), diagonally-restricted on \(U_{2}(K)\times U_{2}(K)\times U_{1}(K)\), against two Hermitian cusp forms \(F,G\) of degree \(2\) and a Hermitian cusp form \(h\) of degree \(1\), all being Hecke eigenforms for their corresponding Hecke algebras. This consideration gives an integral representation of a certain Dirichlet series, which will then have an analytic continuation and functional equation, due to the one of the Eisenstein series. By taking \(F\) to belong in the Maass space, we are able to show in Theorem 7.1 that the Dirichlet series possesses an Euler product. Moreover, its \(p-\)factor for an inert prime \(p\) can be essentially identified with the degree \(12\)\(p-\)factor of \(Z_{G\otimes h}\), the degree \(6\)\(L-\)function attached to \(G\), twisted by the Satake parameters of \(h\). This is shown in Theorem 5.13. The question of whether the same holds for the primes that split in \(K\) remains unanswered here, even though we make considerable steps in that direction too. In particular, we have obtained all the essential ingredients, i.e. the factorization of polynomials in parabolic Hecke algebras, the necessary rationality theorems, relations between Hecke operators as well as the main computations of the Dirichlet series. However, performing the last few calculations seems very complicated. Our progress is summarised in Theorem 6.21. The case of
the only ramified prime, namely \(2\), is not discussed here but we should say that our calculations for the case of inert give essentially all ideas required to compute the Euler factor also in this case.
The reader may have already recognised that the choice of \(U_{2}\) is not accidental. Indeed, thanks to the so-called accidental isomorphisms between "small" orthogonal groups and other classical groups, the spinor \(L-\)function of \(\operatorname{Sp}_{4}\) can be identified with the standard \(L-\)function attached to a holomorphic modular form of \(\operatorname{SO}(2,3)\). Similarly, the \(L-\)function studied by Gritsenko is closely related to the standard \(L-\)function of an orthogonal group of signature \((2,4)\). Therefore, the twist we are studying here is nothing else than the two dimensional twist of this standard \(L-\)function. Such twists have been studied before in [5], but those methods can only be used for analytic results. If, on the other hand, one is interested in algebraicity results for special values, those methods cannot (or at least it is not known how they can) be used to obtain such results, because of the use of orthogonal groups which do not correspond to Shimura varieties. On the other hand, the approach taken here (as a triple product) is known to give algebraicity results (see Proposition 4.2 and the discussion there), very much in the way that Heim and Bocherer obtained their algebraicity results in the case of the two dimensional twist of the spinor \(L-\)function attached to a degree two Siegel modular form. This possibility of obtaining algebraicity results is the main motivation for the present work.
**Notation.** In the following we will always use the following notation. We will denote by \(K=\mathbb{Q}(i)\), the Gaussian field. Let also \(\mathcal{O}_{K}=\mathbb{Z}[i]\) denote its ring of integers. Let also
\[J_{n}=\begin{pmatrix}0_{n}&-1_{n}\\ 1_{n}&0_{n}\end{pmatrix}.\]
For a complex number \(z\), we denote by \(e(z):=e^{2\pi iz}\). We will use the bracket notation \(A[B]:=\overline{B}^{t}AB\) for complex matrices \(A,B\). Finally, for a polynomial \(U\) with coefficients Hecke operators and \(G\) a Hecke eigenform, we write \(U_{G}\) for the polynomial obtained when we substitute the operators with the corresponding eigenvalues.
## 2. Preliminaries
We start with some definitions. Everything below can be found in [17] or more generally in [21]. For the material on Jacobi forms we refer to [8].
**Definition 2.1**.: Let \(R\) be either \(K\), \(\mathcal{O}_{K}\) or \(\mathbb{C}\) and fix an embedding \(R\hookrightarrow\mathbb{C}\). We write \(U_{n}(R)\) for the \(R-\)points of the unitary group of degree \(n\geq 1\). That is,
\[U_{n}(R)=\{g\in\operatorname{GL}_{2n}(R)\mid J_{n}[g]=J_{n}\}.\]
Hence, for an element \(\begin{pmatrix}A&B\\ C&D\end{pmatrix}\in U_{n}(R)\) with \(n\times n\) matrices \(A,B,C,D\), these satisfy the relations
\[\overline{A}^{t}C=\overline{C}^{t}A,\quad\overline{D}^{t}B=\overline{B}^{t}D,\quad\overline{D}^{t}A-\overline{B}^{t}C=1_{n}.\]
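As a quick sanity check of Definition 2.1 and the block relations above, the following sketch (ours, not part of the original text) verifies them numerically for the inversion \(J_{2}\), for translations by sample integral Hermitian matrices, and for a product of such elements; the bracket notation is the one fixed in the Notation paragraph.

```python
import numpy as np

n = 2
J = np.block([[np.zeros((n, n)), -np.eye(n)], [np.eye(n), np.zeros((n, n))]])

def bracket(A, B):
    # the bracket notation A[B] := conj(B)^t A B used throughout the paper
    return B.conj().T @ A @ B

B1 = np.array([[1, 1j], [-1j, 2]])           # sample integral Hermitian matrices
B2 = np.array([[0, 1 + 1j], [1 - 1j, 3]])
t1 = np.block([[np.eye(n), B1], [np.zeros((n, n)), np.eye(n)]])
t2 = np.block([[np.eye(n), B2], [np.zeros((n, n)), np.eye(n)]])

for g in (J, t1, t1 @ J @ t2):
    assert np.allclose(bracket(J, g), J)          # g lies in U_2(O_K)
    A, B, C, D = g[:n, :n], g[:n, n:], g[n:, :n], g[n:, n:]
    assert np.allclose(A.conj().T @ C, C.conj().T @ A)
    assert np.allclose(D.conj().T @ B, B.conj().T @ D)
    assert np.allclose(D.conj().T @ A - B.conj().T @ C, np.eye(n))
```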
**Definition 2.2**.: The upper Hermitian plane of degree \(n\) is defined by
\[\mathbb{H}_{n}=\{Z=X+iY\in\operatorname{M}_{n}(\mathbb{C})|\overline{X}^{t}=X,\overline{Y}^{t}=Y>0\}.\]
We fix an embedding \(K\hookrightarrow\mathbb{C}\). Then, an element \(g=\begin{pmatrix}A&B\\ C&D\end{pmatrix}\in U_{n}(K)\hookrightarrow U_{n}(\mathbb{C})\) of the unitary group acts on the above upper half plane via the action
\[Z\longmapsto g\langle Z\rangle:=(AZ+B)(CZ+D)^{-1}.\]
The usual factor of automorphy is denoted by \(j(g,Z)=\det(CZ+D)\).
We will be writing \(\Gamma_{n}\) for the Hermitian modular group, that is \(\Gamma_{n}=U_{n}(\mathcal{O}_{K})\).
**Definition 2.3**.: By a Hermitian modular form of weight \(k\in\mathbb{Z}\), we mean a holomorphic function \(F:\mathbb{H}_{n}\longrightarrow\mathbb{C}\) such that
\[F\left(g\langle Z\rangle\right)=j(g,Z)^{-k}F(Z),\]
for all \(g\in\Gamma_{n}\).
If \(n=1\) we further require that \(F\) is holomorphic at infinity.
It is well known that the set of all such forms constitutes a finite dimensional space, which we call \(M_{n}^{k}\).
Thanks to our assumption for \(n=1\) and because of the Koecher principle for \(n\geq 2\), each such \(F\) admits a Fourier expansion
\[F(Z)=\sum_{N}a(N)e(\operatorname{tr}(NZ))\]
where \(N\) runs through all the semi-integral non-negative Hermitian matrices
\[N\in\left\{(n_{ij})\geq 0\mid n_{ii}\in\mathbb{Z},n_{ij}=\overline{n_{ji}} \in\frac{1}{2}\mathcal{O}_{K}\right\}.\]
\(F\) is called a **cusp form** if \(a(N)\neq 0\) only for \(N\) positive definite. We denote the space of cusp forms by \(S_{n}^{k}\).
**Definition 2.4**.: The Petersson inner product for two Hermitian modular forms \(F,G\) is given by
\[\int_{\Gamma_{n}\backslash\mathbb{H}_{n}}F(Z)\overline{G(Z)}(\det Y)^{k}d^{*}Z\]
where \(d^{*}Z=(\det Y)^{-2n}dXdY\) with \(Z=X+iY\).
It is well known that such an integral is always defined when one of \(F\) or \(G\) is a cusp form. Now, by partitioning
\[Z=\begin{pmatrix}\tau&z_{1}\\ z_{2}&\omega\end{pmatrix},\]
with \(\tau\in\mathbb{H}_{n-1},\omega\in\mathbb{H}_{1}\) and \(z_{1},z_{2}\in\mathbb{C}\), we can consider the Fourier expansion of \(F\) with respect to the variable \(\omega\):
\[F(Z)=\sum_{m=1}^{\infty}\phi_{m}(\tau,z_{1},z_{2})e(m\omega).\]
The functions \(\phi_{m}:\mathbb{H}_{1}\times\mathbb{C}^{2}\longrightarrow\mathbb{C}\) are called the Fourier\(-\)Jacobi coefficients of \(F\).
For \(R\) as above we consider the following subgroups of \(U_{n}(R)\):
\[P_{n,r}(R) =\left\{\begin{pmatrix}*&*\\ 0_{n-r,n+r}&*\end{pmatrix}\in U_{n}(R)\right\},\] \[C_{n,r}(R) =\left\{\begin{pmatrix}*&*\\ 0_{n+r,n-r}&*\end{pmatrix}\in U_{n}(R)\right\}.\]
In particular when \(R=\mathcal{O}_{K}\), we write \(P_{n,r},C_{n,r}\). Let also \(\Gamma_{n}=U_{n}\cap M_{2n}(\mathcal{O}_{K})\).
We will be particularly interested in the parabolic subgroup \(P_{n,n-1}(K)\leq U_{n}(K)\), mainly because we can treat Fourier\(-\)Jacobi forms as a special type of modular forms with respect to this subgroup. Let us make this more explicit. We first start with the definition of the slash operator.
**Definition 2.5**.: Let \(n\geq 1\) and \(k\) be any integer. Then, for any function \(F\) on \(\mathbb{H}_{n+1}\) and a matrix \(g=\begin{pmatrix}A&B\\ C&D\end{pmatrix}\in U_{n+1}(K)\), we define
\[(F\mid_{k}g)(Z):=\det(CZ+D)^{-k}F(g\langle Z\rangle).\]
We now write \(\Gamma_{n,1}=P_{n+1,n}(\mathcal{O}_{K})\), the group of integral points of the parabolic \(P_{n+1,n}\). We then have the following definition
**Definition 2.6**.: Let \(n\geq 1\). A holomorphic function \(F\) on \(\mathbb{H}_{n+1}\) is a modular form of weight \(k\) with respect to the parabolic subgroup \(\Gamma_{n,1}\) if the following conditions hold:
* \(F\mid_{k}M=F\) for any \(M\in\Gamma_{n,1}\)
* The function \(F(Z)\) is bounded in the domain \(\operatorname{Im}(Z)\geq c>0\).
We note here that we can relax the second condition if \(n\geq 2\). This again follows by Koecher's principle. The space of all such forms will be denoted by \(M^{k}_{n,1}\). Again, each such \(F\) has a Fourier expansion as above and we call \(F\) a cusp form if \(a(N)\neq 0\) only for positive definite matrices \(N\).
We can now give the definition of Fourier\(-\)Jacobi forms:
**Definition 2.7**.: A holomorphic function \(\phi\) on \(\mathbb{H}_{n}\times\mathbb{C}^{n}\times\mathbb{C}^{n}\) is said to be a Jacobi form of genus \(n\), weight \(k\) and index \(m\) if the function \(\tilde{\phi}\left(\begin{pmatrix}\tau&z_{1}\\ z_{2}^{t}&\omega\end{pmatrix}\right):=\phi(\tau,z_{1},z_{2})e(m\omega)\), where \(\omega\in\mathbb{H}_{1}\) is chosen so that \(\begin{pmatrix}\tau&z_{1}\\ z_{2}^{t}&\omega\end{pmatrix}\in\mathbb{H}_{n+1}\), is a modular form with respect to the group \(\Gamma_{n,1}\). The space of such forms is denoted by \(J^{n}_{k,m}\) and we will call \(\tilde{\phi}\) a \(P-\)form.
Let us now restrict ourselves to the case \(n=2\):
**Definition 2.8**.: The Petersson inner product of two Fourier\(-\)Jacobi forms \(\phi_{m},\psi_{m}\in J^{2}_{k,m}\) is defined as
\[\langle\phi_{m},\psi_{m}\rangle=\int_{\mathcal{F}^{J}}\phi_{m}(\tau,z_{1},z_{ 2})\overline{\psi_{m}(\tau,z_{1},z_{2})}v^{k}e^{-\pi m|z_{1}-\overline{z_{2}} |^{2}/v}d\mu,\]
where \(d\mu=v^{-4}dudvdx_{1}dy_{1}dx_{2}dy_{2}\) with \(\tau=u+iv\), \(z_{j}=x_{j}+iy_{j}\) for \(j=1,2\) and \(\mathcal{F}^{J}\) is a fundamental domain for the action of \(P_{2,1}\) on \(\mathbb{H}_{1}\times\mathbb{C}^{2}\).
We also have the following definition:
**Definition 2.9**.: Let \(\phi_{m},\psi_{m}\in J^{2}_{k,m}\) and denote by \(\tilde{\phi}_{m},\tilde{\psi}_{m}\) the \(P-\)forms obtained as in Definition 2.7. We then define
\[\langle\tilde{\phi}_{m},\tilde{\psi}_{m}\rangle_{\mathcal{A}}=\int_{\mathcal{ Q}_{1,1}}F(Z)\overline{G(Z)}(\det Y)^{k}d^{*}Z\]
where \(d^{*}Z=(\det Y)^{-4}dXdY\) is the invariant element for the action of the unitary group on \(\mathbb{H}_{2}\) and
\[\mathcal{Q}_{1,1}=\{Z\in\mathbb{H}_{2}|(\tau,z_{1},z_{2})\in\mathcal{F}^{J} \text{ and }|x_{\omega}|\leq 1/2\}.\]
There is a relation between the two inner products above, given in the following Lemma:
**Lemma 2.10**.: Let \(\phi_{m},\psi_{m}\in J^{2}_{k,m}\) and denote by \(\tilde{\phi}_{m},\tilde{\psi}_{m}\) the corresponding \(P-\)forms. Then
\[\langle\phi_{m},\psi_{m}\rangle=\beta_{k}m^{k-3}\langle\tilde{\phi}_{m},\tilde{\psi}_{m}\rangle_{\mathcal{A}}\]
for some constant \(\beta_{k}\).
Proof.: We have
\[\langle F,G\rangle_{\mathcal{A}}=\int_{\mathcal{Q}_{1,1}}\phi(\tau,z_{1},z_{2})e^{2\pi im\omega}\overline{\psi(\tau,z_{1},z_{2})}e^{-2\pi im\overline{\omega}}(\det Y)^{k-4}dXdY.\]
Let now \(\tilde{y}_{\omega}=y_{\omega}-|z_{1}-\overline{z_{2}}|^{2}/4y_{\tau}\). Then \(\det Y=y_{\tau}\tilde{y}_{\omega}\). Hence, the above integral can be written as
\[\int_{\tilde{y}_{\omega}>0}\int_{\mathcal{F}^{J}}\int_{x_{\omega}\pmod{1}}\phi(\tau,z_{1},z_{2})e^{-4\pi m(\tilde{y}_{\omega}+|z_{1}-\overline{z_{2}}|^{2}/4y_{\tau})}\overline{\psi(\tau,z_{1},z_{2})}(y_{\tau}\tilde{y}_{\omega})^{k-4}d\tau dz_{1}dz_{2}d\tilde{y}_{\omega}dx_{\omega}\] \[=\langle\phi,\psi\rangle\int_{\tilde{y}_{\omega}>0}e^{-4\pi m\tilde{y}_{\omega}}\tilde{y}_{\omega}^{k-4}d\tilde{y}_{\omega}=(4\pi m)^{3-k}\Gamma(k-3)\langle\phi,\psi\rangle\]
so the result follows with \(\beta_{k}=(4\pi)^{k-3}\Gamma(k-3)^{-1}\)
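As a small numerical sanity check of the last step, the inner Gamma integral can be verified as follows; the values \(k=8\), \(m=3\) below are arbitrary test choices, not data from the paper.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

k, m = 8, 3   # arbitrary test values with k - 3 > 0
val, _ = quad(lambda y: np.exp(-4 * np.pi * m * y) * y ** (k - 4), 0, np.inf)
assert np.isclose(val, (4 * np.pi * m) ** (3 - k) * gamma(k - 3))
```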
## 3. Hecke algebras and \(L-\)functions
In this section, we give an account of a general Hecke theory we will need. We follow Gritsenko in [8]. In order to simplify the exposition, we restrict ourselves to the case \(n=2\). We start by defining the groups of similitude:
\[S^{2}=\{g\in M_{4}(K)\mid J_{2}[g]=\mu(g)J_{2},\mu(g)>0\}\]
\[S^{2}_{p}=\{g\in S^{2}\cap M_{4}(\mathcal{O}[p^{-1}])\mid\mu(g)=p^{\delta}, \delta\in\mathbb{Z}\}.\]
It is then well-known that the pairs \((\Gamma_{2},S^{2})\) and \((\Gamma_{2},S^{2}_{p})\) are Hecke pairs and we can define the corresponding Hecke rings, which we will denote by \(H^{2}\) and \(H^{2}_{p}\). From [8, Corollary 2.2], we can decompose the global Hecke ring into the tensor product of \(p-\)rings as follows:
\[H(\Gamma_{2},S^{2})=\bigotimes_{p}H(\Gamma_{2},S^{2}_{p})\]
We start with a very general lemma regarding the embeddings of Hecke algebras.
**Lemma 3.1**.: Let \((\Gamma_{0},S_{0})\) and \((\Gamma,S)\) be two Hecke pairs. We assume that
\[\Gamma_{0}\subset\Gamma,\ \ \Gamma S_{0}=S,\ \ \Gamma\cap S_{0}S_{0}^{-1} \subset\Gamma_{0}\]
Then, given an arbitrary element \(X\in H(\Gamma,S)\), according to the second condition, we can write it as
\[X=\sum_{i}a_{i}(\Gamma g_{i})\]
with \(g_{i}\in S_{0}\). Then, if we set
\[\epsilon(X)=\sum_{i}a_{i}(\Gamma_{0}g_{i})\]
then \(\epsilon\) does not depend on the selection of the elements \(g_{i}\in S_{0}\) and is an embedding (as a ring homomorphism) of the Hecke algebra \(H(\Gamma,S)\) to \(H(\Gamma_{0},S_{0})\).
Proof.: See page 2890 in [8].
It is also well-known that each \(p-\)ring is isomorphic to the Hecke ring over the corresponding local field, and the structure of these rings depends on the decomposition of the prime \(p\) in \(\mathcal{O}_{K}\). In order to work locally, we give the following definitions:
\[K_{p}=K\otimes\mathbb{Q}_{p},\mathcal{O}_{p}=\mathcal{O}_{K}\otimes\mathbb{Z} _{p},\Phi_{p}=(2i)^{-1}\begin{pmatrix}0&-E_{2}\\ E_{2}&0\end{pmatrix}\]
which denote the algebra over \(\mathbb{Q}_{p}\), the maximal lattice and a Hermitian form on the vector space \(K_{p}\) respectively. We also define the unitary group \(G^{2}_{p}\) and a maximal compact subgroup \(U^{2}_{p}\) by
\[G^{2}_{p}=\{g\in\operatorname{GL}_{4}(K_{p})\mid g^{*}\Phi_{p}g=\mu(g)\Phi_{p },\mu(g)\in\mathbb{Q}_{p}^{*}\}\]
\[U^{2}_{p}=\{g\in G^{2}_{p}\cap M_{4}(\mathcal{O}_{p})\mid\mu(g)\in\mathbb{Z}_ {p}^{*}\}\]
where \(g^{*}=(g_{ji})^{\sigma}\), with \(\sigma\) the canonical involution of the algebra \(K_{p}\), determined by the behaviour of the prime \(p\) in \(K\) (split, inert or ramified).
We then have the following proposition:
**Proposition 3.2**.: The local Hecke ring \(H(U^{2}_{p},G^{2}_{p})\) is isomorphic to the \(p-\)ring \(H(\Gamma_{2},S^{2}_{p})\).
Proof.: See [8, Proposition 2.3].
Let us now recall the definition of the so-called spherical or Satake mapping. We follow [8], [9] and [20]. We need to distinguish between the cases \(p\) is inert or \(p=2\) and \(p\) splits.
In the first case, we know that given \(g\in G^{2}_{p}\) we have the double coset decomposition
\[U^{(2)}_{p}gU^{(2)}_{p}=\sum_{i}U^{(2)}_{p}M^{(m_{i})}N_{i},\]
where \(N_{i}\) is a unipotent matrix, \(m_{i}=(m_{i_{1}},m_{i_{2}};m_{i_{0}})\) and
\[M^{(m_{i})}=\begin{pmatrix}p^{m_{i_{0}}}(\overline{D}^{t})^{-1}&0\\ 0&D\end{pmatrix},D=\operatorname{diag}(\pi^{m_{i_{1}}},\pi^{m_{i_{2}}}),\]
with \(\pi=p\) if \(p\) is inert or \(\pi=(1+i)\) if \(p=2\). We then define
\[\Phi:H(U_{p}^{(2)},G_{p}^{(2)})\longrightarrow\mathbb{Q}^{W_{2}}[x_{0}^{\pm 1},x_{1}^{\pm 1},x_{2}^{\pm 1}]\]
via
\[\Phi(U_{p}^{(2)}gU_{p}^{(2)})=\sum_{i}x_{0}^{m_{i_{0}}}\prod_{j=1}^{2}(x_{j}q^ {-j})^{m_{i_{j}}}\]
where the ring \(\mathbb{Q}^{W_{2}}[x_{0}^{\pm 1},x_{1}^{\pm 1},x_{2}^{\pm 1}]\) denotes the ring of polynomials invariant with respect to the permutation of the variables \(x_{0},x_{1},x_{2}\) under the transformations \(w^{(j)},j=1,2\), defined by
\[x_{0}\longmapsto p^{-1}x_{0}x_{i}^{e},x_{i}\longmapsto p^{2/e}x_{i}^{-1},x_{j} \longmapsto x_{j},(i\neq j)\]
with \(q\) denoting the number of elements of the residue field of \(\mathbb{Q}(i)\otimes\mathbb{Q}_{p}\) and \(e\) the ramification index of the prime \(p\).
For the case of decomposable \(p\), the definition of the spherical mapping is different. In particular, there is an isomorphism
\[\rho:H(U_{p}^{2},G_{p}^{2})\longrightarrow H(\operatorname{GL}_{4}(\mathbb{Z} _{p}),\operatorname{GL}_{4}(\mathbb{Q}_{p}))[x^{\pm 1}]\]
(see [8, Proposition 2.4]). We can then define the Satake mapping \(\Omega\) for \(H(\operatorname{GL}_{4}(\mathbb{Z}_{p}),\operatorname{GL}_{4}(\mathbb{Q}_{p}))\) in an analogous way as for the case \(p\) inert or \(p=2\). Let us describe it here. Given an element \(X\in H(\operatorname{GL}_{4}(\mathbb{Z}_{p}),\operatorname{GL}_{4}(\mathbb{Q} _{p}))\), we know that we can write it as
\[X=\sum_{i}a_{i}\operatorname{GL}_{4}(\mathbb{Z}_{p})g_{i}\]
where \(g_{i}=\begin{pmatrix}p^{d_{i1}}&*&\cdots&*\\ 0&p^{d_{i2}}&\cdots&*\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\cdots&p^{d_{i4}}\end{pmatrix}\) and \(a_{i}\in\mathbb{C}\).
Then the mapping \(\Omega\) given by
\[\Omega(X)=\sum_{i}a_{i}\prod_{j=1}^{4}(x_{j}p^{-j})^{d_{ij}}\]
defines an isomorphism between \(H(\operatorname{GL}_{4}(\mathbb{Z}_{p}),\operatorname{GL}_{4}(\mathbb{Q}_{p}))\) and \(\mathbb{Q}^{\operatorname{sym}}[x_{1}^{\pm 1},\cdots,x_{4}^{\pm 1}]\) of symmetric polynomials.
We then define the Satake mapping \(\Phi\) in this case as the composition \(\Phi=\Omega\circ\rho\).
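To illustrate the recipe for \(\Omega\), here is a small symbolic sketch (an illustration of ours, not part of the original text). It applies the displayed formula to the standard right coset decomposition of \(\operatorname{GL}_{4}(\mathbb{Z}_{p})\operatorname{diag}(1,1,1,p)\operatorname{GL}_{4}(\mathbb{Z}_{p})\), which has \(p^{j-1}\) upper triangular representatives with \(p\) in the \(j\)-th diagonal entry.

```python
import sympy as sp

p = sp.symbols('p', positive=True)
x = sp.symbols('x1:5')          # the variables x_1, ..., x_4

def omega(cosets):
    # cosets: list of (coefficient a_i, exponents (d_{i1}, ..., d_{i4})) for
    # upper triangular representatives with diagonal (p^{d_{i1}}, ..., p^{d_{i4}})
    return sp.expand(sum(a * sp.Mul(*[(x[j] / p ** (j + 1)) ** d[j] for j in range(4)])
                         for a, d in cosets))

Tp = [(p ** j, tuple(1 if i == j else 0 for i in range(4))) for j in range(4)]
print(sp.factor(omega(Tp)))     # (x1 + x2 + x3 + x4)/p
```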
Let us now define the parabolic Hecke algebras we will need. Let \(S^{1,1},S^{1,1}_{p},\Gamma_{1,1}\) denote the intersection of the groups \(S^{2},S^{2}_{p},\Gamma_{2}\) with the parabolic subgroup \(P_{2,1}\) respectively. Again, the pairs \((\Gamma_{1,1},S^{1,1})\) and \((\Gamma_{1,1},S^{1,1}_{p})\) are Hecke pairs and we can then define the Hecke rings
\[H^{1,1}=H_{\mathbb{Q}}(\Gamma_{1,1},S^{1,1}),H_{p}^{1,1}=H_{\mathbb{Q}}(\Gamma _{1,1},S^{1,1}_{p}).\]
Since \(\Gamma_{2}S^{1,1}_{p}=S^{2}_{p}\) and after writing an element \(X\in H_{p}^{2}\) as
\[X=\sum_{i}a_{i}\Gamma_{2}g_{i}\]
with \(g_{i}\in S^{2}_{p}\) we can define an embedding
\[X\longmapsto\epsilon(X)=\sum_{i}a_{i}\Gamma_{1,1}g_{i}\]
using Lemma 3.1.
Given the way the Fourier-Jacobi forms are defined, these will be acted upon by elements of the parabolic Hecke algebra \(H(\Gamma_{1,1},S^{1,1})\). The structure of these rings again depends on the decomposition of the prime \(p\) in \(\mathcal{O}_{K}\).
If \(p\) is inert or \(p=2\), then the structure of the Hecke ring is constructed in a similar way as the corresponding ring for the symplectic group of genus \(2\), see [13], [6] for example.
In the case of a decomposable \(p\), however, the situation is quite different. This follows from the fact that
\[H(U_{p}^{2},G_{p}^{2})\cong H(\operatorname{GL}_{4}(\mathbb{Z}_{p}), \operatorname{GL}_{4}(\mathbb{Q}_{p}))[x^{\pm 1}]\]
This gives that the corresponding \(p-\)ring of the parabolic Hecke algebra is isomorphic to the ring of polynomials of one variable with coefficients from the Hecke ring of the parabolic subgroup
\[P_{1,2,1}=\left\{\begin{pmatrix}\pm 1&*&*\\ 0&g&*\\ 0&0&\pm 1\end{pmatrix}\in\operatorname{GL}_{4}(\mathbb{Z}_{p})\mid g\in \operatorname{GL}_{2}(\mathbb{Z}_{p})\right\}\]
Properties of this ring have been investigated in [10] and that's the ring where our calculations about Hecke operators are going to occur.
Finally, let us now describe the action of elements of the above Hecke algebras on modular forms. Let \(F\) denote any modular form with respect to the parabolic subgroup \(\Gamma_{1,1}\) and let \(g=\begin{pmatrix}A&B\\ C&D\end{pmatrix}\in S^{1,1}\). We then define
\[(F\mid g)(Z):=\mu(g)^{2k-4}\text{det}(CZ+D)^{-k}F(g\langle Z\rangle).\]
Let now \(X=\Gamma_{1,1}\begin{pmatrix}*&*&*&*\\ *&a&*&*\\ *&0&*&*\\ 0&0&0&b\end{pmatrix}\Gamma_{1,1}=\sum_{i}\Gamma_{1,1}g_{i}\in H^{1,1}\), with \(g_{i}\in S^{1,1}\). Then
\[(F\mid X)(Z)=\sum_{i}(F\mid g_{i})(Z).\]
Gritsenko gave the following very convenient definition of the signature.
**Definition 3.3**.: The signature of \(X\) is defined as \(s(X)=b/a\).
Using the signature \(s:=s(X)\) of \(X\) we can now define its action on Fourier-Jacobi forms. This is the content of the following Proposition.
**Proposition 3.4**.: Let \(\phi\in J^{2}_{k,m}\) denote a Fourier-Jacobi form of index \(m\). Then, for \(Z=\begin{pmatrix}\tau&z_{1}\\ z_{2}&\omega\end{pmatrix}\in\mathbb{H}_{2}\), we define the action of \(X\) on \(J^{2}_{k,m}\) via
\[(\phi\mid X)(\tau,z_{1},z_{2}):=(\tilde{\phi}\mid X)(Z)\,e\left(-\frac{m}{s}\omega\right)\]
with \(\tilde{\phi}(Z):=\phi(\tau,z_{1},z_{2})e(m\omega)\) and \(\omega\in\mathbb{C}\) with \(\begin{pmatrix}\tau&z_{1}\\ z_{2}&\omega\end{pmatrix}\in\mathbb{H}_{2}\). Then \(\phi\mid X\) belongs to \(J^{2}_{k,m/s}\) if \(m/s\) is an integer and is \(0\) otherwise.
Proof.: See [8, Lemma 4.1].
Now, if \(F\) is a Hermitian modular form, we can write
\[F\left(\begin{pmatrix}\tau&z_{1}\\ z_{2}&\omega\end{pmatrix}\right)=\sum_{m=0}^{\infty}\phi_{m}(\tau,z_{1},z_{2})e (m\omega)\]
For \(X\in H^{1,1}\) as above, we have that \(F\mid X\) is a modular form with respect to \(\Gamma_{1,1}\) and so we can write
\[(F\mid X)\left(\begin{pmatrix}\tau&z_{1}\\ z_{2}&\omega\end{pmatrix}\right)=\sum_{m=0}^{\infty}\psi_{m}(\tau,z_{1},z_{2})e (m\omega).\]
Therefore, there is an action of Hecke operators from \(H^{1,1}\) on the Fourier-Jacobi forms coming from a Hermitian modular form \(F\) via
\[\phi_{m}^{(F)}\mid\mid X=\psi_{m}^{(F|X)}.\]
We note here that this action is extended to \(P-\)forms in the obvious way.
Given now the definitions of the Hecke algebras above, we assume that \(G\in S_{2}^{k}\) is a Hecke eigenform for \(H^{2}\), i.e. it is an eigenfunction for all Hecke operators in \(H^{2}\). We remind the reader here that for a polynomial \(U\) with coefficients Hecke operators and \(G\) a Hecke eigenform, we write \(U_{G}\) for the polynomial obtained when we substitute the operators with their corresponding eigenvalues.
**Definition 3.5**.: The standard \(L-\)function attached to \(G\) (see also [21, Paragraph 20.6]) is defined as
\[Z_{G}^{(2)}(s)=\prod_{p\text{ inert or }p=2}Z_{p,G}^{(2)}(p^{-2s})^{-1}\prod_{p=\pi\overline{\pi}}Z_{\pi,G}^{(2)}(p^{-s})^{-1}Z_{\overline{\pi},G}^{(2)}(p^{-s})^{-1}\]
where for each inert prime \(p\) or \(p=2\), \(Z_{p,G}^{(2)}(t)=\Phi^{-1}(z_{p,G}^{(2)}(t))\) and for \(p=\pi\overline{\pi}\), \(Z_{\pi,G}^{(2)}(t)=\Phi^{-1}(z_{\pi,G}^{(2)}(t))\) and \(Z_{\overline{\pi},G}^{(2)}(t)=\Phi^{-1}(z_{\overline{\pi},G}^{(2)}(t))\), where
\[z_{p,G}^{(2)}(t)=\begin{cases}\prod_{i=1}^{2}(1-p^{2}x_{i,p}t)(1-p^{4}x_{i,p} ^{-1}t)&\text{ if }p\text{ inert}\\ \prod_{i=1}^{2}(1-px_{i}t)(1-p^{2}x_{i}^{-1}t)&\text{ if }p=2\end{cases}\]
and
\[z_{\pi,G}^{(2)}(t)=\prod_{i=1}^{4}(1-p^{-1}x_{i,p}t),z_{\overline{\pi},G}^{(2 )}(t)=\prod_{i=1}^{4}(1-p^{4}x_{i,p}^{-1}t)\]
where \(x_{i,p}\) are the Satake parameters of \(G\).
**Definition 3.6**.: The \(L-\)function attached to \(G\) by Gritsenko in [8] is defined as
\[Q_{G}^{(2)}(s)=\prod_{p\text{ inert}}(1+p^{k-2-s})^{-2}Q_{p,G}^{(2)}(p^{-s})^ {-1}\prod_{p\text{ splits or }p=2}Q_{p,G}^{(2)}(p^{-s})^{-1}\]
where \(Q_{p,G}^{(2)}(t)=\Phi^{-1}(q_{p,G}^{(2)}(t))\) with
\[q_{p,G}^{(2)}(t)=\begin{cases}(1-x_{0,p}t)\prod_{r=1}^{2}\prod_{1\leq i_{1}<\cdots<i_{r}\leq 2}(1-p^{-r}x_{i_{1},p}\cdots x_{i_{r},p}x_{0,p}t)&\text{ if }p\text{ is inert}\\ (1-x_{0,p}t)\prod_{r=1}^{2}\prod_{1\leq i_{1}<\cdots<i_{r}\leq 2}(1-p^{-r}(x_{i_{1},p}\cdots x_{i_{r},p})^{2}x_{0,p}t)&\text{ if }p=2\\ \prod_{1\leq i<j\leq 4}(1-p^{-3}x_{i,p}x_{j,p}\,x\,t)&\text{ if }p\text{ splits}\end{cases}\]
Let us now define the so-called Maass space for the case of Hermitian cusp forms. We mainly follow [7] and for the definition we will use is [7, Lemma 2.4].
**Definition 3.7**.: The Maass space is the space
\[\left\{F\left(\begin{pmatrix}\tau&z_{1}\\ z_{2}&\omega\end{pmatrix}\right)=\sum_{m=1}^{\infty}\left(\phi(\tau,z_{1},z_{2} )|_{k}T_{-}(m)\right)e^{2\pi im\omega}m^{3-k}|\phi\in J_{k,1}^{2}\right\}\]
where \(T_{-}(m)=\sum_{a|b,ab=m}\Gamma_{1,1}\mathrm{diag}(a,m,b,1)\Gamma_{1,1}\) is an element of the parabolic Hecke algebra \(H^{1,1}\).
The main property of the Maass space is the following:
**Proposition 3.8**.: Let \(F\in S_{2}^{k}\) belong in the Maass space defined and assume \(F\) is an eigenfunction for the Hecke algebra \(H^{2}\). Then, there exists \(f\in S_{k-1}\left(\Gamma_{0}(4),\left(\frac{-4}{*}\right)\right)\), which is also an eigenfunction for its corresponding Hecke algebra, such that
\[Q_{F}^{(2)}(s)=\zeta(s-k+1)L\left(s-k+2,\left(\frac{-4}{*}\right)\right)\zeta( s-k+3)R_{f}(s)\]
where \(R_{f}(s)\) denotes the symmetric square function of \(f\) defined as follows. Let \(f(\tau)=\sum_{n\geq 1}a(n)e(n\tau)\) be the Fourier expansion of \(f\) and assume we write
\[1-a(p)t+\left(\frac{-4}{p}\right)p^{k-2}t^{2}=(1-\alpha_{p}t)\left(1-\beta_{p} \left(\frac{-4}{p}\right)t\right)\]
We then define
\[R_{f}(s)=(1-\alpha_{2}2^{-s})^{-1}(1-\beta_{2}2^{-s})^{-1}\prod_{p\neq 2} \left[(1-\alpha_{p}^{2}p^{-s})\left(1-\left(\frac{-4}{p}\right)\alpha_{p}\beta _{p}p^{-s}\right)(1-\beta_{p}^{2}p^{-s})\right]^{-1}\]
We call \(F\) the Maass lift of \(f\).
Proof.: See [7] or the Appendix in [8].
We end this section by giving a Lemma regarding the correspondence of elliptic cusp forms and Hermitian modular forms of degree \(1\), both as analytic objects as well as Hecke eigenforms.
**Lemma 3.9**.: A Hermitian modular form of degree \(1\) and weight \(k\) with \(k\equiv 0\pmod{4}\) can be considered as a classical modular form of the same weight(i.e. for the group \(SL_{2}(\mathbb{Z})\)) and vice versa. Also, a classical modular form which is a normalised eigenform for the Hecke algebra \(H(\mathrm{GL}_{2}(\mathbb{Z}),\mathrm{GL}_{2}(\mathbb{Q}))\) is also a normalised eigenform for \(H(\Gamma_{1},S^{1})\), when it is considered as a Hermitian modular form and vice versa.
Proof.: We have that \(\Gamma_{1}=SL_{2}(\mathbb{Z})\cdot\{\alpha\cdot 1_{2}\mid\alpha\in\mathcal{O}_{K}^ {\times}\}\) and the corresponding upper half planes are the same. So, holomorphicity is equivalent(including infinity). Now, for the invariance condition, the one direction is trivial, as \(SL_{2}(\mathbb{Z})\subseteq\Gamma_{1}\). For the other one, let \(\gamma\in\Gamma_{1}\) and write \(\gamma=\alpha\delta\) with \(\delta=\begin{pmatrix}a&b\\ c&d\end{pmatrix}\in SL_{2}(\mathbb{Z})\) and \(\alpha\in\mathcal{O}_{K}^{\times}\). Then
\[(F|_{k}\gamma)(z)=(\alpha cz+\alpha d)^{-k}F(\gamma\langle z\rangle)=\alpha^{-k}(cz+d)^{-k}F(\delta\langle z\rangle)=(F|_{k}\delta)(z)\]
as \(k\equiv 0\pmod{4}\) and \(\mathcal{O}_{K}^{\times}=\{\pm 1,\pm i\}\).
Assume now that we start with a normalised (i.e \(a(1)=1\) in the Fourier expansion) Hermitian cusp form \(h\) of degree \(1\), which we further take to be an eigenform for the corresponding Hecke algebra. The canonical embedding of \(\mathrm{GL}_{2}(\mathbb{Q})\) into \(S^{1}\), the group of similitude of degree \(1\), induced from the embedding \(\mathbb{Q}\hookrightarrow K\), allows us to see \(h\) as a classical normalised Hecke eigenform.
But the converse is also true, that is, if we start with \(h\) a classical normalised Hecke eigenform, then it is also a normalised Hermitian eigenform of degree \(1\). Indeed, since the Hecke operators of the Hermitian Hecke algebra are normal, we know that the space of Hermitian cusp forms is diagonalizable with a basis of normalised eigenforms. But then this basis has to coincide with the basis derived by diagonalizing the action of the classical Hecke algebra, thanks to the multiplicity one theorem for the classical Hecke algebra.
**Remark**.: From now on, we will use the terms elliptic modular form(or classical modular form) and Hermitian modular form of degree \(1\) interchangeably.
## 4. Integral Representation and Dirichlet Series
Let \(n\geq 1\). We start with the following definition of the Eisenstein series we will need:
**Definition 4.1**.: Let \(F\in S_{r}^{k}\) with \(k\equiv 0\pmod{4}\). The Klingen-type Eisenstein series with respect to the parabolic subgroup \(C_{n,r}\) attached to \(F\) is given by:
\[E_{n,r}^{k}(Z,F;s)=\sum_{\gamma\in C_{n,r}\backslash\Gamma_{n}}F(\gamma \langle Z\rangle_{*})j(\gamma,Z)^{-k}\left(\frac{\det\operatorname{Im}\gamma \langle Z\rangle}{\det\operatorname{Im}\gamma\langle Z\rangle_{*}}\right)^{s}\]
where \(*\) denotes the lower right \(r\times r\) part of the matrix.
This series is absolutely and uniformly convergent for \(k+\operatorname{Re}(s)>2(n+r)\) (see [17, page 152]).
Now, given \(F,G\in S_{n}^{k}\) and \(h\in S_{1}^{k}\), which are all eigenforms for their corresponding Hecke algebras, the expression of interest is
\[\Phi(F,G,h;s)=\left\langle\left\langle\left\langle E_{2n+1,0}^{k}\left( \begin{pmatrix}z_{1}&\\ &z_{2}\\ &&z_{3}\end{pmatrix};s\right),F(z_{3})\right\rangle,G(z_{2})\right\rangle,h(z_{ 1})\right\rangle\]
We have the following algebraic result:
**Proposition 4.2**.: With notation as above and assume further that \(F\),\(G\) and \(h\) have algebraic Fourier coefficients. For \(k>4n+2\) we have
\[\frac{\Phi(F,G,h,0)}{\langle F,F\rangle\langle G,G\rangle\langle h,h\rangle} \in\overline{\mathbb{Q}}.\]
Proof.: This can be shown exactly as [13, Theorem 1.9]. In the proof there, a result of Bocherer is used on the algebraic decomposition of the space of modular forms as an orthogonal product of the space of cusps forms and of the Eisenstein series, i.e.
\[M_{n}^{k}(\overline{\mathbb{Q}})=S_{n}^{k}(\overline{\mathbb{Q}})\oplus \operatorname{Eis}_{n}^{k}(\overline{\mathbb{Q}}).\]
Such a result is also available for unitary groups [21, Theorem 27.14].
Actually, one can give an even stronger statement of the proposition above, namely establish even a reciprocity law on the action of the absolute Galois group. The statement is similar to [13, Theorem 1.9] of Heim. The main point here is that we can establish an algebraicity result for special values of \(L-\)functions if we can relate the expression above to an Euler product expression. This is the main motivation of the present paper.
By using the well-known doubling method for unitary groups, as for example is studied in [21, Equation 24.29(a)], we know that the first inner product is related to a Klingen-type Eisenstein series as defined above. That is,
\[\left\langle E_{2n+1,0}^{k}\left(\begin{pmatrix}z_{1}&&\\ &z_{2}&\\ &&z_{3}\end{pmatrix};s\right),F(z_{3})\right\rangle=\nu(s)\frac{Z_{F}^{(n)}(2s +k-n)}{\prod_{i=0}^{2n-1}L(4s+2k,\chi^{i})}E_{n+1,n}^{k}\left(\begin{pmatrix} z_{1}&0\\ 0&z_{2}\end{pmatrix},F;s\right)\]
where \(Z_{F}^{(n)}\) is the standard \(L-\)function attached to \(F\), \(\chi\) is the non-trivial quadratic character attached to the extension \(K/\mathbb{Q}\) and \(\nu(s)\) is an expression involving Gamma factors (the explicit expression is given in [21]). So, our focus shifts to computing
\[\left\langle\left\langle E_{n+1,n}^{k}\left(\begin{pmatrix}W&0\\ 0&Z\end{pmatrix},F;s\right),G(Z)\right\rangle,h(W)\right\rangle.\]
Given the definition of the Eisenstein series, we will start by finding representatives for \(C_{n+1,n}\backslash\Gamma_{n+1}\). We begin by first finding representatives for \(C_{n+1,n}(K)\backslash U_{n+1}(K)\).
**Proposition 4.3**.: The left coset space \(C_{n+1,n}(K)\backslash U_{n+1}(K)\) has representatives
\[S_{1}=C_{1,0}(K)\backslash U_{1}(K)\times 1_{2n}\]
\[S_{2}=\pi_{n}\cdot(1_{2}\times C_{n,n-1}(K)\backslash U_{n}(K))\]
\[S_{3}=\xi_{n}\cdot(C_{1,0}(K)\backslash U_{1}(K)\times((T\times 1_{2(n-1)}) \cdot C_{n,n-1}(K)\backslash U_{n}(K))\]
where
\[\pi_{n}=\begin{pmatrix}0&1\\ 1&0\\ &&1_{n-1}\\ &&&0&1\\ &&&1&0\\ &&&&1_{n-1}\end{pmatrix},\quad\xi_{n}=\begin{pmatrix}1&0\\ 1&1\\ &&1_{n-1}\\ &&&1&-1\\ &&&0&1\\ &&&&1_{n-1}\end{pmatrix},\quad T=\left\{\begin{pmatrix}a&0\\ 0&\overline{a}^{-1}\end{pmatrix}|a\in K^{\times}\right\}\]
Proof.: The proof proceeds in the same way as in [13, Proposition 2.1].
We now want to pull these representatives back to representatives for \(C_{n+1,n}\backslash\Gamma_{n+1}\).
**Corollary 4.4**.: The left coset space \(C_{n+1,n}\backslash\Gamma_{n+1}\) has representatives
\[T_{1}=C_{1,0}\backslash\Gamma_{1}\times 1_{2n}\]
\[T_{2}=\pi_{n}\cdot(1_{2}\times C_{n,n-1}\backslash\Gamma_{n})\]
\[T_{3}=\bigsqcup_{p,q}(\xi^{p,q}\times 1_{2(n-1)})\cdot(C_{1,0}\backslash \Gamma_{1}\times C_{n,n-1}\backslash\Gamma_{n})\]
where \(p,q\in\mathbb{Z}[i]\backslash\{0\}\) with \((p,q)=1\) and \(q=u+iv,u>0,v\geq 0\) and \(\xi^{p,q}=\begin{pmatrix}*&*&0&0\\ q&p&0&0\\ 0&0&\overline{p}&-\overline{q}\\ 0&0&*&*\end{pmatrix}\),
with \(\xi^{p,q}\times 1_{2(n-1)}\in\Gamma_{n+1}\).
Proof.: There is a one-to-one correspondence between \(C_{n,r}(K)\backslash U_{n}(K)\) and \(C_{n,r}\backslash\Gamma_{n}\) for all \(r,n\), due to the fact that \(K\) has class number \(1\). It suffices then to pull \(S_{3}\) back.
We therefore need to find a matrix \(p_{a}\) in \(\bigcap_{r\leq n}C_{n+1,r}(K)\) such that
\[p_{a}\cdot\xi_{n}\cdot\left(1_{2}\times\begin{pmatrix}a&0\\ 0&\overline{a}^{-1}\end{pmatrix}\times 1_{2(n-1)}\right)\in\Gamma_{n+1}\]
We parametrize \(K\) as
\[\left\{\frac{p}{q}\mid(p,q)=1,q=u+iv,u>0,v\geq 0\right\}.\]
These quotients are pairwise distinct, since \(p,q\in\mathbb{Z}[i]\) satisfy the above normalisation, and together they exhaust \(K\).
For \(a=p/q\) as above, we define
\[p_{p,q}=\begin{pmatrix}p^{-1}&y&0&0\\ 0&q&0&0\\ 0&0&\overline{p}&0\\ 0&0&l&\overline{q}^{-1}\end{pmatrix}\times 1_{2(n-1)}\]
with \(l\) chosen so that \(l\overline{q}\equiv 1\pmod{p}\) and then \(y=-q\overline{l}/p\). We can then show that this actually belongs to \(U_{n+1}\) and clearly to \(C_{n+1,n}(K)\). Hence, by computing the product, we obtain the required representatives.
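The following short sketch (ours, not part of the original argument) checks the parametrisation used above on a finite box of Gaussian integers: normalised coprime pairs \((p,q)\) give pairwise distinct quotients \(p/q\). The box size \(B=5\) is an arbitrary choice.

```python
from fractions import Fraction

def g_mul(a, b):                       # product of Gaussian integers given as (re, im)
    return (a[0] * b[0] - a[1] * b[1], a[0] * b[1] + a[1] * b[0])

def g_norm(a):
    return a[0] * a[0] + a[1] * a[1]

def g_gcd(a, b):                       # Euclidean algorithm in Z[i] with rounded division
    while b != (0, 0):
        num, nb = g_mul(a, (b[0], -b[1])), g_norm(b)
        q = (round(Fraction(num[0], nb)), round(Fraction(num[1], nb)))
        a, b = b, (a[0] - q[0] * b[0] + q[1] * b[1], a[1] - q[0] * b[1] - q[1] * b[0])
    return a

B, seen = 5, {}
for m in range(-B, B + 1):
    for n in range(-B, B + 1):
        for u in range(1, B + 1):
            for v in range(0, B + 1):
                if (m, n) == (0, 0) or g_norm(g_gcd((m, n), (u, v))) != 1:
                    continue
                num = g_mul((m, n), (u, -v))                     # p * conj(q)
                key = (Fraction(num[0], u * u + v * v), Fraction(num[1], u * u + v * v))
                assert key not in seen          # distinct normalised pairs, distinct p/q
                seen[key] = ((m, n), (u, v))
```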
Our aim is to first show an analogue of [13, Theorem 2.3].
We denote by \(Y=\operatorname{Im}Z=\dfrac{1}{2i}(Z-^{t}\overline{Z})\). Then \(\operatorname{Im}M\langle Z\rangle=Y[(CZ+D)^{-1}]\), so
\[\det\operatorname{Im}M\langle Z\rangle=|\det(CZ+D)|^{-2}\det\operatorname{Im} Z=|j(M,Z)|^{-2}\det\operatorname{Im}Z\]
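The transformation rule just stated can be checked numerically; the sketch below (ours, not part of the original text) does so for a randomly generated point of \(\mathbb{H}_{2}\) and a sample element of \(U_{2}\) built from a Hermitian translation and the inversion \(J_{2}\).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2

def herm(M):
    return (M + M.conj().T) / 2

# a point Z = X + iY of H_2: X Hermitian, Y Hermitian positive definite
X = herm(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
L = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Z = X + 1j * (L @ L.conj().T + np.eye(n))

# an element of U_2: a translation by a Hermitian matrix followed by the inversion J
J = np.block([[np.zeros((n, n)), -np.eye(n)], [np.eye(n), np.zeros((n, n))]])
B = herm(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
M = np.block([[np.eye(n), B], [np.zeros((n, n)), np.eye(n)]]) @ J
assert np.allclose(M.conj().T @ J @ M, J)

A, Bb, C, D = M[:n, :n], M[:n, n:], M[n:, :n], M[n:, n:]
W = (A @ Z + Bb) @ np.linalg.inv(C @ Z + D)          # M<Z>
j = np.linalg.det(C @ Z + D)                         # j(M, Z)

imW, imZ = (W - W.conj().T) / 2j, (Z - Z.conj().T) / 2j
assert np.isclose(np.linalg.det(imW), np.linalg.det(imZ) / abs(j) ** 2)
```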
**Proposition 4.5**.: Let \(k\equiv 0\pmod{4}\) and \(\operatorname{Re}(s)\gg 0\). Let also \(F\in S_{n}^{k}\), \(z_{1}\in\mathbb{H}\) and \(z_{2}\in\mathbb{H}_{n}\). We then have
\[E_{n+1,n}^{k}\left([z_{1},z_{2}],F;s\right)=E_{1}^{k}(z_{1},s)F(z _{2})+E_{n,n-1}^{k}(z_{2},F_{z_{1}};s)+\] \[+\delta(z_{1})^{s}\delta(z_{2})^{s}\sum_{p,q}\sum_{\begin{subarray} {c}\gamma_{1}\in C_{1,0}\backslash\Gamma_{1}\\ \gamma_{2}\in C_{n,n-1}\backslash\Gamma_{n}\end{subarray}}\chi^{k,s}(\gamma_{ 1},z_{1})\chi^{k,s}(\gamma_{2},z_{2})F\left(\begin{pmatrix}N(q)\gamma_{1} \langle z_{1}\rangle&0\\ 0&0\end{pmatrix}+(\gamma_{2}\langle z_{2}\rangle)\left[\begin{pmatrix} \overline{p}&0\\ 0&1_{n-1}\end{pmatrix}\right]\right)\times\] \[\times\delta\left(\begin{pmatrix}N(q)\gamma_{1}\langle z_{1} \rangle&0\\ 0&0\end{pmatrix}+(\gamma_{2}\langle z_{2}\rangle)\left[\begin{pmatrix} \overline{p}&0\\ 0&1_{n-1}\end{pmatrix}\right]\right)^{-s}\]
where \(F_{z_{1}}(\tau)=F([z_{1},\tau])\in S_{n-1}^{k}\), \(\chi^{k,s}(M,Z)=j(M,Z)^{-k}|j(M,Z)|^{-2s}\), \(\delta(Z)=\det\left(\operatorname{Im}Z\right)\) and \(p,q\) are summed as in Corollary 4.4.
Proof.: We can write
\[E_{n+1,n}^{k}\left([z_{1},z_{2}],F;s\right)=\delta(z_{1})^{s}\delta(z_{2})^{s} \sum_{M\in C_{n+1,n}\backslash\Gamma_{n+1}}\chi^{k,s}(M,[z_{1},z_{2}])F(M \langle[z_{1},z_{2}]\rangle_{*})\delta\left(M\langle[z_{1},z_{2}]\rangle_{*} \right)^{-s}.\]
For the representatives of \(T_{1},T_{2}\) the proof is exactly the same as in [13, Theorem 2.3].
For a representative \(M\) of \(T_{3}\), write \(M=(\xi\times 1_{2(n-1)})(\gamma_{1}\times\gamma_{2})\) with \(\gamma_{1}\in C_{1,0}\backslash\Gamma_{1}\) and \(\gamma_{2}\in C_{n,n-1}\backslash\Gamma_{n}\). We write \(\gamma_{2}\langle z_{2}\rangle=\begin{pmatrix}x_{1}&x_{2}\\ x_{3}&x_{4}\end{pmatrix}\) and we then have
\[(M\langle[z_{1},z_{2}]\rangle)_{*}=((\xi\times 1_{2(n-1)})[\gamma_{1}z_{1}, \gamma_{2}z_{2}])_{*}=\left(\begin{pmatrix}*&*&0\\ q&p&0\\ 0&0&1\end{pmatrix}\begin{pmatrix}\gamma_{1}\langle z_{1}\rangle&0&0\\ 0&x_{1}&x_{2}\\ 0&x_{3}&x_{4}\end{pmatrix}\begin{pmatrix}*&\overline{q}&0\\ 0&\overline{p}&0\\ 0&0&1\end{pmatrix}\right)_{*}=\]
\[=\begin{pmatrix}N(q)\gamma_{1}\langle z_{1}\rangle+N(p)x_{1}&px_{2}\\ \overline{p}x_{3}&x_{4}\end{pmatrix}=\begin{pmatrix}N(q)\gamma_{1}\langle z _{1}\rangle&0\\ 0&0\end{pmatrix}+(\gamma_{2}\langle z_{2}\rangle)\left[\begin{pmatrix}\overline {p}&0\\ 0&1_{n-1}\end{pmatrix}\right]\]
Also,
\[j((\xi\times 1_{2(n-1)})(\gamma_{1}\times\gamma_{2}),[z_{1},z_{2}])=j(\xi\times 1 _{2(n-1)},[\gamma_{1}z_{1},\gamma_{2}z_{2}])j(\gamma_{1}\times\gamma_{2},[z_{1},z_{2}])\]
and \(j(\xi\times 1_{2(n-1)},[\gamma_{1}z_{1},\gamma_{2}z_{2}])=\det(D)\), where we denote \(\xi\) by \(\begin{pmatrix}A&0\\ 0&D\end{pmatrix}\).
By unitarity, we have \(D\overline{A}^{t}=1\) so \(\det D\cdot\overline{\det A}=1\) so \(N(\det(D))=1\) and \(\det(D)\) is in \(\mathbb{Z}[i]\), which shows that \(\det(D)\in\{\pm 1,\pm i\}\). As \(k\equiv 0\pmod{4}\), we get
\[\chi^{k,s}((\xi\times 1_{2(n-1)})(\gamma_{1}\times\gamma_{2}),[z_{1},z_{2}])=\chi^{k, s}(\gamma_{1},z_{1})\chi^{k,s}(\gamma_{2},z_{2})\]
and so the proposition follows.
From now on we work with \(n=2\). Write \(Z=\begin{pmatrix}\tau&z_{1}\\ z_{2}&\omega\end{pmatrix}\) and consider the Fourier-Jacobi expansion of \(F,G\) and the Fourier expansion of \(h\) as follows:
\[F(Z)=\sum_{m=1}^{\infty}\phi_{m}(\tau,z_{1},z_{2})e^{2\pi im\omega},G(Z)=\sum_ {m=1}^{\infty}\psi_{m}(\tau,z_{1},z_{2})e^{2\pi im\omega}\]
\[h(W)=\sum_{n=1}^{\infty}a_{n}e^{2\pi imW}\]
We state and prove the main theorem of this section, which will give us the Dirichlet series with which we are going to work with.
**Theorem 4.6**.: _Let \(k\equiv 0\pmod{4}\). Let \(F,G\in S_{2}^{k}\) and \(h\in S_{1}^{k}\). Then, for \(Re(s)\) large enough so that the Eisenstein series converges absolutely, we have_
\[\left\langle\left\langle E_{3,2}^{k}\left(\left(\begin{matrix}W&0\\ 0&Z\end{matrix}\right),F;s\right),G(Z)\right\rangle,h(W)\right\rangle=(4\pi)^ {-(2k+s-4)}\frac{\Gamma(2k+s-4)\Gamma(k+s-3)\Gamma(k+s-1)}{\Gamma(2k+2s-4)}D_{ F,G,h}(s)\]
_where we define the Dirichlet series_
\[D_{F,G,h}(s)=\sum_{p,q}\sum_{m=1}^{\infty}\langle\phi_{m}|U_{p},\psi_{mN(p)}\rangle\overline{a_{mN(q)}}\,N(p)^{-(k+s-3)}N(q)^{-(k+s-1)}m^{-(2k+s-4)}\]
_with \(U_{p}\) the operator defined by_
\[U_{p}:J_{k,m}^{n}\longrightarrow J_{k,mN(p)}^{n}\] \[\phi_{m}(\tau,z_{1},z_{2})\longmapsto\phi_{m}(\tau,\overline{p}z_{1},pz_{2})\]
_and \(p,q\) summed as in Corollary 4.4._
Proof.: We will start by dealing with the third part of the inner product, using the decomposition of Proposition 4.5. This can be written as (the summations are as above):
\[\int_{\Gamma_{1}\backslash\mathbb{H}_{1}}\int_{\Gamma_{2}\backslash\mathbb{H}_{2}}\delta(W)^{k+s}\delta(Z)^{k+s}\sum_{p,q}\sum_{\gamma_{1},\gamma_{2}}\chi^{k,s}(\gamma_{1},W)\chi^{k,s}(\gamma_{2},Z)\times\]
\[\times F\left(\begin{pmatrix}N(q)\gamma_{1}\langle W\rangle&0\\ 0&0\end{pmatrix}+(\gamma_{2}\langle Z\rangle)\left[\begin{pmatrix}\overline{p}&0\\ 0&1\end{pmatrix}\right]\right)\delta\left(\begin{pmatrix}N(q)\gamma_{1}\langle W\rangle&0\\ 0&0\end{pmatrix}+(\gamma_{2}\langle Z\rangle)\left[\begin{pmatrix}\overline{p}&0\\ 0&1\end{pmatrix}\right]\right)^{-s}\times\]
\[\times\overline{G(Z)h(W)}\,d^{*}W\,d^{*}Z\]
Now, using the automorphy condition for \(G,h\), we have
\[G(Z)=(G|_{k}\gamma_{2})(Z)=j(\gamma_{2},Z)^{-k}G(\gamma_{2}Z)\]
\[h(W)=(h|_{k}\gamma_{1})(W)=j(\gamma_{1},W)^{-k}h(\gamma_{1}W)\]
Also \(\delta(\gamma_{2}Z)=|j(\gamma_{2},Z)|^{-2}\delta(Z)\), \(\delta(\gamma_{1}W)=|j(\gamma_{1},W)|^{-2}\delta(W)\), so
\[\overline{G(Z)}\delta(Z)^{k+s}\chi^{k,s}(\gamma_{2},Z)=\overline{j(\gamma_{2 },Z)}^{-k}\overline{G(\gamma_{2}Z)}\delta(\gamma_{2}Z)^{k+s}|j(\gamma_{2},Z)| ^{2(k+s)}j(\gamma_{2},Z)^{-k}|j(\gamma_{2},Z)|^{-2s}\]
\[=\overline{G(\gamma_{2}Z)}\delta(\gamma_{2}Z)^{k+s}\]
and similarly for \(h\). Hence, by the usual "unfolding" trick, we obtain that the above integral equals
\[\int_{C_{1,0}\backslash\mathbb{H}_{1}}\int_{C_{2,1}\backslash\mathbb{H}_{2}}\sum_{p,q}F\left(\begin{pmatrix}N(q)W&0\\ 0&0\end{pmatrix}+Z\left[\begin{pmatrix}\overline{p}&0\\ 0&1\end{pmatrix}\right]\right)\delta\left(\begin{pmatrix}N(q)W&0\\ 0&0\end{pmatrix}+Z\left[\begin{pmatrix}\overline{p}&0\\ 0&1\end{pmatrix}\right]\right)^{-s}\times\]
\[\times\overline{G(Z)h(W)}\delta(Z)^{k+s}\delta(W)^{k+s}d^{*}Zd^{*}W\]
We now rewrite the integral in the following form:
\[\int_{P_{1,0}\backslash\mathbb{H}_{1}}\int_{P_{2,1}\backslash\mathbb{H}_{2}} \sum_{p,q}F\left(\left(\begin{matrix}0&0\\ 0&N(q)W\end{matrix}\right)+Z\left[\begin{matrix}1&0\\ 0&\overline{p}\end{matrix}\right]\right)\delta\left(\left(\begin{matrix}0&0\\ 0&N(q)W\end{matrix}\right)+Z\left[\begin{matrix}1&0\\ 0&\overline{p}\end{matrix}\right]\right)^{-s}\times\]
\[\times\overline{G(Z)h(W)}\delta(Z)^{k+s}\delta(W)^{k+s}d^{*}Zd^{*}W\]
Now, a fundamental domain for the action of \(P_{1,0}\) on \(\mathbb{H}_{1}\) is
\[\mathcal{F}=\{z\in\mathbb{H}_{1}|z=x+iy,0\leq x\leq 1\}\]
while a fundamental domain for the action of \(P_{2,1}\) on \(\mathbb{H}_{2}\)
\[\left\{\begin{pmatrix}\tau&z_{1}\\ z_{2}&\omega\end{pmatrix}|(\tau,z_{1},z_{2})\in\mathcal{F}^{J},y_{\omega}>|z_{1 }-\overline{z_{2}}|^{2}/4y_{\tau},0\leq x_{\omega}\leq 1\right\}\]
(as we can see for example in page 2907 of [8]). Hence, the above inner product equals
\[\sum_{p,q}\int_{\mathcal{F}^{J}}d\tau dz_{1}dz_{2}\int_{y_{\omega}>|z_{1}- \overline{z_{2}}|^{2}/4y_{\tau}}dy_{\omega}\int_{0}^{1}dx_{\omega}\int_{0}^{1} dx_{W}\int_{0}^{\infty}dy_{W}\delta(Z)^{k+s-4}\times\]
\[\times\sum_{m=1}^{\infty}\phi_{m}(\tau,\overline{p}z_{1},pz_{2})e^{2\pi im(N(q)W+N(p)\omega)}\sum_{n=1}^{\infty}\overline{a_{n}}e^{-2\pi in\overline{W}}\sum_{k=1}^{\infty}\overline{\psi_{k}(\tau,z_{1},z_{2})}e^{-2\pi ik\overline{\omega}}\times\]
\[\times\delta\left(\begin{pmatrix}0&0\\ 0&N(q)W\end{pmatrix}+Z\left[\begin{pmatrix}1&0\\ 0&\overline{p}\end{pmatrix}\right]\right)^{-s}\]
We now first perform the integration over \(x_{\omega}\) and \(x_{W}\). For \(x_{\omega}\) we have
\[\int_{0}^{1}e^{2\pi imN(p)x_{\omega}-2\pi ikx_{\omega}}dx_{\omega}\]
which is zero, unless \(k=mN(p)\), in which case the integral is \(1\).
Similarly for \(x_{W}\) the integral we need to compute is
\[\int_{0}^{1}e^{2\pi imN(q)x_{W}-2\pi inx_{W}}dx_{W}\]
which is again zero unless \(n=mN(q)\), in which case it is \(1\).
These are the only terms we need to integrate as the real parts of \(\omega\) and \(W\) do not appear as arguments of \(\delta\) by definition. We now substitute \(t=y_{\omega}-|z_{1}-\overline{z_{2}}|^{2}/4y_{\tau}\) and compute
\[\delta(Z)=\det\left(\frac{1}{2i}\left(Z-^{t}\overline{Z}\right)\right)=y_{\tau}y_{\omega}-\frac{|z_{1}-\overline{z_{2}}|^{2}}{4}=y_{\tau}t\]
and
\[\delta\left(\begin{pmatrix}0&0\\ 0&N(q)W\end{pmatrix}+Z\left[\begin{pmatrix}1&0\\ 0&\overline{p}\end{pmatrix}\right]\right)=\delta\left(\begin{pmatrix}\tau&\overline{p}z_{1}\\ pz_{2}&N(q)W+N(p)\omega\end{pmatrix}\right)=y_{\tau}\big{(}N(q)y_{W}+N(p)y_{\omega}\big{)}-\frac{N(p)|z_{1}-\overline{z_{2}}|^{2}}{4}=y_{\tau}(N(q)y_{W}+N(p)t)\]
So, the integral becomes
\[\sum_{p,q}\int_{\mathcal{F}^{J}}\sum_{m=1}^{\infty}\phi_{m}(\tau,\overline{p}z_{1},pz_{2})\overline{\psi_{mN(p)}(\tau,z_{1},z_{2})}\,\overline{a_{mN(q)}}\,y_{\tau}^{k-4}e^{-\pi mN(p)|z_{1}-\overline{z_{2}}|^{2}/y_{\tau}}d\tau dz_{1}dz_{2}\times\]
\[\times\int_{0}^{\infty}dt\int_{0}^{\infty}dy_{W}t^{k+s-4}(N(q)y_{W}+N(p)t)^{-s }e^{-4\pi m(N(q)y_{W}+N(p)t)}y_{W}^{s+k-2}=\]
\[=\sum_{p,q}\sum_{m=1}^{\infty}\langle\phi_{m}|U_{p},\psi_{mN(p)}\rangle \overline{a_{mN(q)}}\int_{0}^{\infty}\int_{0}^{\infty}\frac{dt}{t}\frac{dy_{W }}{y_{W}}t^{k+s-3}y_{W}^{s+k-1}(N(q)y_{W}+N(p)t)^{-s}e^{-4\pi m(N(q)y_{W}+N(p)t )}=\]
\[=(4\pi)^{-(2k+s-4)}\frac{\Gamma(2k+s-4)\Gamma(k+s-3)\Gamma(k+s-1)}{\Gamma(2k+ 2s-4)}\times\]
\[\times\sum_{p,q}\sum_{m=1}^{\infty}\langle\phi_{m}|U_{p},\psi_{mN(p)}\rangle \overline{a_{mN(q)}}N(p)^{-(k+s-3)}N(q)^{-(k+s-1)}m^{-(2k+s-4)}\]
by using the fact that
\[\int_{0}^{\infty}\int_{0}^{\infty}\frac{dx}{x}\frac{dy}{y}x^{\alpha}y^{\beta} \left(\frac{xy}{x+y}\right)^{\gamma}e^{-(x+y)}=\frac{\Gamma(\alpha+\beta+ \gamma)\Gamma(\alpha+\gamma)\Gamma(\beta+\gamma)}{\Gamma(\alpha+\beta+2\gamma)}\]
and substituting \(x=4\pi mN(p)t\), \(y=4\pi mN(q)y_{W}\), \(\alpha=k-3\), \(\beta=k-1\), \(\gamma=s\).
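For completeness, the two-variable integral identity used above can be verified by the substitution \(x=uv\), \(y=u(1-v)\) (with Jacobian \(u\)); this is a routine Beta-function computation and not part of the original argument:

\[\int_{0}^{\infty}\!\!\int_{0}^{\infty}x^{\alpha}y^{\beta}\left(\frac{xy}{x+y}\right)^{\gamma}e^{-(x+y)}\,\frac{dx}{x}\frac{dy}{y}=\int_{0}^{\infty}u^{\alpha+\beta+\gamma-1}e^{-u}\,du\int_{0}^{1}v^{\alpha+\gamma-1}(1-v)^{\beta+\gamma-1}\,dv=\Gamma(\alpha+\beta+\gamma)\,\frac{\Gamma(\alpha+\gamma)\Gamma(\beta+\gamma)}{\Gamma(\alpha+\beta+2\gamma)}.\]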
Let us now deal with the second part of the integral. We want to compute
\[\int_{\Gamma_{1}\backslash\mathbb{H}_{1}}\int_{\Gamma_{2}\backslash\mathbb{H}_ {2}}\sum_{\gamma\in\mathcal{C}_{2,1}\backslash\Gamma_{2}}j(\gamma,Z)^{-k}F_{W} \left(\gamma\langle Z\rangle_{*}\right)\left(\frac{\delta(\gamma\langle Z \rangle)}{\delta(\gamma\langle Z\rangle_{*})}\right)^{s}\overline{G(Z)h(W)} \delta(Z)^{k}\delta(W)^{k}d^{*}Zd^{*}W\]
and using again the automorphy condition for \(G\), we obtain by unfolding the integral
\[\int_{\Gamma_{1}\backslash\mathbb{H}_{1}}\int_{C_{2,1}\backslash\mathbb{H}_{2}}F_ {W}(Z_{*})\overline{G(Z)h(W)}\delta(Z)^{k+s}\delta(Z_{*})^{-s}d^{*}Zd^{*}W\]
We now write \(Z=\begin{pmatrix}\tau&z_{1}\\ z_{2}&\omega\end{pmatrix}\) and rewrite the inner integral as
\[\int_{P_{2,1}\backslash\mathbb{H}_{2}}F\left(\begin{pmatrix}\tau&0\\ 0&W\end{pmatrix}\right)\overline{G(Z)}\delta(Z)^{k+s}y_{\tau}^{-s}d^{*}Z\]
and by using the Fourier-Jacobi expansions and the fundamental domains mentioned above we get
\[\int_{\mathcal{F}^{J}}d\tau dz_{1}dz_{2}\int_{y_{\omega}>|z_{1}-\overline{z_{ 2}}|^{2}/4y_{r}}\int_{0}^{1}dx_{\omega}\sum_{m=1}^{\infty}\phi_{m}(\tau,0,0)e^ {2\pi imW}\sum_{n=1}^{\infty}\overline{\psi_{m}(\tau,z_{1},z_{2})}e^{-2\pi im \overline{\omega}}\delta(Z)^{k+s-4}y_{\tau}^{-s}\]
and we can see that this is zero by calculating the integral over \(x_{\omega}\), i.e.
\[\int_{0}^{1}e^{-2\pi imx_{\omega}}dx_{\omega}=0\]
Finally, we want to show that the first part of the integral is zero. We have
\[\langle\langle E_{1}(W;s)F(Z),G(Z)\rangle,h(W)\rangle=\langle F(Z),G(Z) \rangle\langle E_{1}(W;s),h(W)\rangle\]
and we will show the second inner product is zero. We have
\[\langle E_{1}(W;s),h(W)\rangle=\int_{\Gamma_{1}\backslash\mathbb{H}_{1}}\sum_ {\gamma\in C_{1,0}\backslash\Gamma_{1}}j(\gamma,W)^{-k}\delta(\gamma W)^{s} \overline{h(W)}\delta(W)^{k}d^{*}W\]
and by the usual unfolding trick
\[\int_{C_{1,0}\backslash\mathbb{H}_{1}}\delta(W)^{k+s}\overline{h(W)}d^{*}W= \int_{x=0}^{1}\int_{y=0}^{\infty}\sum_{n=1}^{\infty}\overline{a_{n}}e^{-2\pi in (x-iy)}y^{k-2}dxdy=0\]
by looking at the integral
\[\int_{0}^{1}e^{-2\pi inx}dx=0\]
for \(n\geq 1\). Combining all the above, the result follows.
## 5. Inert primes
This is the case which is closer to the situation considered by Heim [13]. Most of the ideas below are based on similar ideas appearing in Heim's work.
### Hecke operators and weak rationality theorems
Throughout this section, \(p\) is assumed to be a rational prime which remains prime in \(\mathcal{O}_{K}\).
Let us make a list of Hecke operators and relations between them that we are going to use in this case.
* \(T_{p}=\Gamma_{2}\mathrm{diag}(1,1,p,p)\Gamma_{2}\).
* \(T_{1,p}=\Gamma_{2}\mathrm{diag}(1,p,p^{2},p)\Gamma_{2}\).
* \(T^{J}(p)=\Gamma_{1,1}\mathrm{diag}(1,p,p^{2},p)\Gamma_{1,1}\).
* \(\Lambda_{-}(p)=\Gamma_{1,1}\mathrm{diag}(p,p^{2},p,1)\Gamma_{1,1}=\Gamma_{1,1 }\mathrm{diag}(p,p^{2},p,1)\).
* \(\Lambda_{+}(p)=\Gamma_{1,1}\mathrm{diag}(p,1,p,p^{2})\Gamma_{1,1}\).
* \(T_{-}(p)=\Gamma_{1,1}\mathrm{diag}(1,p,p,1)\Gamma_{1,1}\).
* \(T_{+}(p)=\Gamma_{1,1}\mathrm{diag}(1,1,p,p)\Gamma_{1,1}\).
* \(\nabla_{p}=\sum_{a\in\mathbb{Z}/p\mathbb{Z}}\Gamma_{1,1}\begin{pmatrix}p&0&0&0\\ 0&p&0&a\\ 0&0&p&0\\ 0&0&0&p\end{pmatrix}=\sum_{a\in\mathbb{Z}/p\mathbb{Z}}\nabla_{a}\)
* \(\Delta_{p}=\Gamma_{1,1}\begin{pmatrix}p&0&0&0\\ 0&p&0&0\\ 0&0&p&0\\ 0&0&0&p\end{pmatrix}\Gamma_{1,1}=\Gamma_{1,1}\begin{pmatrix}p&0&0&0\\ 0&p&0&0\\ 0&0&p&0\\ 0&0&0&p\end{pmatrix}\)
Also, for any operator \(X(p)\), we write \(X^{r}(p)\) to denote \(\Delta_{p}^{-1}X(p)\).
We start with giving the images of some of the above operators under the Satake mapping, in the form of the following proposition:
**Proposition 5.1**.: Let \(\Phi\) denote the Satake mapping for the inert prime \(p\). We have
* \(\Phi(T_{p})=x_{0}+p^{-1}x_{0}x_{1}+p^{-1}x_{0}x_{2}+p^{-2}x_{0}x_{1}x_{2}=x_{0} (1+p^{-1}x_{1})(1+p^{-1}x_{2})\).
* \(\Phi(T_{1,p})=p^{-2}x_{0}^{2}x_{1}+p^{-2}x_{0}^{2}x_{2}+p^{-4}x_{0}^{2}x_{1}^{2}x_{2}+p^{-4}x_{0}^{2}x_{1}x_{2}^{2}+p^{-6}(p^{2}+1)(p-1)x_{0}^{2}x_{1}x_{2}\).
* \(\Phi(\Delta_{p})=p^{-6}x_{0}^{2}x_{1}x_{2}\).
Proof.: Using the decomposition given on p. 677 of Krieg's paper [16], or comparing the decomposition of [8, Lemma 3.6] with [14, Theorem 1], the proposition follows.
Let now \(D_{p}^{(2)}(X)=Z_{p}^{(2)}(p^{-3}X)\). We will now give the form and factorization of \(D_{p}^{(2)}(X)\).
**Proposition 5.2**.: We have
\[D_{p}^{(2)}(X)=1-B_{1}X+B_{2}X^{2}-B_{1}X^{3}+X^{4}\]
where
\[B_{1} =p^{-3}\Delta_{p}^{-1}(T_{1,p}-(p^{2}+1)(p-1)\Delta_{p})\] \[B_{2} =p^{-4}\Delta_{p}^{-1}(T_{p}^{2}-2pT_{1,p}-2p(p^{2}-p+1)\Delta_{p})\]
Proof.: This follows by direct verification, after applying the Satake isomorphism. We remind the reader here that \(Z_{p}^{(2)}(X)=\Phi^{-1}(z_{p}^{(2)}(X))\), where
\[z_{p}^{(2)}(X)=\prod_{i=1}^{2}(1-p^{4}x_{i,p}^{-1}X)(1-p^{2}x_{i,p}X)\]
This also gives the \(\Phi-\)image of \(D_{p}^{(2)}\).
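As an illustration (a direct check, writing \(x_{i}\) for \(x_{i,p}\)), the linear coefficient can be verified using Proposition 5.1:
\[\Phi(B_{1})=p^{-3}\Phi(\Delta_{p})^{-1}\left(\Phi(T_{1,p})-(p^{2}+1)(p-1)\Phi(\Delta_{p})\right)=p(x_{1}^{-1}+x_{2}^{-1})+p^{-1}(x_{1}+x_{2}),\]
which is precisely minus the coefficient of \(X\) in \(z_{p}^{(2)}(p^{-3}X)=\prod_{i=1}^{2}(1-px_{i}^{-1}X)(1-p^{-1}x_{i}X)\).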
We now have the following proposition regarding the factorisation of \(D_{p}^{(2)}\).
**Proposition 5.3**.: We have the following factorisation over \(H_{p}^{1,1}[X]\)
\[D_{p}^{(2)}(X)=(1-p^{-3}\Delta_{p}^{-1}\Lambda_{-}(p)X)S^{(2)}(X)(1-p^{-3} \Delta_{p}^{-1}\Lambda_{+}(p)X)\]
where
\[S^{(2)}(X)=S_{0}-S_{1}X+S_{2}X^{2}-S_{3}X^{3}\]
with
* \(S_{0}=1\)
* \(S_{1}=p^{-3}(T^{J,r}+\nabla_{p}^{r}-p(p^{2}-p+1))\)
* \(S_{2}=p^{-4}\Delta_{p}^{-1}T_{+}T_{-}-p^{-3}T^{J,r}-2p^{-3}\nabla_{p}^{r}-p^{ -2}(p-2)\)
* \(S_{3}=p^{-3}(\nabla_{p}^{r}-p)\)
Proof.: This can be verified directly, by using the following identities/relations which can be found in [8], or be proved directly.
* \(\epsilon(T_{1,p})=T^{J}(p)+\Lambda_{-}(p)+\Lambda_{+}(p)+\nabla_{p}-\Delta_{p}\)
* \(\epsilon(T_{p})=T_{-}(p)+T_{+}(p)\)
* \(T_{-}(p)T_{+}(p)=pT^{J}(p)+(p^{3}+p^{4})\Delta_{p}\)
* \(\Lambda_{-}(p)T_{+}(p)=p^{3}\Delta_{p}T_{-}(p)\)
* \(T_{-}(p)\Lambda_{+}(p)=p^{3}\Delta_{p}T_{+}(p)\)
* \(\Lambda_{-}(p)\Lambda_{+}(p)=p^{6}\Delta_{p}^{2}\)
* \(\Lambda_{-}^{r}\nabla_{p}^{r}=p\Lambda_{-}^{r}\)
* \(\nabla_{p}^{r}\Lambda_{+}^{r}=p\Lambda_{+}^{r}\)
where \(\epsilon\) denotes the embedding of \(H(\Gamma_{2},S^{2})\) to \(H(\Gamma_{1,1},S^{1,1})\), as described in 3.1.
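For instance, comparing the coefficients of \(X\) on both sides of the claimed factorisation: the right-hand side contributes
\[p^{-3}\Delta_{p}^{-1}\Lambda_{-}(p)+S_{1}+p^{-3}\Delta_{p}^{-1}\Lambda_{+}(p)=p^{-3}\Delta_{p}^{-1}\left(\Lambda_{-}(p)+\Lambda_{+}(p)+T^{J}(p)+\nabla_{p}-p(p^{2}-p+1)\Delta_{p}\right),\]
and since \(1+(p^{2}+1)(p-1)=p(p^{2}-p+1)\), the first identity \(\epsilon(T_{1,p})=T^{J}(p)+\Lambda_{-}(p)+\Lambda_{+}(p)+\nabla_{p}-\Delta_{p}\) shows that this equals \(p^{-3}\Delta_{p}^{-1}(T_{1,p}-(p^{2}+1)(p-1)\Delta_{p})=B_{1}\), as required.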
We now have the following weak rationality theorems:
**Proposition 5.4**.: Let \(F\in S_{2}^{k}\) be a Hecke eigenform and \(m\geq 1\). Then
\[Q_{p,F}^{(2)}(X)\sum_{\delta\geq 0}\phi_{mp^{\delta}}|T_{+}(p^{\delta})X^{ \delta}=\left(\phi_{m}-\phi_{\frac{m}{p}}|T_{-}(p)X+p\phi_{\frac{m}{p^{2}}}| \Lambda_{-}(p)X^{2}\right)\mid(1+p(\nabla_{p}-p\Delta_{p})X^{2})\]
where \(\phi_{m}\mid(1+p(\nabla_{p}-p\Delta_{p})X^{2})=\begin{cases}\phi_{m}&\text{if }p|m \\ (1-p^{2k-6}X^{2})\phi_{m}&\text{otherwise}\end{cases}\)
Proof.: We follow the same proof of [12, Corollary]. Then, the result follows from [8, Proposition 3.2]. We will just show the computations for the last claim of our proposition. We have
\[(F|\Delta_{p})(Z)=(p^{2})^{2k-4}(p^{2})^{-k}F(Z)=p^{2k-8}F(Z)\]
and so \(\phi_{m}\mid\Delta_{p}=p^{2k-8}\phi_{m}\). Also,
\[(F|\nabla a)(Z)=(p^{2})^{2k-4}(p^{2})^{-k}F\left(\begin{pmatrix}\tau&z_{1}\\ z_{2}&\tau^{\prime}+a/p\end{pmatrix}\right)=p^{2k-8}\sum_{m=1}^{\infty}\phi_{ m}(\tau,z_{1},z_{2})e^{2\pi im\tau^{\prime}}e^{2\pi ima/p}\]
so \(\phi_{m}\mid\nabla_{p}=\left(p^{2k-8}\sum_{a=0}^{p-1}e^{2\pi ima/p}\right)\phi_{m}=\begin{cases}0&\text{if }(m,p)=1\\ p^{2k-7}\phi_{m}&\text{otherwise}\end{cases}\)
from which the result follows.
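Explicitly, combining the two computations: if \((m,p)=1\), then
\[\phi_{m}\mid(1+p(\nabla_{p}-p\Delta_{p})X^{2})=\phi_{m}+p\left(0-p\cdot p^{2k-8}\right)X^{2}\phi_{m}=(1-p^{2k-6}X^{2})\phi_{m},\]
while if \(p\mid m\) the two contributions \(p\cdot p^{2k-7}\) and \(p\cdot p\cdot p^{2k-8}\) cancel, so \(\phi_{m}\) is left unchanged.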
**Proposition 5.5**.: Let \(F\in S_{2}^{k}\) be a Hecke eigenform and \(m\geq 1\). Then
\[D_{p,F}^{(2)}(X)\sum_{\delta\geq 0}\phi_{mp^{2\delta}}\mid_{k}(\Delta_{p^{ \delta}}^{-1}\Lambda_{+}(p^{\delta}))(p^{-3}X)^{\delta}=\phi_{m}\mid_{k}S^{(2 )}(X)-\phi_{m/p^{2}}\mid_{k}(\Delta_{p}^{-1}\Lambda_{-}(p)S^{(2)}(X))p^{-3}X\]
Proof.: This follows from the proposition 5.3 using the same techniques as in [12, Corollary].
Following Heim [13], we note that the operators \(T_{+}(p),\Lambda_{+}(p),\nabla_{p}^{r}\) act as zero on Fourier-Jacobi forms of index coprime to \(p\). This leads to the definition of the following polynomials:
\[S^{(2)}(X)^{\text{factor}}=1-(p^{-3}T^{J,r}-p^{-2}+p^{-1})X+p^{- 2}X^{2}\] \[S^{(2)}(X)^{\text{prim}}=1-(p^{-3}T^{J,r}-p^{-2}(p^{2}-p+1))X+(-p ^{-3}T^{J,r}-p^{-2}(p-2))X^{2}+p^{-2}X^{3}\] \[=S^{(2)}(X)^{\text{factor}}(1+X)\]
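The last equality may be checked by direct expansion: the coefficient of \(X\) in \(S^{(2)}(X)^{\text{factor}}(1+X)\) is \(1-(p^{-3}T^{J,r}-p^{-2}+p^{-1})=-(p^{-3}T^{J,r}-p^{-2}(p^{2}-p+1))\), and the coefficient of \(X^{2}\) is \(-(p^{-3}T^{J,r}-p^{-2}+p^{-1})+p^{-2}=-p^{-3}T^{J,r}-p^{-2}(p-2)\), matching the coefficients of \(S^{(2)}(X)^{\text{prim}}\).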
Hence, \(\phi|S^{(2)}(X)=\phi|S^{(2)}(X)^{\text{prim}}\) if \(\phi\) has index relatively prime to \(p\). We now have the following lemma:
**Lemma 5.6**.: \(\phi|S^{(2)}(X)T_{+}(p)=\phi|T_{+}(p)S^{(2)}(X)^{\text{factor}}\) if \(\phi\) has index \(p\).
Proof.: The proof follows by the following results:
* \(\phi\mid\nabla_{p}^{r}=p\phi\)
* \(\phi\mid T_{+}(p)\nabla_{p}^{r}=0\) because \(\phi\mid T_{+}(p)\) will have index \(1\).
* \(\phi\mid\big{[}T^{J,r},T_{+}(p)\big{]}=\phi\mid(p^{3}T_{+}(p)-\nabla_{p}^{r}T_{+}(p))=(p^{3}-p)\phi\mid T_{+}(p)\) by the first point.
Here \(\big{[}T^{J,r},T_{+}(p)\big{]}:=T^{J,r}T_{+}(p)-T_{+}(p)T^{J,r}\) denotes the commutator. We will now give the proof of the third point. By the proof of 5.3, we have
* \(\epsilon(T_{1,p})=T^{J}(p)+\Lambda_{-}(p)+\Lambda_{+}(p)+\nabla_{p}-\Delta_{p}\)
* \(\epsilon(T_{p})=T_{-}(p)+T_{+}(p)\)
and by looking at the elements whose product has signature \(p\), we obtain
\[T^{J}(p)T_{+}(p)+\Lambda_{+}(p)T_{-}(p)+(\nabla_{p}-\Delta_{p})T_{+}(p)=\]
\[T_{+}(p)T^{J}(p)+T_{-}(p)\Lambda_{+}(p)+T_{+}(p)(\nabla_{p}-\Delta_{p})\]
from which the result follows as \(\phi\mid\Lambda_{+}(p)=0\) for \(\phi\) of index \(p\) and \(T_{-}(p)\Lambda_{+}(p)=p^{3}\Delta_{p}T_{+}(p)\)
### Calculation of the Dirichlet Series
In what follows, we will assume that \(F\) is in the Maass space, as we have defined in 3.7.
From now on we will always assume that \(h\) has totally real Fourier coefficients. This is a technical assumption which could be lifted. We can rewrite \(D_{F,G,h}\) as
\[D_{F,G,h}(s) =4\sum_{l,\epsilon,m}\langle m^{3-k}\phi|T_{-}(m)U_{l},\psi_{mN(l )}\rangle_{\mathcal{A}}a_{mN(\epsilon)}N(l)^{-(k+s-3)}N(\epsilon)^{-(k+s-1)}m ^{-(2k+s-4)}\] \[=4\beta_{k}\sum_{l,\epsilon,m}\langle\tilde{\phi}_{1}|T_{-}(m)U_ {l},\tilde{\psi}_{mN(l)}\rangle_{\mathcal{A}}a_{mN(\epsilon)}N(l)^{-s}N( \epsilon)^{-(k+s-1)}m^{-(2k+s-4)}\]
with \(l,\epsilon\in\mathbb{Z}[i]\) coprime with their real parts positive and imaginary parts non-negative and \(m\in\mathbb{N}\). Then, if \(\phi\) is a Fourier-Jacobi form, by the above, we get
\[\phi|_{k}\Lambda_{-}(p)=p^{3k-8}\tilde{\phi}\left(\begin{pmatrix}\tau&pz_{1} \\ pz_{2}&p^{2}\tau^{\prime}\end{pmatrix}\right)e^{-2\pi mp^{2}\tau^{\prime}}=p^{3 k-8}\phi(\tau,pz_{1},pz_{2})=p^{3k-8}\phi|_{k}U_{p}\]
We now define the \(p-\)part of the Dirichlet series
\[D_{F,G,h}^{(p)}(s) =\sum_{l,\epsilon,m\geq 0}\langle\tilde{\phi}_{1}|T_{-}(p^{m})U_{ p^{l}},\tilde{\psi}_{p^{m+2l}}\rangle_{\mathcal{A}}a_{p^{m+2\epsilon}}p^{-2 sl}p^{-2(k+s-1)\epsilon}p^{-(2k+s-4)m}\] \[=\sum_{l,\epsilon,m\geq 0}\langle\tilde{\phi}_{1}|T_{-}(p^{m}) \Lambda_{-}(p^{l}),\tilde{\psi}_{p^{m+2l}}\rangle_{\mathcal{A}}a_{p^{m+2 \epsilon}}p^{-(3k+2s-8)l}p^{-2(k+s-1)\epsilon}p^{-(2k+s-4)m}\]
together with the condition that \(\min(l,\epsilon)=0\). The last line is obtained using the relation between \(U_{p},\Lambda_{-}(p)\).
Now, with respect to the inner product of Jacobi forms, we have by [8, Proposition 5.1] that \(\Lambda_{-}^{\text{adj}}(p)=p^{2k-6}\Lambda_{+}(p)\) and \(T_{-}^{\text{adj}}(p)=p^{k-3}T_{+}(p)\). This then gives that the adjoint of \(\Lambda_{-}(p)\) is \(\Lambda_{+}(p)\) for the inner product on \(P-\)forms and similarly the \(P-\)form adjoint for \(T_{-}\) is \(T_{+}\).
Let now \(X=p^{-(k+s-1)}\) and \(N=p^{k-1}\). Consider the Satake parameters \(\alpha_{1},\alpha_{2}\) of \(h\) such that \(\alpha_{1}+\alpha_{2}=a_{p}\) and \(\alpha_{1}\alpha_{2}=p^{k-1}\). Let also \(X_{i}=\alpha_{i}p^{-(2k+s-4)},i=1,2\).
We write
\[D_{F,G,h}^{(p)}(s)=D_{(\epsilon)}(s)+D_{(l)}(s)-D_{(\epsilon,l)}(s)\]
where the corresponding index means that this variable (or both) is \(0\). Using the fact that
\[a_{p^{m}}=\frac{\alpha_{1}^{m+1}-\alpha_{2}^{m+1}}{\alpha_{1}-\alpha_{2}}\]
and properties for the adjoint operators we mentioned above, we obtain:
\[D_{(\epsilon)}(s)(\alpha_{1}-\alpha_{2}) =\alpha_{1}\sum_{l,m=0}^{\infty}\langle\tilde{\phi}_{1},\tilde{\psi}_{p^{m+2l}}|T_{+}(p^{m})\Lambda_{+}(p^{l})\rangle_{\mathcal{A}}p^{-(3k+2s-8)l}(\alpha_{1}p^{-(2k+s-4)})^{m}\] \[-\alpha_{2}\sum_{l,m=0}^{\infty}\langle\tilde{\phi}_{1},\tilde{\psi}_{p^{m+2l}}|T_{+}(p^{m})\Lambda_{+}(p^{l})\rangle_{\mathcal{A}}p^{-(3k+2s-8)l}(\alpha_{2}p^{-(2k+s-4)})^{m}\] \[D_{(l)}(s)(\alpha_{1}-\alpha_{2}) =\alpha_{1}\sum_{\epsilon,m=0}^{\infty}\langle\tilde{\phi}_{1},\tilde{\psi}_{p^{m}}|T_{+}(p^{m})\rangle_{\mathcal{A}}(\alpha_{1}p^{-(k+s-1)})^{2\epsilon}(\alpha_{1}p^{-(2k+s-4)})^{m}\] \[-\alpha_{2}\sum_{\epsilon,m=0}^{\infty}\langle\tilde{\phi}_{1},\tilde{\psi}_{p^{m}}|T_{+}(p^{m})\rangle_{\mathcal{A}}(\alpha_{2}p^{-(k+s-1)})^{2\epsilon}(\alpha_{2}p^{-(2k+s-4)})^{m}\] \[D_{(\epsilon,l)}(s)(\alpha_{1}-\alpha_{2}) =\alpha_{1}\sum_{m=0}^{\infty}\langle\tilde{\phi}_{1},\tilde{\psi}_{p^{m}}|T_{+}(p^{m})\rangle_{\mathcal{A}}(\alpha_{1}p^{-(2k+s-4)})^{m}\] \[-\alpha_{2}\sum_{m=0}^{\infty}\langle\tilde{\phi}_{1},\tilde{\psi}_{p^{m}}|T_{+}(p^{m})\rangle_{\mathcal{A}}(\alpha_{2}p^{-(2k+s-4)})^{m}\]
**Proposition 5.7**.: We have
\[D_{(l)}(s)-D_{(\epsilon,l)}(s)=\frac{\langle\tilde{\phi}_{1},\tilde{\psi}_{1}\rangle_{\mathcal{A}}}{\alpha_{1}-\alpha_{2}}\left(\frac{\alpha_{1}^{3}X^{2}}{Q_{p,G}^{(2)}(X_{1})}-\frac{\alpha_{2}^{3}X^{2}}{Q_{p,G}^{(2)}(X_{2})}\right)\]
Proof.: This follows by the above and Proposition 5.4 (applied to \(G\)) with \(m=1\). We have
\[\sum_{m=0}^{\infty}\langle\tilde{\phi}_{1},\tilde{\psi}_{p^{m}}|T_{+}(p^{m})X_{1}^{m}\rangle_{\mathcal{A}}=\langle\tilde{\phi}_{1},Q_{p,G}^{(2)}(X_{1})^{-1}(1-p^{2k-6}X_{1}^{2})\tilde{\psi}_{1}\rangle_{\mathcal{A}}=(1-p^{2k-6}X_{1}^{2})Q_{p,G}^{(2)}(X_{1})^{-1}\langle\tilde{\phi}_{1},\tilde{\psi}_{1}\rangle_{\mathcal{A}}\]
Also,
\[\sum_{\epsilon=0}^{\infty}(\alpha_{1}p^{-(k+s-1)})^{2\epsilon}=\frac{1}{1- \alpha_{1}^{2}p^{-2(k+s-1)}}\]
So, the first part of the difference we are interested in is
\[\alpha_{1}\frac{\langle\tilde{\phi}_{1},\tilde{\psi}_{1}\rangle_{\mathcal{A}}}{\alpha_{1}-\alpha_{2}}Q_{p,G}^{(2)}(X_{1})^{-1}(1-p^{2k-6}X_{1}^{2})\left(\frac{1}{1-\alpha_{1}^{2}p^{-2(k+s-1)}}-1\right)=\frac{\langle\tilde{\phi}_{1},\tilde{\psi}_{1}\rangle_{\mathcal{A}}}{\alpha_{1}-\alpha_{2}}\frac{\alpha_{1}^{3}X^{2}}{Q_{p,G}^{(2)}(X_{1})}\]
and similarly for the second part.
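In the simplification above we used the identity \(p^{2k-6}X_{1}^{2}=\alpha_{1}^{2}X^{2}\) (indeed \(p^{2k-6}X_{1}^{2}=p^{2k-6}\alpha_{1}^{2}p^{-2(2k+s-4)}=\alpha_{1}^{2}p^{-2(k+s-1)}=\alpha_{1}^{2}X^{2}\)), so that
\[(1-p^{2k-6}X_{1}^{2})\left(\frac{1}{1-\alpha_{1}^{2}p^{-2(k+s-1)}}-1\right)=(1-\alpha_{1}^{2}X^{2})\cdot\frac{\alpha_{1}^{2}X^{2}}{1-\alpha_{1}^{2}X^{2}}=\alpha_{1}^{2}X^{2};\]
the same identity is used repeatedly in the proofs below.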
**Proposition 5.8**.: Let \(Y=p^{2}NX^{2}\) and \(l\geq 0\). Then,
\[\sum_{m=0}^{\infty}\tilde{\psi}_{p^{m+2l}}|_{k}T_{+}(p^{m})\Lambda_{+}(p^{l})X_{1}^{m}(X^{2}N^{-1}p^{5})^{l}= Q_{p,G}^{(2)}(X_{1})^{-1}(\tilde{\psi}_{p^{2l}}-\tilde{\psi}_{p^{2l-1}}|_{k}T_{-}(p)X_{1}+p\tilde{\psi}_{p^{2l-2}}|\Lambda_{-}(p)X_{1}^{2})\] \[|_{k}\left((1+p(\nabla_{p}-p\Delta_{p})X_{1}^{2})\Delta_{p^{l}}^{-1}\Lambda_{+}(p^{l})(p^{-3}Y)^{l}\right)\]
Proof.: The proof just uses Proposition 5.4 (applied to \(G\)), together with the fact that \(F|_{k}\Delta_{p^{l}}=(p^{2k-8})^{l}F\).
**Proposition 5.9**.: We have
\[\alpha_{1}Q_{p,G}^{(2)}(X_{1})^{-1}\sum_{l=0}^{\infty}\langle\tilde{\phi}_{1}, \tilde{\psi}_{p^{2l}}|_{k}(1+p(\nabla_{p}-p\Delta_{p})X_{1}^{2})\Delta_{p^{l}}^ {-1}\Lambda_{+}(p^{l})(p^{-3}Y)^{l}\rangle_{\mathcal{A}}-\]
\[-\alpha_{2}Q_{p,G}^{(2)}(X_{2})^{-1}\sum_{l=0}^{\infty}\langle\tilde{\phi}_{1}, \tilde{\psi}_{p^{2l}}|_{k}(1+p(\nabla_{p}-p\Delta_{p})X_{2}^{2})\Delta_{p^{l}}^ {-1}\Lambda_{+}(p^{l})(p^{-3}Y)^{l}\rangle_{\mathcal{A}}\]
\[=\left(\alpha_{1}Q_{p,G}^{(2)}(X_{1})^{-1}-\alpha_{2}Q_{p,G}^{(2)}(X_{2})^{-1} \right)\frac{\langle\tilde{\phi}_{1};\tilde{\psi}_{1}|_{k}S^{(2)}(Y)\rangle_{ \mathcal{A}}}{D_{p,G}^{(2)}(Y)}+\left(\frac{\alpha_{2}^{3}X^{2}}{Q_{p,G}^{(2)}( X_{2})}-\frac{\alpha_{1}^{3}X^{2}}{Q_{p,G}^{(2)}(X_{1})}\right)\langle\tilde{\phi}_{1}, \tilde{\psi}_{1}\rangle_{\mathcal{A}}\]
Proof.: We first observe \(\tilde{\psi}_{p^{2l}}|_{k}(1+p(\nabla_{p}-p\Delta_{p})X_{1}^{2})=\begin{cases}( 1-p^{2k-6}X_{1}^{2})\tilde{\psi}_{1}&\text{if }l=0\\ \tilde{\psi}_{p^{2l}}&\text{if }l\geq 1\end{cases}\)
by using the result of Proposition 5.4. Hence, by Proposition 5.5 (applied to \(G\) with \(m=1\)) we obtain, as an identity of Fourier-Jacobi forms,
\[\sum_{l=0}^{\infty}\tilde{\psi}_{p^{2l}}|_{k}(1+p(\nabla_{p}-p\Delta_{p})X_{1}^{2})\Delta_{p^{l}}^{-1}\Lambda_{+}(p^{l})(p^{-3}Y)^{l}=(1-p^{2k-6}X_{1}^{2})\tilde{\psi}_{1}+\sum_{l=1}^{\infty}\tilde{\psi}_{p^{2l}}|_{k}\Delta_{p^{l}}^{-1}\Lambda_{+}(p^{l})(p^{-3}Y)^{l}=\]
\[=\sum_{l=0}^{\infty}\tilde{\psi}_{p^{2l}}|_{k}\Delta_{p^{l}}^{-1}\Lambda_{+}(p ^{l})(p^{-3}Y)^{l}-p^{2k-6}X_{1}^{2}\tilde{\psi}_{1}=\tilde{\psi}_{1}|_{k}S^{( 2)}(Y)D_{p,G}^{(2)}(Y)^{-1}-\alpha_{1}^{2}X^{2}\tilde{\psi}_{1}\]
and from this the result follows.
**Proposition 5.10**.: We have
\[\alpha_{1}Q_{p,G}^{(2)}(X_{1})^{-1}\sum_{l=0}^{\infty}p\langle\tilde{\phi}_{1 },\tilde{\psi}_{p^{2l-2}}|_{k}\Lambda_{-}(p)X_{1}^{2}(1+p(\nabla_{p}-p\Delta_{ p})X_{1}^{2})\Delta_{p^{l}}^{-1}\Lambda_{+}(p^{l})(p^{-3}Y)^{l}\rangle_{ \mathcal{A}}-\]
\[-\alpha_{2}Q_{p,G}^{(2)}(X_{2})^{-1}\sum_{l=0}^{\infty}p\langle\tilde{\phi}_{1 },\tilde{\psi}_{p^{2l-2}}|_{k}\Lambda_{-}(p)X_{2}^{2}(1+p(\nabla_{p}-p\Delta_{ p})X_{2}^{2})\Delta_{p^{l}}^{-1}\Lambda_{+}(p^{l})(p^{-3}Y)^{l}\rangle_{ \mathcal{A}}\]
\[=\left(\frac{\alpha_{1}^{3}Np^{4}X^{4}}{Q_{p,G}^{(2)}(X_{1})}-\frac{\alpha_{2} ^{3}Np^{4}X^{4}}{Q_{p,G}^{(2)}(X_{2})}\right)\frac{\langle\tilde{\phi}_{1}, \tilde{\psi}_{1}|_{k}S^{(2)}(Y)\rangle_{\mathcal{A}}}{D_{p,G}^{(2)}(Y)}\]
Proof.: We use the identities \(\Lambda_{-}(p)(\nabla_{p}-p\Delta_{p})=0\) and \(\Lambda_{-}(p)\Lambda_{+}(p)=p^{6}(\Delta_{p})^{2}\). We have
\[\sum_{l=0}^{\infty}p\tilde{\psi}_{p^{2l-2}}|_{k}\Lambda_{-}(p)X_{1}^{2}(1+p( \nabla_{p}-p\Delta_{p})X_{1}^{2})\Delta_{p^{l}}^{-1}\Lambda_{+}(p^{l})(p^{-3} Y)^{l}\]
\[=p\sum_{l=0}^{\infty}\tilde{\psi}_{p^{2l-2}}|_{k}\Lambda_{-}(p)X_{1}^{2}\Delta_{p^{l}}^{-1}\Lambda_{+}(p^{l})(p^{-3}Y)^{l}=p\sum_{l=0}^{\infty}\tilde{\psi}_{p^{2l-2}}|_{k}p^{6}(\Delta_{p})^{2}\Delta_{p^{l}}^{-1}\Lambda_{+}(p^{l-1})(p^{-3}Y)^{l}X_{1}^{2}\]
\[=p^{4}\sum_{l=0}^{\infty}\tilde{\psi}_{p^{2l-2}}|\Delta_{p^{l-1}}^{-1}\Lambda_{+ }(p^{l-1})(p^{-3}Y)^{l-1}\Delta_{p}X_{1}^{2}Y=p^{2k-4}D_{p,G}^{(2)}(Y)^{-1} \tilde{\psi}_{1}|S^{(2)}(Y)X_{1}^{2}Y\]
from which the result then follows.
**Proposition 5.11**.: \[\alpha_{1}Q_{p,G}^{(2)}(X_{1})^{-1}\sum_{l=0}^{\infty}\langle\tilde{\phi}_{1}, \tilde{\psi}_{p^{2l-1}}|_{k}T_{-}(p)X_{1}(1+p(\nabla_{p}-p\Delta_{p})X_{1}^{2} )\Delta_{p^{l}}^{-1}\Lambda_{+}(p^{l})(p^{-3}Y)^{l}\rangle_{\mathcal{A}}-\]
\[-\alpha_{2}Q_{p,G}^{(2)}(X_{2})^{-1}\sum_{l=0}^{\infty}\langle\tilde{\phi}_{1}, \tilde{\psi}_{p^{2l-1}}|_{k}T_{-}(p)X_{2}(1+p(\nabla_{p}-p\Delta_{p})X_{2}^{2} )\Delta_{p^{l}}^{-1}\Lambda_{+}(p^{l})(p^{-3}Y)^{l}\rangle_{\mathcal{A}}\]
\[=\left(\frac{\alpha_{1}^{2}p^{4}X^{3}\lambda_{p}}{Q_{p,G}^{(2)}(X_{1})(1+Y)}- \frac{\alpha_{2}^{2}p^{4}X^{3}\lambda_{p}}{Q_{p,G}^{(2)}(X_{2})(1+Y)}\right) \frac{\langle\tilde{\phi}_{1},\tilde{\psi}_{1}\rangle S_{F}^{(2)}(Y)^{\text{ prim}}}{(\alpha_{1}-\alpha_{2})D_{p,G}^{(2)}(Y)}\]
Proof.: The proof is exactly the same as the proof of [13, Proposition 4.5] (using also the above claim that \(\phi|_{k}S^{(2)}(X)T_{+}(p)=\phi|_{k}T_{+}(p)S^{(2)}(X)^{\text{factor}}\) if \(\phi\) has index \(p\)). Here \(\lambda_{p}\) is the eigenvalue given by \(\tilde{\phi}_{p}|T_{+}(p)=\lambda_{p}\tilde{\phi}_{1}\).
By combining all the above, we obtain that
\[D^{(p)}_{F,G,h}(s)=\frac{\langle\tilde{\phi}_{1},\tilde{\psi}_{1}\rangle S^{(2)}_{ F}(Y)^{\text{prim}}}{(\alpha_{1}-\alpha_{2})D^{(2)}_{p,G}(Y)}\times\]
\[\times\left(\frac{\alpha_{1}}{Q^{(2)}_{p,G}(X_{1})}-\frac{\alpha_{2}}{Q^{(2)}_{p,G}(X_{2})}+\frac{\alpha_{1}^{3}Np^{4}X^{4}}{Q^{(2)}_{p,G}(X_{1})}-\frac{\alpha_{2}^{3}Np^{4}X^{4}}{Q^{(2)}_{p,G}(X_{2})}-\frac{\alpha_{1}^{2}p^{4}X^{3}\lambda_{p}}{Q^{(2)}_{p,G}(X_{1})(1+Y)}+\frac{\alpha_{2}^{2}p^{4}X^{3}\lambda_{p}}{Q^{(2)}_{p,G}(X_{2})(1+Y)}\right)\]
Let us now look at the expression in the big bracket. The numerator equals
\[((\alpha_{1}+\alpha_{1}^{3}Np^{4}X^{4})(1+Y)-\alpha_{1}^{2}p^{4}X^{3}\lambda_{ p})Q^{(2)}_{p,G}(X_{2})-((\alpha_{2}+\alpha_{2}^{3}Np^{4}X^{4})(1+Y)-\alpha_{2}^{ 2}p^{4}X^{3}\lambda_{p})Q^{(2)}_{p,G}(X_{1})\]
Here
\[Q^{(2)}_{p,G}(t)=1-\lambda_{p}t+(p\lambda_{T_{1,p}}+p(p^{3}+p^{2}-p+1)p^{2k-8} )t^{2}-p^{4}p^{2k-8}\lambda_{p}t^{3}+p^{4k-8}t^{4}\]
where \(\lambda_{T_{1,p}}\) is the eigenvalue corresponding to the operator \(T_{1,p}\). Set \(A_{2}=p\lambda_{T_{1,p}}+p(p^{3}+p^{2}-p+1)p^{2k-8}\). Performing the (lengthy) calculation and grouping in powers of \(Y\), we obtain that the above numerator equals
\[(\alpha_{1}-\alpha_{2})(1-Y)(1-Y(A_{2}p^{2}N^{-2}-2)+Y^{2}(p^{2}N^{-2}\lambda_ {p}^{2}-2A_{2}p^{2}N^{-2}+2)-Y^{3}(A_{2}p^{2}N^{-2}-2)+Y^{4})\]
\[=(\alpha_{1}-\alpha_{2})(1-Y)D^{(2)}_{p,G}(Y)\]
using the relations of equation (2). Hence,
\[D^{(p)}_{F,G,h}(s)=\frac{\langle\tilde{\phi}_{1},\tilde{\psi}_{1}\rangle_{ \mathcal{A}}S^{(2)}_{F}(Y)^{\text{factor}}(1-Y)}{Q^{(2)}_{p,G}(X_{1})Q^{(2)}_{ p,G}(X_{2})}\]
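Here the factor \((1+Y)\) appearing in the common denominator of the bracket above has been absorbed into the numerator via \(S_{F}^{(2)}(Y)^{\text{prim}}=S_{F}^{(2)}(Y)^{\text{factor}}(1+Y)\).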
Let us now explore the connection of \(S^{(2)}_{F}(Y)^{\text{factor}}\) with known \(L-\)functions.
**Proposition 5.12**.: We have
\[S^{(2)}_{F}(Y)^{\text{factor}}=L_{p}(f,k+s-2)L_{p}\left(f,k+s-2,\left(\frac{- 4}{p}\right)\right)\]
where \(f\in S_{k-1}\left(\Gamma_{0}(4),\left(\frac{-4}{\cdot}\right)\right)\) is the modular form whose Maass lift is \(F\) as in 3.8.
Proof.: We first write
\[f\mid_{k-1}T(p)=a(p)f\]
for the standard operator \(T(p)=\Gamma_{0}(4)\text{diag}(1,p)\Gamma_{0}(4)=T(1,p)\). By standard relations between Hecke operators, we then have
\[T(p^{2})=T(p)^{2}-\left(\frac{-4}{p}\right)p^{k-2}\]
where \(T(p^{2})=T(1,p^{2})+\left(\frac{-4}{p}\right)p^{k-3}\). This then implies
\[f\mid_{k-1}T(1,p^{2})=(a(p)^{2}+p^{k-2}+p^{k-3})f\]
Using now [7, Lemma 3.3] we obtain that
\[\tilde{\phi}_{1}\mid_{k}T^{J}(p)=p^{k-4}(a(p)^{2}+p^{k-2}+p^{k-3})\tilde{\phi }_{1}\]
Hence
\[S^{(2)}_{F}(Y)^{\text{factor}}=1-p^{1-k}(a(p)^{2}+2p^{k-2})Y+p^{-2}Y^{2}\]
We now define the Satake parameters of \(f\) as follows:
\[1-a(p)p^{-s}+\left(\frac{-4}{p}\right)p^{k-2}p^{-2s}=(1-\alpha_{p}p^{-s})(1- \beta_{p}p^{-s})\]
We now have \(Y=p^{2}NX^{2}\), \(N=p^{k-1}\) and \(X=p^{-(k+s-1)}\) and so we obtain
\[S^{(2)}_{F}(Y)^{\text{factor}}=1-p^{4-2k-2s}(\alpha_{p}^{2}+\beta_{p}^{2})+p^{4-2k-4s}=(1-p^{4-2k-2s}\alpha_{p}^{2})(1-p^{4-2k-2s}\beta_{p}^{2})=\]
\[=(1-p^{2-k-s}\alpha_{p})(1-p^{2-k-s}\beta_{p})(1+p^{2-k-s}\alpha_{p})(1+p^{2-k-s} \beta_{p})\] \[=L_{p}(f,k+s-2)L_{p}\left(f,k+s-2,\left(\frac{-4}{p}\right)\right)\]
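Here we used that \(p\) is inert in \(\mathbb{Z}[i]\), i.e. \(p\equiv 3\pmod 4\), so that \(\left(\frac{-4}{p}\right)=-1\) and \(\alpha_{p}\beta_{p}=-p^{k-2}\); hence \(a(p)^{2}+2p^{k-2}=(\alpha_{p}+\beta_{p})^{2}+2p^{k-2}=\alpha_{p}^{2}+\beta_{p}^{2}\), which is the identity used in passing between the two expressions above.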
Hence, in total, we obtain the following Theorem:
**Theorem 5.13**.: _Let \(F,G\in S^{k}_{2}\) and \(h\in S^{k}_{1}\) be Hecke eigenforms, with \(h\) having totally real Fourier coefficients, and \(F\) belonging to the Maass space, with corresponding \(f\in S_{k-1}\left(\Gamma_{0}(4),\left(\frac{-4}{\cdot}\right)\right)\). Let also \(\phi_{1},\psi_{1}\) denote the first Fourier-Jacobi coefficients of \(F,G\) and \(X=p^{-(k+s-1)},N=p^{k-1}\) and \(Y=p^{2}NX^{2}\). We then have for \(\mathrm{Re}(s)\) large enough_
\[D^{(p)}_{F,G,h}(s)=\frac{\langle\tilde{\phi}_{1},\tilde{\psi}_{1}\rangle_{\mathcal{A}}L_{p}(f,k+s-2)L_{p}\left(f,k+s-2,\left(\frac{-4}{p}\right)\right)(1-Y)}{Q^{(2)}_{p,G}(X_{1})Q^{(2)}_{p,G}(X_{2})}\]
**Remark**.: The factor \(Q^{(2)}_{p,G}(X_{1})Q^{(2)}_{p,G}(X_{2})\) is of degree \(8\), whereas the twisted \(L-\)function should have degree \(12\). Looking at the function \(Z^{*}_{G}(s)\) (the one with the symmetric functional equation) in the notation of [8, Theorem 5.1], we see that the Euler factor at an inert prime of this function is given by:
\[(1+p^{k-2-s})^{-1}(1-p^{k-2-s})^{-1}Q^{(2)}_{p,G}(p^{-s})^{-1}\]
In particular, if we want to introduce the missing factors in the twisted series, we need to multiply the Dirichlet series \(D_{F,G,h}(s)\) by \(L(h,s-k+2)L(h,\chi,s-k+2)\). But this will create some extra factors at the split primes. Namely, at a split prime we have the factor:
\[\prod_{i=1}^{2}(1-\alpha_{i}p^{k-2-s})^{-1}(1-\alpha_{i}p^{k-2-s})^{-1}=\prod_ {i=1}^{2}(1-\alpha_{i}p^{k-2-s})^{-2}\]
These extra factors should cancel. That is, we should find them in the numerator of the Dirichlet series for the split primes, which we consider in the next section.
## 6. Split Primes
We will now consider the case where the rational prime \(p\) splits. That is, we have that \(p=\pi\overline{\pi}\) for some prime element \(\pi\in\mathcal{O}_{K}\). Our aim in this section is to prove weak rationality theorems analogous to Propositions 5.4 and 5.5. In order to do that, we will first have to factorize the polynomials which serve as the \(p-\)factors of the standard and Gritsenko's \(L-\)function attached to a Hermitian Hecke eigenform in the parabolic Hecke algebra \(H^{1,1}_{p}\), as defined in Section 3. The factorisation of the latter polynomial has been done by Gritsenko in [8]. Our aim, therefore, is to factorize the standard Hecke polynomial. However, as we mentioned in Section 3, \(H^{1,1}_{p}\) is isomorphic to the ring of polynomials of one variable with coefficients from the Hecke ring of the parabolic subgroup
\[P_{1,2,1}=\left\{\begin{pmatrix}\pm 1&*&*\\ 0&g&*\\ 0&0&\pm 1\end{pmatrix}\in\mathrm{GL}_{4}(\mathbb{Z}_{p})\mid g\in\mathrm{GL}_{2} (\mathbb{Z}_{p})\right\}\]
Hence, we will first investigate Hecke algebras of the general linear group and then use the above isomorphism to translate the relations back to \(H^{1,1}_{p}\).
### Relations between operators in \(\mathrm{GL}_{4}\) and factorisation
For this section, we denote by \(\Gamma_{n}=\mathrm{GL}_{n}(\mathbb{Z}_{p})\) and let
\[P^{(n)}_{m_{1},\cdots,m_{l}}=\left\{\begin{pmatrix}A_{1}&*&\cdots&*\\ 0&A_{2}&\cdots&*\\ \vdots&\vdots&\vdots&\vdots\\ 0&0&\cdots&A_{l}\end{pmatrix},A_{i}\in\mathrm{GL}_{m_{i}}(\mathbb{Q}_{p}),m_{1 }+\cdots+m_{l}=n\right\}\]
be a parabolic subgroup of \(\operatorname{GL}_{n}(\mathbb{Q}_{p})\). Denote by \(\Gamma_{P}=\Gamma_{P}^{(n)}=\Gamma_{m_{1},\cdots,m_{l}}^{(n)}\), the group of \(\mathbb{Z}_{p}-\)points of \(P_{m_{1},\cdots,m_{l}}^{(n)}\). Let also \(H_{n}=H(\Gamma_{n},\operatorname{GL}_{n}(\mathbb{Q}_{p}))\) be the full Hecke algebra and \(H_{m_{1},\cdots,m_{l}}=H(\Gamma_{m_{1},\cdots,m_{l}}^{(n)},P_{m_{1},\cdots,m_{ l}}^{(n)})\) denote the corresponding parabolic Hecke algebra.
Let us explain the isomorphism we mentioned above explicitly, as Gritsenko describes in [8, Proposition 2.4].
We fix an identification \(K_{p}:=K\otimes\mathbb{Q}_{p}\cong\mathbb{Q}_{p}\times\mathbb{Q}_{p}\) and denote by \((\mu,-\mu)\) the image of the element \((2i)^{-1}\). Let also \(e=(1,0),e^{\sigma}=(0,1)\in K_{p}\). We perform a change of variables with the matrix
\[C=\begin{pmatrix}eI_{2}&-\mu e^{\sigma}I_{2}\\ \mu e^{\sigma}E_{2}&eE_{2}\end{pmatrix}=\begin{pmatrix}\begin{pmatrix}I_{2}&0 \\ 0&E_{2}\end{pmatrix},\begin{pmatrix}0&-\mu I_{2}\\ \mu E_{2}&0\end{pmatrix}\end{pmatrix}\in\operatorname{GL}_{4}(\mathbb{Q}_{p}) \times\operatorname{GL}_{4}(\mathbb{Q}_{p}).\]
where \(I_{2}=\begin{pmatrix}0&1\\ 1&0\end{pmatrix}\) and \(E_{2}\) is the identity. We remark that for \(n=2\):
\[\begin{pmatrix}I_{2}&0\\ 0&E_{2}\end{pmatrix}\operatorname{diag}(a_{1},a_{2},a_{3},a_{4})\begin{pmatrix} I_{2}&0\\ 0&E_{2}\end{pmatrix}^{-1}=\operatorname{diag}(a_{2},a_{1},a_{3},a_{4})\]
We then have that \(G_{p}^{(2)}\) (see Section 3) is identified by
\[\tilde{G}_{p}^{(2)}=\{(X,Y)\in\operatorname{GL}_{4}(\mathbb{Q}_{p})\times \operatorname{GL}_{4}(\mathbb{Q}_{p}):Y^{t}X=cE_{4},c\in\mathbb{Q}_{p}^{\times}\}\]
and \(U_{p}^{(2)}\) by
\[\tilde{U}_{p}^{(2)}=\{(\gamma,\alpha(\gamma^{-1})^{t}),\gamma\in\operatorname {GL}_{4}(\mathbb{Z}_{p}),\alpha\in\mathbb{Z}_{p}^{\times}\}\]
The identification \(G_{p}^{2}=\operatorname{GL}_{4}(\mathbb{Q}_{p})\times\mathbb{Q}_{p}^{\times}\) is obtained by projection to the first component induced by \(K_{p}=\mathbb{Q}_{p}\times\mathbb{Q}_{p}\).
The double coset of \((M,c(M^{-1})^{t})\in\tilde{G}_{p}^{(2)}\) with respect to \(\tilde{U}_{p}^{(2)}\) is determined by the double coset of \(M\) with respect to \(\operatorname{GL}_{4}(\mathbb{Z}_{p})\) and by the order \(\delta\) of the ideal \(c\mathbb{Z}_{p}\). We will denote such a coset by \((M,\delta)_{\tilde{U}_{p}^{(2)}}\). Note that here there is a choice, namely whether we have \(\pi\mapsto(pu,v)\) with \(u,v\in\mathbb{Z}_{p}^{\times}\) or \(\overline{\pi}\mapsto(pu^{\prime},v^{\prime})\) with \(u^{\prime},v^{\prime}\in\mathbb{Z}_{p}^{\times}\). In the following, we always choose the first identification.
The above identification makes the corresponding diagram of Hecke algebras commutative; see [8, Proposition 2.4].
We will now prove [9, Lemma 2], which is given without proof. We note here that the Hecke ring of interest in [9] is the so called \(p-\)ring of \(\operatorname{GL}_{n}\), but this is canonically isomorphic to the \(p-\)adic Hecke ring.
**Lemma 6.1**.: We set \(n=n_{1}+n_{2}\) with \(n_{1},n_{2}\in\mathbb{N}\) and let \(A\in\operatorname{GL}_{n_{1}}(\mathbb{Q}_{p})\), \(D\in\operatorname{GL}_{n_{2}}(\mathbb{Q}_{p})\). Then we have the coset decomposition
\[\Gamma_{n_{1},n_{2}}\begin{pmatrix}A&0\\ 0&D\end{pmatrix}\Gamma_{n_{1},n_{2}}=\sum\Gamma_{n_{1},n_{2}}\begin{pmatrix}A&B \\ 0&D\end{pmatrix}\begin{pmatrix}\mu&0\\ 0&\nu\end{pmatrix},\]
where the sum is over \(\mu\in\Gamma_{n_{1}}^{A}\setminus\Gamma_{n_{1}}\), \(\nu\in\Gamma_{n_{2}}^{D}\setminus\Gamma_{n_{2}}\) and \(B\in V(A,D)\) with \(\Gamma_{m}^{R}:=\Gamma_{m}\cap R^{-1}\Gamma_{m}R\) for a given matrix \(R\) and
\[V(A,D)=\{AY\ |\ Y\in M_{n_{1},n_{2}}(\mathbb{Z}_{p})\pmod{D^{\times}}\}\]
that is \(AY_{1}\equiv AY_{2}\pmod{D^{\times}}\) if and only if \(AY_{1}D^{-1}-AY_{2}D^{-1}\in M_{n_{1},n_{2}}(\mathbb{Z}_{p})\).
Proof.: We let \(X\in\Gamma_{n_{1},n_{2}}\begin{pmatrix}A&0\\ 0&D\end{pmatrix}\Gamma_{n_{1},n_{2}}\). That is
\[X\in\Gamma_{n_{1},n_{2}}\begin{pmatrix}A&0\\ 0&D\end{pmatrix}\begin{pmatrix}R_{1}&R_{2}\\ 0&R_{3}\end{pmatrix}=\Gamma_{n_{1},n_{2}}\begin{pmatrix}A&AR_{2}R_{3}^{-1}\\ 0&D\end{pmatrix}\begin{pmatrix}R_{1}&0\\ 0&R_{3}\end{pmatrix}\]
Setting \(Y:=R_{2}R_{3}^{-1}\) and \(\mu=R_{1}\) and \(\nu=R_{3}\) shows that \(X\) belongs to a right coset of the stated form. We now show the equivalence.
We assume that
\[\Gamma_{n_{1},n_{2}}\begin{pmatrix}A&B_{1}\\ 0&D\end{pmatrix}\begin{pmatrix}\mu_{1}&0\\ 0&\nu_{1}\end{pmatrix}=\Gamma_{n_{1},n_{2}}\begin{pmatrix}A&B_{2}\\ 0&D\end{pmatrix}\begin{pmatrix}\mu_{2}&0\\ 0&\nu_{2}\end{pmatrix}\]
This gives us,
\[\begin{pmatrix}A\mu_{1}&B_{1}\nu_{1}\\ 0&D\nu_{1}\end{pmatrix}\begin{pmatrix}A\mu_{2}&B_{2}\nu_{2}\\ 0&D\nu_{2}\end{pmatrix}^{-1}\in\Gamma_{n_{1},n_{2}}\]
or equivalently
\[\begin{pmatrix}A\mu_{1}&B_{1}\nu_{1}\\ 0&D\nu_{1}\end{pmatrix}\begin{pmatrix}(A\mu_{2})^{-1}&-(A\mu_{2})^{-1}B_{2}\nu_ {2}(D\nu_{2})^{-1}\\ 0&(D\nu_{2})^{-1}\end{pmatrix}\in\Gamma_{n_{1},n_{2}}\]
It then follows that \(\mu_{1}\mu_{2}^{-1}\in A^{-1}\Gamma_{n_{1}}A\) and \(\nu_{1}\nu_{2}^{-1}\in D^{-1}\Gamma_{n_{2}}D\). Moreover, if we write \(B_{1}=AY_{1}\) and \(B_{2}=AY_{2}\) we obtain that
\[-A\mu_{1}\mu_{2}^{-1}Y_{2}D^{-1}+AY_{1}\nu_{1}\nu_{2}^{-1}D^{-1}\in M_{n_{1},n _{2}}(\mathbb{Z}_{p}).\]
We now write \(\mu_{1}\mu_{2}^{-1}=A^{-1}\gamma_{1}A\) and \(\nu_{1}\nu_{2}^{-1}=D^{-1}\gamma_{2}D\), with \(\gamma_{1}\in\Gamma_{n_{1}}\) and \(\gamma_{2}\in\Gamma_{n_{2}}\). Since \(\mu_{1}\mu_{2}^{-1}\in A^{-1}\Gamma_{n_{1}}A\) and \(\mu_{i}\in\Gamma_{n_{1}}\), the \(\mu_{i}\) run over a set of representatives of \((A^{-1}\Gamma_{n_{1}}A\cap\Gamma_{n_{1}})\backslash\Gamma_{n_{1}}\), and similarly for the \(\nu_{i}\). Once such sets of representatives are fixed, we must have \(\mu_{1}=\mu_{2}\) and \(\nu_{1}=\nu_{2}\), since any two distinct representatives remain distinct modulo \(A^{-1}\Gamma_{n_{1}}A\cap\Gamma_{n_{1}}\). The result then follows.
We now generalise the above lemma in the situation of a parabolic where on the main diagonal we have three block matrices. That is
**Lemma 6.2**.: We set \(n=n_{1}+n_{2}+n_{3}\) with \(n_{1},n_{2},n_{3}\in\mathbb{N}\). Then, with \(A_{i}\in\operatorname{GL}_{n_{i}}(\mathbb{Q}_{p})\), we have the coset decomposition
\[\Gamma_{n_{1},n_{2},n_{3}}\begin{pmatrix}A_{1}&0&0\\ 0&A_{2}&0\\ 0&0&A_{3}\end{pmatrix}\Gamma_{n_{1},n_{2},n_{3}}=\sum\Gamma_{n_{1},n_{2},n_{3} }\begin{pmatrix}A_{1}&B_{1}&C\\ 0&A_{2}&B_{2}\\ 0&0&A_{3}\end{pmatrix}\begin{pmatrix}\mu_{1}&0&0\\ 0&\mu_{2}&0\\ 0&0&\mu_{3}\end{pmatrix},\]
where the sum is over \(\mu_{i}\in\Gamma_{n_{i}}^{A_{i}}\setminus\Gamma_{n_{i}}\), for \(i=1,2,3\), \(B_{1}\in V(A_{1},A_{2})\), \(B_{2}\in V(A_{2},A_{3})\) and \(C\in V(A_{1},A_{3})\)
Proof.: The fact that any element \(X\in\Gamma_{n_{1},n_{2},n_{3}}\begin{pmatrix}A_{1}&0&0\\ 0&A_{2}&0\\ 0&0&A_{3}\end{pmatrix}\Gamma_{n_{1},n_{2},n_{3}}\) belongs to a right coset of the above form, follows from the decomposition \(\Gamma_{n_{1},n_{2},n_{3}}=U_{n_{1},n_{2},n_{3}}T_{n_{1},n_{2},n_{3}}\) where
\[U_{n_{1},n_{2},n_{3}}=\left\{\begin{pmatrix}1&M_{n_{1},n_{2}}(\mathbb{Z}_{p}) &M_{n_{1},n_{3}}(\mathbb{Z}_{p})\\ 0&1&M_{n_{2},n_{3}}(\mathbb{Z}_{p})\\ 0&0&1\end{pmatrix}\right\},\text{ and }\,T_{n_{1},n_{2},n_{3}}=\left\{ \operatorname{diag}[\gamma_{1},\gamma_{2},\gamma_{3}],\,\,\,\gamma_{i}\in \Gamma_{i}\right\}.\]
For the equivalence relations we assume
\[\Gamma_{n_{1},n_{2},n_{3}}\begin{pmatrix}A_{1}&B_{1}&C\\ 0&A_{2}&B_{2}\\ 0&0&A_{3}\end{pmatrix}\begin{pmatrix}\mu_{1}&0&0\\ 0&\mu_{2}&0\\ 0&0&\mu_{3}\end{pmatrix}=\Gamma_{n_{1},n_{2},n_{3}}\begin{pmatrix}A_{1}&\tilde{B}_{1}&\tilde{C}\\ 0&A_{2}&\tilde{B}_{2}\\ 0&0&A_{3}\end{pmatrix}\begin{pmatrix}\tilde{\mu}_{1}&0&0\\ 0&\tilde{\mu}_{2}&0\\ 0&0&\tilde{\mu}_{3}\end{pmatrix}\]
This gives
\[\begin{pmatrix}A_{1}&B_{1}&C\\ 0&A_{2}&B_{2}\\ 0&0&A_{3}\end{pmatrix}\begin{pmatrix}\mu_{1}\tilde{\mu}_{1}^{-1}&0&0\\ 0&\mu_{2}\tilde{\mu}_{2}^{-1}&0\\ 0&0&\mu_{3}\tilde{\mu}_{3}^{-1}\end{pmatrix}\begin{pmatrix}A_{1}&\tilde{B}_{1 }&\tilde{C}\\ 0&A_{2}&\tilde{B}_{2}\\ 0&0&A_{3}\end{pmatrix}^{-1}\in\Gamma_{n_{1},n_{2},n_{3}},\]
or equivalently
\[\begin{pmatrix}A_{1}\mu_{1}\tilde{\mu}_{1}^{-1}&B_{1}\mu_{2}\tilde{\mu}_{2}^{- 1}&C\mu_{3}\tilde{\mu}_{3}^{-1}\\ 0&A_{2}\mu_{2}\tilde{\mu}_{2}^{-1}&B_{2}\mu_{3}\tilde{\mu}_{3}^{-1}\\ 0&0&A_{3}\mu_{3}\tilde{\mu}_{3}^{-1}\end{pmatrix}\begin{pmatrix}A_{1}&\tilde{ B}_{1}&\tilde{C}\\ 0&A_{2}&\tilde{B}_{2}\\ 0&0&A_{3}\end{pmatrix}^{-1}\in\Gamma_{n_{1},n_{2},n_{3}},\]
The block-diagonal entries give us the relations \(\mu_{i}\tilde{\mu}_{i}^{-1}\in A_{i}^{-1}\Gamma_{n_{i}}A_{i}\cap\Gamma_{n_{i}}\) while the "\(B\)" entries give us the relations on \(B_{1}\) and \(B_{2}\). We now show the relation on \(C\). We write
\[\begin{pmatrix}A_{1}&\tilde{B}_{1}&\tilde{C}\\ 0&A_{2}&\tilde{B}_{2}\\ 0&0&A_{3}\end{pmatrix}^{-1}=\begin{pmatrix}A_{1}^{-1}&*&Z\\ 0&A_{2}^{-1}&X\\ 0&0&A_{3}^{-1}\end{pmatrix}\]
with \(Z=-A_{1}^{-1}(-\tilde{B}_{1}A_{2}^{-1}\tilde{B}_{2}+\tilde{C})A_{3}^{-1}\) and \(X=-A_{2}^{-1}\tilde{B}_{2}A_{3}^{-1}\). We then have that
\[A_{1}\mu_{1}\tilde{\mu}_{1}^{-1}Z+B_{1}\mu_{2}\tilde{\mu}_{2}^{-1}X+C\mu_{3} \tilde{\mu}_{3}^{-1}A_{3}^{-1}\in M_{n_{1},n_{3}}(\mathbb{Z}_{p}).\]
We now write \(\mu_{i}\tilde{\mu}_{i}^{-1}=A_{i}^{-1}\gamma_{i}A_{i}\) for \(\gamma_{i}\in\Gamma_{n_{i}}\), \(B_{1}=A_{1}Y_{1}\) and \(C=A_{1}Y\) and get
\[\gamma_{1}A_{1}Z+A_{1}Y_{1}A_{2}^{-1}\gamma_{2}A_{2}X+A_{1}YA_{3}^{-1}\gamma_{3}\in M_{n_{1},n_{3}}(\mathbb{Z}_{p})\]
Arguing as before we may set \(\mu_{i}=\tilde{\mu}_{i}\) to obtain
\[\tilde{B}_{1}A_{2}^{-1}\tilde{B}_{2}A_{3}^{-1}-B_{1}A_{2}^{-1}\tilde{B}_{2}A_{3}^{-1}-\tilde{C}A_{3}^{-1}+CA_{3}^{-1}\in M_{n_{1},n_{3}}(\mathbb{Z}_{p})\]
Using the same argument we may even set \(\tilde{B}_{1}=B_{1}\) and hence obtain \((C-\tilde{C})A_{3}^{-1}\in M_{n_{1},n_{3}}(\mathbb{Z}_{p})\), i.e. \(C=\tilde{C}\) in \(V(A_{1},A_{3})\), which concludes the proof.
In the following, we will make use of the special case of the above lemma with \(n_{1}=n_{3}=1\) and \(n_{2}=2\). In particular, we get the following corollary:
**Corollary 6.3**.: Let \(a,b\in\mathbb{Q}_{p}^{\times}\) and \(A\in\operatorname{GL}_{2}(\mathbb{Q}_{p})\). Then we have
\[\Gamma_{1,2,1}\begin{pmatrix}a&0&0\\ 0&A&0\\ 0&0&b\end{pmatrix}\Gamma_{1,2,1}=\sum\Gamma_{1,2,1}\begin{pmatrix}a&B&D\\ 0&A&C\\ 0&0&b\end{pmatrix}\begin{pmatrix}1&0&0\\ 0&N&0\\ 0&0&1\end{pmatrix}\]
where \(N\in\Gamma_{2}^{A}\backslash\Gamma_{2}\) and \(B\in V(a,A),D\in V(a,b),C\in V(A,b)\) with
\[V(A,D)=\{AY/\sim|\;AY=AY^{\prime}\iff A(Y-Y^{\prime})D^{-1}\in M_{n}(\mathbb{ Z}_{p})\}\]
We now denote by \(T(a,b,c,d)\) the element of the Hecke algebra \(H(\operatorname{GL}_{4}(\mathbb{Q}_{p}),\operatorname{GL}_{4}(\mathbb{Z}_{p}))\) defined by
\[T(a,b,c,d)=\operatorname{GL}_{4}(\mathbb{Z}_{p})\text{diag}(a,b,c,d) \operatorname{GL}_{4}(\mathbb{Z}_{p})\]
We also have the standard elements of \(H_{4}\):
\[T_{1}:=T(1,1,1,p),T_{2}:=T(1,1,p,p),T_{3}:=T(1,p,p,p)\]
The decomposition of these elements into right cosets can be found in [2, Lemma 3.2.18].
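For orientation, we recall the shape of such decompositions in the familiar \(\operatorname{GL}_{2}\) case (a standard fact, stated here only for comparison):
\[\Gamma_{2}\,\mathrm{diag}(1,p)\,\Gamma_{2}=\bigsqcup_{b\bmod p}\Gamma_{2}\begin{pmatrix}1&b\\ 0&p\end{pmatrix}\ \sqcup\ \Gamma_{2}\begin{pmatrix}p&0\\ 0&1\end{pmatrix},\]
so the double coset splits into \(p+1\) right cosets with upper triangular representatives; the decompositions of \(T_{1},T_{2},T_{3}\) in [2, Lemma 3.2.18] are of the same nature.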
We now turn our focus into the parabolic Hecke algebra \(H_{1,2,1}\) and our aim is to compute the images of the above standard Hecke operators under the embedding described in Lemma 3.1. We note here that the conditions of the lemma for the Hecke algebras \(H_{4},H_{1,2,1}\) hold, as explained in page 2890 of [10]. In a similar fashion to \(H_{4}\), we denote by
\[T_{0}(a,b,c,d)=\Gamma_{1,2,1}\text{diag}(a,b,c,d)\Gamma_{1,2,1}\]
for an element of \(H_{1,2,1}\). Let us now introduce some useful elements of \(H_{1,2,1}\).
* \(\Lambda_{+}^{1,3}:=T_{0}(1,p,p,p)\)
* \(\Lambda_{+}^{3,1}:=T_{0}(1,1,1,p)\)
* \(\Lambda_{-}^{1,3}:=T_{0}(p,1,1,1)\)
* \(\Lambda_{-}^{3,1}:=T_{0}(p,p,p,1)\)
* \(T_{-}(p):=T_{0}(p,1,p,1)\)
* \(T_{+}(p):=T_{0}(1,1,p,p)\)
The right coset decomposition can be now computed by 6.3.
**Proposition 6.4**.: Let \(\epsilon\) denote the embedding of the Hecke algebra \(H_{4}\) into \(H_{1,2,1}\) as described in 3.1. We then have the following images of the elements \(T_{i}\):
* \(\epsilon(T_{1})=\Lambda_{-}^{1,3}+T_{0}(1,1,p,1)+\Lambda_{+}^{3,1}\)
* \(\epsilon(T_{2})=T_{-}(p)+T_{+}(p)+T_{0}(1,p,p,1)+T_{0}(p,1,1,p)\)
* \(\epsilon(T_{3})=\Lambda_{-}^{3,1}+T_{0}(p,1,p,p)+\Lambda_{+}^{1,3}\)
Proof.: These follow directly by the right coset decompositions and the definition of the embedding \(\epsilon\). We note here a typo on page 2879 of Gritsenko's paper [10] in the \(\Lambda_{+}\) component (it appears we need to swap \(\Lambda_{+}^{1,3}\) and \(\Lambda_{+}^{3,1}\)).
We are now in a position to give the factorisation of the standard Hecke polynomial \(Q_{4}\), as this is defined in [10, Example 2].
**Theorem 6.5**.: _Let \(Q_{4}(t)=1-T_{1}t+pT_{2}t^{2}-p^{3}T_{3}t^{3}+p^{6}\Delta t^{4}\in H_{4}[t]\). Then, in \(H_{1,2,1}\), we have the factorisation_
\[Q_{4}(t)=(1-\Lambda_{-}^{1,3}t)(1-X_{1}t+pX_{2}t^{2})(1-\Lambda_{+}^{3,1}t)\]
_where \(X_{1}=T_{0}(1,1,p,1)\) and \(X_{2}=T_{0}(1,p,p,1)\)._
Proof.: The factorisation follows by the above embeddings as well as a series of relations, which follow from the above right coset decompositions (a sample coefficient check is carried out after the list):
* \(\Lambda_{-}^{1,3}X_{1}=pT_{-}(p)\)
* \(X_{1}\Lambda_{+}^{3,1}=pT_{+}(p)\)
* \(\Lambda_{-}^{1,3}\Lambda_{+}^{3,1}=pT_{0}(p,1,1,p)\)
* \(X_{2}\Lambda_{+}^{3,1}=p^{2}\Lambda_{+}^{1,3}\)
* \(\Lambda_{-}^{1,3}X_{2}=p^{2}\Lambda_{-}^{3,1}\)
* \(\Lambda_{-}^{1,3}X_{1}\Lambda_{+}^{3,1}=p^{3}T_{0}(p,1,p,p)\)
* \(\Lambda_{-}^{1,3}X_{2}\Lambda_{+}^{3,1}=p^{5}\Delta\)
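For example, comparing coefficients of \(t\) and \(t^{4}\): by Proposition 6.4 the coefficient of \(t\) on the right-hand side is \(-(\Lambda_{-}^{1,3}+X_{1}+\Lambda_{+}^{3,1})=-\epsilon(T_{1})\), while the coefficient of \(t^{4}\) is \(\Lambda_{-}^{1,3}\cdot pX_{2}\cdot\Lambda_{+}^{3,1}=p\cdot p^{5}\Delta=p^{6}\Delta\) by the last relation, matching \(Q_{4}(t)\). The coefficients of \(t^{2}\) and \(t^{3}\) are matched in the same way using the remaining relations and Proposition 6.4.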
### Hecke Operators and weak rationality theorems
We will now translate the results above back to the Hecke algebras \(H_{p}^{2}\) and \(H_{p}^{1,1}\) of the unitary group. Let us first give the correspondence between the Hecke operators of \(H_{p}^{1,1}\) and \(H_{1,2,1}\):
* \(\Lambda_{+}^{1,3}\longleftrightarrow\Lambda_{+}(\pi):=\Gamma_{1,1}\text{diag }(\pi,1,\pi,p)\Gamma_{1,1}\)
* \(\Lambda_{+}^{3,1}\longleftrightarrow\Lambda_{+}(\overline{\pi}):=\Gamma_{1,1 }\text{diag}(\overline{\pi},1,\overline{\pi},p)\Gamma_{1,1}\)
* \(\Lambda_{-}^{3,1}\longleftrightarrow\Lambda_{-}(\pi):=\Gamma_{1,1}\text{diag }(\pi,p,\pi,1)\Gamma_{1,1}\)
* \(\Lambda_{-}^{1,3}\longleftrightarrow\Lambda_{-}(\overline{\pi}):=\Gamma_{1,1 }\text{diag}(\overline{\pi},p,\overline{\pi},1)\Gamma_{1,1}\)
* \(T_{0}(1,1,p,1)\longleftrightarrow T(\overline{\pi}):=\Gamma_{1,1}\text{diag }(1,\overline{\pi},p,\overline{\pi})\Gamma_{1,1}\)
* \(T_{0}(p,1,p,p)\longleftrightarrow T(\pi):=\Gamma_{1,1}\text{diag}(1,\pi,p, \pi)\Gamma_{1,1}\)
* \(T_{0}(1,p,p,1)\longleftrightarrow T(\pi,\overline{\pi}):=\Gamma_{1,1}\text{diag}(\pi,\overline{\pi},\pi,\overline{\pi})\Gamma_{1,1}\)
* \(T_{0}(p,1,1,p)\longleftrightarrow T(\overline{\pi},\pi):=\Gamma_{1,1}\text{diag}(\overline{\pi},\pi,\overline{\pi},\pi)\Gamma_{1,1}\)
* \(T_{-}(p)\longleftrightarrow T_{-}(p):=\Gamma_{1,1}\text{diag}(1,p,p,1)\Gamma_{1,1}\)
* \(T_{+}(p)\longleftrightarrow T_{+}(p):=\Gamma_{1,1}\text{diag}(1,1,p,p)\Gamma_{1,1}\)
We also denote \(\Delta_{\pi}:=\Gamma_{1,1}\text{diag}(\pi,\pi,\pi,\pi)\Gamma_{1,1}\) and similarly for \(\Delta_{\overline{\pi}}\) and \(\Delta_{p}\).
In order to make clear how the isomorphism described in the beginning of subsection 6.1 works, let us describe it in the case \(\Lambda_{+}(\pi)\). We have by sending \(\pi\longmapsto p\)
\[\operatorname{diag}(\pi,1,\pi,p)\longmapsto\operatorname{diag}(p,1,p,p) \longmapsto\operatorname{diag}(1,p,p,p)\]
where the second arrow follows by applying the isomorphism \(C\) described in subsection 6.1. Also, since \(\mu(\operatorname{diag}(\pi,1,\pi,p))=p\), \(\Lambda_{+}(\pi)\) gets mapped to \((\Lambda_{+}^{1,3},1)_{\tilde{U}^{(2)}_{\pi}}\) but in general we will not keep account of the second coordinate. The only case in which this plays a difference is in the identification of \(\Delta_{\pi}\) and \(\Delta_{p}\), which both get mapped to \(\operatorname{diag}(p,p,p,p)\) but their factors of similitude are \(1,2\) respectively. The reason why factors of \(\Delta_{\overline{\pi}}\) appear in the relations below is to compensate for the second coordinate, as \(\operatorname{diag}(\overline{\pi},\overline{\pi},\overline{\pi},\overline{ \pi})\longmapsto\operatorname{diag}(1,1,1,1)\).
The table below shows some relations between the above Hecke operators. These can be obtained by translating back to \(H_{1,2,1}\) and using the right coset decompositions. The way to read the table is that we first read an operator \(X\) in the first row and an operator \(Y\) in the first column and the result is \(XY\). We write "comm" to mean that the operators commute.
**Proposition 6.6**.: Let
\[D^{(2)}_{\pi}(t)=1-T(\overline{\pi})t+p\Delta_{\overline{\pi}}T(p)t^{2}-p^{3} \Delta_{\overline{\pi}}^{2}T(\pi)t^{3}+p^{6}\Delta_{\overline{\pi}}^{3}\Delta _{\pi}t^{4}\]
and
\[D^{(2)}_{\overline{\pi}}(t)=1-T(\pi)t+p\Delta_{\pi}T(p)t^{2}-p^{3}\Delta_{\pi}^{2}T(\overline{\pi})t^{3}+p^{6}\Delta_{\pi}^{3}\Delta_{\overline{\pi}}t^{4}\]
We remark here that \(Z_{\pi}(t)=D_{\pi}(\Delta_{\overline{\pi}}^{-1}t)\) and \(Z_{\overline{\pi}}(t)=D_{\overline{\pi}}(\Delta_{\pi}^{-1}t)\) where \(Z_{\pi},Z_{\overline{\pi}}\) are the standard polynomials defined in Section 3. This can be seen by computing the images under the Satake mapping of the above coefficients, as can be found in [8, Lemma 3.7]. Let also
\[S_{\pi}(t)=1-T(\overline{\pi})t+p\Delta_{\overline{\pi}}T(\pi,\overline{\pi}) t^{2}\]
and
\[S_{\overline{\pi}}(t)=1-T(\pi)t+p\Delta_{\pi}T(\overline{\pi},\pi)t^{2}\]
We then have the following factorisations
\[D^{(2)}_{\pi}(t)=(1-\Lambda_{-}(\overline{\pi})t)S_{\pi}(t)(1-\Lambda_{+}( \overline{\pi})t)\]
\[D^{(2)}_{\overline{\pi}}(t)=(1-\Lambda_{-}(\pi)t)S_{\overline{\pi}}(t)(1- \Lambda_{+}(\pi)t)\]
Proof.: This follows from Theorem 6.5 after pulling back to the parabolic algebra of the unitary group.
We will need a lemma regarding the decomposition of a Hecke operator in \(H^{1,1}\) into left cosets.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \hline & \(\Lambda_{+}(\pi)\) & \(\Lambda_{+}(\overline{\pi})\) & \(\Lambda_{-}(\pi)\) & \(\Lambda_{-}(\overline{\pi})\) & \(T(\overline{\pi})\) & \(T(\pi,\overline{\pi})\) & \(T(\overline{\pi},\pi)\) & \(T(\overline{\pi},\pi)\) & \(T_{-}(p)\) & \(T_{+}(p)\) \\ \hline \(\Lambda_{+}(\pi)\) & & comm & \(p\Delta_{\overline{\pi}}T(\pi,\overline{\pi})\) & \(p^{3}\Delta_{\overline{\pi}}\) & comm & \(p\Delta_{\overline{\pi}}T_{+}(p)\) & comm & \(p^{2}\Delta_{\overline{\pi}}\Lambda_{+}(\overline{\pi})\) & \(p^{2}\Delta_{\overline{\pi}}T(\overline{\pi})\) & comm \\ \hline \(\Lambda_{-}(\overline{\pi})\) & & & \(p^{2}\Delta_{p}\) & \(p\Delta_{\overline{\pi}}T(\overline{\pi},\pi)\) & \(p\Delta_{\overline{\pi}}T_{+}(p)\) & comm & \(p^{2}\Delta_{\overline{\pi}}\Lambda_{+}(\pi)\) & comm & \(p^{2}\Delta_{\overline{\pi}}T(\pi)\) & comm \\ \hline \(\Lambda_{-}(\pi)\) & & & & & & & & & & \\ \hline \(\Lambda_{-}(\overline{\pi})\) & & & & & & & & & & \\ \hline \(T(\overline{\pi})\) & & comm & \(p\Delta_{\overline{\pi}}T_{-}(p)\) & & & comm & & & & \\ \hline \(T(\pi,\overline{\pi})\) & & \(p\Delta_{\overline{\pi}}T_{-}(p)\) & comm & & & & & & \\ \hline \(T(\pi,\overline{\pi})\) & & comm & \(p^{2}\Delta_{\overline{\pi}}\Lambda_{-}(\pi)\) & & & & & & & \\ \hline \(T(p)\) & & & & & & & & & & \\ \hline \(T_{+}(p)\) & & & \(p^{2}\Delta_{\overline{\pi}}T(\overline{\pi})\) & \(p^{2}\Delta_{\overline{\pi}}T(\pi)\) & & & & & & & \\ \hline \end{tabular}
\end{table}
Table 1. Relations of Hecke Operators for split primes
**Lemma 6.7**.: Let
\[M=\operatorname{diag}(\pi^{a_{1}}\overline{\pi}^{b_{1}},\pi^{a_{2}}\overline{\pi }^{b_{2}},\pi^{a_{3}}\overline{\pi}^{b_{3}},\pi^{a_{4}}\overline{\pi}^{b_{4}}) \in S_{p}^{2}\]
with \(a_{i},b_{i}\geq 0\). Then
\[\Gamma_{1,1}M\Gamma_{1,1}=\sum_{l,q,r^{\prime},\gamma}\Gamma_{1,1}M\begin{pmatrix} 1&0&0&l\\ -\overline{q}&1&\overline{l}&r^{\prime}-l\overline{q}\\ 0&0&1&q\\ 0&0&0&1\end{pmatrix}a(\gamma),\]
where \(l,q,r^{\prime}\) runs over elements in \(\mathcal{O}_{K}\) that satisfy \(r^{\prime}\in\mathbb{Z}\) and they give representatives of
\[l\in\mathcal{O}_{K}/\pi^{a_{4}-a_{1}}\overline{\pi}^{b_{4}-b_{1}},q\in\mathcal{O}_{K}/\pi^{a_{4}-a_{3}}\overline{\pi}^{b_{4}-b_{3}},r^{\prime}\in\mathbb{Z}/p^{a_{4}-a_{2}}\]
For the above we understand that if any of the \(a_{4}-a_{i}\) or \(b_{4}-b_{i}\) is zero or negative then the corresponding group is set to the trivial one i.e. \(\mathcal{O}_{K}/\pi^{a_{4}-a_{i}}=0\) and \(\mathcal{O}_{K}/\overline{\pi}^{b_{4}-b_{i}}=0\).
Finally \(\gamma\) runs over elements of \(S_{p}^{1}\) such that
\[\Gamma_{1}\operatorname{diag}(\pi^{a_{1}}\overline{\pi}^{b_{1}},\pi^{a_{3}} \overline{\pi}^{b_{3}})\Gamma_{1}=\sum_{\gamma}\Gamma_{1}\operatorname{diag} (\pi^{a_{1}}\overline{\pi}^{b_{1}},\pi^{a_{3}}\overline{\pi}^{b_{3}})\gamma\]
Proof.: We write \(H_{1,1}=\left\{\begin{pmatrix}1&0&0&l\\ -\overline{q}&1&\overline{l}&r\\ 0&0&1&q\\ 0&0&0&1\end{pmatrix}\in S_{p}^{2}\mid l,q,r\in\mathcal{O}_{K}\right\}\) for the (integral) Heisenberg part of the Klingen parabolic. We then claim that
\[H_{1,1}MH_{1,1}=\sum_{q,l,r^{\prime}}H_{1,1}M\begin{pmatrix}1&0&0&l\\ -\overline{q}&1&\overline{l}&r^{\prime}+q\overline{l}\\ 0&0&1&q\\ 0&0&0&1\end{pmatrix}\quad(*),\]
where \(q,l,r^{\prime}\) as in the statement of the proposition.
To see this we first set \(h(l,q,r):=\begin{pmatrix}1&0&0&l\\ -\overline{q}&1&\overline{l}&r\\ 0&0&1&q\\ 0&0&0&1\end{pmatrix}\) and \(M=\operatorname{diag}(\alpha_{1},\alpha_{2},\alpha_{3},\alpha_{4})\) and calculate
\[h(l_{1},q_{1},r_{1})Mh(l_{2},q_{2},r_{2})=\]
\[\begin{pmatrix}\alpha_{1}&0&0&\alpha_{1}l_{2}+l_{1}\alpha_{4}\\ -\overline{q}_{1}\alpha_{1}-\alpha_{2}\overline{q}_{2}&\alpha_{2}&\alpha_{2}\overline{l}_{2}+\overline{l}_{1}\alpha_{3}&*\\ 0&0&\alpha_{3}&\alpha_{3}q_{2}+q_{1}\alpha_{4}\\ 0&0&0&\alpha_{4}\end{pmatrix}\]
where \(*=-\overline{q}_{1}\alpha_{1}l_{2}+\alpha_{2}r_{2}+\overline{l}_{1}\alpha_{3 }q_{2}+r_{1}\alpha_{4}\). We first look at the upper right entry. We have
\[\alpha_{1}l_{2}+l_{1}\alpha_{4}=\pi^{a_{1}}\overline{\pi}^{b_{1}}l_{2}+l_{1} \pi^{a_{4}}\overline{\pi}^{b_{4}}\]
If \(a_{1}\geq a_{4}\) and \(b_{1}\geq b_{4}\) we may write
\[\alpha_{1}l_{2}+l_{1}\alpha_{4}=\left(\pi^{a_{1}-a_{4}}\overline{\pi}^{b_{1}-b_ {4}}l_{2}+l_{1}\right)\pi^{a_{4}}\overline{\pi}^{b_{4}}\]
and so no need for right cosets in this case. In the other cases, we write \(l_{2}=x+y\pi^{a_{4}-a_{1}}\overline{\pi}^{b_{4}-b_{1}}\) with the understanding that we set \(\pi^{i}=1\) and \(\overline{\pi}^{j}=1\) if \(i,j\leq 0\). Then we have that
\[\alpha_{1}l_{2}+l_{1}\alpha_{4}=\pi^{a_{1}}\overline{\pi}^{b_{1}}x+(l_{1}+y) \pi^{a_{4}}\overline{\pi}^{b_{4}}\]
For example when \(a_{4}>a_{1}\) and \(b_{1}\geq b_{4}\) we write \(l_{2}=x+y\pi^{a_{4}-a_{1}}\) and obtain
\[\alpha_{1}l_{2}+l_{1}\alpha_{4}=\pi^{a_{1}}\overline{\pi}^{b_{1}}x+y\pi^{a_{4} }\overline{\pi}^{b_{1}}+l_{1}\pi^{a_{4}}\overline{\pi}^{b_{4}}=\pi^{a_{1}} \overline{\pi}^{b_{1}}x+(y\overline{\pi}^{b_{1}-b_{4}}+l_{1})\pi^{a_{4}} \overline{\pi}^{b_{4}}\]
Similarly looking at the entry
\[\alpha_{2}\overline{l}_{2}+\overline{l}_{1}\alpha_{3}=\pi^{a_{2}}\overline{ \pi}^{b_{2}}\overline{l}_{2}+\overline{l}_{1}\pi^{a_{3}}\overline{\pi}^{b_{3}}\]
With \(l_{2}=x+y\pi^{a_{4}-a_{1}}\overline{\pi}^{b_{4}-b_{1}}\) as above, we obtain,
\[\alpha_{2}\overline{l}_{2}+\overline{l}_{1}\alpha_{3}=\pi^{a_{2}}\overline{\pi }^{b_{2}}\overline{x}+\overline{y}\pi^{a_{2}}\overline{\pi}^{b_{2}}\overline{ \pi}^{a_{4}-a_{1}}\pi^{b_{4}-b_{1}}+\overline{l}_{1}\pi^{a_{3}}\overline{\pi} ^{b_{3}}=\pi^{a_{2}}\overline{\pi}^{b_{2}}\overline{x}+(\overline{y}+\overline {l_{1}})\pi^{a_{3}}\overline{\pi}^{b_{3}}\]
where we have used the fact that \(a_{2}+b_{4}=b_{1}+a_{3}\) and \(a_{4}+b_{2}=a_{1}+b_{3}\) since \(M\in S_{p}^{2}\).
In particular for these entries it is enough to consider the entry \(l_{2}\) modulo \(\pi^{a_{4}-a_{1}}\overline{\pi}^{b_{4}-b_{1}}\) (with our convention). Similarly we can argue for the entries \(-\overline{q}_{1}\alpha_{1}-\alpha_{2}\overline{q}_{2}\) and \(\alpha_{3}q_{2}+q_{1}\alpha_{4}\).
We are now left with the \(*-\) entry. Using first the condition that each representative has to be in the group \(S_{p}^{2}\), we get by the condition \(\overline{B}^{t}D=\overline{D}^{t}B\) that
\[\overline{\alpha_{1}}\alpha_{3}\overline{l_{2}}q_{2}+\alpha_{4}\overline{ \alpha_{4}}\overline{l_{1}}q_{1}+\alpha_{4}\overline{\alpha_{2}}\overline{r_{ 2}}+\alpha_{4}\overline{\alpha_{4}r_{1}}\in\mathbb{R}\cap\mathcal{O}_{K}= \mathbb{Z}\]
We now set this expression to be \(0\) and by the conditions on the matrix \(M(\overline{\alpha_{1}}\alpha_{3}=\alpha_{4}\overline{\alpha_{2}})\) we obtain
\[r_{2}=-l_{2}\overline{q_{2}}-p^{a_{4}-a_{2}}(r_{1}+l_{1}\overline{q_{1}})\]
Using the fact that we can write \(r_{1}=r_{1}^{\prime}+q_{1}\overline{l}_{1}\) and \(r_{2}=r_{2}^{\prime}+q_{2}\overline{l}_{2}\) for some \(r_{i}^{\prime}\in\mathbb{Z}\), we see that the only part of the \(*-\) entry which is not determined by our choices of \(q_{1},q_{2},l_{1},l_{2}\) is \(\alpha_{2}r_{2}^{\prime}+r_{1}^{\prime}\alpha_{4}\). That is
\[\alpha_{2}r_{2}^{\prime}+r_{1}^{\prime}\alpha_{4}=\pi^{a_{2}}\overline{\pi}^{b _{2}}r_{2}^{\prime}+r_{1}^{\prime}\pi^{a_{4}}\overline{\pi}^{b_{4}}\]
Arguing as above and using the fact that \(a_{4}-a_{2}=b_{4}-b_{2}\), we see that the element \(r_{2}^{\prime}\) needs to be selected from integers modulo \(\pi^{a_{4}-a_{2}}\overline{\pi}^{b_{4}-b_{2}}=p^{a_{4}-a_{2}}\).
This establishes our claim \((*)\). The rest of the proof is identical to the symplectic case as done by Gritsenko in [6].
We are now finally ready to obtain the rationality theorems, as in the case of an inert prime.
**Proposition 6.8**.: Let \(F\in S_{2}^{k}\) be a Hecke eigenform and \(m\geq 1\). Then
\[Q_{p,F}^{(2)}(X)\sum_{\delta\geq 0}\phi_{mp^{\delta}}|T_{+}(p^{\delta})X^{ \delta}=\left(\phi_{m}-\phi_{\frac{m}{p}}|T_{-}(p)X+p\phi_{\frac{m}{p^{2}}}| \Lambda_{-}(p)X^{2}\right)\mid K(X)\]
where \(K\) is as given on pages 2898–2899 of [8]; in particular,
\[\phi_{m}\mid K(X)=(1-p^{k-3}X)^{2}(1-p^{2k-4}X^{2})\phi_{m}\]
if \((p,m)=1\).
Proof.: The proof is given by Gritsenko in [8].
**Proposition 6.9**.: For \(m\geq 1\)
\[D_{\pi,F}^{(2)}(X)\sum_{\delta\geq 0}\phi_{mp^{\delta}}|\Lambda_{+}(\overline{\pi}^{ \delta})X^{\delta}=\left(\phi_{m}-\phi_{\frac{m}{p}}|\Lambda_{-}(\overline{ \pi})X\right)\mid S_{\pi}(X)\]
Similarly,
\[D_{\overline{\pi},F}^{(2)}(X)\sum_{\delta\geq 0}\phi_{mp^{\delta}}|\Lambda_{+}(\pi^{\delta})X^{\delta}=\left(\phi_{m}-\phi_{\frac{m}{p}}|\Lambda_{-}(\pi)X\right)\mid S_{\overline{\pi}}(X)\]
with notation obtained by exchanging \(\pi\) and \(\overline{\pi}\).
Proof.: This can be established as the corresponding statement for \(Q_{p}^{(2)}(X)\), as long as we establish that (using notation from Section 3)
\[\phi_{m}||_{k}\Lambda_{+}(\overline{\pi})=\phi_{mp}|_{k}\Lambda_{+}(\overline{ \pi}).\]
We do know from [10, Lemma 4.1] that the right hand side is a Jacobi form of index \(m\). To see the equality above we recall that by Lemma 6.7 we have
\[\Lambda_{+}(\overline{\pi})=\sum_{l,q\in\mathcal{O}/\pi,r\in\mathbb{Z}/p}\Gamma _{1,1}\begin{pmatrix}\overline{\pi}&0&0&0\\ 0&1&0&0\\ 0&0&\overline{\pi}&0\\ 0&0&0&p\end{pmatrix}h(q,l,r)\]
where \(h(l,q,r):=\begin{pmatrix}1&0&0&l\\ -\overline{q}&1&\overline{l}&r\\ 0&0&1&q\\ 0&0&0&1\end{pmatrix}\).
Writing \(Z=\begin{pmatrix}\tau&z_{1}\\ z_{2}&\omega\end{pmatrix}\), we observe that the \(\omega\) entry of the action \(\begin{pmatrix}\overline{\pi}&0&0&0\\ 0&1&0&0\\ 0&0&\overline{\pi}&0\\ 0&0&0&p\end{pmatrix}h(q,l,r)\cdot Z\) is given as
\[\frac{\overline{q}\tau q-z_{2}q-\overline{l}q-\overline{q}z_{1}+r}{p}+\frac{ \omega}{p}=:a(\tau,z_{1},z_{2},q,r,l)+\frac{\omega}{p}.\]
Moreover, we know that the other three entries do not depend on \(\omega\). If now
\[F(Z)=\sum_{m\geq 1}\phi_{m}(\tau,z_{1},z_{2})e(m\omega)\]
is the Fourier-Jacobi expansion of \(F\), we have
\[(F|_{k}\Lambda_{+}(\overline{\pi}))(Z)=\sum_{m}\sum_{l,q,r}\left(\phi_{m}( \tau,z_{1},z_{2})e(m\omega)|_{k}\begin{pmatrix}\overline{\pi}&0&0&0\\ 0&1&0&0\\ 0&0&\overline{\pi}&0\\ 0&0&0&p\end{pmatrix}h(q,l,r)\right)\]
(note the factor of automorphy is the same so it is fine to pass the action inside the expansion). From the remark above we may write this as:
\[F|_{k}\Lambda_{+}(\overline{\pi})=p^{k-4}\sum_{m}\left(\sum_{l,q,r}\phi_{m}(\tau_{l,q,r},z_{1,l,q,r},z_{2,l,q,r})e(ma(\tau,z_{1},z_{2},q,r,l))\right)e\left(\frac{m}{p}\omega\right)\]
This is the Fourier expansion of \(F|_{k}\Lambda_{+}(\overline{\pi})\) (the terms inside the parentheses do not involve the \(\omega\) entry). In particular,
\[\phi_{m}||_{k}\Lambda_{+}(\overline{\pi})=p^{k-4}\sum_{l,q,r}\phi_{mp}(\tau_{l,q,r},z_{1,l,q,r},z_{2,l,q,r})e(mpa(\tau,z_{1},z_{2},q,r,l))\]
But by definition:
\[a(\tau,z_{1},z_{2},q,r,l)=\omega\left(\begin{pmatrix}\overline{\pi}&0&0&0\\ 0&1&0&0\\ 0&0&\overline{\pi}&0\\ 0&0&0&p\end{pmatrix}h(q,l,r)\cdot Z\right)-\frac{\omega}{p}=:\omega_{q,l,r}(Z )-\frac{\omega}{p}\]
where the first term denotes the \(\omega\)-entry after the action. That is,
\[\phi_{m}||_{k}\Lambda_{+}(\overline{\pi})=\left(p^{k-4}\sum_{l,q,r}\phi_{mp}(\tau_{l,q,r},z_{1,l,q,r},z_{2,l,q,r})e(mp\omega_{q,l,r}(Z))\right)e(-m\omega)\]
But now the expression inside the parentheses is nothing else than \(\phi_{mp}(\tau,z_{1},z_{2})e(mp\omega)|_{k}\Lambda_{+}(\overline{\pi})\). In particular we obtain that
\[\phi_{m}||_{k}\Lambda_{+}(\overline{\pi})=\phi_{mp}|_{k}\Lambda_{+}(\overline {\pi}).\]
To end this section, we will prove a couple of lemmas, which will be useful later, when we are dealing with the Dirichlet series of interest.
**Lemma 6.10**.: Denote by \(\Lambda_{-}(\pi)^{\operatorname{adj}}\) the adjoint of the operator \(\Lambda_{-}(\pi)\) with respect to the inner product of Fourier-Jacobi forms. Then
\[\Lambda_{-}(\pi)^{\operatorname{adj}}=p^{k-3}\Lambda_{+}(\overline{\pi})\]
Proof.: We observe from the lemma above that \(\Lambda_{-}(\pi)=\Gamma_{1,1}\text{diag}(\pi,p,\pi,1)\). We also note that the Jacobi form
\[\phi_{l}|_{k}\text{diag}(\pi,p,\pi,1)\]
is of index \(pl\) for the group \(\Gamma_{-}=U_{1}\times\mathcal{O}_{K}\times\mathcal{O}_{K}=\left\{\begin{pmatrix}a&0&b&\mu\\ *&1&*&*\\ c&0&d&\lambda\\ 0&0&0&1\end{pmatrix}\in\Gamma_{1,1}\right\}\). In particular, we may write
\[\langle\phi_{l}|_{k}\Lambda_{-}(\pi),\psi_{lp}\rangle=\frac{p^{2k-4}\pi^{-k}}{ [\Gamma_{1,1}:\Gamma_{-}]}\int_{\Gamma_{-}\backslash\mathbb{H}_{1}^{J}}\phi( \tau,\pi z_{1},\overline{\pi}z_{2})\overline{\psi(\tau,z_{1},z_{2})}\exp \left(-\pi lp\frac{|z_{1}-\overline{z_{2}}|^{2}}{y}\right)y^{k-4}dxdydx_{1}dy_ {1}dx_{2}dy_{2}\]
We now perform the change of variables \(z_{1}\mapsto\pi^{-1}z_{1}\) and \(z_{2}\mapsto\overline{\pi}^{-1}z_{2}\). This is equivalent to the action of the matrix \(\text{diag}(\overline{\pi},1,\overline{\pi},p)\) on \(\mathbb{H}_{1}\times\mathbb{C}\times\mathbb{C}\). Now
\[\psi_{lp}\mid_{k}(\tau,z_{1},z_{2})=p^{2k-4}\overline{\pi}^{-k}\psi_{lp}(p\tau,z_{1},z_{2})\]
is a Jacobi form of weight \(k\) and index \(l\) with respect to the group
\[\Gamma_{+}=U_{1}\times\overline{\pi}\mathcal{O}_{K}\times\overline{\pi} \mathcal{O}_{K}=\left\{\begin{pmatrix}a&0&b&\mu\\ *&1&*&*\\ c&0&d&\lambda\\ 0&0&0&1\end{pmatrix}\in\Gamma_{1,1}\mid\mu,\lambda\equiv 0\pmod{\overline{ \pi}}\right\}\]
(this is obtained by considering the group \(\text{diag}(\overline{\pi},1,\overline{\pi},p)^{-1}\Gamma_{-}\text{diag}( \overline{\pi},1,\overline{\pi},p)\)). We then obtain the integral
\[\frac{p^{2k-6}\pi^{-k}}{[\Gamma_{1,1}:\Gamma_{+}]}\int_{\Gamma_{+} \backslash\mathbb{H}_{1}^{J}}\phi(\tau,z_{1},z_{2})\overline{\psi(\tau,\pi^{ -1}z_{1},\overline{\pi}^{-1}z_{2})}\exp\left(-\pi l\frac{|z_{1}-\overline{z_{ 2}}|^{2}}{y}\right)y^{k-4}dxdydx_{1}dy_{1}dx_{2}dy_{2}.\]
On the other hand, we have by the Lemma 6.7 that
\[\Lambda_{+}(\overline{\pi})=\sum_{l,q\in\mathcal{O}/\pi,r\in\mathbb{Z}/p} \Gamma_{1,1}\begin{pmatrix}\overline{\pi}&0&0&0\\ 0&1&0&0\\ 0&0&\overline{\pi}&0\\ 0&0&0&p\end{pmatrix}h(q,l,r)\]
and so by using the property \(\langle\phi|_{k}h(q,l,r)^{-1},\psi\rangle=\langle\phi,\psi|_{k}\ h(q,l,r)\rangle\) we obtain
\[\langle\phi_{l},\psi_{lp}\mid_{k}\Lambda_{+}(\overline{\pi})\rangle=\]
\[=p^{3}\frac{p^{k-4}\pi^{-k}}{[\Gamma_{1,1}:\Gamma_{+}]}\int_{\Gamma_{+}\backslash\mathbb{H}_{1}^{J}}\phi_{l}(\tau,z_{1},z_{2})\overline{\psi_{pl}(\tau,\pi^{-1}z_{1},\overline{\pi}^{-1}z_{2})}\exp\left(-\pi l\frac{|z_{1}-\overline{z_{2}}|^{2}}{y}\right)y^{k-4}dxdydx_{1}dy_{1}dx_{2}dy_{2}\]
as \(h(q,l,r)^{-1}\in\Gamma_{1,1}\) and hence they act trivially. Therefore,
\[\langle\phi_{l}\mid_{k}\Lambda_{-}(\pi),\psi_{pl}\rangle=p^{k-5}\frac{[\Gamma_ {1,1}:\Gamma_{+}]}{[\Gamma_{1,1}:\Gamma_{-}]}\langle\phi_{l},\psi_{lp}\mid \Lambda_{+}(\overline{\pi})\rangle=p^{k-3}\langle\phi_{l},\psi_{lp}\mid_{k} \Lambda_{+}(\overline{\pi})\rangle\]
as the ratio of indices is \(p^{2}\).
Finally, knowing the action of the operators \(T(\pi,\overline{\pi})\) and \(T(\overline{\pi},\pi)\) will prove helpful later, so we record the following lemma:
**Lemma 6.11**.: Let \(X=\Gamma_{1,1}\begin{pmatrix}\overline{\pi}&0&0&0\\ 0&\pi&0&0\\ 0&0&\overline{\pi}&0\\ 0&0&0&\pi\end{pmatrix}\Gamma_{1,1}\). We have
\[\phi_{1}\mid_{k}X=p^{k-3}\phi_{1}\]
Proof.: Using Gritsenko's decomposition in [7], we have
\[\Gamma_{1,1}\begin{pmatrix}\overline{\pi}&0&0&0\\ 0&\pi&0&0\\ 0&0&\overline{\pi}&0\\ 0&0&0&\pi\end{pmatrix}\Gamma_{1,1}=\sum_{\gamma\in\mathcal{O}_{K}/\pi,\beta \in\mathcal{O}_{K}/\overline{\pi}}\Gamma_{1,1}\begin{pmatrix}\overline{\pi}&0 &0&\overline{\pi}\overline{\beta}\\ -\overline{\gamma}&\pi&\pi\beta&-\overline{\gamma}\overline{\beta}\\ 0&0&\overline{\pi}&\gamma\\ 0&0&0&\pi\end{pmatrix}\]
Now, the result of the action of a representative on the element \(\begin{pmatrix}\tau&z_{1}\\ z_{2}&\omega\end{pmatrix}\) is
\[\begin{pmatrix}\tau&-\frac{\gamma}{\pi}\tau+\frac{\overline{\pi}}{\pi}z_{1}+\frac{\overline{\pi}}{\pi}\overline{\beta}\\ -\frac{\overline{\gamma}}{\overline{\pi}}\tau+\frac{\pi}{\overline{\pi}}z_{2}+\frac{\pi}{\overline{\pi}}\beta&\frac{|\gamma|^{2}}{p}\tau-\frac{\gamma}{\overline{\pi}}z_{2}-\frac{\overline{\gamma}}{\pi}z_{1}+\omega-tr\left(\frac{\overline{\beta}\overline{\gamma}}{\pi}\right)\end{pmatrix}\]
But \(\phi\) has a theta expansion given by
\[\phi(\tau,z_{1},z_{2})=\sum_{h\pmod{2}}\phi_{h}(\tau)\sum_{n\in\mathcal{O}}e (\tau|n+h/2|^{2}+(n+h/2)z_{1}+(\overline{n}+\overline{h/2})z_{2})\]
So, after the action we obtain
\[p^{k-4}\sum_{h}\phi_{h}(\tau)\sum e\left(\tau|n+h/2|^{2}+(n+h/2)\left(-\frac{ \gamma}{\pi}\tau+\frac{\overline{\pi}}{\pi}z_{1}+\frac{\overline{\pi}}{\pi} \overline{\beta}\right)+(\overline{n}+\overline{h/2})\left(\frac{\overline{ \gamma}}{\overline{\pi}}\tau+\frac{\overline{\pi}}{\overline{\pi}}z_{2}+\frac {\overline{\pi}}{\overline{\pi}}\beta\right)\right)\times\]
\[\times e\left(\frac{|\gamma|^{2}}{p}\tau-\frac{\gamma}{\overline{\pi}}z_{2}- \frac{\overline{\gamma}}{\pi}z_{1}-tr\left(\frac{\overline{\beta}\overline{ \gamma}}{\pi}\right)\right)\]
For fixed \(n,h,\gamma\) we observe that the sum
\[\sum_{\beta\in\mathcal{O}_{K}/\overline{\pi}}e\left(\operatorname{tr}\left( \frac{(n+h/2)\overline{\pi}-\overline{\gamma}}{\pi}\right)\overline{\beta}\right)\]
is \(0\) unless \(\pi\mid(2n+h)\overline{\pi}-2\overline{\gamma}\), in which case it is \(p\) ([7], p. 76). Now, the terms which contain \(z_{1},z_{2}\) are
\[e\left(\frac{(n+h/2)\overline{\pi}-\overline{\gamma}}{\pi}z_{1}+\frac{( \overline{n}+\overline{h/2})\pi-\gamma}{\overline{\pi}}z_{2}\right)\]
and if we write \(\frac{(n+h/2)\overline{\pi}-\overline{\gamma}}{\pi}=m+l/2\) for \(m\in\mathcal{O}_{K},l\in\mathcal{O}_{K}/2\), we obtain the desired form. Similarly for \(\tau\), we have
\[e\left(\tau|n+h/2|^{2}-\frac{(n+h/2)\gamma}{\pi}-\frac{(\overline{n}+ \overline{h/2})\overline{\gamma}}{\overline{\pi}}+\frac{|\gamma|^{2}\tau}{p}\right)\]
and the inside expression can be written as
\[\left(\frac{(n+h/2)\overline{\pi}-\overline{\gamma}}{\pi}\right)\left(\frac{( \overline{n}+\overline{h/2})\pi-\gamma}{\overline{\pi}}\right)=|m+l/2|^{2}\]
with the notation before. Hence, putting all together we finally get the factor \(p^{k-3}\).
**Remark**.: The same is true for the operator \(\Gamma_{1,1}\begin{pmatrix}\pi&0&0&0\\ 0&\overline{\pi}&0&0\\ 0&0&\pi&0\\ 0&0&0&\overline{\pi}\end{pmatrix}\Gamma_{1,1}\).
### Calculation of the Dirichlet Series - First Part
We recall that
\[D_{F,G,h}(s)=4\beta_{k}\sum_{l,\epsilon,m}\langle\tilde{\phi}_{1}|T_{-}(m)U_{l}, \tilde{\psi}_{mN(l)}\rangle_{\mathcal{A}}a_{mN(\epsilon)}N(l)^{-s}N(\epsilon)^{- (k+s-1)}m^{-(2k+s-4)}\]
with \(l,\epsilon\in\mathbb{Z}[i]\) coprime, with real parts positive and imaginary parts non-negative, and \(m\in\mathbb{N}\). In the case of a split prime \(p=\pi\overline{\pi}\), we define the \(p-\)part by
\[D_{F,G,h}^{(p)}(s)=\sum_{l_{1},l_{2},\epsilon_{1},\epsilon_{2},m\geq 0}\langle \tilde{\phi}_{1}|T_{-}(p^{m})U_{\pi^{l_{1}}}U_{\overline{\pi}^{l_{2}}},\tilde{ \psi}_{p^{m+l_{1}+l_{2}}}\rangle_{\mathcal{A}}a_{p^{m+\epsilon_{1}+\epsilon_{ 2}}}p^{-s(l_{1}+l_{2})}p^{-(k+s-1)(\epsilon_{1}+\epsilon_{2})}p^{-(2k+s-4)m}\]
with the conditions that \(\min(\epsilon_{1},l_{1})=0\) and \(\min(\epsilon_{2},l_{2})=0\).
Consider now the Hecke operator \(\Lambda_{-}(\pi)=\Gamma_{1,1}\mathrm{diag}(\pi,p,\pi,1)\Gamma_{1,1}=\Gamma_{1,1}\mathrm{diag}(\pi,p,\pi,1)\), by Lemma 6.7. Then, if \(\phi\) is a Fourier-Jacobi coefficient, we get
\[\phi|_{k}\Lambda_{-}(\pi)=p^{2k-4}\pi^{-k}\tilde{\phi}\left(\begin{pmatrix} \tau&\pi z_{1}\\ \overline{\pi}z_{2}&p\tau^{\prime}\end{pmatrix}\right)e^{-2\pi\frac{m}{p}\tau ^{\prime}}=p^{2k-4}\pi^{-k}\phi(\tau,\pi z_{1},\overline{\pi}z_{2})=p^{2k-4} \pi^{-k}\phi|_{k}U_{\pi}\]
Hence, we can rewrite the series as:
\[D_{F,G,h}^{(p)}(s)=\sum_{\begin{subarray}{c}l_{1},l_{2},\\ \epsilon_{1},\epsilon_{2},m\geq 0\end{subarray}}\langle\tilde{\phi}_{1}|T_{-}(p^{m})\Lambda_{-}(\pi^{l_{1}})\Lambda_{-}(\overline{\pi}^{l_{2}}),\tilde{\psi}_{p^{m+l_{1}+l_{2}}}\rangle_{\mathcal{A}}a_{p^{m+\epsilon_{1}+\epsilon_{2}}}p^{(4-2k)l_{1}}p^{(4-2k)l_{2}}\pi^{l_{1}k}\overline{\pi}^{l_{2}k}\times\]
\[\times p^{-s(l_{1}+l_{2})}p^{-(k+s-1)(\epsilon_{1}+\epsilon_{2})}p^{-(2k+s-4)m}\]
By then using an inclusion-exclusion argument, we have that the above series can be written as
\[D_{F,G,h}^{(p)}(s)=D_{(\epsilon_{1},\epsilon_{2})}(s)+D_{(l_{1},l_{2})}(s)+D_{(\epsilon_{1},l_{2})}(s)+D_{(\epsilon_{2},l_{1})}(s)-\]
\[-D_{(\epsilon_{1},\epsilon_{2},l_{1})}(s)-D_{(\epsilon_{2},l_{1},l_{2})}(s)-D_ {(\epsilon_{1},l_{1},l_{2})}(s)-D_{(\epsilon_{1},\epsilon_{2},l_{2})}(s)+D_ {(\epsilon_{1},\epsilon_{2},l_{1},l_{2})}(s)\]
where we use the same notation as in subsection 5.2, meaning that the variables appearing in the subscript are set equal to \(0\). We can then deal with the "easy" parts first, i.e. when the operators \(\Lambda\) do not appear.
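The bookkeeping behind this decomposition is the elementary identity \(\mathbf{1}[\min(a,b)=0]=\mathbf{1}[a=0]+\mathbf{1}[b=0]-\mathbf{1}[a=b=0]\), applied to the two pairs \((\epsilon_{1},l_{1})\) and \((\epsilon_{2},l_{2})\) and expanded. Purely as an illustration (this check is ours and not part of the original argument), the following Python sketch verifies that the nine restricted sums above are exactly what this expansion produces:

```python
from itertools import product

def ind(cond):
    return 1 if cond else 0

# Check: 1[min(e1,l1)=0 and min(e2,l2)=0] equals the nine-term
# inclusion-exclusion expansion, where a subscript set S means
# "the variables in S are forced to be 0".
ok = True
for e1, e2, l1, l2 in product(range(4), repeat=4):
    lhs = ind(min(e1, l1) == 0 and min(e2, l2) == 0)
    rhs = (ind(e1 == 0 and e2 == 0) + ind(l1 == 0 and l2 == 0)
           + ind(e1 == 0 and l2 == 0) + ind(e2 == 0 and l1 == 0)
           - ind(e1 == 0 and e2 == 0 and l1 == 0)
           - ind(e2 == 0 and l1 == 0 and l2 == 0)
           - ind(e1 == 0 and l1 == 0 and l2 == 0)
           - ind(e1 == 0 and e2 == 0 and l2 == 0)
           + ind(e1 == 0 and e2 == 0 and l1 == 0 and l2 == 0))
    ok = ok and (lhs == rhs)
print(ok)  # True: summing the nine restricted series recovers the p-part
```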
**Proposition 6.12**.: \[D_{(l_{1},l_{2})}(s)-D_{(l_{1},l_{2},\epsilon_{1})}-D_{(l_{1},l_{2}, \epsilon_{2})}+D_{(l_{1},l_{2},\epsilon_{1},\epsilon_{2})}(s)=\]
\[=\frac{\langle\tilde{\phi}_{1},\tilde{\psi}_{1}\rangle_{\mathcal{A}}}{\alpha_{ 1}-\alpha_{2}}\left[\frac{\alpha_{1}^{3}(1-p^{2k-4}X_{1}^{2})X^{2}}{Q_{p,G}^{( 2)}(X_{1})}-\frac{\alpha_{2}^{3}(1-p^{2k-4}X_{2}^{2})X^{2}}{Q_{p,G}^{(2)}(X_{2 })}\right]\]
where again \(X=p^{-(k+s-1)}\) and \(X_{i}=\alpha_{i}p^{-(2k+s-4)}\).
Proof.: We have
\[D_{(l_{1},l_{2})}(s)=\sum_{\epsilon_{1},\epsilon_{2},m}\langle \tilde{\phi}_{1}|T_{-}(p^{m}),\tilde{\psi}_{p^{m}}\rangle_{\mathcal{A}}a_{p^{m+ \epsilon_{1}+\epsilon_{2}}}p^{-(k+s-1)(\epsilon_{1}+\epsilon_{2})}p^{-m(2k+s-4)}\] \[D_{(l_{1},l_{2},\epsilon_{1})}=\sum_{\epsilon_{2},m}\langle \tilde{\phi}_{1}|T_{-}(p^{m}),\tilde{\psi}_{p^{m}}\rangle_{\mathcal{A}}a_{p^{m+ \epsilon_{2}}}p^{-(k+s-1)\epsilon_{2}}p^{-m(2k+s-4)}\] \[D_{(l_{1},l_{2},\epsilon_{2})}=\sum_{\epsilon_{1},m}\langle \tilde{\phi}_{1}|T_{-}(p^{m}),\tilde{\psi}_{p^{m}}\rangle_{\mathcal{A}}a_{p^{m+ \epsilon_{1}}}p^{-(k+s-1)\epsilon_{1}}p^{-m(2k+s-4)}\] \[D_{(l_{1},l_{2},\epsilon_{1},\epsilon_{2})}=\sum_{m}\langle \tilde{\phi}_{1}|T_{-}(p^{m}),\tilde{\psi}_{p^{m}}\rangle_{\mathcal{A}}a_{p^{m}} p^{-m(2k+s-4)}\]
Using now the fact that \(a_{p^{m}}=\frac{\alpha_{1}^{m+1}-\alpha_{2}^{m+1}}{\alpha_{1}-\alpha_{2}}\) and the fact that the adjoint of \(T_{-}(p^{m})\) is \(T_{+}(p^{m})\) (when they are acting on \(P-\)forms, see also [8, Proposition 5.1]), we get
\[D_{(l_{1},l_{2})}(s)(\alpha_{1}-\alpha_{2}) =\alpha_{1}\sum_{\epsilon_{1},\epsilon_{2},m}\langle\tilde{\phi}_{1 },\tilde{\psi}_{p^{m}}|T_{+}(p^{m})\rangle_{\mathcal{A}}(\alpha_{1}p^{-(k+s-1) })^{\epsilon_{1}+\epsilon_{2}}(\alpha_{1}p^{-(2k+s-4)})^{m}\] \[-\alpha_{2}\cdots\]
and similarly for the others. Now, by Proposition 6.8, we obtain
\[\sum_{m=0}^{\infty}\langle\tilde{\phi}_{1},\tilde{\psi}_{p^{m}}|T_{+}(p^{m}) \rangle_{\mathcal{A}}X_{1}^{m}=(1-p^{k-3}X_{1})^{2}(1-p^{2k-4}X_{1}^{2})\langle \tilde{\phi}_{1},\tilde{\psi}_{1}\rangle_{\mathcal{A}}Q_{p,G}^{2}(X_{1})^{-1}\]
Also, \(\sum_{\epsilon_{2}=0}^{\infty}(\alpha_{1}p^{-(k+s-1)})^{\epsilon_{2}}=\dfrac{1}{1-\alpha_{1}p^{-(k+s-1)}}\) and similarly for \(\epsilon_{1}\) and
\[\sum_{\epsilon_{1},\epsilon_{2}=0}^{\infty}(\alpha_{1}p^{-(k+s-1)})^{(\epsilon _{1}+\epsilon_{2})}=\left(\dfrac{1}{1-\alpha_{1}p^{-(k+s-1)}}\right)^{2}\]
Hence, we obtain
\[D_{(l_{1},l_{2})}(s)-D_{(l_{1},l_{2},\epsilon_{1})}-D_{(l_{1},l_{2},\epsilon_{ 2})}+D_{(l_{1},l_{2},\epsilon_{1},\epsilon_{2})}(s)=\]
\[=\dfrac{\langle\tilde{\phi}_{1},\tilde{\psi}_{1}\rangle_{\mathcal{A}}}{\alpha _{1}-\alpha_{2}}\left[\dfrac{\alpha_{1}^{3}(1-p^{2k-4}X_{1}^{2})X^{2}}{Q_{p,G} ^{(2)}(X_{1})}-\dfrac{\alpha_{2}^{3}(1-p^{2k-4}X_{2}^{2})X^{2}}{Q_{p,G}^{(2)}( X_{2})}\right]\]
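As a quick consistency check on the substitutions used in this proof (an illustration of ours, not part of the argument), the following SymPy sketch verifies that \(\alpha_{i}X=p^{k-3}X_{i}\) and that the geometric series over \(\epsilon_{1},\epsilon_{2}\) contribute the expected squared factor:

```python
import sympy as sp

p, k, s, alpha = sp.symbols('p k s alpha', positive=True)
X = p**(-(k + s - 1))
Xi = alpha * p**(-(2*k + s - 4))

# The substitution used above: alpha * X equals p^{k-3} * X_i.
print(sp.simplify(alpha * X - p**(k - 3) * Xi))      # 0

# Geometric series over epsilon_2 (a sample ratio q with |q| < 1 standing in
# for alpha_1 p^{-(k+s-1)}), and the double sum over (epsilon_1, epsilon_2).
eps = sp.symbols('epsilon', integer=True, nonnegative=True)
q = sp.Rational(1, 3)
single = sp.summation(q**eps, (eps, 0, sp.oo))
print(single == 1/(1 - q))             # True
print(single**2 == (1/(1 - q))**2)     # True: the double sum gives the square
```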
### Calculation of the Dirichlet Series - Second Part
In the following we have \(X_{i}=\alpha_{i}p^{-(2k+s-4)}\), \(Y_{1}=\pi^{k}p^{-(2k+s-4)}\), \(Y_{2}=\overline{\pi}^{k}p^{-(2k+s-4)}\). Let us now consider the series
\[D_{(\epsilon_{1},\epsilon_{2},l_{2})}=\sum_{l_{1},m\geq 0}\langle\tilde{\phi}_{1} |T_{-}(p^{m})U_{\pi^{l_{1}}},\tilde{\psi}_{p^{m+l_{1}}}\rangle_{\mathcal{A}}a _{p^{m}}p^{-sl_{1}}p^{-(2k+s-4)m}\]
Using the fact that \(a_{p^{m}}=\dfrac{\alpha_{1}^{m+1}-\alpha_{2}^{m+1}}{\alpha_{1}-\alpha_{2}}\) and the relation between \(U_{l}\) and \(\Lambda_{-}\), we obtain that
\[(\alpha_{1}-\alpha_{2})D_{(\epsilon_{1},\epsilon_{2},l_{2})}(s)=\alpha_{1}S_{ 1}-\alpha_{2}S_{2}\]
where
\[S_{i}=\sum_{l,m\geq 0}\langle\tilde{\phi}_{1}|T_{-}(p^{m})\Lambda_{-}(\pi^{l}), \tilde{\psi}_{p^{m+l}}\rangle_{\mathcal{A}}p^{-(2k+s-4)l}\pi^{lk}(\alpha_{i}p ^{-(2k+s-4)})^{m}\]
Using now the fact that the adjoint (with respect to the inner product of \(P-\)forms) of \(T_{-}(p)\) is \(T_{+}(p)\) and that of \(\Lambda_{-}(\pi)\) is \(\Lambda_{+}(\overline{\pi})\), and that \(T_{-}(p)\) and \(\Lambda_{-}(\pi)\) commute, we get
\[S_{i}=\sum_{l,m\geq 0}\langle\tilde{\phi}_{1},\tilde{\psi}_{p^{m+l}}|T_{+}(p^{m} )\Lambda_{+}(\overline{\pi}^{l})\rangle_{\mathcal{A}}X_{i}^{m}Y_{1}^{l}=\sum_{ l,m\geq 0}\langle\tilde{\phi}_{1},\tilde{\psi}_{p^{m+l}}|T_{+}(p^{m})\Lambda_{+}( \overline{\pi}^{l})Y_{2}^{l}\rangle_{\mathcal{A}}X_{i}^{m}\]
because we have a Hermitian inner product.
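The last equality simply moves the scalar \(Y_{1}^{l}\) into the second argument of the Hermitian pairing, where it gets conjugated into \(Y_{2}^{l}\) (for real \(s\) one has \(\overline{Y_{1}}=Y_{2}\)). A tiny numerical illustration of this, with sample values of our own choosing:

```python
# Sample split prime p = 5 = (2+i)(2-i); for real k and s, conj(Y1) equals Y2.
p, k, s = 5, 6, 2.5
pi = 2 + 1j
scale = p**(-(2*k + s - 4))
Y1 = pi**k * scale
Y2 = pi.conjugate()**k * scale
print(abs(Y1.conjugate() - Y2) < 1e-15)                # True
# As a by-product, Y1*Y2 = p^{-(3k+2s-8)}, the exponent reappearing in Section 7.
print(abs(Y1 * Y2 - p**(-(3*k + 2*s - 8))) < 1e-15)    # True
```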
**Lemma 6.13**.: We have
\[\sum_{l,m\geq 0}\tilde{\psi}_{p^{m+l}}|T_{+}(p^{m})\Lambda_{+}(\overline{\pi}^ {l})X_{i}^{m}Y_{2}^{l}=\dfrac{1}{Q_{p,G}^{(2)}(X_{i})}\sum_{l\geq 0}\left[ \tilde{\psi}_{p^{l}}-\tilde{\psi}_{p^{l-1}}|T_{-}(p)X_{i}+p\tilde{\psi}_{p^{l- 2}}|\Lambda_{-}(p)X_{i}^{2}\right]\mid K(X_{i})\Lambda_{+}(\overline{\pi}^{l} )Y_{2}^{l}\]
with \(K\) the polynomial of 6.8.
Proof.: The proof follows immediately by Proposition 6.8.
Let us now compute each of the components. We write
\[K(t)=1-K_{1}t+K_{2}t^{2}-K_{3}t^{3}+K_{4}t^{4}\in H_{p}^{1,1}[t]\]
where \(K\) is the above polynomial.
**Proposition 6.14**.: \[\frac{1}{1-\alpha_{i}X}\sum_{l\geq 0}\tilde{\psi}_{p^{l}}\mid K(X_{i})\Lambda_{+}( \overline{\pi}^{l})Y^{l}=\]
\[\frac{\tilde{\psi}_{1}\mid S_{\pi}(Y)}{D_{\pi,G}^{(2)}(Y)}-p^{2}\frac{\left[ \tilde{\psi}_{p}-\tilde{\psi}_{1}\mid\Lambda_{-}(\overline{\pi})Y\right]\mid S _{\pi}(Y)\Lambda_{+}(\pi)\Delta_{\overline{\pi}}YX_{i}}{D_{\pi,G}^{(2)}(Y)}+ \left[(1-\alpha_{i}X)(1-p^{2k-4}X_{i}^{2})-1\right]\tilde{\psi}_{1}\]
Proof.: This follows by using the commutativity relations we have obtained in Table 1, the fact that \(K_{3}\Lambda_{+}(\overline{\pi})=K_{4}\Lambda_{+}(\overline{\pi})=0\) and the rationality theorem given in 6.9. In particular, we have
* \(\sum_{l\geq 0}\tilde{\psi}_{p^{l}}\mid\Lambda_{+}(\overline{\pi}^{l})Y^{l}= \frac{\tilde{\psi}_{1}\mid S_{\pi}(Y)}{D_{\pi,G}^{(2)}(Y)}\)
* \(\sum_{l\geq 0}\tilde{\psi}_{p^{l}}\mid T(\overline{\pi},\pi)\Lambda_{+}( \overline{\pi}^{l})Y^{l}=\sum_{l\geq 0}\tilde{\psi}_{p^{l}}\mid\Lambda_{+}( \overline{\pi}^{l})T(\overline{\pi},\pi)Y^{l}=\frac{\tilde{\psi}_{1}\mid S_{ \pi}(Y)T(\overline{\pi},\pi)}{D_{\pi,G}^{(2)}(Y)}\)
* \(\sum_{l\geq 0}\tilde{\psi}_{p^{l}}\mid T(\pi,\overline{\pi})\Lambda_{+}( \overline{\pi}^{l})Y^{l}=\tilde{\psi}_{1}\mid T(\pi,\overline{\pi})+p^{2}\sum _{l\geq 1}\tilde{\psi}_{p^{l}}\mid\Delta_{\overline{\pi}}\Lambda_{+}(\pi) \Lambda_{+}(\overline{\pi}^{l-1})Y^{l}\)
\[=\tilde{\psi}_{1}\mid T(\pi,\overline{\pi})+p^{2}\frac{\left[\tilde{\psi}_{p}- \tilde{\psi}_{1}\mid\Lambda_{-}(\overline{\pi})Y\right]\mid S_{\pi}(Y)\Lambda _{+}(\pi)\Delta_{\overline{\pi}}Y}{D_{\pi,G}^{(2)}(Y)}\]
* \(\sum_{l\geq 0}\tilde{\psi}_{p^{l}}\mid K_{2}\Lambda_{+}(\overline{\pi}^{l})Y^{l}= \tilde{\psi}_{1}\mid K_{2}+p^{2}\sum_{l\geq 1}\tilde{\psi}_{p^{l}}\mid \Lambda_{+}(\overline{\pi}^{l-1})\Lambda_{+}(\pi)T(\overline{\pi},\pi)\Delta_ {\overline{\pi}}Y^{l}\)
\[=\tilde{\psi}_{1}\mid K_{2}+p^{2}\frac{\left[\tilde{\psi}_{p}-\tilde{\psi}_{1} \mid\Lambda_{-}(\overline{\pi})Y\right]\mid S_{\pi}(Y)\Lambda_{+}(\pi)\Delta_ {\overline{\pi}}T(\overline{\pi},\pi)Y}{D_{\pi,G}^{(2)}(Y)}\]
* \(\sum_{l\geq 0}\tilde{\psi}_{p^{l}}\mid K_{3}\Lambda_{+}(\overline{\pi}^{l})Y^{l}= \tilde{\psi}_{1}\mid K_{3}\)
* \(\sum_{l\geq 0}\tilde{\psi}_{p^{l}}\mid K_{4}\Lambda_{+}(\overline{\pi}^{l})Y^{l}= \tilde{\psi}_{1}\mid K_{4}\)
By putting all these together and then using Lemma 6.11 and the fact that \(\alpha_{i}X=p^{k-3}X_{i}\), we obtain the result.
Let us now consider the third sum. We have
**Proposition 6.15**.: \[\frac{1}{1-\alpha_{i}X}\sum_{l\geq 0}\tilde{\psi}_{p^{l-2}}\mid\Lambda_{-}(p)X _{i}^{2}\mid K(X_{i})\Lambda_{+}(\overline{\pi}^{l})Y^{l}=\frac{\tilde{\psi}_{1 }\mid S_{\pi}(Y)U_{\pi}(X_{i})X_{i}^{2}Y^{2}}{D_{\pi,G}^{(2)}(Y)}=\]
\[=\frac{p^{2k-4}(p^{k-3}-p^{2k-4}X_{i})\tilde{\psi}_{1}\mid S_{\pi}(Y)\Delta_{ \overline{\pi}}Y^{2}X_{i}^{2}}{D_{\pi,G}^{(2)}(Y)}\]
where we define \(U_{\pi}(t)=p^{4}\Delta_{\overline{\pi}}\Delta_{p}(T(\overline{\pi},\pi)-p^{4} \Delta_{p}t)\in H_{p}^{1,1}[t]\).
Proof.: We will first simplify \(\Lambda_{-}(p)K(X_{i})\). By using the relations of Table 1, we have:
* \(\Lambda_{-}(p)T(\pi,\overline{\pi})=\Lambda_{-}(\pi)\Lambda_{-}(\overline{\pi} )T(\pi,\overline{\pi})=p^{2}\Delta_{\overline{\pi}}\Lambda_{-}(\pi)^{2}\)
* \(\Lambda_{-}(p)T(\overline{\pi},\pi)=\Lambda_{-}(\overline{\pi})\Lambda_{-}(\pi) T(\overline{\pi},\pi)=p^{2}\Delta_{\pi}\Lambda_{-}(\overline{\pi})^{2}\)
* \(\Lambda_{-}(p)K_{2}=p^{4}\Delta_{p}\Lambda_{-}(p)\)
* \(\Lambda_{-}(p)K_{3}=\Lambda_{-}(p)K_{4}=0\)
These relations can also be found in [10], pages \(2881-2882\). Now, by observing that \(\Lambda_{-}(p)=\Lambda_{-}(\overline{\pi})\Lambda_{-}(\pi)\), using Table 1 we have that
* \(\Lambda_{-}(\pi)\Lambda_{+}(\pi)=p^{3}\Delta_{p}\)
* \(\Lambda_{-}(\overline{\pi})\Lambda_{+}(\pi)=p\Delta_{\overline{\pi}}T(\overline{ \pi},\pi)\)
* \(\Lambda_{-}(p)\Lambda_{+}(\overline{\pi}^{2})=\Lambda_{-}(\overline{\pi})\Lambda_{ -}(\pi)\Lambda_{+}(\overline{\pi})\Lambda_{+}(\overline{\pi})=\cdots=p^{4} \Delta_{p}\Delta_{\overline{\pi}}T(\overline{\pi},\pi)\)
Now, \(T(\overline{\pi},\pi)\) and \(\Delta_{p}\) commute with \(\Lambda_{+}(\overline{\pi})\) and so for \(l\geq 2\), we can write
\[\Lambda_{-}(p)K(X_{i})\Lambda_{+}(\overline{\pi}^{l})=\Lambda_{+}(\overline{ \pi}^{l-2})(p^{4}\Delta_{p}\Delta_{\overline{\pi}}T(\overline{\pi},\pi)-(p^{4} \Delta_{\overline{\pi}}\Delta_{p}T(\overline{\pi},\pi)^{2}+p^{8}\Delta_{ \overline{\pi}}\Delta_{p}^{2})X_{i}+p^{8}\Delta_{p}^{2}\Delta_{\overline{\pi}} T(\overline{\pi},\pi)X_{i}^{2})\]
\[=\Lambda_{+}(\overline{\pi}^{l-2})(1-T(\overline{\pi},\pi)X_{i})U_{\pi}(X_{i})\]
Hence,
\[\frac{1}{1-\alpha_{i}X}\sum_{l\geq 0}\tilde{\psi}_{p^{l-2}}\mid(\Lambda_{-}(p) X_{i}^{2})\mid K(X_{i})\Lambda_{+}(\overline{\pi}^{l})Y^{l}=\]
\[\frac{1}{1-\alpha_{i}X}\sum_{l\geq 2}\tilde{\psi}_{p^{l-2}}\mid\Lambda_{+}( \overline{\pi}^{l-2})Y^{l-2}(1-T(\overline{\pi},\pi)X_{i})U_{\pi}(X_{i})X_{i} ^{2}Y^{2}=\]
\[=\frac{\tilde{\psi}_{1}\mid S_{\pi}(Y)U_{\pi}(X_{i})X_{i}^{2}Y^{2}}{D_{\pi,G}^ {(2)}(Y)}\]
by Lemma 6.11.
Finally, for the middle term, we have:
**Proposition 6.16**.: We have
\[-\frac{1}{1-\alpha_{i}X}\sum_{l\geq 0}\tilde{\psi}_{p^{l-1}}\mid T_{-}(p)X_{i} \mid K(X_{i})\Lambda_{+}(\overline{\pi}^{l})Y^{l}=\]
\[=-p^{2}\frac{\tilde{\psi}_{1}\mid S_{\pi}(Y)T(\pi)\Delta_{\overline{\pi}}X_{i} Y}{D_{\pi,G}^{(2)}(Y)}+p^{5}\frac{\left[\tilde{\psi}_{p}-\tilde{\psi}_{1}\mid \Lambda_{-}(\overline{\pi})Y\right]\mid S_{\pi}(Y)\Delta_{p}\Delta_{\overline{ \pi}}T_{+}(p)Y^{2}X_{i}^{2}}{D_{\pi,G}^{(2)}(Y)}+p^{2k-4}\tilde{\psi}_{1}\mid T (\overline{\pi})YX_{i}^{2}\]
Proof.: Firstly, we have no terms for \(l=0\), so we consider \(l\geq 1\). The idea is to pass \(\Lambda_{+}(\overline{\pi}^{m})\) to the left for some \(m\), so that it acts on the Fourier-Jacobi coefficients, and then we will be able to apply the rationality proposition 6.9 without difficulties. By page 2882 of [10], and after translating back to \(H_{p}^{1,1}\), we have that \(T_{-}(p)K_{3}=T_{-}(p)K_{4}=0\). Now, using Table 1, we have \(T_{-}(p)\Lambda_{+}(\overline{\pi})=p^{2}\Delta_{\overline{\pi}}T(\pi)\) and that \(T(\pi)\) commutes with \(\Lambda_{+}(\pi)\). Therefore, we can compute:
\[\sum_{l\geq 0}\tilde{\psi}_{p^{l-1}}\mid T_{-}(p)X_{i}\mid\Lambda_{+}(\overline{ \pi}^{l})Y^{l}=p^{2}\sum_{l\geq 0}\tilde{\psi}_{p^{l-1}}\mid\Lambda_{+}( \overline{\pi}^{l-1})Y^{l-1}T(\pi)\Delta_{\overline{\pi}}X_{i}Y=p^{2}\frac{ \tilde{\psi}_{1}\mid S_{\pi}(Y)T(\pi)\Delta_{\overline{\pi}}YX_{i}}{D_{\pi,G}^ {(2)}(Y)}\]
Let us now deal with \(T_{-}(p)K_{1}\Lambda_{+}(\overline{\pi})\). We remind the reader that \(K_{1}=T(\pi,\overline{\pi})+T(\overline{\pi},\pi)\) and \(T(\overline{\pi},\pi)\) commutes with \(\Lambda_{+}(\overline{\pi})\). Hence, we can pass \(\Lambda_{+}(\overline{\pi}^{l})\) to the left and we have already seen how to compute \(T_{-}(p)\Lambda_{+}(\overline{\pi}^{l})\). Also, by Table 1 we have
\[T_{-}(p)T(\overline{\pi},\pi)\Lambda_{+}(\overline{\pi}^{l})=T_{-}(p)\Lambda_{ +}(\overline{\pi}^{l})T(\overline{\pi},\pi)=p^{2}\Delta_{\overline{\pi}} \Lambda_{+}(\overline{\pi}^{l-1})T(\pi)T(\overline{\pi},\pi)\]
and for \(l\geq 2\)
\[T_{-}(p)T(\pi,\overline{\pi})\Lambda_{+}(\overline{\pi}^{l})=p^{2}\Delta_{ \overline{\pi}}T_{-}(p)\Lambda_{+}(\pi)\Lambda_{+}(\overline{\pi}^{l-1})=p^{4 }\Delta_{p}T(\overline{\pi})\Lambda_{+}(\overline{\pi}^{l-1})=p^{5}\Delta_{ \overline{\pi}}\Delta_{p}\Lambda_{+}(\overline{\pi}^{l-2})T_{+}(p)\]
For \(l=1\) we have
\[T_{-}(p)T(\pi,\overline{\pi})\Lambda_{+}(\overline{\pi}^{l})=p^{2}T_{-}(p) \Lambda_{+}(\overline{\pi})=p^{4}\Delta_{p}T(\overline{\pi})\]
Finally, we will deal with the term \(T_{-}(p)K_{2}\Lambda_{+}(\overline{\pi}^{l})\). We first translate into the Hecke algebra \(H_{1,2,1}\) of \(\mathrm{GL}_{4}\). We then have, by abusing notation and writing \(K_{2}\) for \(\epsilon(K_{2})\) as well, with \(\epsilon\) the embedding of Proposition 6.4:
\[K_{2}=p\sum_{a,b,c\;(p)}\left(\Gamma_{1,2,1}\left(\begin{matrix}p&a&b&c\\ 0&p&0&0\\ 0&0&p&0\\ 0&0&0&p\end{matrix}\right)+\Gamma_{1,2,1}\left(\begin{matrix}p&0&0&c\\ 0&p&0&b\\ 0&0&p&a\\ 0&0&0&p\end{matrix}\right)\right)-p\sum_{c}\Gamma_{1,2,1}\left(\begin{matrix}p&0&0 &c\\ 0&p&0&0\\ 0&0&p&0\\ 0&0&0&p\end{matrix}\right)+(p^{2}-p^{4})\Delta\]
Now
\[\sum_{a,b,c}\Gamma_{1,2,1}\left(\begin{matrix}p&0&0&c\\ 0&p&0&b\\ 0&0&p&a\\ 0&0&0&p\end{matrix}\right)\Lambda_{+}^{3,1}=p^{3}\Delta\Lambda_{+}^{3,1}\]
and
\[\sum_{a,b,c}\Gamma_{1,2,1}\begin{pmatrix}p&0&0&c\\ 0&p&0&0\\ 0&0&p&0\\ 0&0&0&p\end{pmatrix}\Lambda_{+}^{3,1}=p\Delta\Lambda_{+}^{3,1}\]
and so we only need to deal with the first part of the sum of cosets. But, we observe
\[\sum_{a,b,c}\Gamma_{1,2,1}\begin{pmatrix}p&a&b&c\\ 0&p&0&0\\ 0&0&p&0\\ 0&0&0&p\end{pmatrix}=\Lambda_{+}^{1,3}\Lambda_{-}^{1,3}\]
Hence, by translating back to \(H_{p}^{1,1}\), we have for \(l\geq 2\),
\[T_{-}(p)K_{2}\Lambda_{+}(\overline{\pi}^{l})=pT_{-}(p)\Lambda_{+}(\pi)\Lambda _{-}(\overline{\pi})\Lambda_{+}(\overline{\pi}^{l})=\cdots=p^{5}\Delta_{ \overline{\pi}}\Delta_{p}\Lambda_{+}(\overline{\pi}^{l-2})T_{+}(p)T( \overline{\pi},\pi)\]
and for \(l=1\) we get
\[T_{-}(p)K_{2}\Lambda_{+}(\overline{\pi})=p^{4}\Delta_{p}T(\overline{\pi})T( \overline{\pi},\pi)\]
Applying now Proposition 6.9 and using Lemma 6.11 as well, we obtain the stated result.
### Calculation of the Dirichlet Series - Third Part
We now want to deal with the Dirichlet series
\[D_{(\epsilon_{1},\epsilon_{2})}(s)=\sum_{l_{1},l_{2},m\geq 0} \langle\tilde{\phi}_{1}|T_{-}(p^{m})\Lambda_{-}(\pi^{l_{2}})\Lambda_{-}( \overline{\pi}^{l_{1}}),\tilde{\psi}_{p^{m+l_{1}+l_{2}}}\rangle_{\mathcal{A}} a_{p^{m}}p^{(4-2k)l_{2}}p^{(4-2k)l_{1}}\times\\ \times\pi^{l_{2}k}\overline{\pi}^{l_{1}k}p^{-s(l_{1}+l_{2})}p^{-(2k +s-4)m}:=\\ :=\alpha_{1}T_{1}-\alpha_{2}T_{2}\]
where
\[T_{i}=\sum_{l_{1},l_{2},m\geq 0}\langle\tilde{\phi}_{1},\tilde{\psi}_{p^{m+l_{1 }+l_{2}}}|T_{+}(p^{m})\Lambda_{+}(\pi^{l_{1}})\Lambda_{+}(\overline{\pi}^{l_{ 2}})\rangle_{\mathcal{A}}X_{i}^{m}Y_{2}^{l_{1}}Y_{1}^{l_{2}}\]
if we set \(X_{i}=\alpha_{i}p^{-(2k+s-4)}\), \(Y_{1}=\pi^{k}p^{-(2k+s-4)}\), \(Y_{2}=\overline{\pi}^{k}p^{-(2k+s-4)}\) and we keep in mind that the operators \(T_{+}(p),\Lambda_{+}(\pi),\Lambda_{+}(\overline{\pi})\) all commute with each other.
**Lemma 6.17**.: We have
\[\sum_{l_{1},l_{2},m\geq 0}\tilde{\psi}_{p^{m+l_{1}+l_{2}}}|T_{+}(p^{m}) \Lambda_{+}(\pi^{l_{1}})\Lambda_{+}(\overline{\pi}^{l_{2}})X_{i}^{m}Y_{1}^{l_ {1}}Y_{2}^{l_{2}}=\]
\[=Q_{p,G}^{2}(X_{i})^{-1}\sum_{l_{1},l_{2}\geq 0}\left[\tilde{\psi}_{p^{l_{1}+l_{2 }}}-\tilde{\psi}_{p^{l_{1}+l_{2}-1}}|T_{-}(p)X_{i}+p\tilde{\psi}_{p^{l_{1}+l_{2 }-2}}|\Lambda_{-}(p)X_{i}^{2}\right]\mid K(X_{i})\Lambda_{+}(\overline{\pi}^{l _{2}})\Lambda_{+}(\pi^{l_{1}})Y_{1}^{l_{1}}Y_{2}^{l_{2}}\]
Proof.: The proof follows immediately by Proposition 6.8.
We now deal with the above expression term by term. We have
**Lemma 6.18**.: \[\sum_{l_{1},l_{2}\geq 0}\tilde{\psi}_{p^{l_{1}+l_{2}}}\mid K(X_{i})\Lambda_{+}( \overline{\pi}^{l_{2}})\Lambda_{+}(\pi^{l_{1}})Y_{1}^{l_{1}}Y_{2}^{l_{2}}=\]
\[=(1-p^{2k-5}Y_{1}Y_{2})(1-p^{2}Y_{2}Y_{1}^{-1}\lambda_{\overline{\pi}}X_{i})(1-p^{2}Y_{1}Y_{2}^{-1}\lambda_{\pi}X_{i})\frac{\tilde{\psi}_{1}\mid S_{\overline{\pi}}(Y_{1})S_{\pi}(Y_{2})}{D_{\overline{\pi},G}^{(2)}(Y_{1})D_{\pi,G}^{(2)}(Y_{2})}\]
\[-\left[(p^{k-3}-p^{2}Y_{1}Y_{2}^{-1}\lambda_{\pi})X_{i}+p^{2k-4}X_{i}^{2} \right]\frac{\tilde{\psi}_{1}\mid S_{\overline{\pi}}(Y_{1})}{D_{\overline{ \pi},G}^{(2)}(Y_{1})}-\left[(p^{k-3}-p^{2}Y_{2}Y_{1}^{-1}\lambda_{\overline{\pi} })X_{i}+p^{2k-4}X_{i}^{2}\right]\frac{\tilde{\psi}_{1}\mid S_{\pi}(Y_{2})}{D_{ \pi,G}^{(2)}(Y_{2})}+\]
\[+p^{2}\frac{\left[\tilde{\psi}_{p}-\tilde{\psi}_{1}\mid\Lambda_{-}(\overline{ \pi})Y_{2}\right]\mid S_{\pi}(Y_{2})\Lambda_{+}(\pi)\Delta_{\overline{\pi}}T( \overline{\pi},\pi)Y_{2}X_{i}^{2}}{D_{\pi,G}^{(2)}(Y_{2})}+\]
\[+p^{2}\frac{\left[\tilde{\psi}_{p}-\tilde{\psi}_{1}\mid\Lambda_{-}(\pi)Y_{1} \right]\mid S_{\overline{\pi}}(Y_{1})\Lambda_{+}(\overline{\pi})\Delta_{\pi}T( \pi,\overline{\pi})Y_{1}X_{i}^{2}}{D_{\overline{\pi},G}^{(2)}(Y_{1})}+\]
where \(\lambda_{\pi},\lambda_{\overline{\pi}}\) are the eigenvalues of \(\Delta_{\pi},\Delta_{\overline{\pi}}\) respectively.
Proof.: Firstly, we have, using 6.9
\[\sum_{l_{1},l_{2}\geq 0}\tilde{\psi}_{p^{l_{1}+l_{2}}}\mid\Lambda_{+}(\overline{\pi}^{l_{2}})\Lambda_{+}(\pi^{l_{1}})Y_{1}^{l_{1}}Y_{2}^{l_{2}}=\sum_{l_{1}\geq 0}\frac{\left[\tilde{\psi}_{p^{l_{1}}}-\tilde{\psi}_{p^{l_{1}-1}}\mid\Lambda_{-}(\overline{\pi})Y_{2}\right]\mid S_{\pi}(Y_{2})\Lambda_{+}(\pi^{l_{1}})Y_{1}^{l_{1}}}{D_{\pi,G}^{(2)}(Y_{2})}=\]
\[=\frac{(1-p^{2k-5}Y_{1}Y_{2})\tilde{\psi}_{1}\mid S_{\overline{\pi}}(Y_{1})S _{\pi}(Y_{2})}{D_{\overline{\pi},G}^{(2)}(Y_{1})D_{\pi,G}^{(2)}(Y_{2})}\]
Also, again for 6.9
\[\sum_{l_{1},l_{2}\geq 0}\tilde{\psi}_{p^{l_{1}+l_{2}}}\mid T(\pi,\overline{ \pi})\Lambda_{+}(\overline{\pi}^{l_{2}})\Lambda_{+}(\pi^{l_{1}})Y_{2}^{l_{2}} Y_{1}^{l_{1}}=\sum_{l_{1},l_{2}\geq 0}\tilde{\psi}_{p^{l_{1}+l_{2}}}\mid \Lambda_{+}(\pi^{l_{1}})T(\pi,\overline{\pi})\Lambda_{+}(\overline{\pi}^{l_{2 }})Y_{2}^{l_{2}}Y_{1}^{l_{1}}=\]
\[=\sum_{l_{1}\geq 0}\tilde{\psi}_{p^{l_{1}}}\mid\Lambda_{+}(\pi^{l_{1}})T(\pi, \overline{\pi})Y_{1}^{l_{1}}+p^{2}\sum_{l_{1}\geq 0,l_{2}\geq 1}\tilde{\psi}_{p^{l_{1} +l_{2}}}\mid\Lambda_{+}(\pi^{l_{1}+1})\Lambda_{+}(\overline{\pi}^{l_{2}-1}) \Delta_{\overline{\pi}}Y_{1}^{l_{1}}Y_{2}^{l_{2}}\]
\[=\frac{\tilde{\psi}_{1}\mid S_{\overline{\pi}}(Y_{1})T(\pi,\overline{\pi})}{D _{\overline{\pi},G}^{(2)}(Y_{1})}+p^{2}Y_{2}Y_{1}^{-1}\left((1-p^{2k-5}Y_{1}Y_ {2})\frac{\tilde{\psi}_{1}\mid S_{\overline{\pi}}(Y_{1})S_{\pi}(Y_{2})\Delta_{ \overline{\pi}}}{D_{\overline{\pi},G}^{(2)}(Y_{1})D_{\pi,G}^{(2)}(Y_{2})}- \frac{\tilde{\psi}_{1}\mid S_{\pi}(Y_{2})\Delta_{\overline{\pi}}}{D_{\pi,G}^ {(2)}(Y_{2})}\right)\]
and we get an analogous result for
\[\sum_{l_{1},l_{2}\geq 0}\tilde{\psi}_{p^{l_{1}+l_{2}}}\mid T(\overline{\pi}, \pi)\Lambda_{+}(\overline{\pi}^{l_{2}})\Lambda_{+}(\overline{\pi}^{l_{1}})Y_{ 2}^{l_{2}}Y_{1}^{l_{1}}\]
Next, using 1, we observe that for \(l_{1},l_{2}\geq 1\) we have
\[K_{2}\Lambda_{+}(\overline{\pi}^{l_{2}})\Lambda_{+}(\pi^{l_{1}})=p\Lambda_{+}( \pi)\Lambda_{-}(\overline{\pi})\Lambda_{+}(\overline{\pi}^{l_{2}})\Lambda_{+}( \pi^{l_{1}})=p\Lambda_{+}(\pi)\Lambda_{-}(\overline{\pi})\Lambda_{+}(\pi) \Lambda_{+}(\pi^{l_{1}-1})\Lambda_{+}(\overline{\pi}^{l_{2}})\]
\[=p^{4}\Lambda_{+}(\pi)\Delta_{p}\Lambda_{+}(\pi^{l_{1}-1})\Lambda_{+}( \overline{\pi}^{l_{2}})=p^{4}\Lambda_{+}(\pi^{l_{1}})\Lambda_{+}(\overline{ \pi}^{l_{2}})\Delta_{p}\]
Hence,
\[\sum_{l_{1},l_{2}\geq 0}\tilde{\psi}_{p^{l_{1}+l_{2}}}\mid K_{2}\Lambda_{+}(\pi^{l_{1 }})\Lambda_{+}(\overline{\pi}^{l_{2}})Y_{1}^{l_{1}}Y_{2}^{l_{2}}=\]
\[\sum_{l_{1}\geq 0}\tilde{\psi}_{p^{l_{1}}}\mid K_{2}\Lambda_{+}(\pi^{l_{1}})Y_{1}^{l _{1}}+\sum_{l_{2}\geq 0}\tilde{\psi}_{p^{l_{2}}}\mid K_{2}\Lambda_{+}(\pi^{l_{2}})Y_{ 2}^{l_{2}}+\sum_{l_{1},l_{2}\geq 1}\tilde{\psi}_{p^{l_{1}+l_{2}}}\mid K_{2}\Lambda_{+}( \overline{\pi}^{l_{2}})\Lambda_{+}(\pi^{l_{1}})Y_{1}^{l_{1}}Y_{2}^{l_{2}}- \tilde{\psi}_{1}\mid K_{2}=\]
\[=\tilde{\psi}_{1}\mid K_{2}+p^{2}\frac{\left[\tilde{\psi}_{p}-\tilde{\psi}_{1} \mid\Lambda_{-}(\overline{\pi})\right]\mid S_{\overline{\pi}}(Y_{2})\Lambda_{+}( \pi)\Delta_{\overline{\pi}}T(\overline{\pi},\pi)Y_{2}}{D_{\pi,G}^{(2)}(Y_{2})}+\]
\[+p^{2}\frac{\left[\tilde{\psi}_{p}-\tilde{\psi}_{1}\mid\Lambda_{-}(\pi)\right] \mid S_{\overline{\pi}}(Y_{1})\Lambda_{+}(\overline{\pi})\Delta_{\pi}T( \pi,\overline{\pi})Y_{1}}{D_{\overline{\pi},G}^{(2)}(Y_{1})}+\]
\[+p^{2k-4}\left(\frac{(1-p^{2k-5}Y_{1}Y_{2})\tilde{\psi}_{1}\mid S_{\overline{ \pi}}(Y_{1})S_{\pi}(Y_{2})}{D_{\overline{\pi},G}^{(2)}(Y_{1})D_{\pi,G}^{(2)}(Y_{2 })}-\frac{\tilde{\psi}_{1}\mid S_{\pi}(Y_{2})}{D_{\pi,G}^{(2)}(Y_{2})}-\frac{ \tilde{\psi}_{1}\mid S_{\overline{\pi}}(Y_{1})}{D_{\overline{\pi},G}^{(2)}(Y_{ 1})}+\tilde{\psi}_{1}\right)\]
as the sum
\[\sum_{l_{1},l_{2}\geq 1}\tilde{\psi}_{p^{l_{1}+l_{2}}}\mid\Lambda_{+}(\pi^{l_{1}}) \Lambda_{+}(\overline{\pi}^{l_{2}})p^{4}\Delta_{p}Y_{1}^{l_{1}}Y_{2}^{l_{2}}\]
can be computed to be
\[p^{2k-4}\left(\frac{(1-p^{2k-5}Y_{1}Y_{2})\tilde{\psi}_{1}\mid S_{\overline{\pi}}(Y_ {1})S_{\pi}(Y_{2})}{D_{\pi,G}^{(2)}(Y_{1})D_{\pi,G}^{(2)}(Y_{2})}-\frac{ \tilde{\psi}_{1}\mid S_{\pi}(Y_{2})}{D_{\pi,G}^{(2)}(Y_{2})}-\frac{\tilde{ \psi}_{1}\mid S_{\overline{\pi}}(Y_{1})}{D_{\overline{\pi},G}^{(2)}(Y_{1})}+ \tilde{\psi}_{1}\right)\]
Finally,
\[\sum_{l_{1},l_{2}\geq 0}\tilde{\psi}_{p^{l_{1}+l_{2}}}\mid K_{3}\Lambda_{+}( \overline{\pi}^{l_{2}})\Lambda_{+}(\pi^{l_{1}})Y_{1}^{l_{1}}Y_{2}^{l_{2}}= \tilde{\psi}_{1}\mid K_{3}\]
and
\[\sum_{l_{1},l_{2}\geq 0}\tilde{\psi}_{p^{l_{1}+l_{2}}}\mid K_{4}\Lambda_{+}( \overline{\pi}^{l_{2}})\Lambda_{+}(\pi^{l_{1}})Y_{1}^{l_{1}}Y_{2}^{l_{2}}= \tilde{\psi}_{1}\mid K_{4}\]
as \(K_{3}\Lambda_{+}(\overline{\pi})=K_{4}\Lambda_{+}(\overline{\pi})=K_{3} \Lambda_{+}(\pi)=K_{4}\Lambda_{+}(\pi)=0\).
**Lemma 6.19**.: We have
\[\sum_{l_{1},l_{2}\geq 0}\tilde{\psi}_{p^{l_{1}+l_{2}-2}}\mid\Lambda_{-}(p)K(X_ {i})\Lambda_{+}(\pi^{l_{1}})\Lambda_{+}(\overline{\pi}^{l_{2}})Y_{1}^{l_{1}}Y _{2}^{l_{2}}X_{i}^{2}=\]
\[=(1-\alpha_{i}X)\left[\frac{\tilde{\psi}_{1}\mid S_{\overline{\pi}}(Y_{2})U_{ \pi}(X_{i})X_{i}^{2}Y_{2}^{2}}{D_{\pi,G}^{(2)}(Y_{2})}+\frac{\tilde{\psi}_{1} \mid S_{\overline{\pi}}(Y_{1})U_{\overline{\pi}}(X_{i})X_{i}^{2}Y_{1}^{2}}{D_ {\pi,G}^{(2)}(Y_{1})}\right]+\]
\[+p^{4k-10}Y_{1}Y_{2}(1-p^{2k-5}Y_{1}Y_{2})(1-p^{2}Y_{2}Y_{1}^{-1}\lambda_{ \overline{\pi}}X_{i})(1-p^{2}Y_{1}Y_{2}^{-1}\lambda_{\pi}X_{i})X_{i}^{2}\frac{ \tilde{\psi}_{1}\mid S_{\overline{\pi}}(Y_{1})S_{\overline{\pi}}(Y_{2})}{D_{ \pi,G}^{(2)}(Y_{1})D_{\pi,G}^{(2)}(Y_{2})}\]
\[-p^{4k-10}X_{i}^{3}Y_{1}Y_{2}(p^{k-3}-p^{2}Y_{1}Y_{2}^{-1}\lambda_{\pi})\frac {\tilde{\psi}_{1}\mid S_{\overline{\pi}}(Y_{1})}{D_{\pi,G}^{(2)}(Y_{1})}-p^{4 k-10}X_{i}^{3}Y_{1}Y_{2}(p^{k-3}-p^{2}Y_{2}Y_{1}^{-1}\lambda_{\overline{\pi}})\frac{ \tilde{\psi}_{1}\mid S_{\pi}(Y_{2})}{D_{\pi,G}^{(2)}(Y_{2})}\]
where again \(U_{\pi}(t):=p^{4}\Delta_{\overline{\pi}}\Delta_{p}(T(\overline{\pi},\pi)-p^{4 }\Delta_{p}t)\).
Proof.: For the proof, we rewrite the sum as follows:
\[\sum_{l_{1},l_{2}\geq 0}=\sum_{l_{1}=0,l_{2}\geq 2}+\sum_{l_{2}=0,l_{1}\geq 2}+\sum_{ l_{1},l_{2}\geq 1}\]
We know how to compute the first two sums by Proposition 6.15, so we will now deal with the last one. We rewrite this as
\[\sum_{l_{1},l_{2}\geq 1}\tilde{\psi}_{p^{l_{1}+l_{2}-2}}\mid\Lambda_{-}(p)K(X_{i}) \Lambda_{+}(p)\Lambda_{+}(\pi^{l_{1}-1})\Lambda_{+}(\overline{\pi}^{l_{2}-1}) Y_{1}^{l_{1}}Y_{2}^{l_{2}}X_{i}^{2}\]
But
\[\Lambda_{-}(p)K(X_{i})\Lambda_{+}(p)=p^{6}\Delta_{p}^{2}(1-K_{1}X_{i}+p^{4} \Delta_{p}X_{i}^{2})\]
as can be obtained from Table 1 or the relations written in [10]. Now, using 6.9, we get
\[\sum_{l_{1},l_{2}\geq 1}\tilde{\psi}_{p^{l_{1}+l_{2}-2}}\mid\Lambda_{+}(\overline {\pi}^{l_{1}-1})\Lambda_{+}(\overline{\pi}^{l_{2}-1})Y_{1}^{l_{1}}Y_{2}^{l_{2}} X_{i}^{2}=Y_{1}Y_{2}\sum_{l_{1},l_{2}\geq 0}\tilde{\psi}_{p^{l_{1}+l_{2}}}\mid\Lambda_{+}( \pi^{l_{1}-1})\Lambda_{+}(\overline{\pi}^{l_{2}})Y_{1}^{l_{1}}Y_{2}^{l_{2}}X_{i }^{2}=\]
\[=(1-p^{2k-5}Y_{1}Y_{2})Y_{1}Y_{2}X_{i}^{2}\frac{\tilde{\psi}_{1}\mid S_{ \overline{\pi}}(Y_{1})S_{\overline{\pi}}(Y_{2})}{D_{\overline{\pi},G}^{(2)}(Y_{ 1})D_{\pi,G}^{(2)}(Y_{2})}\]
Also, we have
\[\sum_{l_{1},l_{2}\geq 1}\tilde{\psi}_{p^{l_{1}+l_{2}-2}}\mid T(\pi, \overline{\pi})\Lambda_{+}(\pi^{l_{1}-1})\Lambda_{+}(\overline{\pi}^{l_{2}-1})Y_ {1}^{l_{1}}Y_{2}^{l_{2}}=\sum_{l_{1},l_{2}\geq 1}\tilde{\psi}_{p^{l_{1}+l_{2}-2}}\mid \Lambda_{+}(\pi^{l_{1}-1})T(\pi,\overline{\pi})\Lambda_{+}(\overline{\pi}^{l_{2 }-1})Y_{1}^{l_{1}}Y_{2}^{l_{2}}=\]
\[=Y_{1}Y_{2}\sum_{l_{1},l_{2}\geq 0}\tilde{\psi}_{p^{l_{1}+l_{2}}}\mid \Lambda_{+}(\pi^{l_{1}})T(\pi,\overline{\pi})\Lambda_{+}(\overline{\pi}^{l_{2 }})Y_{1}^{l_{1}}Y_{2}^{l_{2}}=\]
\[=Y_{1}Y_{2}\sum_{l_{1}\geq 0}\tilde{\psi}_{p^{l_{1}}}\mid\Lambda_{+}(\pi^{l_{1}}) T(\pi,\overline{\pi})Y_{1}^{l_{1}}+p^{2}Y_{1}Y_{2}\sum_{l_{1}\geq 0,l_{2}\geq 1} \tilde{\psi}_{p^{l_{1}+l_{2}}}\mid\Lambda_{+}(\pi^{l_{1}+1})\Lambda_{+}(\overline {\pi}^{l_{2}-1})\Delta_{\overline{\pi}}Y_{1}^{l_{1}}Y_{2}^{l_{2}}\]
\[=Y_{1}Y_{2}\frac{\tilde{\psi}_{1}\mid S_{\overline{\pi}}(Y_{1})T(\pi,\overline{\pi})}{D_ {\overline{\pi},G}^{(2)}(Y_{1})}+p^{2}Y_{2}^{2}\sum_{l_{1}\geq 0}\frac{\left[\tilde{\psi}_{p^{l_{1}+1}}- \tilde{\psi}_{p^{l_{1}}}\mid\Lambda_{-}(\overline{\pi})Y_{2}\right]\mid S_{ \pi}(Y_{2})\Lambda_{+}(\pi^{l_{1}+1})\Delta_{\overline{\pi}}Y_{1}^{l_{1}+1}}{D_ {\pi,G}^{(2)}(Y_{2})}\]
\[=Y_{1}Y_{2}\frac{\tilde{\psi}_{1}\mid S_{\overline{\pi}}(Y_{1})T(\pi,\overline{ \pi})}{D_{\overline{\pi},G}^{(2)}(Y_{1})}+p^{2}Y_{2}^{2}\left((1-p^{2k-5}Y_{1}Y_{2}) \frac{\tilde{\psi}_{1}\mid S_{\overline{\pi}}(Y_{1})S_{\pi}(Y_{2})\Delta_{ \overline{\pi}}}{D_{\pi,G}^{(2)}(Y_{1})D_{\pi,G}^{(2)}(Y_{2})}-\frac{\tilde{\psi}_{ 1}\mid S_{\pi}(Y_{2})\Delta_{\overline{\pi}}}{D_{\pi,G}^{(2)}(Y_{2})}\right)\]
and similarly for \(T(\overline{\pi},\pi)\).
**Lemma 6.20**.: We have
\[\sum_{l_{1},l_{2}\geq 0}\tilde{\psi}_{p^{l_{1}+l_{2}-1}}\mid T_{-}(p)K(X_{i}) \Lambda_{+}(\pi^{l_{1}})\Lambda_{+}(\overline{\pi}^{l_{2}})Y_{1}^{l_{1}}Y_{2}^ {l_{2}}=\]
\[(1-\alpha_{i}X)\times\]
\[\times\left[p^{2}\frac{\tilde{\psi}_{1}\mid S_{\pi}(Y_{2})T(\pi)\Delta_{\overline{\pi}}X_{i}Y_{2}}{D_{\pi,G}^{(2)}(Y_{2})}-p^{5}\frac{\left[\tilde{\psi}_{p}-\tilde{\psi}_{1}\mid\Lambda_{-}(\overline{\pi})Y_{2}\right]\mid S_{\pi}(Y_{2})\Delta_{p}\Delta_{\overline{\pi}}T_{+}(p)Y_{2}^{2}X_{i}^{2}}{D_{\pi,G}^{(2)}(Y_{2})}-p^{2k-4}\tilde{\psi}_{1}\mid T(\overline{\pi})Y_{2}X_{i}^{2}+\right.\]
\[\left.+\frac{1}{2}p^{2k-5}(1+p^{2k-4}X_{i}^{2})X_{i}\times\right.\]
\[\times\left[(1-p^{2k-5}Y_{1}Y_{2})\frac{\left[\tilde{\psi}_{p}-\tilde{\psi}_{1 }\mid\Lambda_{-}(\pi)Y_{1}\right]\mid S_{\overline{\pi}}(Y_{1})S_{\pi}(Y_{2} )T_{+}(p)Y_{1}Y_{2}}{D_{\pi,G}^{(2)}(Y_{1})D_{\pi,G}^{(2)}(Y_{2})}-\frac{ \tilde{\psi}_{1}\mid\Lambda_{-}(\overline{\pi})S_{\pi}(Y_{2})T_{+}(p)Y_{1}Y_{ 2}^{2}}{D_{\pi,G}^{(2)}(Y_{2})}+\right.\]
\[+(1-p^{2k-5}Y_{1}Y_{2})\frac{\left[\tilde{\psi}_{p}-\tilde{\psi}_{1}\mid \Lambda_{-}(\overline{\pi})Y_{2}\right]\mid S_{\pi}(Y_{2})S_{\overline{\pi}}( Y_{1})T_{+}(p)Y_{1}Y_{2}}{D_{\pi,G}^{(2)}(Y_{2})D_{\overline{\pi},G}^{(2)}(Y_{ 1})}-\frac{\tilde{\psi}_{1}\mid\Lambda_{-}(\pi)S_{\overline{\pi}}(Y_{1})T_{+ }(p)Y_{2}Y_{1}^{2}}{D_{\pi,G}^{(2)}(Y_{1})}\right]-\]
\[-X_{i}^{2}\left[p^{5}(1-p^{2k-5}Y_{1}Y_{2})\frac{\left[\tilde{\psi}_{p}-\tilde {\psi}_{1}\mid\Lambda_{-}(\pi)Y_{1}\right]\mid S_{\overline{\pi}}(Y_{1})S_{ \pi}(Y_{2})\Delta_{\overline{\pi}}\Delta_{p}T_{+}(p)Y_{2}^{2}}{D_{\pi,G}^{(2) }(Y_{1})D_{\pi,G}^{(2)}(Y_{2})}-p^{5}\frac{\tilde{\psi}_{p}\mid S_{\pi}(Y_{2} )T_{+}(p)\Delta_{\overline{\pi}}\Delta_{p}Y_{2}^{2}}{D_{\pi,G}^{(2)}(Y_{2})}\right.\]
\[\left.+p^{2k-4}Y_{2}\left(\frac{\tilde{\psi}_{1}\mid S_{\overline{\pi}}(Y_{1}) T(\overline{\pi})}{D_{\pi,G}^{2}(Y_{1})}-\tilde{\psi}_{1}\mid T(\overline{\pi}) \right)+\right.\]
\[\left.+p^{5}(1-p^{2k-5}Y_{1}Y_{2})\frac{\left[\tilde{\psi}_{p}-\tilde{\psi}_{1} \mid\Lambda_{-}(\overline{\pi})Y_{2}\right]\mid S_{\pi}(Y_{2})S_{\overline{ \pi}}(Y_{1})\Delta_{\pi}\Delta_{p}T_{+}(p)Y_{1}^{2}}{D_{\pi,G}^{(2)}(Y_{2})D_{ \overline{\pi},G}^{(2)}(Y_{1})}-p^{5}\frac{\tilde{\psi}_{p}\mid S_{\overline{ \pi}}(Y_{1})T_{+}(p)\Delta_{\pi}\Delta_{p}Y_{1}^{2}}{D_{\pi,G}^{(2)}(Y_{1})}\right.\]
\[\left.+p^{2k-4}Y_{1}\left(\frac{\tilde{\psi}_{1}\mid S_{\pi}(Y_{2})T(\pi)}{D_{ \pi,G}^{2}(Y_{2})}-\tilde{\psi}_{1}\mid T(\pi)\right)\right]\]
Proof.: If \(l_{1}=0\) or \(l_{2}=0\) then we know how to compute this by 6.16. So assume \(l_{1},l_{2}\geq 1\). Now,
\[T_{-}(p)\Lambda_{+}(\overline{\pi}^{l_{2}})\Lambda_{+}(\pi^{l_{1}})=p^{3}\Lambda _{+}(\overline{\pi}^{l_{2}-1})\Lambda_{+}(\pi^{l_{1}-1})T_{+}(p)\Delta_{p}\]
using that \(T_{-}(p)\Lambda_{+}(p)=p^{3}\Delta_{p}T_{+}(p)\). Hence, we have
\[\sum_{l_{1},l_{2}\geq 1}\tilde{\psi}_{p^{l_{1}+l_{2}-1}}\mid T_{-}(p)\Lambda_{+}( \pi^{l_{1}})\Lambda_{+}(\overline{\pi}^{l_{2}})Y_{1}^{l_{1}}Y_{2}^{l_{2}}=\]
\[=p^{3}\sum_{l_{i}\geq 1}\tilde{\psi}_{p^{l_{1}+l_{2}-1}}\mid\Lambda_{+}( \pi^{l_{1}-1})\Lambda_{+}(\overline{\pi}^{l_{2}-1})T_{+}(p)\Delta_{p}Y_{1}^{l_{1 }}Y_{2}^{l_{2}}=\]
\[=p^{3}\sum_{l_{1}\geq 1}\frac{\left[\tilde{\psi}_{p^{l_{1}}}-\tilde{\psi}_{p^{l_{1}-1}} \mid\Lambda_{-}(\overline{\pi})Y_{2}\right]\mid S_{\pi}(Y_{2})\Lambda_{+}(\pi^{l _{1}-1})T_{+}(p)\Delta_{p}Y_{1}^{l_{1}}Y_{2}}{D_{\pi,G}^{(2)}(Y_{2})}\]
\[=p^{3}\frac{\left[\tilde{\psi}_{p}-\tilde{\psi}_{1}\mid\Lambda_{-}(\pi)Y_{1} \right]\mid S_{\overline{\pi}}(Y_{1})S_{\pi}(Y_{2})T_{+}(p)\Delta_{p}Y_{1}Y_{2}}{ D_{\pi,G}^{(2)}(Y_{2})}-p^{3}\frac{\tilde{\psi}_{1}\mid\Lambda_{-}(\overline{\pi})S_{ \pi}(Y_{2})T_{+}(p)\Delta_{p}Y_{1}Y_{2}^{2}}{D_{\pi,G}^{(2)}(Y_{2})}\]
\[-p^{6}\frac{\left[\tilde{\psi}_{p}-\tilde{\psi}_{1}\mid\Lambda_{-}(\pi)Y_{1} \right]\mid S_{\overline{\pi}}(Y_{1})S_{\pi}(Y_{2})T_{+}(p)\Delta_{p}^{2}Y_{1}^{ 2}Y_{2}^{2}}{D_{\pi,G}^{(2)}(Y_{1})D_{\pi,G}^{(2)}(Y_{2})}=\]
\[=p^{3}(1-p^{2k-5}Y_{1}Y_{2})\frac{\left[\tilde{\psi}_{p}-\tilde{\psi}_{1}\mid \Lambda_{-}(\pi)Y_{1}\right]\mid S_{\overline{\pi}}(Y_{1})S_{\pi}(Y_{2})T_{+}(p )\Delta_{p}Y_{1}Y_{2}}{D_{\overline{\pi},G}^{(2)}(Y_{1})D_{\pi,G}^{(2)}(Y_{2}) }-p^{3}\frac{\tilde{\psi}_{1}\mid\Lambda_{-}(\overline{\pi})S_{\pi}(Y_{2})T_{+ }(p)\Delta_{p}Y_{1}Y_{2}^{2}}{D_{\pi,G}^{(2)}(Y_{2})}\]
We note here that the last expression is not (visibly) symmetric when we interchange \(\pi\leftrightarrow\overline{\pi}\). In order to make it symmetric, we also compute it with the series involving the operator \(\Lambda_{-}(\pi)\) summed first and average the two computations (which accounts for the factor \(\frac{1}{2}\) below); hence we can write
\[\sum_{l_{i}\geq 1}\tilde{\psi}_{p^{l_{1}+l_{2}-1}}\mid T_{-}(p)\Lambda_{+}( \pi^{l_{1}})\Lambda_{+}(\overline{\pi}^{l_{2}})Y_{1}^{l_{1}}Y_{2}^{l_{2}}=\]
\[=\frac{1}{2}p^{2k-5}\left[(1-p^{2k-5}Y_{1}Y_{2})\frac{\left[\tilde{\psi}_{p}- \tilde{\psi}_{1}\mid\Lambda_{-}(\pi)Y_{1}\right]\mid S_{\overline{\pi}}(Y_{1} )S_{\overline{\pi}}(Y_{2})T_{+}(p)Y_{1}Y_{2}}{D_{\pi,G}^{(2)}(Y_{1})D_{\pi,G}^ {(2)}(Y_{2})}-\frac{\tilde{\psi}_{1}\mid\Lambda_{-}(\overline{\pi})S_{\pi}(Y_ {2})T_{+}(p)Y_{1}Y_{2}^{2}}{D_{\pi,G}^{(2)}(Y_{2})}+\right.\]
\[+\left.(1-p^{2k-5}Y_{1}Y_{2})\frac{\left[\tilde{\psi}_{p}-\tilde{\psi}_{1} \mid\Lambda_{-}(\overline{\pi})Y_{2}\right]\mid S_{\pi}(Y_{2})S_{\overline{ \pi}}(Y_{1})T_{+}(p)Y_{1}Y_{2}}{D_{\pi,G}^{(2)}(Y_{2})D_{\overline{\pi},G}^{(2 )}(Y_{1})}-\frac{\tilde{\psi}_{1}\mid\Lambda_{-}(\pi)S_{\overline{\pi}}(Y_{1} )T_{+}(p)Y_{2}Y_{1}^{2}}{D_{\pi,G}^{(2)}(Y_{1})}\right]\]
Moreover, as in 6.18, we have that
\[K_{2}\Lambda_{+}(\overline{\pi}^{l_{2}})\Lambda_{+}(\pi^{l_{1}})=p^{4}\Delta_ {p}\Lambda_{+}(\pi)^{l_{1}}\Lambda_{+}(\overline{\pi}^{l_{2}})\]
and so
\[T_{-}(p)K_{2}\Lambda_{+}(\overline{\pi}^{l_{2}})\Lambda_{+}(\pi^{l_{1}})=p^{ 7}\Lambda_{+}(\overline{\pi}^{l_{2}-1})\Lambda_{+}(\pi^{l_{1}-1})T_{+}(p) \Delta_{p}^{2}\]
Finally, for the last one, we note \(K_{1}=T(\pi,\overline{\pi})+T(\overline{\pi},\pi)\). Now
\[\sum_{l_{1},l_{2}\geq 1}\tilde{\psi}_{p^{l_{1}+l_{2}-1}}\mid T_{-}(p)T(\pi, \overline{\pi})\Lambda_{+}(\overline{\pi}^{l_{2}})\Lambda_{+}(\pi^{l_{1}})Y_{ 1}^{l_{1}}Y_{2}^{l_{2}}=\]
\[=p^{2}\sum_{l_{1},l_{2}\geq 1}\tilde{\psi}_{p^{l_{1}+l_{2}-1}}\mid T_{-}(p) \Lambda_{+}(\pi^{l_{1}+1})\Lambda_{+}(\overline{\pi}^{l_{2}-1})\Delta_{\overline {\pi}}Y_{1}^{l_{1}}Y_{2}^{l_{2}}=\]
\[=p^{2}\sum_{l_{1}\geq 1,l_{2}=1}+p^{2}\sum_{l_{1}\geq 1,l_{2}\geq 2}\]
For the first sum we have
\[p^{2}\sum_{l_{1}\geq 1}\tilde{\psi}_{p^{l_{1}}}\mid T_{-}(p)\Lambda_{+}(\pi^{l_{1} +1})\Delta_{\overline{\pi}}Y_{1}^{l_{1}}Y_{2}=p^{4}Y_{2}\sum_{l_{1}\geq 1}\tilde{ \psi}_{p^{l_{1}}}\mid\Lambda_{+}(\pi^{l_{1}})T(\overline{\pi})\Delta_{p}Y_{1}^ {l_{1}}=\]
\[=p^{4}Y_{2}\sum_{l_{1}\geq 0}\tilde{\psi}_{p^{l_{1}}}\mid\Lambda_{+}(\pi^{l_{1} })T(\overline{\pi})\Delta_{p}Y_{1}^{l_{1}}-p^{4}Y_{2}\tilde{\psi}_{1}\mid T( \overline{\pi})\Delta_{p}=\]
\[=p^{2k-4}Y_{2}\left[\frac{\tilde{\psi}_{1}\mid S_{\overline{\pi}}(Y_{1})T( \overline{\pi})}{D_{\overline{\pi},G}^{2}(Y_{1})}-\tilde{\psi}_{1}\mid T( \overline{\pi})\right]\]
and for the second
\[p^{2}\sum_{l_{1}\geq 1,l_{2}\geq 2}\tilde{\psi}_{p^{l_{1}+l_{2}-1}}\mid T_{-}(p) \Lambda_{+}(\pi^{l_{1}+1})\Lambda_{+}(\overline{\pi}^{l_{2}-1})\Delta_{ \overline{\pi}}Y_{1}^{l_{1}}Y_{2}^{l_{2}}=\]
\[=p^{5}\sum_{l_{1}\geq 1,l_{2}\geq 2}\tilde{\psi}_{p^{l_{1}+l_{2}-1}}\mid \Lambda_{+}(\overline{\pi})^{l_{2}-2}\Lambda_{+}(\pi^{l_{1}})\Delta_{ \overline{\pi}}\Delta_{p}T_{+}(p)Y_{1}^{l_{1}}Y_{2}^{l_{2}}=\]
\[=p^{5}\sum_{l_{1}\geq 1}\frac{\left[\tilde{\psi}_{p^{l_{1}+1}}-\tilde{\psi}_{p^{l_{1}}} \mid\Lambda_{-}(\overline{\pi})Y_{2}\right]\mid S_{\pi}(Y_{2})\Lambda_{+}(\pi^{l _{1}})\Delta_{\overline{\pi}}\Delta_{p}T_{+}(p)Y_{2}^{2}Y_{1}^{l_{1}}}{D_{\pi,G}^ {(2)}(Y_{2})}\]
But
\[\sum_{l_{1}\geq 1}\tilde{\psi}_{p^{l_{1}+1}}\mid S_{\pi}(Y_{2})\Lambda_{+}(\pi^{l _{1}})\Delta_{\overline{\pi}}\Delta_{p}T_{+}(p)Y_{2}^{2}Y_{1}^{l_{1}}=\]
\[D_{(\epsilon_{1},\epsilon_{2},l_{2})}(s)=\alpha_{1}S_{1}-\alpha_{2}S_{2}\]
we have
\[D_{(\epsilon_{1},l_{2})}(s)=\frac{\alpha_{1}}{1-\alpha_{1}X}S_{1}-\frac{\alpha_ {2}}{1-\alpha_{2}X}S_{2}\]
so
\[D_{(\epsilon_{1},l_{2})}(s)-D_{(\epsilon_{1},\epsilon_{2},l_{2})}(s)=\frac{ \alpha_{1}^{2}X}{1-\alpha_{1}X}S_{1}-\frac{\alpha_{2}^{2}X}{1-\alpha_{2}X}S_{2}\]
We also recall that we have
\[D_{(\epsilon_{1},\epsilon_{2})}(s)=\alpha_{1}T_{1}-\alpha_{2}T_{2}\]
By now putting together the results of the last three subsections, we obtain the following Theorem:
**Theorem 6.21**.: _Let \(p=\pi\overline{\pi}\) be a split prime in \(\mathcal{O}_{K}\). Let \(F,G\in S_{2}^{k}\) and \(h\in S_{1}^{k}\) be Hecke eigenforms, with \(h\) having totally real Fourier coefficients, and \(F\) belonging to the Maass space. Let also \(\phi_{1},\psi_{1}\) be the first Fourier-Jacobi coefficients of \(F,G\) respectively, and \(X_{i}=\alpha_{i}p^{-(2k+s-4)}\), \(Y_{1}=\pi^{k}p^{-(2k+s-4)}\), \(Y_{2}=\overline{\pi}^{k}p^{-(2k+s-4)}\). We then have for \(\mathrm{Re}(s)\) large enough_
\[(\alpha_{1}-\alpha_{2})D_{F,G,h}^{(p)}(s)=\]
\[\frac{1}{Q_{p,G}^{(2)}(X_{1})}\langle\tilde{\phi},T(X_{1})\rangle_{\mathcal{A} }-\frac{1}{Q_{p,G}^{(2)}(X_{2})}\langle\tilde{\phi},T(X_{2})\rangle_{\mathcal{ A}}\]
_where_
\[T(X_{i}):=\alpha_{i}X_{i}p^{k-2}(1-p^{k-2}X_{i})\left[(1+p^{3k-8}X_{i}Y_{1}Y_{ 2})\frac{\tilde{\psi}_{1}\mid S_{\pi}(Y_{2})}{D_{\pi,G}^{(2)}(Y_{2})}-Y_{1} \frac{\tilde{\psi}_{1}\mid S_{\pi}(Y_{2})T(\pi)}{D_{\pi,G}^{(2)}(Y_{2})}+\right.\]
\[+(1+p^{3k-8}X_{i}Y_{1}Y_{2})\frac{\tilde{\psi}_{1}\mid S_{\overline{\pi}}(Y_{1})}{D_ {\overline{\pi},G}^{(2)}(Y_{1})}-Y_{2}\frac{\tilde{\psi}_{1}\mid S_{\overline{ \pi}}(Y_{1})T(\overline{\pi})}{D_{\overline{\pi},G}^{(2)}(Y_{1})}\Bigg{]}-\]
\[-\frac{1}{2}\alpha_{i}X_{i}p^{2k-5}Y_{1}Y_{2}(1-p^{k-2}X_{i})^{2}\left[(1-p^{2 k-5}Y_{1}Y_{2})\frac{\left[\tilde{\psi}_{p}-\tilde{\psi}_{1}\mid\Lambda_{-}(\pi)Y_{1} \right]\mid S_{\overline{\pi}}(Y_{1})S_{\pi}(Y_{2})T_{+}(p)}{D_{\overline{\pi },G}^{(2)}(Y_{1})D_{\pi,G}^{(2)}(Y_{2})}-\right.\]
\[\left.-\frac{\tilde{\psi}_{1}\mid\Lambda_{-}(\overline{\pi})S_{\overline{\pi}} (Y_{2})T_{+}(p)Y_{2}}{D_{\overline{\pi},G}^{(2)}(Y_{2})}+(1-p^{2k-5}Y_{1}Y_{2 })\frac{\left[\tilde{\psi}_{p}-\tilde{\psi}_{1}\mid\Lambda_{-}(\overline{\pi} )Y_{2}\right]\mid S_{\overline{\pi}}(Y_{2})S_{\overline{\pi}}(Y_{1})T_{+}(p)} {D_{\overline{\pi},G}^{(2)}(Y_{2})D_{\overline{\pi},G}^{(2)}(Y_{1})}-\right.\]
\[\left.-\frac{\tilde{\psi}_{1}\mid\Lambda_{-}(\pi)S_{\overline{\pi}}(Y_{1})T_{+ }(p)Y_{1}}{D_{\overline{\pi},G}^{(2)}(Y_{1})}\right]+\]
\[+\alpha_{1}(1-p^{2k-5}Y_{1}Y_{2})(1+p^{4k-9}Y_{1}Y_{2}X_{i}^{2})(1-p^{k-2}X_ {i})^{2}\frac{\tilde{\psi}_{1}\mid S_{\overline{\pi}}(Y_{1})S_{\overline{\pi} }(Y_{2})}{D_{\overline{\pi},G}^{(2)}(Y_{1})D_{\pi,G}^{(2)}(Y_{2})}\]
_where \(S_{\pi},S_{\overline{\pi}}\) are the polynomials defined in Proposition 6.6 and \(\Lambda_{-}(\pi),\Lambda_{-}(\overline{\pi}),T(\pi),T(\overline{\pi}),T_{+}(p)\) are the operators defined in subsection 6.2._
We finally have the following proposition about the relation of \(S_{\overline{\pi}}(Y_{1})S_{\pi}(Y_{2})\) with known \(L-\)functions.
**Proposition 6.22**.: Assume \(p=\pi\overline{\pi}\) is a split prime in \(\mathcal{O}_{K}\). We have
\[S_{\overline{\pi},F}(Y_{1})S_{\pi,F}(Y_{2})=L_{p}(s+k-2,f)L_{p}\left(s+k-2,f, \left(\frac{-4}{p}\right)\right)\]
where \(f\in S_{k-1}\left(\Gamma_{0}(4),\left(\frac{-4}{\cdot}\right)\right)\) is the modular form whose Maass lift is \(F\), as in 3.8.
Proof.: Let us first consider \(S_{\pi,F}(Y_{2})\). By assuming that
\[f\mid_{k-1}T(p)=a(p)f\]
and using Lemma 3.3 from [7] we obtain that
\[\tilde{\phi}_{1}\mid_{k}T(\overline{\pi})=p^{k-2}(\overline{\pi})^{-k}a(p)\tilde{\phi}_{1}\]
Using now the fact that \(Y_{2}=(\overline{\pi})^{k}p^{-(2k+s-4)}\) and that
\[S_{\pi}(Y_{2})=1-T(\overline{\pi})Y_{2}+p\Delta_{\overline{\pi}}T(\pi, \overline{\pi})Y_{2}^{2}\]
we get
\[S_{\pi,F}(Y_{2})=1-p^{-k-s+2}a(p)+p^{-k-2s+2}=L_{p}(s+k-2,f)\]
and similarly for \(S_{\overline{\pi}}(Y_{1})\). Given that \(\left(\frac{-4}{p}\right)=1\) in this case, the result follows.
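The exponent bookkeeping can be checked mechanically. Assuming the standard normalization of the Hecke polynomial of a weight \(k-1\) form with nebentypus \(\chi\), namely \(1-a(p)p^{-s}+\chi(p)p^{k-2-2s}\) (an assumption of ours, consistent with the display above), the substitution \(s\mapsto s+k-2\) with \(\chi(p)=1\) reproduces the expression obtained in the proof. A short SymPy sketch:

```python
import sympy as sp

p, s, k = sp.symbols('p s k', positive=True)
a = sp.Symbol('a')  # stands for the Hecke eigenvalue a(p)

# Hecke polynomial of a weight (k-1) form with chi(p) = 1 (assumed normalization).
hecke = 1 - a*p**(-s) + p**(k - 2 - 2*s)

# Shift s -> s + k - 2, as in L_p(s + k - 2, f):
shifted = hecke.subs(s, s + k - 2)
target = 1 - a*p**(-k - s + 2) + p**(-k - 2*s + 2)
print(sp.simplify(shifted - target))   # 0
```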
## 7. Euler Product
We can now use the above calculations in order to deduce the following theorem:
**Theorem 7.1**.: _With notation and assumptions as above, we have that the series \(D_{F,G,h}(s)\) has an Euler product of the form_
\[D_{F,G,h}(s)=4\beta_{k}\langle\tilde{\phi}_{1},\tilde{\psi}_{1}\rangle_{ \mathcal{A}}\prod_{p\text{ prime}}\frac{D_{F,G,h}^{(p)}(s)}{\langle\tilde{\phi}_{1}, \tilde{\psi}_{1}\rangle_{\mathcal{A}}}\]
_where we also define_
\[D_{F,G,h}^{(2)}(s):=\sum_{l,\epsilon,m\geq 0}\langle\tilde{\phi}_{1}|T_{-}(2^{m})U_{\pi^{l}},\tilde{\psi}_{2^{m+l}}\rangle_{\mathcal{A}}a_{2^{m+\epsilon}}2^{-sl}2^{-(k+s-1)\epsilon}2^{-(2k+s-4)m}\]
_with \(\pi:=(1+i)\), together with the condition \(\min(l,\epsilon)=0\)._
The proof of this theorem is the subject of this section. We need to distinguish the cases when \(p\) is inert or split in \(\mathbb{Z}[i]\). We have the following two propositions, whose proofs are essentially the same.
**Proposition 7.2**.: Let \(p\) be an inert prime. Let \(m^{\prime}\in\mathbb{N}\) and \(l^{\prime},\epsilon^{\prime}\in\mathbb{Z}[i]\) all relatively prime to \(p\). Then, we claim
\[\sum_{l,\epsilon,m\geq 0}\langle\tilde{\phi}_{1}|T_{-}(m^{\prime}p^{m})\Lambda_{-}(l^{\prime}p^{l}),\tilde{\psi}_{m^{\prime}N(l^{\prime})p^{m+2l}}\rangle_{\mathcal{A}}a_{m^{\prime}N(\epsilon^{\prime})p^{m+2\epsilon}}p^{-(3k+2s-8)l}p^{-2(k+s-1)\epsilon}p^{-(2k+s-4)m}=\]
\[=\langle\tilde{\phi}_{1}\mid T_{-}(m^{\prime})\Lambda_{-}(l^{\prime}),\tilde{\psi}_{m^{\prime}N(l^{\prime})}\rangle_{\mathcal{A}}a_{m^{\prime}N(\epsilon^{\prime})}\left(\frac{D^{(p)}_{F,G,h}(s)}{\langle\tilde{\phi}_{1},\tilde{\psi}_{1}\rangle_{\mathcal{A}}}\right)\]
**Proposition 7.3**.: Let \(p\) be a prime that splits in \(\mathbb{Z}[i]\). Let \(m^{\prime}\in\mathbb{N}\) and \(l^{\prime},\epsilon^{\prime}\in\mathbb{Z}[i]\) all relatively prime to \(p\) (or equivalently coprime to both \(\pi,\overline{\pi}\)). Then, we claim
\[\sum_{\begin{subarray}{c}l_{1},l_{2},\\ \epsilon_{1},\epsilon_{2},m\geq 0\end{subarray}}\langle\tilde{\phi}_{1}|T_{-}(m^{\prime}p^{m})\Lambda_{-}(l^{\prime}\pi^{l_{1}}\overline{\pi}^{l_{2}}),\tilde{\psi}_{m^{\prime}N(l^{\prime})p^{m+l_{1}+l_{2}}}\rangle_{\mathcal{A}}a_{m^{\prime}N(\epsilon^{\prime})p^{m+\epsilon_{1}+\epsilon_{2}}}p^{(4-2k)l_{1}}p^{(4-2k)l_{2}}\pi^{l_{1}k}\overline{\pi}^{l_{2}k}\times\]
\[\times p^{-s(l_{1}+l_{2})}p^{-(k+s-1)(\epsilon_{1}+\epsilon_{2})}p^{-(2k+s-4 )m}\]
\[=\langle\tilde{\phi}_{1}\mid T_{-}(m^{\prime})\Lambda_{-}(l^{\prime}),\tilde{\psi}_{m^{\prime}N(l^{\prime})}\rangle_{\mathcal{A}}a_{m^{\prime}N(\epsilon^{\prime})}\left(\frac{D^{(p)}_{F,G,h}(s)}{\langle\tilde{\phi}_{1},\tilde{\psi}_{1}\rangle_{\mathcal{A}}}\right)\]
Proof.: The proof proceeds analogously to the proofs of the results in sections 5 and 6.
The difference is the following:
Every time we have the term \(\tilde{\psi}_{1}\), we will instead have \(\tilde{\psi}_{m^{\prime}N(l^{\prime})}\mid T_{+}(m^{\prime})\Lambda_{+}(l^{\prime})\). This is because of the rationality theorems 6.8 and 6.9, as well as the fact that \(m^{\prime}N(l^{\prime})\) is coprime to \(p\) (so terms of the form \(\tilde{\psi}_{m^{\prime}N(l^{\prime})/p}\) vanish). Also, because of the coprimality, the operators \(T_{+}(m^{\prime}),\Lambda_{+}(l^{\prime})\) commute with every operator whose index is a power of \(p\), so we can move them around freely.
The other thing we need to consider is the terms of the form \(\tilde{\psi}_{p}\) in the original calculations.
Let us first deal with the inert case. These terms will translate into \(\tilde{\psi}_{pm^{\prime}N(l^{\prime})}\mid T_{+}(p)\), which can be dealt with in a similar way as before because we will have
\[\tilde{\psi}_{pm^{\prime}N(l^{\prime})}\mid T_{+}(p)=\lambda_{p}\tilde{\psi}_ {m^{\prime}N(l^{\prime})}\]
where \(\lambda_{p}\) is the eigenvalue of the operator \(T_{p}\in H_{p}^{2}\) acting on \(G\), i.e. \(G|_{k}T_{p}=\lambda_{p}G\). This is true because of the embedding
\[\epsilon(T_{p})=T_{+}(p)+T_{-}(p)\]
as in Proposition 5.3 and so we get
\[\tilde{\psi}_{m^{\prime}N(l^{\prime})}\mid\mid T(p)=\lambda_{p}\tilde{\psi}_ {m^{\prime}N(l^{\prime})}\]
and
\[\tilde{\psi}_{m^{\prime}N(l^{\prime})}\mid\mid T(p)=\tilde{\psi}_{pm^{\prime} N(l^{\prime})}\mid T_{+}(p)+\tilde{\psi}_{m^{\prime}N(l^{\prime})/p}\mid T_{-}(p)= \tilde{\psi}_{pm^{\prime}N(l^{\prime})}\mid T_{+}(p)\]
The same can be said for the split case as well. In this case, originally we have terms of the form
\[\tilde{\psi}_{p}\mid S_{\pi}(Y_{2})T_{+}(p),\ \tilde{\psi}_{p}\mid S_{\pi}(Y_{2})S_{ \overline{\pi}}(Y_{1})T_{+}(p),\ \tilde{\psi}_{1}\mid\Lambda_{-}(\overline{\pi})S_{\pi}(Y_{2})T_{+}(p)\]
(and the corresponding expressions for \(\overline{\pi}\)). But we now have the relations:
* \(S_{\pi}(Y_{2})T_{+}(p)=T_{+}(p)-\left[p^{2}\Delta_{\overline{\pi}}\Lambda_{+}( \pi)-\Lambda_{+}(\overline{\pi})T(\pi,\overline{\pi})+T_{+}(p)T(\overline{\pi}) \right]Y_{2}+\) \[+p^{2}\Delta_{\overline{\pi}}\Lambda_{+}(\pi)T(\overline{\pi})Y_{2}^{2}\]
* \(S_{\overline{\pi}}(Y_{1})S_{\pi}(Y_{2})T_{+}(p)=T_{+}(p)\left[1-T(\pi)Y_{1}+p^{ 3}\Delta_{p}Y_{1}Y_{2}\right]\left[1-T(\overline{\pi})Y_{2}\right]+\) \[+\Lambda_{+}(\pi)\left[T(\overline{\pi},\pi)Y_{1}-p^{2}\Delta_{ \overline{\pi}}Y_{2}\right]\left[1-T(\overline{\pi})Y_{2}\right]-\] \[-p^{2}\Delta_{\pi}Y_{1}\Lambda_{+}(\overline{\pi})\left[1-T(\pi)Y_{ 1}+p^{3}\Delta_{p}Y_{1}Y_{2}\right]\left[1-T(\overline{\pi})Y_{2}\right]+ \Lambda_{+}(\overline{\pi})S_{\overline{\pi}}(Y_{1})Y_{2}\]
* \(\Lambda_{-}(\overline{\pi})S_{\pi}(Y_{2})T_{+}(p)=p^{2}\Delta_{\overline{\pi}}T(\pi) -\left[p^{5}\Delta_{\overline{\pi}}\Delta_{p}-p\Delta_{\overline{\pi}}T(\pi, \overline{\pi})T(\overline{\pi},\pi)+p^{2}\Delta_{\overline{\pi}}T(\pi)T( \overline{\pi})\right]Y_{2}+\) \[+\,p^{5}\Delta_{\overline{\pi}}\Delta_{p}T(\overline{\pi})Y_{2}^{2}\]
So, if we know how to handle expressions of the form \(\tilde{\psi}_{p}\mid T_{+}(p),\Lambda_{+}(\pi),\Lambda_{+}(\overline{\pi})\), we obtain the same result. Now, we have by Lemma 6.4
\[\epsilon(T_{2})=T_{-}(p)+T_{+}(p)+T(\pi,\overline{\pi})+T(\overline{\pi},\pi)\]
and so we get
\[\tilde{\psi}_{m^{\prime}N(l^{\prime})}\mid T(p)=\tilde{\psi}_{m^{ \prime}N(l^{\prime})}\mid\left(T_{+}(p)+T_{-}(p)+T(\pi,\overline{\pi})+T( \overline{\pi},\pi)\right)=\] \[=\tilde{\psi}_{pm^{\prime}N(l^{\prime})}\mid T_{+}(p)+0+\tilde{ \psi}_{m^{\prime}N(l^{\prime})}\mid\left(T(\pi,\overline{\pi})+T(\overline{ \pi},\pi)\right)=\] \[=\tilde{\psi}_{pm^{\prime}N(l^{\prime})}\mid T_{+}(p)+2p^{k-3} \tilde{\psi}_{m^{\prime}N(l^{\prime})}\]
But \(\tilde{\psi}_{m^{\prime}N(l^{\prime})}\mid T(p)=\lambda_{p}^{G}\tilde{\psi}_{ m^{\prime}N(l^{\prime})}\), where \(\lambda_{p}^{G}\) is the eigenvalue of \(G\) corresponding to \(T_{2}\), and so we obtain
\[\tilde{\psi}_{pm^{\prime}N(l^{\prime})}\mid T_{+}(p)=(\lambda_{p}^{G}-2p^{k-3})\tilde{\psi}_{m^{\prime}N(l^{\prime})}\]
Moreover,
\[\epsilon(T_{1})=\Lambda_{-}(\overline{\pi})+T(\overline{\pi})+\Lambda_{+}( \overline{\pi})\]
so by a similar argument as above we get
\[\tilde{\psi}_{pm^{\prime}N(l^{\prime})}\mid\Lambda_{+}(\overline{\pi})= \lambda_{T_{1}}^{G}\tilde{\psi}_{m^{\prime}N(l^{\prime})}-\tilde{\psi}_{m^{ \prime}N(l^{\prime})}\mid T(\overline{\pi})\]
and similarly for \(\Lambda_{+}(\pi)\). Hence, we can proceed with the inner products. In every case we will have something of the form
\[\langle\tilde{\phi}_{1},\tilde{\psi}_{m^{\prime}N(l^{\prime})}\mid T_{+}(m^{ \prime})\Lambda_{+}(l^{\prime})\rangle_{\mathcal{A}}a_{m^{\prime}N(\epsilon^{ \prime})}\left(\frac{D^{(p)}_{F,G,h}(s)}{\langle\tilde{\phi}_{1},\tilde{\psi}_ {1}\rangle_{\mathcal{A}}}\right)\]
which is of the required form once we take the adjoints of the above operators.
The proof of Theorem 7.1 now follows from the above two propositions by working prime by prime and factoring the corresponding expression for each prime out of the initial Dirichlet series.
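The prime-by-prime factoring invoked here is the familiar mechanism by which a Dirichlet series with multiplicative coefficient data acquires an Euler product. Purely as a toy illustration (our own example, with a simple completely multiplicative function standing in for the coefficient data of the theorem), a truncated Euler product agrees numerically with the truncated series:

```python
from sympy import primerange

# Toy multiplicative data: a(n) = n (completely multiplicative), evaluated at s = 3.
# The Dirichlet series then factors as a product over primes of the local factors
# sum_m a(p^m) p^{-ms}, mirroring the prime-by-prime factoring used above.
s = 3.0
N = 50000
series = sum(n * n**(-s) for n in range(1, N + 1))

euler_product = 1.0
for p in primerange(2, N):
    r = p * p**(-s)                        # a(p) p^{-s}, the local ratio
    local = sum(r**m for m in range(60))   # truncated local factor
    euler_product *= local

print(abs(series - euler_product) < 1e-3)  # True up to truncation error
```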
|
2309.08317 | Investigation of mmWave Radar Technology For Non-contact Vital Sign
Monitoring | Non-contact vital sign monitoring has many advantages over conventional
methods in being comfortable, unobtrusive and without any risk of spreading
infection. The use of millimeter-wave (mmWave) radars is one of the most
promising approaches that enable contact-less monitoring of vital signs. Novel
low-power implementations of this technology promise to enable vital sign
sensing in embedded, battery-operated devices. The nature of these new
low-power sensors exacerbates the challenges of accurate and robust vital sign
monitoring and especially the problem of heart-rate tracking. This work focuses
on the investigation and characterization of three Frequency Modulated
Continuous Wave (FMCW) low-power radars with different carrier frequencies of
24 GHz, 60 GHz and 120 GHz. The evaluation platforms were first tested on
phantom models that emulated human bodies to accurately evaluate the baseline
noise, error in range estimation, and error in displacement estimation.
Additionally, the systems were also used to collect data from three human
subjects to gauge the feasibility of identifying heartbeat peaks and breathing
peaks with simple and lightweight algorithms that could potentially run in
low-power embedded processors. The investigation revealed that the 24 GHz radar
has the highest baseline noise level, 0.04 mm at 0° angle of incidence, and
an error in range estimation of 3.45 ± 1.88 cm at a distance of 60 cm. At the
same distance, the 60 GHz and the 120 GHz radar systems show the least noise
level, 0.01 mm at 0° angle of incidence, and error in range estimation 0.64
± 0.01 cm and 0.04 ± 0.0 cm respectively. Additionally, tests on humans
showed that all three radar systems were able to identify heart and breathing
activity but the 120 GHz radar system outperformed the other two. | Steven Marty, Federico Pantanella, Andrea Ronco, Kanika Dheman, Michele Magno | 2023-09-15T11:18:54Z | http://arxiv.org/abs/2309.08317v1 | # Investigation of mmWave Radar Technology For Non-contact Vital Sign Monitoring
###### Abstract
Non-contact vital sign monitoring has many advantages over conventional methods in being comfortable, unobtrusive and without any risk of spreading infection. The use of millimeter-wave (mmWave) radars is one of the most promising approaches that enable contact-less monitoring of vital signs. Novel low-power implementations of this technology promise to enable vital sign sensing in embedded, battery-operated devices. The nature of these new low-power sensors exacerbates the challenges of accurate and robust vital sign monitoring and especially the problem of heart-rate tracking. This work focuses on the investigation and characterization of three Frequency Modulated Continuous Wave (FMCW) low-power radars with different carrier frequencies of \(24\,\mathrm{GHz}\), \(60\,\mathrm{GHz}\) and \(120\,\mathrm{GHz}\). The evaluation platforms were first tested on phantom models that emulated human bodies to accurately evaluate the baseline noise, error in range estimation, and error in displacement estimation. Additionally, the systems were also used to collect data from three human subjects to gauge the feasibility of identifying heartbeat peaks and breathing peaks with simple and lightweight algorithms that could potentially run in low-power embedded processors. The investigation revealed that the 24 GHz radar has the highest baseline noise level, 0.04 mm at 0° angle of incidence, and an error in range estimation of \(3.45\pm 1.88\) cm at a distance of 60 cm. At the same distance, the 60 GHz and the 120 GHz radar systems show the least noise level, 0.01 mm at 0° angle of incidence, and errors in range estimation of \(0.64\pm 0.01\) cm and \(0.04\pm 0.0\) cm respectively. Additionally, tests on humans showed that all three radar systems were able to identify heart and breathing activity but the 120 GHz radar system outperformed the other two.
contactless vital sign monitoring, FMCW, mmWave, radar systems, biomedical systems, comparison
## I Introduction
Monitoring of vital signs, especially heart rate (HR) and respiratory rate (RR), is essential in modern clinical care [1]. Continuous monitoring of HR and RR can hugely benefit the management of patients by early detection of health deterioration. Commonly used systems for the measurement of cardiovascular parameters, such as the electrocardiogram (ECG) or the photoplethysmograph (PPG), require contact with the body [2]. Attaching multiple electrodes or placing optical transducers makes current monitoring methods for HR and RR uncomfortable and obtrusive, and is not suitable in many application scenarios. For instance, contact monitoring is not a viable option in certain patient population groups, such as the pediatric, the geriatric, and those with wounds or burns. This is due to the added risk of iatrogenic injury or infection via cross-contamination. Hence, methods for non-contact monitoring of HR and RR are actively pursued [3]. Non-contact optical solutions have been developed using RGB cameras [4], but these are affected by ambient conditions, such as lighting, temperature and skin colour. Millimeter-wave radar technology enables promising and emerging approaches for non-contact vital signs monitoring, circumventing the most pertinent challenges of traditional systems. Due to the non-contact nature of measurement, there is no risk of infection or injury. Previous work with radars has shown the feasibility of using such systems for contact-less vital sign monitoring [3]. Depending on the transmitted signal, different radar technologies are defined, such as Continuous Wave (CW), Frequency Modulated Continuous Wave (FMCW), Pulse Coherent Radar (PCR), etc. Today's technology allows the manufacturing of low-power and accurate FMCW radars, which can enable a new class of non-contact energy-efficient devices. These devices allow measuring distances and displacements with sub-mm precision, sufficient to capture the small displacements caused by respiration and heart activity, as shown in Figure 1. On the other hand, many challenges remain open; in particular, HR monitoring with radars has not yet been fully implemented and evaluated for different subject phenotypes or application scenarios. Moreover, previous work does not present an investigation and evaluation of the effect of using different carrier frequencies.
In this paper, we carefully and accurately characterize novel low-power FMCW radars with three different carrier frequencies of \(24\,\mathrm{GHz}\), \(60\,\mathrm{GHz}\) and \(120\,\mathrm{GHz}\). We investigate the choice of carrier frequency and chirp bandwidth to achieve the best performance for vital sign monitoring, given the differences in displacement and range resolution. The major contributions of this paper are: 1) characterisation of the baseline noise, range estimation, and displacement error for radars with different carrier frequencies; 2) feasibility of measuring and estimating HR and RR in three human subjects with different FMCW radars; and 3) evaluation and discussion of the advantages of the characterized radars and especially the three different carrier frequencies.
Fig. 1: Vital-sign measurement principle of FMCW radars, figure from [5].
## II Related Work
Various studies have sought to determine the most suitable radar system for vital sign monitoring. Depending on the specific vital sign (HR, RR, Tidal Volume or Heart Rate Variability) different radar-based technologies might be appropriate [6]. Munoz-Ferreras et al. [7] and Giordano et al. [8] emphasize the advantages of FMCW radars over other radar technologies for RR and HR. However, it is uncertain which frequency range is optimal. A higher carrier frequency leads to shallower skin penetration, but the penetration depth decreases only slightly, from \(1\,\mathrm{mm}\) to \(0.5\,\mathrm{mm}\), between \(24\,\mathrm{-}100\,\mathrm{GHz}\)[9]. Most approaches use radars with frequencies below \(24\,\mathrm{GHz}\)[10], but recent research has shown that higher frequencies provide improved measurements [3]. Novel systems with larger bandwidths at higher frequencies such as \(77\,\mathrm{GHz}\)[11] and \(120\,\mathrm{GHz}\)[12] showed promising results. A study on low power radars between \(2\,\mathrm{GHz}\)-\(16\,\mathrm{GHz}\) showed that for a given power, an increase in frequency results in a higher sensitivity to small movements [13]. This suggests that a comparison of higher frequency ranges (\(>16\,\mathrm{GHz}\)) within the same radar technology is necessary to obtain the most power-efficient non-contact vital sign monitoring platform.
Estimating the HR and RR from the displacement signal involves different signal processing steps. Respiration causes a large displacement of the chest and hence, relatively simple signal processing techniques like the Fast Fourier Transform (FFT) are used to identify the respiratory rate [14]. However, determining HR is more complex due to the much smaller chest displacements (about one order of magnitude below those caused by respiration) [15] and the confounding effect of breathing rate harmonics. Previous work has used empirical mode decomposition (EMD) [16] or Random Body Motion Cancellation algorithms [11] to tackle these problems. However, prior investigations have not evaluated the raw radar signal measured at different carrier frequencies, which might necessitate computationally intensive post-processing to extract vital sign estimations.
For an initial evaluation, phantoms that mimic human chest movement to characterize different radars have been used. This has been implemented through the use of a metallic pendulum [17] or a vibrating metal plate [18]. The former is restricted to single-frequency movements, whereas the latter can do multi-frequency movements to simulate both the heart and the breathing displacement together. Hence, in this work, a metal plate is used for the evaluation of the characterized radar systems.
## III Methodology
To have a fair comparison of how different carrier frequencies affect the signal quality for vital sign monitoring, we evaluate three FMCW radar modules of the same family from Infineon Technologies: BGT24, BGT60, and BGT120. All radar systems are designed for low-power applications, with an estimated average power consumption of \(8\,\mathrm{mW}\) (sensor only) in our setting at \(100\,\mathrm{chirps/s}\). The main difference between the three systems is the frequency of the carrier signal (\(24\,\mathrm{GHz}\), \(60\,\mathrm{GHz}\) and \(120\,\mathrm{GHz}\)) and the chirp bandwidth \(B\), which is reported in Table I.
### _Setup and Configuration_
For each radar system, one transmitter antenna and one receiving antenna are used. In order to have a fair comparison, all radars were configured with a sampling rate \(F_{c}\) of \(2\,\mathrm{MHz}\) and \(n=128\) samples are acquired for each chirp. A chirp is generated every \(10\,\mathrm{ms}\).
The slope of the chirp is defined as \(S=B/T_{c}\), with \(T_{c}=n/F_{c}\). The range bin resolution is defined as \(R=\frac{c}{2B}\), where \(c\) is the speed of light. The max range is the product of the number of samples per chirp and the bin resolution: \(n\cdot R\).
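As a quick sanity check of these relations, the short Python sketch below computes the chirp slope, range-bin resolution and maximum range for the three platforms from the configuration given above and the bandwidths in Table I. This is only an illustrative calculation, not part of the evaluation software; the constant names and print formatting are our own.

```python
# Sketch: chirp slope, range-bin resolution and maximum range for the three radars,
# using the configuration described above (Fc = 2 MHz, n = 128 samples per chirp).
C = 3e8          # speed of light [m/s]
FC = 2e6         # ADC sampling rate [Hz]
N = 128          # samples per chirp
T_C = N / FC     # chirp duration [s] (64 us)

radars = {"BGT24": 2e9, "BGT60": 5e9, "BGT120": 10e9}  # chirp bandwidth B [Hz]

for name, bandwidth in radars.items():
    slope = bandwidth / T_C          # S = B / Tc  [Hz/s]
    range_bin = C / (2 * bandwidth)  # R = c / (2B) [m]
    max_range = N * range_bin        # n * R [m]
    print(f"{name}: S = {slope:.3e} Hz/s, R = {range_bin * 100:.1f} cm, "
          f"max range = {max_range:.2f} m")
```

Running it reproduces the 7.5 cm, 3 cm and 1.5 cm range-bin sizes reported in Table I.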
To evaluate the system performance, a phantom model has been developed to simulate breathing and heart rate pulses. The device is composed of a square aluminium frame and a flat, movable steel plate with a surface area of \(400\,\mathrm{cm}^{2}\), comparable to the size of an adult human's chest. The plate can be moved with high precision by a servo motor controlled by a microcontroller.
Fixed amplitudes for displacement are chosen that are comparable to the displacement amplitude of the chest during cardiopulmonary activity according to [15]. The usual range of heartbeat oscillation on the chest is \(0.1\,\mathrm{mm}\)-\(0.4\,\mathrm{mm}\). Hence, a displacement of \(0.08\,\mathrm{mm}\) was chosen as a lower bound for the minimum oscillation caused by a heartbeat. A displacement of \(0.3\,\mathrm{mm}\) amplitude is in the usual range of heartbeat vibration, and a \(1.2\,\mathrm{mm}\) displacement was set as a reference for breathing oscillations, which are in the range of \(1\,\mathrm{mm}\)-\(10\,\mathrm{mm}\). All these displacements are implemented on the phantom with an oscillation frequency of \(0.5\,\mathrm{Hz}\) and a step size of \(0.4\,\mathrm{\SIUnitSymbolMicro m}\).
The metal plate has high reflectance in the investigated frequency range of \(24\,\mathrm{-}120\,\mathrm{GHz}\). Hence, the radars are evaluated in a controlled ideal environment tailored for maximum reflectance from the target object. In addition to the bare metal plate phantom, a variant of it was also developed to better emulate the reflectance of the human body. For this setup, the metal plate was covered with EMI-absorbing foam [19], rated for the range of \(5\,\mathrm{GHz}\)-\(90\,\mathrm{GHz}\), minimizing the reflections from the metal plate. On top of the foam, a \(3\,\mathrm{cm}\) layer of gelatinous material prepared according to [20] was placed. The phantom with gelatin can be seen in Figure 2.
\begin{table}
\begin{tabular}{c|c c c c} Radar & \(F_{start}\) & \(F_{end}\) & \(B\) & \(R\) \\ \hline BGT24 & \(23\,\mathrm{GHz}\) & \(25\,\mathrm{GHz}\) & \(2\,\mathrm{GHz}\) & \(7.5\,\mathrm{cm}\) \\ BGT60 & \(58\,\mathrm{GHz}\) & \(63\,\mathrm{GHz}\) & \(5\,\mathrm{GHz}\) & \(3\,\mathrm{cm}\) \\ BGT120 & \(116\,\mathrm{GHz}\) & \(126\,\mathrm{GHz}\) & \(10\,\mathrm{GHz}\) & \(1.5\,\mathrm{cm}\) \\ \end{tabular}
\end{table} TABLE I: Specification of the characterized radar systems
The three radar systems are mounted on acrylic support and held by a tripod, which allows adjusting the position of the radar antennas to be pointed towards the centre of the phantom. In order to better characterize the displacement, measurements were compared with the Polytec Scanning Vibrometer 500 (PSV500) [21], which provided the ground truth on displacement. The PSV500 can capture vibrational velocities of \(0.01\,\mathrm{\SIUnitSymbolMicro m}\,\mathrm{s}^{-1}\) to \(30\,\mathrm{m}\,\mathrm{s}^{-1}\) with integrated conversion to displacement.
### _Signal Processing_
The task of extracting vital signs from radar data requires complex signal processing. Since the scope of this paper is a hardware characterization, a previously published pipeline, which is conducive to low-power systems, for HR and RR estimation has been used [22] and illustrated in Figure 3.
For every chirp, we calculate the spectrum of the raw incoming chirp signal, often named Intermediate Frequency (IF) signal. This is done with the FFT, often named Range FFT. A high-magnitude peak in the Range FFT provides the range (distance) of the target. The conversion from frequency \(f_{IF}\) to distance \(d\) is given by:
\[d=\frac{c\,f_{IF}}{2S} \tag{1}\]
where \(S\) is the slope of the chirp. From the identified target bin, the phase of the signal is extracted, unwrapped and transformed to measure the small displacements of the target. Equation 2 converts the phase change \(\Delta\phi\) into displacement \(\Delta d\), where \(\lambda\) is the wavelength, indicating the influence of the carrier frequency on the displacement resolution. For higher frequencies, the same displacement is represented by a larger angular change.
\[\Delta d=\frac{\lambda*\Delta\phi}{4*\pi} \tag{2}\]
The displacement signal is band-pass filtered for the heart beat signal between \(0.7\,\mathrm{Hz}\)-\(2\,\mathrm{Hz}\) and for the respiratory signal between \(0.1\,\mathrm{Hz}\)-\(0.5\,\mathrm{Hz}\). These frequency bands correspond to 42-120 bpm and 6-30 breaths per minute, which is a reasonable range for these vital signs [22]. For the purpose of this system characterisation, the heart and respiration rates are obtained from the frequency with the peak amplitude in the FFT of the band-pass filtered signals.
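To make the pipeline of Figure 3 concrete, the sketch below strings the main steps together (range FFT, target-bin selection, conversion via Eq. 1 and Eq. 2, band-pass filtering and peak picking) using NumPy/SciPy. It is a simplified illustration rather than the reference implementation of [22]; the array layout of `chirps`, the filter order and the omission of windowing and DC-offset/clutter compensation (cf. [14]) are simplifying assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def estimate_rates(chirps, f_carrier, slope, fs_adc=2e6, frame_rate=100.0):
    """chirps: (num_chirps, n_samples) array of real-valued IF samples.
    Returns (target distance [m], HR [bpm], RR [breaths/min]).
    Simplified sketch of the pipeline in Fig. 3."""
    c = 3e8
    lam = c / f_carrier                          # carrier wavelength

    # 1) Range FFT per chirp and target-bin selection (largest average magnitude).
    spectrum = np.fft.rfft(chirps, axis=1)
    target_bin = np.argmax(np.abs(spectrum).mean(axis=0))
    f_if = target_bin * fs_adc / chirps.shape[1]
    distance = c * f_if / (2 * slope)            # Eq. (1)

    # 2) Phase of the target bin over slow time -> displacement (Eq. (2)).
    phase = np.unwrap(np.angle(spectrum[:, target_bin]))
    displacement = lam * phase / (4 * np.pi)

    # 3) Band-pass filtering and spectral peak picking for HR and RR.
    def peak_freq(signal, low, high):
        b, a = butter(2, [low, high], btype="band", fs=frame_rate)
        filtered = filtfilt(b, a, signal)
        spec = np.abs(np.fft.rfft(filtered))
        freqs = np.fft.rfftfreq(len(filtered), d=1.0 / frame_rate)
        return freqs[np.argmax(spec)]

    hr_bpm = peak_freq(displacement, 0.7, 2.0) * 60.0
    rr_brpm = peak_freq(displacement, 0.1, 0.5) * 60.0
    return distance, hr_bpm, rr_brpm
```

Note that with a 100 chirps/s frame rate, a 20 s or 60 s observation window gives the spectral peak picking a resolution of roughly 3 bpm or 1 bpm, respectively.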
## IV Experiments
We designed a set of experiments to evaluate the behaviour of the different radar systems for range estimation, displacement estimation, and overall precision in the task of vital sign monitoring. For the first set of experiments, both phantom models are compared, while vital signs are estimated with data from three human subjects.
### _Range Estimation_
The accuracy of range estimation for all three systems with static targets was evaluated. The phantom was placed at distances of \(30\,\mathrm{cm}\), \(40\,\mathrm{cm}\), \(50\,\mathrm{cm}\), and \(60\,\mathrm{cm}\), and the accuracy and precision with different radars were measured for \(60\,\mathrm{s}\). The distances were chosen to allow for a fair comparison of all three systems while keeping the same settings. Longer distances would exceed the maximum range for BGT120, making the comparison with this sensor impossible. Every recording was divided into 12 sub-intervals of \(5\,\mathrm{s}\) each, where the distance to the target was estimated as the average of the detected ranges for the chirps in the sub-intervals. The ranges are selected by choosing the largest amplitude bin in the Range FFT. Since the three radar systems offer different bandwidths, the corresponding size of the range bins is also different. In order to make the comparison fair, zero padding was introduced into the Range FFT calculation to reach a theoretical range precision of \(0.157\,\mathrm{cm}\) for all three devices.
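Since the range-bin size after zero padding follows directly from \(R=c/(2B)\) scaled by the padding factor, the required FFT length per device can be derived as in the short sketch below. The 0.157 cm target comes from the text; rounding up to the next integer, rather than to a power of two, is an assumption made here for illustration.

```python
import math

C, N_SAMPLES = 3e8, 128
TARGET_BIN = 0.157e-2   # desired common range precision [m]

for name, bandwidth in {"BGT24": 2e9, "BGT60": 5e9, "BGT120": 10e9}.items():
    # After zero-padding the n-sample IF signal to n_fft points, the range-bin
    # size becomes c * n / (2 * B * n_fft); solve for n_fft.
    n_fft = math.ceil(C * N_SAMPLES / (2 * bandwidth * TARGET_BIN))
    print(f"{name}: zero-padded FFT length of about {n_fft} points")
```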
### _Baseline Noise at Fixed Position_
The variance of the phase for a static target was evaluated, which defines the baseline noise of the radar system. In this experiment, the phantom was static and placed at a distance of \(50\,\mathrm{cm}\). The displacement was extracted from the corresponding bin in the range spectrum. The effect of the orientation of the radars with respect to the target was also evaluated, considering incidence angles of \(0^{\circ}\), \(30^{\circ}\), and \(60^{\circ}\). For each experiment, \(20\,\mathrm{s}\) of data were acquired.
### _Displacement Estimation_
To evaluate the accuracy and precision of the displacement estimation, the phantom was set to oscillate with amplitudes of \(0.08\,\mathrm{mm}\), \(0.3\,\mathrm{mm}\), and \(1.2\,\mathrm{mm}\) (as described in section III-A) with a constant oscillation frequency of \(0.5\,\mathrm{Hz}\) for \(20\,\mathrm{s}\). The phase from the largest-magnitude bin in the range spectrum is extracted, unwrapped and transformed into displacement.
Fig. 3: Signal processing pipeline from raw data acquisition to HR and RR estimation
Fig. 2: Evaluation setup, featuring the radar systems, the phantom (with gelatin) and the laser vibrometer.
The ground truth of the displacement is given by the simultaneous measurement of the laser vibrometer, which eliminates slight errors caused by the mechanical imperfections of the phantom model. Finally, the displacement error is reported as the difference in the peak-to-peak distance measurements between the radar system and the laser vibrometer.
### _Performance on Human Subjects_
Three male human subjects were recorded twice for two minutes using all three radar systems placed 50 cm away. The HR and RR are estimated with the processing pipeline explained in Section III-B. The ground truth was provided by the Polar H10, which is equipped with an ECG sensor and an accelerometer to measure the breathing pattern. The raw ECG signal of the Polar H10 was processed with the Python library HeartPy [23] to derive the HR ground truth. The RR was extracted based on the accelerometer data, also acquired by the Polar H10 belt. Feasibility of vital sign monitoring is shown by providing RR and HR estimations along with the Mean Absolute Error (MAE) with respect to the ground truth from the Polar belt.
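A minimal sketch of this evaluation protocol is given below: radar-derived HR estimates over sliding 60 s windows (shifted by 1 s, as used in Section V-D) are compared against HeartPy-derived HR from the Polar H10 ECG and summarized as the MAE. The sampling rates (the ECG stream is assumed here to be 130 Hz), the window parameters and the absence of any signal-quality checks are simplifying assumptions, not the exact evaluation code of this study.

```python
import numpy as np
import heartpy as hp
from scipy.signal import butter, filtfilt

def radar_hr(displacement, fs):
    # Heart-rate estimate (bpm) from one window of the radar displacement signal.
    b, a = butter(2, [0.7, 2.0], btype="band", fs=fs)
    spec = np.abs(np.fft.rfft(filtfilt(b, a, displacement)))
    freqs = np.fft.rfftfreq(len(displacement), d=1.0 / fs)
    return freqs[np.argmax(spec)] * 60.0

def hr_mae(radar_disp, ecg, fs_radar=100, fs_ecg=130, win_s=60, hop_s=1):
    """MAE between radar-derived HR and HeartPy HR from the Polar H10 ECG,
    computed over 60 s windows shifted by 1 s."""
    total_s = min(len(radar_disp) / fs_radar, len(ecg) / fs_ecg)
    errors = []
    for k in range(max(int((total_s - win_s) // hop_s) + 1, 0)):
        t0 = k * hop_s
        disp_win = radar_disp[int(t0 * fs_radar):int((t0 + win_s) * fs_radar)]
        ecg_win = ecg[int(t0 * fs_ecg):int((t0 + win_s) * fs_ecg)]
        _, measures = hp.process(ecg_win, sample_rate=fs_ecg)   # ground truth
        errors.append(abs(radar_hr(disp_win, fs_radar) - measures["bpm"]))
    return float(np.mean(errors))
```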
## V Experimental Results
### _Range Estimation and Noise_
The errors in estimating the range of the phantom at different distances are shown in Table II and summarized over all distances in a boxplot in Figure 4. The BGT24 radar system overestimates the distance by about \(6\,\mathrm{cm}\) on average for both phantom models. The BGT60 and BGT120 are more accurate in range estimation with a low (\(<\!1\) cm) error and variance for the configuration with the metal plate. For the BGT60 and the BGT120, the gelatin increases the variance of the range measurement and also increases the mean error of the range estimation. This is expected due to the less ideal reflectance properties of the material.
### _Baseline Noise at Fixed Position_
The baseline noise in each radar system for the two phantom models is given in Table III. The baseline noise recorded from the gelatin phantom is generally slightly higher than that of the metal due to lowered reflectance from the former. Also, the noise levels increase as the angle incidence increases. This can be attributed to ambient sources of noise being captured by the radar from the surroundings. Additionally, the BGT24 has a higher baseline noise than the BGT60 and BGT120 for all angles of incidence and for both phantom models.
### _Displacement Estimation_
Table IV shows the measured displacement of the three radars with respect to ground truth (GT) given by the laser vibrometer. It can be seen that the BGT24 has consistently larger errors in displacement estimation. The BGT60 and the BGT120 radar systems show similar errors, however, in the gelatin model, the BGT120 performs better than the BGT60 for the sub-millimetre displacements. For the BGT60 and BGT120 systems, DC offset removal [14] was used right after the range FFT to reduce the influence of clutter from the reflections of static objects. For the BGT24 system, this technique cannot be used because the small displacements of vital signs (in the range \(0.1\,\mathrm{mm}\)-\(1\,\mathrm{mm}\)) are too low when compared to the wavelength of the carrier (\(\lambda=12.5\) mm), which makes it difficult to estimate the DC correctly.
Figure 6 shows that all three systems can track the displacement of the phantom, marked in black. The BGT24 shows a higher noise level, which is congruent with the results on baseline noise in Section V-B.
\begin{table}
\begin{tabular}{c|c|c c c} Phantom & Range & BGT24 & BGT60 & BGT120 \\ \hline \multirow{3}{*}{Metal} & \(30\,\mathrm{cm}\) & \(6.25\pm 0.2\) & \(0.49\pm 0.01\) & \(0.02\pm 0.0\) \\ & \(40\,\mathrm{cm}\) & \(5.53\pm 0.18\) & \(-0.55\pm 0.01\) & \(-0.87\pm 0.01\) \\ & \(50\,\mathrm{cm}\) & \(5.67\pm 0.3\) & \(0.14\pm 0.01\) & \(-0.65\pm 0.01\) \\ & \(60\,\mathrm{cm}\) & \(3.45\pm 1.88\) & \(0.64\pm 0.01\) & \(0.04\pm 0.0\) \\ \hline \multirow{3}{*}{Gelatin} & \(30\,\mathrm{cm}\) & \(6.02\pm 0.2\) & \(1.25\pm 0.04\) & \(0.05\pm 0.13\) \\ & \(40\,\mathrm{cm}\) & \(7.2\pm 3.2\) & \(-0.21\pm 0.05\) & \(-0.01\pm 0.72\) \\ \cline{1-1} & \(50\,\mathrm{cm}\) & \(6.3\pm 0.76\) & \(0.64\pm 0.11\) & \(2.32\pm 0.06\) \\ \cline{1-1} & \(60\,\mathrm{cm}\) & \(5.0\pm 0.18\) & \(3.13\pm 0.01\) & \(4.07\pm 0.04\) \\ \end{tabular}
\end{table} TABLE II: Range estimation error in centimetres.
\begin{table}
\begin{tabular}{c|c|c c c} Phantom & Angle & BGT24 & BGT60 & BGT120 \\ \hline \multirow{3}{*}{Metal} & \(0^{\circ}\) & 0.015 & 0.004 & 0.001 \\ & \(30^{\circ}\) & 0.059 & 0.001 & 0.001 \\ & \(60^{\circ}\) & 0.044 & 0.021 & 0.348 \\ \hline \multirow{3}{*}{Gelatin} & \(0^{\circ}\) & 0.040 & 0.001 & 0.001 \\ & \(30^{\circ}\) & 0.060 & 0.001 & 0.001 \\ \cline{1-1} & \(60^{\circ}\) & 2.988 & 0.031 & 3.796 \\ \end{tabular}
\end{table} TABLE III: Baseline noise, expressed as the variance of the phase-derived displacement, in millimeters.
Fig. 4: Error in measured range for the radars with centre frequencies of 24 GHz, 60 GHz and 120 GHz
\begin{table}
\begin{tabular}{c|r|r|r r r} Phantom & Target \(\Delta\) & Laser & BGT24 & BGT60 & BGT120 \\ \hline \multirow{3}{*}{Metal} & 1.2 & 1.187 & 0.038 & 0.018 & 0.028 \\ & 0.3 & 0.318 & 0.055 & 0.010 & 0.010 \\ & 0.08 & 0.079 & 0.020 & 0.047 & 0.004 \\ \hline \multirow{3}{*}{Gelatin} & 1.2 & 1.187 & 0.071 & 0.040 & 0.059 \\ & 0.3 & 0.313 & 0.120 & 0.033 & 0.013 \\ \cline{1-1} & 0.08 & 0.079 & 0.026 & 0.019 & 0.015 \\ \end{tabular}
\end{table} TABLE IV: Estimated Peak-Peak Displacement absolute error in millimeters
### _Performance on Human Subjects_
A comparison of the measured displacements from the three radars is visualized in Figure 5. On the left, the band-passed signal for the heart rate is shown, where the peaks that appear within a \(150\,\mathrm{ms}\) window from the actual ECG beat (ground truth) are marked in green. On the right, the breathing signal is shown, alongside the corresponding Polar acceleration signal. For this time window, all the radars seem to perform similarly, however, this is not always the case, as we show below.
For the complete comparison of all recordings, the HR and RR are estimated for windows of \(60\,\mathrm{s}\), which are shifted by one second. The estimations together with the ground truth are visualized in Figure 7. Generally, the data suggests that the BGT120 performs the best followed by BGT24 and BGT60. BGT120 achieves an MAE for the HR of \(0.4\,\mathrm{bpm}\) (beats per minute) with a standard deviation (\(\sigma\)) of \(1\,\mathrm{bpm}\), whereas the MAE for the BGT24 and the BGT60 is \(4\,\mathrm{bpm}\) (\(\sigma=7\)) and \(6\,\mathrm{bpm}\) (\(\sigma=7\)) respectively. As visible in Figure 7, all three radars are able to accurately track the HR on some recordings, showing promise for all technologies for this task. However, for some subjects and recordings, the estimation can be quite off, indicating that this basic algorithm is not sufficiently robust. Specifically, as mentioned in [3], this algorithm can easily misinterpret the harmonics of the breathing pattern for heart rate. These issues can be tackled with advanced algorithms, which are not in the scope of this work.
For the RR, all the radars perform similarly with an MAE \(<1\) breath per minute (brpm). This result is expected, as the displacement caused by breathing is around one order of magnitude greater than the one of the heartbeat [15]. Only the BGT24 has some noticeable outliers for Subject 2.
## VI Conclusions
This work characterizes three novel and low-power FMCW radar systems operating at frequencies of \(24\,\mathrm{GHz}\), \(60\,\mathrm{GHz}\) and \(120\,\mathrm{GHz}\) to evaluate the feasibility of contactless HR and RR monitoring and the benefits of the different frequencies. The performance of the radars was evaluated on two phantom systems to estimate the range of the target object, baseline noise, and error in displacement estimation. Phantom experiments indicate that the higher frequency (\(60\,\mathrm{GHz}\) and \(120\,\mathrm{GHz}\)) systems, which also provide higher bandwidth configurations, show lower error in range and displacement estimation than the \(24\,\mathrm{GHz}\) radar system. One reason lies in the frequency-distance (Eq. 1) and phase-displacement (Eq. 2) relationships. First, larger bandwidths allow for finer distance resolution and therefore lower error. Secondly, the same displacement is represented by different IF signal phase changes \(\Delta\phi\) depending on the carrier frequency. For higher frequencies, the corresponding phase change is larger and therefore easier to measure. Thus, a wider bandwidth with a higher carrier frequency is beneficial for displacement measurements in the heartbeat range with a high SNR. Furthermore, the three systems were also tested on human subjects to investigate the performance in measuring the HR and RR with low-complexity signal processing such as the FFT. Tests on three subjects showed that the low-power radars could identify the breathing and heart activity patterns. The \(120\,\mathrm{GHz}\) radar system was most accurate in estimating the HR (\(0.4\,\mathrm{bpm}\)) and RR (\(<1\) brpm). Hence, a low-power (\(8\,\mathrm{mW}\)) radar system was able to achieve high accuracy with a computationally lightweight algorithm for HR and RR estimation. The results are comparable to other state-of-the-art non-contact instruments such as cameras (2-8 bpm RMSE for HR and 1-4 brpm RMSE for RR) without the drawbacks of being affected by lighting or skin colour [4].
Fig. 5: Heart-rate signal with correct and wrongly identified peaks (left) and Respiration signal (right) for all three radar systems.
Fig. 6: Measured displacement of amplitude \(0.3\,\mathrm{mm}\), comparison between the radar systems and the laser vibrometer.
In the future, research needs to focus on evaluating these low-power radars on a larger sample set to develop algorithms for accurate and precise HR and RR measurement that account for inter-subject variability, due to muscle, adiposity, breast tissue, or body hair on the chest. Additionally, investigating greater distances between the radar and the subject could help in optimising the radar systems for different applications.
|
2301.01277 | Consumer acceptance of the use of artificial intelligence in online
shopping: evidence from Hungary | The rapid development of technology has drastically changed the way consumers
do their shopping. The volume of global online commerce has significantly been
increasing partly due to the recent COVID-19 crisis that has accelerated the
expansion of e-commerce. A growing number of webshops integrate Artificial
Intelligence (AI), state-of-the-art technology into their stores to improve
customer experience, satisfaction and loyalty. However, little research has
been done to verify the process of how consumers adopt and use AI-powered
webshops. Using the technology acceptance model (TAM) as a theoretical
background, this study addresses the question of trust and consumer acceptance
of Artificial Intelligence in online retail. An online survey in Hungary was
conducted to build a database of 439 respondents for this study. To analyse
data, structural equation modelling (SEM) was used. After the respecification
of the initial theoretical model, a nested model, which was also based on TAM,
was developed and tested. The widely used TAM was found to be a suitable
theoretical model for investigating consumer acceptance of the use of
Artificial Intelligence in online shopping. Trust was found to be one of the
key factors influencing consumer attitudes towards Artificial Intelligence.
Perceived usefulness as the other key factor in attitudes and behavioural
intention was found to be more important than the perceived ease of use. These
findings offer valuable implications for webshop owners to increase customer
acceptance | Szabolcs Nagy, Noemi Hajdu | 2022-12-26T09:03:28Z | http://arxiv.org/abs/2301.01277v1 | # Consumer Acgetance of the Use of Artificial Intelligence in Online Shopping: Evidence from Hungary
###### Abstract
The rapid development of technology has drastically changed the way consumers do their shopping. The volume of global online commerce has significantly been increasing partly due to the recent COVID-19 crisis that has accelerated the expansion of e-commerce. A growing number of webshops integrate Artificial Intelligence (AI), state-of-the-art technology into their stores to improve customer experience, satisfaction and loyalty. However, little research has been done to verify the process of how consumers adopt and use AI-powered webshops. Using the technology acceptance model (TAM) as a theoretical background, this study addresses the question of trust and consumer acceptance of Artificial Intelligence in online retail. An online survey in Hungary was conducted to build a database of 439 respondents for this study. To analyse data, structural equation modelling (SEM) was used. After the respecification of the initial theoretical model, a nested model, which was also based on TAM, was developed and tested. The widely used TAM was found to be a suitable theoretical model for investigating consumer acceptance of the use of Artificial Intelligence in online shopping. Trust was found to be one of the key factors influencing consumer attitudes towards Artificial Intelligence. Perceived usefulness as the other key factor in attitudes and behavioural intention was found to be more important than the perceived ease of use. These findings offer valuable implications for webshop owners to increase customer acceptance.
consumer acceptance, artificial intelligence, online shopping, AI-powered webshops, technology acceptance model, trust, perceived usefulness, perceived ease of use, attitudes, behavioural intention, Hungary
L81, M31, O30
## Introduction
The rapid development of digital technology has changed online shopping (Daley, 2018). In recent years, the use of Artificial Intelligence (AI) in online commerce has increased, since AI is an excellent tool to meet rapidly changing consumer demand and to increase sales efficiency. The global spending by retailers on AI services is expected to quadruple and reach $12 billion by 2023, and over 325,000 retailers will adopt AI technology (Maynard, 2019).
Smidt and Power (2020) claimed that online product research has significantly increased over the past years. USA's largest online retailer, Amazon, is the exemplary case of how to effectively integrate AI into online retail. Besides the rich assortment, fast delivery and competitive prices, a more localised shopping journey can be created. Thus Amazon can use location-specific pricing and send destination-specific messages to its customers, who will pay in their local currency (Barmada, 2020).
Novel marketing techniques supported by new technologies, including the use of AI systems spark the proliferation of new marketing methods to effectively reach target consumers and to offer enhanced consumer experiences (Pusztahelyi, 2020). Pursuant to Asling (2017), the use of AI in online shopping makes customer-centric search and a new level of personalisation possible resulting in a more efficient sales process. Information technology (IT) has changed the nature of company-customer relationships (Rust and Huang, 2014). However, any technology-driven transformation is based on trust (Pricewaterhouse Coopers, 2018).
Online retailers need more in-depth insight into how consumers perceive and accept the use of AI in webshops and how much they trust them. They also need to know how to use AI most effectively to increase online spending and online purchase frequency since the importance of time and cost efficiency in shopping has recently become more and more critical. In this regard, online shopping means a convenient way for customers to buy the desired products.
So far, only a few researchers have addressed the question of trust and consumer acceptance of AI in online retail. Based on the technology acceptance model (TAM), this study aims to fill this research gap and proposes an integrated theoretical framework of consumers' acceptance of AI-powered webshops. Further objectives of this paper are to investigate the relationships between the elements of TAM; to analyse the effects of trust, perceived usefulness and perceived ease of use on attitudes and behavioural intention.
After reviewing the use of AI in online shopping, this paper discusses the role of trust in online shopping and presents the technology acceptance model. The next section deals with the research methodology, including the research questions, hypotheses and the sample. In the results and discussion section, the validity and reliability of the model, as well as the model fit are presented. Hypothesis testing, detailed analysis of the relationships between the elements of the nested model, and comparison of the results with the previous research findings are also discussed here before the conclusions sections.
## 1 Literature review
According to IBM's U.S. Retail Index, COVID-19 has sped up the shift from traditional shopping to online purchasing by circa five years (Haller, Lee and Cheung, 2020). Due to the pandemic situation, there is an increased demand for AI in the retail industry (Meticulous Market Research, 2020).
### The use of AI in online shopping
AI systems are a set of software and hardware that can be used to continuously assess and analyse data to characterise environmental factors and to determine decisions and actions (European Commission, 2018). Prior research mainly focused on the advantages of the use of AI in online settings and failed to address how consumers accept AI in online retail. According to utility theory, this new technology helps consumers to find and choose the best product alternatives, while decreasing the search cost and search time (Pricewaterhouse Coopers, 2018), thus increasing utility (Stigler, 1961; Bakos, 1977; Stigler and Becker, 1977; Andre et al., 2017; Lynch and Ariely, 2000). AI filters the information for each target customer and provides exactly what is needed (Paschen, Wilson and Ferreira, 2020). AI supports automating business processes, gains insight through data analysis, and engages with customers and employees (Davenport and Ronanki, 2018).
Artificial intelligence is widely used to increase the efficiency of marketing (Kwong, Jiang, and Luo, 2016) and retail (Weber and Schutte, 2019) and to automate marketing (Dumitriu and Popescu, 2020). AI-powered online stores provide their customers with automated assistance during the consumer journey (Yoo, Lee and Park, 2010; Pantano and Pizzi, 2020). This is a great advantage, especially for elderly people, who are often averse to technical innovations.
Consumers' online information search and product selection habits can be better understood by AI to offer a more personalised shopping route (Rust and Huang, 2014). It is a great opportunity for online shops to analyse the profile of existing and potential customers and thereby suggest tailor-made marketing offerings for them (Onete, Constantinescu and Filip, 2008). AI also makes the contact with both the customers and the employees continuous and interactive. Frequently asked questions (FAQs) regarding the products, product-use and ordering process can be automated by a chatbot. New sales models use automated algorithms to recommend unique, personalised marketing offerings, thus increasing customer satisfaction and engagement. To sum up the advantages, AI systems operate automatically and analyse big data in real-time to interpret and shape consumer behavioural patterns to offer products and services in a personalised way, thus enhancing the shopping experience.
However, AI systems also have some disadvantages. They work most effectively with big data; therefore, the implementation of AI systems requires huge investments (Roetzer, 2017).
### The role of trust in online shopping
Trust is of great importance in online commerce. According to Kim, Ferrin and Rao (2008), consumer confidence has a positive effect on a consumer's intention to buy. The higher the consumer trust in an online shop is, the more likely the consumer will be to go through the buying process. Trust is especially crucial when the customer perceives a financial risk.
Thatcher et al. (2013) identified two types of trust: general and specific trust. General trust concerns the e-commerce environment, consumer beliefs about and attitudes towards it. Specific trust is related to the shopping experience in a specific virtual store. Confidence can be enhanced through interactive communication between the retailer and the buyer by using appropriate product descriptions and images to reduce the perceived risk. As stated in Cotoiu et al. (2014) there is a strong negative correlation between perceived risks and trust. According to Reichheld and Schefter (2000, p. 107), "price does not rule the Web; trust does".
Aranyossy and Magiszrrak (2016) found that a higher level of e-commerce trust was associated with more frequent online shopping. However, when shopping online, customers do not necessarily notice that a website uses AI tools (Daley, 2018).
All things considered, AI marks a new era in online sales. However, continuous technological development such as the use of AI-powered websites divides society, as there are those who accept novelty while others reject it.
### Technology Acceptance Model (TAM)
Consumers' adaptation to new technologies can be explained by several models. Dhagarra, Goswami and Kumar (2020) summarised them as follows: (1) Theory of Reasoned Action (TRA) by Fishbein and Ajzen (1975); (2) Theory of Planned Behaviour (TPB) by Ajzen (1985); (3) Technology Acceptance Model (TAM) by Davis (1986); (4) Innovation Diffusion Theory (IDT) by Rajagopal (2002); (5) Technology Readiness Index (TRI) by Parasuraman, (2000); and (6) Unified Theory of Acceptance and Use of Technology (UTAUT) by Venkatesh, et al. (2003).
Technology acceptance model (TAM), an extension of (TRA), is one of the most widely-used theoretical models (Venkatesh, 2000) to explain why an IT user accepts or rejects information technology and to predict IT user behaviour (Legris, Ingham, and Collerette, 2003). The original TAM contains six elements: external variables, perceived usefulness, perceived ease of use, attitude, behavioural intention to use and actual use. According to TAM, external variables have a direct influence on perceived usefulness (PU) and perceived ease of use (PEU), i.e. the two cognitive belief components. Perceived ease of use directly influences PU and attitude, whereas perceived usefulness has a direct impact on attitude and behavioural intention to use, which affects actual use (Figure no. 1).
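For readers who want to reproduce a TAM-style analysis, the structural and measurement relations described above (extended with a trust construct, as discussed in the previous subsection) can be specified in a few lines using the open-source semopy package for SEM in Python. This is only an illustrative sketch: the construct names, the item columns (peu1, ..., bi2) and the file responses.csv are placeholders, not the actual instruments or data of this study.

```python
import pandas as pd
import semopy

# Measurement model (latent constructs measured by Likert items) and the
# structural TAM relations described above, extended with a trust construct.
TAM_DESC = """
PEU =~ peu1 + peu2 + peu3
PU  =~ pu1 + pu2 + pu3
TR  =~ tr1 + tr2 + tr3
ATT =~ att1 + att2 + att3
BI  =~ bi1 + bi2
PU  ~ PEU + TR
ATT ~ PU + PEU + TR
BI  ~ ATT + PU
"""

survey = pd.read_csv("responses.csv")   # hypothetical file of 7-point Likert answers
model = semopy.Model(TAM_DESC)
model.fit(survey)
print(model.inspect())                  # estimated path coefficients and p-values
```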
Ha and Stoel (2008) examined the factors affecting customer acceptance of online shopping and found that perceived ease of use, perceived trust and perceived shopping enjoyment had the greatest impact on customer acceptance. Ease of use, trust and shopping enjoyment had a significant impact on perceived usefulness; trust, shopping enjoyment, and usefulness
had a significant effect on attitude towards online shopping. They also found that attitude and perceived usefulness had an influential role in consumer intention to purchase online.
According to Vijayasarathy (2004), there is a positive association between consumer attitude towards online shopping and the beliefs concerning usefulness, compatibility, security and ease of use. Also, the intention to purchase online is strongly influenced by consumer beliefs about online shopping, self-efficacy and attitude. Surprisingly, no positive relationship between purchasing intention and consumer beliefs about the usefulness of online shopping was reported (Vijayasarathy, 2004). Gefen, Karahanna and Straub (2003) found that perceived usefulness and perceived ease of use influence consumer repurchase intention.
It must be noted that Schepman and Rodway (2020) expressed some criticisms about the applicability of TAM to measure attitudes towards AI. According to them, it is the online retailers that can decide to integrate AI into webshops, and consumers have no choice but to use it when shopping online in such stores. Therefore, traditional technology acceptance models might not be ideal to measure attitudes towards AI. However, we are convinced that consumers still have the free will to decide whether to use new technology, i.e. to shop online in an AI-powered webshop, or not.
## 2 Methodology and research questions
### Methodology
The constructs and the measurement instruments presented in Table no. 1 were developed based on the literature review, and according to the Technology Acceptance Model. Variables with asterisk and in _italics_ were adapted from Park (2009), the others were adapted from Hu and O'Brien (2016). However, each variable was modified by the authors to make it possible to measure the perceived role of AI in online shopping.
For data collection, a questionnaire made up of 26 questions (variables) was used (Table no. 1). Additionally, six demographic variables - gender, education, age, occupation, place of residence and internet subscription - were also included in the survey. All measurement instruments listed in Table no. 1, except for the demographic variables, were measured on a seven-point Likert scale ranging from strongly disagree (1) to strongly agree (7).
In the very first section of the questionnaire, respondents were provided with a detailed explanation of AI-powered webshops and shopping apps, which are online stores where shopping is supported by artificial intelligence. AI-powered webshops present personalised product/service offerings based on previous search patterns and purchases that we made before, and automatically display products that AI chooses for us. Also, AI offers similar products to those that were originally viewed but were not available in the right size (product recommendation based on visual similarity). Another typical sign of an AI-powered webshop is that when the customer is leaving the web store, AI warns about the products left in the cart, to complete the purchase. AI-powered webshops often use chatbots, i.e. a virtual assistant is available if the customer has any questions, and visual (image-based) search is also possible: after uploading a product picture, AI recommends the most similar ones to that. Virtual changing rooms, voice recognition and automatic search completion are also available in AI-powered webshops such as Amazon, e-Bay, Alibaba, AliExpress, GearBest, eMAG.hu, PICland.hu, Ecipo, Bonprix, Answear, Reserved, Fashiondays, Fashionup, Spartoo, Orsay, to mention just a few. |
2309.15420 | The Triad of Failure Modes and a Possible Way Out | We present a novel objective function for cluster-based self-supervised
learning (SSL) that is designed to circumvent the triad of failure modes,
namely representation collapse, cluster collapse, and the problem of invariance
to permutations of cluster assignments. This objective consists of three key
components: (i) A generative term that penalizes representation collapse, (ii)
a term that promotes invariance to data augmentations, thereby addressing the
issue of label permutations and (ii) a uniformity term that penalizes cluster
collapse. Additionally, our proposed objective possesses two notable
advantages. Firstly, it can be interpreted from a Bayesian perspective as a
lower bound on the data log-likelihood. Secondly, it enables the training of a
standard backbone architecture without the need for asymmetric elements like
stop gradients, momentum encoders, or specialized clustering layers. Due to its
simplicity and theoretical foundation, our proposed objective is well-suited
for optimization. Experiments on both toy and real world data demonstrate its
effectiveness | Emanuele Sansone | 2023-09-27T05:54:14Z | http://arxiv.org/abs/2309.15420v1 | # The Triad of Failure Modes and a Possible Way Out
###### Abstract
We present a novel objective function for cluster-based self-supervised learning (SSL) that is designed to circumvent _the triad of failure modes_, namely representation collapse, cluster collapse, and the problem of invariance to permutations of cluster assignments. This objective consists of three key components: (i) a generative term that penalizes representation collapse, (ii) a term that promotes invariance to data augmentations, thereby addressing the issue of label permutations, and (iii) a uniformity term that penalizes cluster collapse. Additionally, our proposed objective possesses two notable advantages. Firstly, it can be interpreted from a Bayesian perspective as a lower bound on the data log-likelihood. Secondly, it enables the training of a standard backbone architecture without the need for asymmetric elements like stop gradients, momentum encoders, or specialized clustering layers. Due to its simplicity and theoretical foundation, our proposed objective is well-suited for optimization. Experiments on both toy and real-world data demonstrate its effectiveness.
## 1 Background
**Model**. Let us introduce the random quantities used in the model shown in Figure 1: (i) \(x\in\Omega\), where \(\Omega\) is a compact subset of \(\mathbb{R}^{d}\), represents a data vector drawn independently from an unknown distribution \(p(x)\) (for instance an image), (ii) \(x^{\prime}\in\Omega\) represents a transformed version of \(x\) using a stochastic data augmentation strategy \(\mathcal{T}(x^{\prime}|x)\) (obtained, for instance, by adding noise to or cropping the original image), and (iii) \(y\in\{1,\ldots,c\}\) is the symbolic representation of an input data point defined over \(c\) categories (namely the cluster defined over the embedding representation). The corresponding probabilistic graphical model is given in Figure 1. The
Figure 1: Probabilistic graphical model for cluster-based SSL. \(i\) is used to index different training instances, i.e. \(i=1,\ldots,n\).
generative process (solid arrows) is defined using the following conditional densities, namely: \(p(x^{\prime}|x,\xi)=\mathcal{T}(x^{\prime}|x)\) and \(p(y|x)=\text{Softmax}(out(proj(enc(x))))\), where \(enc:\Omega\rightarrow\mathbb{R}^{h}\) is an encoder used to compute the latent representation, \(proj:\mathbb{R}^{h}\rightarrow\mathcal{S}^{h-1}\) is a projector head used to compute the embedding representation, and \(out\) computes the cosine similarity between the embedding representation and the column vectors of a matrix of parameters \(U\in\mathbb{R}^{h\times c}\) known as the cluster centers/prototypes [2]. The inference process (dashed arrow) is defined as \(q(y|x)=\text{SK}(out(proj(enc(x^{\prime}))))\), viz. a distribution over cluster/prototype assignments obtained through the Sinkhorn-Knopp algorithm (SK). Please refer to [2] for additional details.
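For reference, the Sinkhorn-Knopp normalization used to compute the soft assignments \(q(y|x^{\prime})\) can be sketched in a few lines of NumPy, as below. The temperature `eps`, the number of iterations and the exact normalization order are illustrative choices and may differ from the implementation in [2].

```python
import numpy as np

def sinkhorn_knopp(scores, eps=0.05, n_iters=3):
    """Minimal sketch of the Sinkhorn-Knopp step used to compute q(y|x').
    scores: (n, c) similarities to the c prototypes; returns an (n, c) matrix
    whose rows are per-sample cluster distributions with (approximately)
    balanced cluster usage."""
    q = np.exp(scores / eps)
    q /= q.sum()
    n, c = q.shape
    for _ in range(n_iters):
        q /= q.sum(axis=0, keepdims=True)  # enforce uniform cluster marginals
        q /= c
        q /= q.sum(axis=1, keepdims=True)  # enforce uniform sample marginals
        q /= n
    return q * n  # each row sums to 1, i.e. a distribution over clusters
```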
**Objective**. The training objective is based on an evidence lower bound on the negative entropy, derived from the probabilistic graphical model of Figure 1(a), namely:
\[\mathbb{E}_{p(x_{1:n})}\{\log p(x_{1:n};\Theta)\} =-H_{p}(x_{1:n})+\mathbb{E}_{p(x_{1:n})\mathcal{T}(x^{\prime}_{1: n}|x_{1:n})}\left\{\log\sum_{y_{1:n}}p(y_{1:n}|x_{1:n};\Theta)\right\} \tag{1}\] \[\geq-H_{p}(x_{1:n})+\underbrace{\sum_{i=1}^{n}\mathbb{E}_{p(x_{i })\mathcal{T}(x^{\prime}_{i}|x_{i})}\left\{\mathbb{E}_{q(y_{i}|x^{\prime}_{i} )}\log p(y_{i}|x_{i};\Theta)+H_{q}(y_{i}|x^{\prime}_{i})\right\}}_{\text{ Discriminative term $\mathcal{L}_{DI}(\Theta)$}} \tag{2}\]
where \(H_{q}(y|x^{\prime})\) is the entropy computed over \(q(y|x^{\prime})\) and \(\Theta\) includes all parameters of the encoder, projector head and the output layer of the discriminative model. Intuitively, the first addend in \(\mathcal{L}_{DI}(\Theta)\) in Eq. 2 forces the symbolic representations of the input data and its augmented version to be similar, whereas the second addend enforces uniformity on the cluster assignments, so as to avoid that all representations collapse to a single cluster. It is important to mention that the objective in Eq. 2 is general enough to cover several proposed criteria in the literature of cluster-based self-supervised learning (cf. [15, 14]), such as DeepCluster [1], SwAV [2] and DINO [3].
## 2 Objective Function and The Triad of Failure Modes
We devise a new lower bound for cluster-based SSL which avoids introducing asymmetries in the optimization procedure and in the discriminative backbone. We theoretically analyze the properties of the different loss terms involved in the GEDI instantiation with respect to important failure modes.
We are ready to state the following proposition (the proof can be found in Appendix A of the Supplementary Material):
**Proposition 2.1**.: _Eq. (1) can be lower bounded by the following quantity:_
\[-H_{p}(x_{1:n})\underbrace{-\sum_{i=1}^{n}\mathbb{E}_{p(x_{i})\mathcal{T}(x^{ \prime}_{i}|x_{i})}\left\{CE(p(y_{i}|x^{\prime}_{i};\Theta),\,p(y_{i}|x_{i}; \Theta))\right\}}_{\mathcal{L}_{INV}(\Theta)}-\underbrace{\sum_{i=1}^{n}CE\left( p(y_{i}),\,q(y_{i})\right)}_{\mathcal{L}_{PRIOR}(\Theta)} \tag{3}\]
_with \(q(y)=\frac{1}{n}\sum_{j=1}^{n}p(y_{j}=y|x_{j};\Theta)\) and \(CE\) the cross-entropy loss. Additionally, the corresponding maximum value for the last two addends in Eq. (3) is given by the following inequality:1_
Footnote 1: Here, we assume that the predictive model \(p(y|x;\Theta)\) has enough capacity to achieve the optimal solution.
\[\mathcal{L}_{INV}(\Theta)+\mathcal{L}_{PRIOR}(\Theta)\leq-\,H_{p}(y_{1:n}) \tag{4}\]
The above proposition has interesting implications. First of all, by maximizing the discriminative term \(\mathcal{L}_{INV}(\Theta)\) with respect to \(\Theta\), we enforce two properties, namely: (i) label invariance, as we ensure that the predictive distributions of the discriminative model for a sample and its augmented version match each other and (ii) confident predictions, as maximizing the cross-entropy forces also to decrease the entropy of these distributions.2 Secondly, by choosing a uniform prior, viz. \(p(y_{i})=\text{Uniform}(\{1,\ldots,c\})\), and by maximizing \(\mathcal{L}_{PRIOR}(\Theta)\) with respect to \(\Theta\), we ensure to obtain a balanced cluster assignment, typical of approaches based on optimal transport objectives and corresponding surrogates [1, 2, 4]. Finally, the proposed lower bound allows for an important key difference over existing cluster-based SSL, as we don't need to introduce asymmetries in the discriminative backbones. Indeed, we note that cluster-based SSL, specifically SwAV, assume \(p(y|x;\Theta)=\text{Softmax}(U^{T}g(x)/\tau)\) and \(q(y|x^{\prime})=\text{Sinkhorn}(\text{StopGrad}(U^{T}g(x^{\prime})/\tau))\), where Sinkhorn and StopGrad are two operators performing the Sinkhorn-Knopp algorithm and stopping the gradients, respectively. In contrast, we require that \(q(y|x)=p(y|x;\Theta)=\text{Softmax}(f(enc(x))/\tau)\), where \(f:\mathbb{R}^{h}\rightarrow\mathbb{R}^{c}\) is a simple discriminative network head.
Footnote 2: Indeed, recall that \(CE(p,q)=H_{p}+KL(p\|q)\). Therefore, maximizing \(-CE(p,q)\) forces to have both \(KL(p\|q)=0\) and \(H_{p}=0\).
Additionally, we lower bound the first addend in Eq. 3 by exploiting the inequality \(-H_{p}(x_{1:n})\geq-CE(p,p_{\Theta})\), and obtain the overall objective, called GEDI (aka GEnerative DIscriminative objective):
\[\mathbb{E}_{p(x_{1:n})}\{\log p(x_{1:n};\Theta)\}\geq\underbrace{\mathcal{L} _{GEN}(\Theta)}_{\text{GEnerative term }-CE(p,\,p_{\Theta})}+\underbrace{\mathcal{L}_{ INV}(\Theta)+\mathcal{L}_{PRIOR}(\Theta)}_{\text{DIscriminative terms}} \tag{5}\]
Importantly, we can reinterpret the discriminative model \(p(y|x;\Theta)=\frac{p(y,x;\Theta)}{p(x;\Theta)}\) as an energy-based generative model \(p_{\Theta}=p(x;\Theta)\), similarly to what is done in the context of supervised learning [8, 10], namely:
\[p(y,x;\Theta)=\frac{e^{f_{y}(enc(x))/\tau}}{\Gamma(\Theta)}\qquad p_{\Theta} \doteq p(x;\Theta)=\frac{\sum_{y=1}^{c}e^{f_{y}(enc(x))/\tau}}{\Gamma(\Theta) }=\frac{e^{\log\sum_{y=1}^{c}e^{f_{y}(enc(x))/\tau}}}{\Gamma(\Theta)} \tag{6}\]
Training is performed by simply maximizing the lower bound in Eq 5. We leave detailed discussion about the training and its computational requirements to Appendix B in the Supplementary Material. We are now ready to analyze the properties of the GEDI objective.
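As a concrete illustration, the PyTorch sketch below computes mini-batch estimates of the two discriminative terms of Eq. (5) and the unnormalized log-density of Eq. (6). The generative term \(\mathcal{L}_{GEN}\), which requires sampling from the energy-based model (e.g. via SGLD), is deliberately omitted; the network `f_enc`, the temperature value and the batch-level estimate of \(q(y)\) are illustrative simplifications rather than the exact training code.

```python
import torch
import torch.nn.functional as F

def unnormalized_log_px(f_enc, x, tau=0.1):
    # log p(x; Theta) up to the constant -log Gamma(Theta), cf. Eq. (6)
    return torch.logsumexp(f_enc(x) / tau, dim=1)

def gedi_discriminative_loss(f_enc, x, x_aug, tau=0.1):
    """Returns -(L_INV + L_PRIOR) for a mini-batch, to be minimized.
    f_enc maps a batch of inputs to c cluster logits (no asymmetric elements)."""
    p = F.softmax(f_enc(x) / tau, dim=1)          # p(y|x; Theta)
    p_aug = F.softmax(f_enc(x_aug) / tau, dim=1)  # p(y|x'; Theta)

    # CE(p(y|x'), p(y|x)): promotes label invariance and confident predictions.
    ce_inv = -(p_aug * torch.log(p + 1e-8)).sum(dim=1).mean()

    # CE(Uniform, q(y)) with q(y) estimated on the batch: penalizes cluster collapse.
    q = p.mean(dim=0)
    c = q.shape[0]
    ce_prior = -(torch.full_like(q, 1.0 / c) * torch.log(q + 1e-8)).sum()

    return ce_inv + ce_prior
```

Note that, in line with the discussion above, gradients flow through both branches of the loss; no stop gradient, momentum encoder or iterative clustering layer is needed.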
**The Triad of Failure Modes**. Here, we formalize three main failure modes for cluster-based SSL [16]. Then, we study the GEDI loss landscape and show that these undesired trivial solutions are not admitted by our objective. This
result holds without introducing asymmetries in the optimization procedure and/or network architecture.
Let's start by defining the most important failure modes, namely:
**Definition 1** (Failure Mode 1 - Representational Collapse).: _There exists a constant vector \(k\in\mathbb{R}^{h}\) such that for all \(x\in\mathbb{R}^{d}\), \(enc(x)=k\)._
**Definition 2** (Failure Mode 2 - Cluster Collapse).: _There exists a cluster \(j\in\{1,\ldots,c\}\) such that for all \(x\in\mathbb{R}^{d}\), \(p(y=j|x;\Theta)=1\)._
**Definition 3** (Failure Mode 3 - Permutation Invariance to Cluster Assignments).: _For all possible permutations \(\pi:\{1,\ldots,c\}\rightarrow\{1,\ldots,c\}\), a dataset \(\mathcal{D}=\{(x_{i},t_{i},t_{i}^{\prime})\}_{i=1}^{n}\), its permuted version \(\mathcal{D}^{\pi}=\{(x_{i},t_{\pi(i)},t_{i}^{\prime})\}_{i=1}^{n}\) and a loss \(\mathcal{L}(\Theta;\cdot)\), evaluated at one of the two datasets, we have that \(\mathcal{L}(\Theta;\mathcal{D})=\mathcal{L}(\Theta;\mathcal{D}^{\pi})\). For GEDI, \(t_{i}\doteq f(enc(x_{i}))\) and \(t_{i}^{\prime}\doteq f(enc(x_{i}^{\prime}))\)._
In other words, Definition 1 considers the case where the encoder maps (collapses) every input to the same output. Definition 2 considers the situation where the predictive model assigns all samples to the same cluster with high confidence. Definition 3 considers the case where a hypothetical adversary swaps the predictions made by the model on different pairs of inputs. Ideally, we would like to have an objective that does not admit these failure modes.
Now, we state the properties of the loss landscape of GEDI with the following theorem (we leave the proof to Section G in the Supplementary Material):
**Theorem 1**.: _Given definitions 1-3, the following statements tells for a particular loss, which modes are admitted as optimal solutions:_
1. \(\mathcal{L}_{GEN}(\Theta)\) _admits failure modes 2 and 3._
2. \(\mathcal{L}_{INV}(\Theta)\) _admits failure modes 1 and 2._
3. \(\mathcal{L}_{PRIOR}(\Theta)\) _admits failure modes 1 and 3._
Importantly, Theorem 1 tells us that \(\mathcal{L}_{GEN}(\Theta)\) can be used to penalize representational collapse, \(\mathcal{L}_{INV}(\Theta)\) can be used to break the problem of permutation invariance for the cluster assignments, while \(\mathcal{L}_{PRIOR}(\Theta)\) can be used to penalize cluster collapse. Consequently, by maximizing the objective in Eq. (5), we are guaranteed to learn solutions which are non-trivial. A table summarizing all these properties is given below.
\begin{table}
\begin{tabular}{l r r r} \hline \hline
**Does \(\downarrow\) penalize \(\rightarrow\)?** & **Repr. collapse** & **Clus. collapse** & **Perm. Inv.** \\ \hline \(\mathcal{L}_{GEN}(\Theta)\) & **Yes** & No & No \\ \(\mathcal{L}_{INV}(\Theta)\) & No & No & **Yes** \\ \(\mathcal{L}_{PRIOR}(\Theta)\) & No & **Yes** & No \\ Eq. (5) & **Yes** & **Yes** & **Yes** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Summary of loss landscape
## 3 Experiments
We perform experiments to evaluate the discriminative performance of GEDI and its competitors, namely an energy-based model JEM [8] and a self-supervised baseline based on SwAV [2]. The whole analysis is divided into two main experimental settings, the first one based on two synthetic datasets, including moons and circles, the second one based on real-world data, including SVHN, CIFAR-10 and CIFAR-100. We use existing code both as a basis to build our solution and also to run the experiments for the different baselines. In particular, we use the code from [7] for training energy-based models and the repository from [5] for all self-supervised baselines. Implementation details as well as additional experiments on generation, OOD detection and linear probe evaluation are reported in the Supplementary Material (Appendices D-G).
**Moons and Circles**.In Table 2, we observe that JEM fails to solve the clustering task for both datasets. This is quite natural, as JEM is a purely generative approach, mainly designed to perform implicit density estimation. SwAV can only solve the clustering task for the moons dataset, highlighting the fact that its objective function admits failure mode 3. Indeed, we observe in the circles dataset that half of the labels are permuted across the two manifolds (cf. Figure 3 in the Supplementary Material). In contrast, GEDI can recover the true clusters in both datasets, as it is guaranteed to avoid trivial solutions and learn more meaningful cluster assignments. We conduct an ablation study to understand the impact of the different loss terms in GEDI and empirically validate the theoretical results obtained in Section 4.3. We compare four different versions of GEDI, namely the full version (called simply GEDI), GEDI trained without \(\mathcal{L}_{GEN}(\Theta)\) (called _no gen_), GEDI trained without \(\mathcal{L}_{INV}(\Theta)\) (called _no inv_) and GEDI trained without \(\mathcal{L}_{PRIOR}(\Theta)\) (called _no unif_). From the results in Table 2, we observe that: (i) GEDI _no unif_ is subject to cluster collapse on both datasets. This is expected as failure mode 2 is not penalized during training due to the omission of \(\mathcal{L}_{PRIOR}(\Theta)\); (ii) GEDI _no inv_ is subject to the problem of permutation invariance to cluster assignments. Consequently, the obtained cluster labels are not informative and consistent with the underlying manifold structure of the data distribution. Again, this confirms the result of Theorem 1, as failure mode 3 could be avoided by the use of \(\mathcal{L}_{INV}(\Theta)\); (iii) GEDI _no gen_
\begin{table}
\begin{tabular}{l r r r r r r} \hline
**Dataset** & **JEM**[8] & **SwAV**[2] & **GEDI no unif** & **GEDI no inv** & **GEDI no gen** & **GEDI** \\ \hline Moons & 0.00\(\pm\)0.00 & 0.76\(\pm\)0.36 & 0.00\(\pm\)0.00 & 0.11\(\pm\)0.15 & **0.98\(\pm\)0.00** & 0.94\(\pm\)0.07 \\ Circles & 0.00\(\pm\)0.00 & 0.00\(\pm\)0.00 & 0.00\(\pm\)0.00 & 0.22\(\pm\)0.13 & 0.83\(\pm\)0.12 & **1.00\(\pm\)0.01** \\ \hline SVHN & 0.00 & 0.21 & - & - & 0.21 & **0.25** \\ CIFAR10 & 0.00 & 0.43 & - & - & 0.43 & **0.45** \\ CIFAR100 & 0.00 & 0.65 & - & - & 0.86 & **0.87** \\ \hline \end{tabular}
\end{table}
Table 2: Clustering performance based on normalized mutual information (NMI) on test set (toy data, viz. moons and circles, and real data, viz. SVHN, CIFAR-10, CIFAR-100). Higher values indicate better clustering performance. Mean and standard deviations are computed from 5 different runs.
achieves superior performance over other SSL baselines. While in theory the objective function for this approach admits representational collapse, in practice we never observed such issue. It might be the case that the learning dynamics of gradient-based optimisation are enough to avoid the convergence to this trivial solution. However, further analysis is required in order to verify this statement; finally (iv) GEDI is guaranteed to avoid the most important failure modes and therefore solve the discriminative task.
**SVHN, CIFAR-10, CIFAR-100**. From Table 2, we observe that GEDI is able to outperform all other competitors by a large margin. Additionally, we note a growing gap in clustering performance as the number of classes increases (cf. CIFAR-100). This might be explained by the fact that the number of possible label permutations increases with the number of classes and that our loss is more robust to the permutation invariance problem, as shown in Theorem 1. Finally, GEDI _no gen_ is comparable and often superior to SwAV, despite being simpler (i.e. avoiding the use of asymmetries and the running of iterative clustering). Please refer to Appendices F and G for further details.
|
2309.07742 | Interpretability is in the Mind of the Beholder: A Causal Framework for
Human-interpretable Representation Learning | Focus in Explainable AI is shifting from explanations defined in terms of
low-level elements, such as input features, to explanations encoded in terms of
interpretable concepts learned from data. How to reliably acquire such concepts
is, however, still fundamentally unclear. An agreed-upon notion of concept
interpretability is missing, with the result that concepts used by both
post-hoc explainers and concept-based neural networks are acquired through a
variety of mutually incompatible strategies. Critically, most of these neglect
the human side of the problem: a representation is understandable only insofar
as it can be understood by the human at the receiving end. The key challenge in
Human-interpretable Representation Learning (HRL) is how to model and
operationalize this human element. In this work, we propose a mathematical
framework for acquiring interpretable representations suitable for both
post-hoc explainers and concept-based neural networks. Our formalization of HRL
builds on recent advances in causal representation learning and explicitly
models a human stakeholder as an external observer. This allows us to derive a
principled notion of alignment between the machine representation and the
vocabulary of concepts understood by the human. In doing so, we link alignment
and interpretability through a simple and intuitive name transfer game, and
clarify the relationship between alignment and a well-known property of
representations, namely disentanglement. We also show that alignment is linked
to the issue of undesirable correlations among concepts, also known as concept
leakage, and to content-style separation, all through a general
information-theoretic reformulation of these properties. Our conceptualization
aims to bridge the gap between the human and algorithmic sides of
interpretability and establish a stepping stone for new research on
human-interpretable representations. | Emanuele Marconato, Andrea Passerini, Stefano Teso | 2023-09-14T14:26:20Z | http://arxiv.org/abs/2309.07742v1 | Interpretability is in the Mind of the Beholder: A Causal Framework for Human-interpretable Representation Learning
###### Abstract
Focus in Explainable AI is shifting from explanations defined in terms of low-level elements, such as input features, to explanations encoded in terms of _interpretable concepts learned from data_. How to reliably acquire such concepts is, however, still fundamentally unclear. An agreed-upon notion of concept interpretability is missing, with the result that concepts used by both _post-hoc_ explainers and _concept-based_ neural networks are acquired through a variety of mutually incompatible strategies. Critically, most of these neglect the human side of the problem: _a representation is understandable only insofar as it can be understood by the human at the receiving end_. The key challenge in Human-interpretable Representation Learning (hrl) is how to model and operationalize this human element. In this work, we propose a mathematical framework for acquiring _interpretable representations_ suitable for both post-hoc explainers and concept-based neural networks. Our formalization of hrl builds on recent advances in causal representation learning and explicitly models a human stakeholder as an external observer. This allows us to derive a principled notion of _alignment_ between the machine's representation and the vocabulary of concepts understood by the human. In doing so, we link alignment and interpretability through a simple and intuitive _name transfer_ game, and clarify the relationship between alignment and a well-known property of representations, namely _disentanglement_. We also show that alignment is linked to the issue of undesirable correlations among concepts, also known as _concept leakage_, and to content-style separation, all through a general information-theoretic reformulation of these properties. Our conceptualization aims to bridge the gap between the human and algorithmic sides of interpretability and establish a stepping stone for new research on human-interpretable representations.
explainable AI · causal representation learning · alignment · disentanglement · causal abstractions · concept leakage
## 1 Introduction
The field of Explainable AI (XAI) has developed a wealth of attribution techniques for unearthing the reasons behind the decisions of black-box machine learning models (Guidotti et al., 2018). Traditionally, explaining a prediction involves identifying and presenting those low-level _atomic elements_ - like input variables (Strumbelj and Kononenko, 2014; Ribeiro et al., 2016) and training examples (Kim et al., 2016; Koh and Liang, 2017) - that are responsible for said prediction. Explanations output by white-box models, such as sparse linear classifiers (Ustun and Rudin, 2016) and rule-based predictors (Wang et al., 2017), follow the same general setup. These atomic elements, however, are not very expressive and, as such, can be ambiguous (Rudin, 2019). To see this, consider an image of a red sports car that is tagged as "positive" by a black-box predictor. In this example, a saliency map would highlight those _pixels_ that are most responsible for this prediction: these do not say whether the prediction depends on the image containing a "car", on the car being "red", or on the car being "sporty". As a consequence, it is impossible to understand what the model is "thinking" and how it would behave on other images based on this explanation alone (Teso et al., 2023).
This is why focus in XAI has recently shifted toward explanations expressed in terms of higher-level symbolic representations, or _concepts_ for short. These promise to ensure that explanations are rich enough to capture the machine's reasoning patterns, while being expressed in terms that stakeholders can naturally understand (Rudin, 2019; Kambhampati et al., 2022).
This trend initially emerged with (_post-hoc_) _concept-based explainers_(CBEs) like TCAV (Kim et al., 2018) and Net2Vec (Fong and Vedaldi, 2018), among others (Ghorbani et al., 2019; Zhang et al., 2021; Fel et al., 2023a), which match the latent space of a deep neural network to a vocabulary of pre-trained concept detectors.1 These were quickly followed by a variety of _concept-based models_ (CBMs) - including Self-Explainable Neural Networks (Alvarez-Melis and Jaakkola, 2018), Part-Prototype Networks (Chen et al., 2019), Concept-Bottleneck Models (Koh et al., 2020), GlanceNets (Marconato et al., 2022), and Concept Embedding Models (Espinosa Zarlenga et al., 2022) - that support representation learning while retaining interpretability. Specifically, these approaches learn a neural mapping from inputs to concepts, and then leverage the latter for both computing predictions - in a simulatable manner (Lipton, 2018) - and providing _ante-hoc_ explanations thereof. See (Schwalbe, 2022) for a review. Since concepts act as a _bottleneck_ through which all information necessary for inference must flow, CBMs hold the promise of avoiding the lack of faithfulness typical of post-hoc techniques, while enabling a number of useful operations such as interventions (Koh et al., 2020) and debugging (Stammer et al., 2021; Bontempelli et al., 2023) using concepts as a human-friendly interface.
Footnote 1: The idea of using higher-level concepts was foreshadowed in the original LIME paper (Ribeiro et al., 2016).
### Limitations of Existing Works
The promise of conceptual explanations rests on the assumption that learned concepts are themselves interpretable. This begs the question: _what does it mean for a vocabulary of concepts to be interpretable_?
Researchers have proposed a variety of practical strategies to encourage the interpretability of the learned concepts, but no consistent recipe. Some CBMs constrain their representations according to intuitive heuristics, such as similarity to concrete training examples (Chen et al., 2019) or activation sparsity (Alvarez-Melis and Jaakkola, 2018). However, the relationship between these properties and interpretability is unclear, and unsurprisingly there are well known cases in which CBMs acquire concepts activating on parts of the input with no obvious semantics (Hoffmann et al., 2021; Xu-Darme et al., 2023). A more direct way of controlling the semantics of learned concepts is to leverage _supervision_ on the concepts themselves, a strategy employed by both CBEs (Kim et al., 2018) and CBMs (Koh et al., 2020; Chen et al., 2020; Marconato et al., 2022). Unfortunately, this is no panacea, as doing so cannot prevent _concept leakage_(Margeloiu et al., 2021; Mahinpei et al., 2021), whereby information from a concept "leaks" into another, seemingly unrelated concept, compromising its meaning.
At the same time, concept quality is either assessed qualitatively in a rather unsystematic fashion - _e.g._, by inspecting the concept activations or saliency maps on a handful of examples - or quantitatively, most often by measuring how well learned concepts match annotations. This so-called _concept accuracy_, however, is insufficient to capture issues like concept leakage.
Besides these complications, existing approaches neglect a critical aspect of this learning problem: that _interpretability is inherently subjective_. For instance, explaining a prediction to a medical doctor requires different concepts than explaining it to a patient: the notion of "intraepithelial" may be essential for the former, while being complete gibberish to the latter. However, even when concept annotations are employed, they are gathered from offline repositories and as such they may not capture concepts that are
meaningful to a particular expert, or that despite being associated with a familiar name follow semantics incompatible with those the user attaches to that name.2
Footnote 2: Of course, there are exceptions to this rule. These are discussed in section 6.
### Our Contributions
Motivated by these observations, _we propose to view interpretability as the machine's ability to communicate with a specific human-in-the-loop_. Specifically, we are concerned with the problem of learning conceptual representations that enable this kind of communication for both _post-_ and _ante-hoc_ explanations. We call this problem **human-interpretable representation learning**, or hrl for short. Successful communication is essential for ensuring human stakeholders can understand _explanations_ based on the learned concepts and, in turn, realizing the potential of CBEs and CBMs. This view is compatible with recent interpretations of the role of symbols in neuro-symbolic AI (Silver and Mitchell, 2023; Kambhampati et al., 2022). The key question is how to model this human element in a way that can be actually _operationalized_. We aim to fill this gap.
Our first contribution is a conceptual and mathematical model - resting on techniques from causal representation learning (Scholkopf et al., 2021) - of hrl that _explicitly models the human-in-the-loop_.
As a second contribution, we leverage our formalization to develop an intuitive but sound notion of _alignment_ between the conceptual representation used by the machine and that of the human observer. Alignment is strictly related to _disentanglement_, a property of learned representations frequently linked to interpretability (Bengio et al., 2013; Higgins et al., 2018), but also strictly _stronger_, in the sense that disentanglement alone is insufficient to ensure concept interpretability. We propose that alignment is key for evaluating interpretability of both CBEs and CBMs.
Our formalization improves on the work of Marconato et al. (2022) and looks at three settings of increasing complexity and realism: (\(i\)) a simple but non-trivial setting in which the human's concepts are _disentangled_ (_i.e._, individual concepts can be changed independently from each other without interference); (\(ii\)) a more general setting in which the human's concepts are constrained to be disentangled in blocks; (\(iii\)) an unrestricted setting in which the human concepts can influence each other in arbitrary ways. In addition, we identify a previously ignored link between interpretability of representations and the notion of _causal abstraction_ (Beckers and Halpern, 2019; Beckers et al., 2020; Geiger et al., 2023a).
As a third contribution, we formally show that _concept leakage_ can be viewed as a lack of disentanglement, and therefore of alignment. This strengthens existing results and allows us to reinterpret previous empirical observations (Marconato et al., 2022; Lockhart et al., 2022).
As a fourth contribution, we discuss key questions arising from our mathematical framework, including whether perfect alignment is sufficient and necessary for interpretability, how to measure it, how to implement it in representation learning, and how to collect the necessary concept annotations.
### Outline
The remainder of this paper is structured as follows. In the next section, we introduce prerequisite material and then proceed in section 3 to formalize the problem of human-interpretable representation learning and cast concept interpretability in terms of _alignment between representations_. Next, in section 4 we analyze in depth the notion of alignment in three settings of increasing complexity and study its relationship to the issue of concept leakage, and then look at the consequences of our formalization in section 5. Finally, we discuss related works in section 6 and offer some concluding remarks in section 7.
## 2 Preliminaries
In the following, we indicate scalar constants \(x\) in lower-case, random variables \(X\) in upper case, ordered sets of constants \(\mathbf{x}\) and random variables \(\mathbf{X}\) in bold typeface, and index sets \(\mathcal{I}\) in calligraphic typeface. We also use the shorthand \([n]:=\{1,\ldots,n\}\). Letting \(\mathbf{X}=(X_{1},\ldots,X_{n})\) and \(\mathcal{I}\subseteq[n]\), we write \(\mathbf{X}_{\mathcal{I}}:=(X_{i}:i\in\mathcal{I})\) to indicate the ordered subset indexed by \(\mathcal{I}\) and \(\mathbf{X}_{-\mathcal{I}}:=\mathbf{X}\setminus\mathbf{X}_{\mathcal{I}}\) to denote its complement, and abbreviate \(\mathbf{X}\setminus\{X_{i}\}\) as \(\mathbf{X}_{-i}\).
### Structural Causal Models and Interventions
A _structural causal model_ (SCM) is a formal description of the causal relationships existing between parts of a (stochastic) system (Pearl, 2009; Peters et al., 2017). Formally, an SCM \(\mathfrak{C}\) specifies a set of _structural assignments_ encoding direct causal relationships between variables,3 in the form:
Footnote 3: As customary, we work with SCMs that are _acyclic_, _causally sufficient_ (_i.e._, there are no external, hidden variables influencing the system), and _causally Markovian_ (_i.e._, each variable \(X_{i}\) is independent of its non-descendant given its parents in the SCM) (Pearl, 2009).
\[X_{i}\gets f_{i}(\mathbf{Pa}_{i},N_{i}) \tag{1}\]
where \(\mathbf{X}=(X_{1},\ldots,X_{n})\) are variables encoding the state of the system, \(\mathbf{Pa}_{i}\subseteq\mathbf{X}\) are the direct causes of \(X_{i}\), and \(N_{i}\) are noise terms. Variables without parents are _exogenous_, and play the role of inputs to the system, while the others are _endogenous_. The full state of the system can be sampled by propagating the values of the exogenous variables through the structural assignments in a top-down fashion. SCMs can be viewed as _graphs_ in which nodes represent variables, arrows represent assignments, and noise variables are usually suppressed, cf. fig. 1.
Following common practice, we assume the noise terms to be mutually independent from each other and also independent from the variables not appearing in the corresponding structural equations, that is, it holds that \(N_{i}\perp\!\!\!\perp N_{j}\) for all \(i\neq j\) and \(N_{i}\perp\!\!\!\perp X_{j}\) for all \(i,j\). This is equivalent to assuming there are no hidden confounders. This assumption carries over to all SCMs used throughout the paper.
An SCM \(\mathfrak{C}\) describes both a _joint distribution_\(p(\mathbf{X})=\prod_{i}p(X_{i}\mid\mathbf{Pa}_{i})\) and how this distribution _changes_ upon performing _interventions_ on the system. These are modifications to the system's variables and connections performed by an external observer. Using Pearl's \(do\)-operator (Pearl, 2009), (atomic) interventions can be written as \(do(X_{i}\gets x_{i})\), meaning that the value of the variable \(X_{i}\) is forcibly changed to the value \(x_{i}\), regardless of the state of its parents and children. Carrying out an atomic intervention yields a _manipulated SCM_ identical to \(\mathfrak{C}\) except that all assignments to \(X_{i}\) are deleted (_i.e._, the corresponding links in the graph disappear) and all occurrences of \(X_{i}\) in the resulting SCM are replaced by the constant \(x_{i}\). The resulting manipulated distribution is \(p(\mathbf{X}\mid do(X_{i}\gets x_{i}))=\mathbbm{1}\left\{X_{i}=x_{i} \right\}\cdot\prod_{j\neq i}\ p(X_{j}\mid\mathbf{Pa}_{j})\). Non-atomic interventions of the form \(do(\mathbf{X}_{\mathcal{I}}\leftarrow\mathbf{x}_{\mathcal{I}})\) work similarly. Expectations of the form \(\mathbb{E}[\cdot\mid do(X_{j}\gets x_{j})]\) are just regular expectations evaluated with respect to the manipulated distribution.
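To make the \(do\)-operator concrete, the following minimal Python sketch (ours, not taken from any cited work; the structural equations and constants are invented for illustration) samples a three-variable SCM and contrasts its observational and manipulated distributions. Note how intervening on \(G_{1}\) leaves the distribution of \(G_{2}\) untouched, even though the two are correlated through the common parent \(\mathbf{C}\).

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_scm(n, do_g1=None):
    """Sample from a toy SCM with assignments C -> G1, C -> G2, (G1, G2) -> X.

    Passing do_g1 emulates the atomic intervention do(G1 <- g1): the structural
    assignment to G1 is deleted and replaced by the constant value.
    """
    c = rng.normal(size=n)                        # exogenous confounder C
    g1 = 0.8 * c + 0.1 * rng.normal(size=n)       # G1 <- f1(C, N1)
    if do_g1 is not None:
        g1 = np.full(n, do_g1)                    # manipulated SCM
    g2 = -0.5 * c + 0.1 * rng.normal(size=n)      # G2 <- f2(C, N2)
    x = g1 + g2 ** 2 + 0.05 * rng.normal(size=n)  # X <- f3(G1, G2, N3)
    return c, g1, g2, x

# Observational vs. interventional distribution of G2:
_, _, g2_obs, _ = sample_scm(10_000)
_, _, g2_do, _ = sample_scm(10_000, do_g1=2.0)
print(g2_obs.mean(), g2_do.mean())  # both close to 0: G2 is unaffected by do(G1 <- 2)
```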
### Disentanglement
Central to our work is the notion of disentanglement (Higgins et al., 2018; Eastwood and Williams, 2018; Scholkopf et al., 2021) in both its two acceptations, namely _disentanglement of variables_ and _disentanglement of representations_. We henceforth rely on the causal formalization given by Suter et al. (2019) and Reddy et al. (2022). We refer the reader to those papers for more details.
Figure 1: SCMs illustrating two different notions of disentanglement. _Left_: The variables \(\mathbf{G}=\{G_{1},\ldots,G_{n}\}\) are disentangled. _Right_: Typical data generation and encoding process used in deep latent variable models. The machine representation \(\mathbf{M}=\{M_{1},\ldots,M_{k}\}\) is _disentangled with respect to_ the generative factors \(\mathbf{G}\) if and only if each \(M_{j}\) encodes information about at most one \(G_{i}\).
Intuitively, _a set of variables_ \(\mathbf{G}=(G_{1},\ldots,G_{n})\) _is disentangled if the variables can be changed independently from one another_. For instance, if \(G_{1}\) represents the "color" of an object and \(G_{2}\) its "shape", disentanglement of variables implies that changing the object's color does not impact its shape. This should hold even if the variables \(\mathbf{G}\) have a common set of parents \(\mathbf{C}\) - playing the role of confounders, such as sampling bias or choice of source domain (Pearl, 2009) - meaning that they can be both disentangled _and_ correlated (via \(\mathbf{C}\)). From a causal perspective, disentanglement of variables can be defined as follows:
**Definition 1** (Disentanglement of variables): _A set of variables \(\mathbf{G}\) are disentangled if and only if \(p(G_{i}\mid\mathbf{C},do(\mathbf{G}_{\mathcal{I}}\leftarrow\mathbf{g}_{ \mathcal{I}}^{\prime}))\equiv p(G_{i}\mid\mathbf{C})\) for all possible choices of \(\mathcal{I}\subseteq[n]\setminus\{i\}\) and \(\mathbf{g}_{\mathcal{I}}^{\prime}\)._
Now, consider the SCM in fig. 1 (left). It is easy to see that the variables \(\mathbf{G}\) are disentangled: any intervention \(do(\mathbf{G}_{\mathcal{I}}\leftarrow\mathbf{g}_{\mathcal{I}}^{\prime})\) breaks the links from \(\mathbf{C}\) to \(\mathbf{G}_{\mathcal{I}}\), meaning that changes to the latter will not affect \(G_{i}\). In this case, the variables \(\mathbf{G}\) are also conditionally independent from one another given \(\mathbf{C}\), or equivalently \(G_{i}\perp\!\!\!\perp G_{j}\mid\mathbf{C}\) for every \(i\neq j\).
Later on, we will be concerned with data generation processes similar to the one illustrated in fig. 1 (right). Here, a set of _generative factors_\(\mathbf{G}=(G_{1},\ldots,G_{n})\) with common parents \(\mathbf{C}\) cause an observation \(\mathbf{X}\), and the latter is encoded into a _representation_\(\mathbf{M}=(M_{1},\ldots,M_{k})\) by a machine learning model \(p_{\theta}(\mathbf{M}\mid\mathbf{X})\). Specifically, \(\mathbf{M}\) is obtained by marginalizing over the inputs \(\mathbf{X}\):
\[p_{\theta}(\mathbf{M}\mid\mathbf{G}):=\mathbb{E}_{\mathbf{x}\sim p(\mathbf{X} \mid\mathbf{G})}[p_{\theta}(\mathbf{M}\mid\mathbf{x})] \tag{2}\]
This can also be viewed as a _stochastic map_\(\alpha:\mathbf{g}\mapsto\mathbf{m}\). Maps of this kind are central to our discussion.
Since \(\mathbf{G}\) is disentangled (cf. definition 1), we can talk about _disentanglement of representations_ for \(\mathbf{M}\). We say that \(\mathbf{M}\)_is disentangled with respect to_\(\mathbf{G}\) if, roughly speaking, each \(M_{j}\) encodes information about at most one \(G_{i}\), or - more precisely - _as long as \(G_{i}\) is kept fixed, the value of \(M_{j}\) does not change even when the remaining factors \(\mathbf{G}\setminus\{G_{i}\}\) are forcibly modified via interventions._ The degree by which a representation _violates_ disentanglement of representations can be measured using the \(\mathsf{PIDA}\) metric:
**Definition 2** (\(\mathsf{PIDA}\)(Suter et al., 2019)): _Let \(G_{i}\) be a generative factor and \(M_{j}\) an element of the machine representation. \(\mathsf{PIDA}\) measures how much fixing \(G_{i}\) to a given value \(g_{i}\) insulates \(M_{j}\) from changes to the other generative factors \(\mathbf{G}_{-i}\), and it is defined as:_
\[\mathsf{PIDA}(G_{i},M_{j}\mid g_{i},\mathbf{g}_{-i}):=d\big{(}p_{\theta}(M_{j} \mid do(G_{i}\gets g_{i})),p_{\theta}(M_{j}\mid do(G_{i}\gets g_{i}, \mathbf{G}_{-i}\leftarrow\mathbf{g}_{-i}))\big{)} \tag{3}\]
_where \(d\) is a divergence.4 The average worst case over all possible choices of \(g_{i}\) and \(\mathbf{g}_{-i}\) is given by:_
Footnote 4: The original definition (Suter et al., 2019) fixes \(d\) to be the difference between means. Here, we slightly generalize \(\mathsf{PIDA}\) to arbitrary divergences, as doing so can account for changes in higher-order moments too.
\[\mathsf{EMPIDA}(G_{i},M_{j}):=\mathbb{E}_{g_{i}}[\max_{\mathbf{g}_{-i}}\mathsf{ PIDA}(G_{i},M_{j}\mid g_{i},\mathbf{g}_{-i})] \tag{4}\]
**Definition 3** (Disentanglement of representations): _We say that a representation \(\mathbf{M}\) is disentangled with respect to \(\mathbf{G}\) if and only if \(\max_{j}\min_{i}\mathsf{EMPIDA}(G_{i},M_{j})\) is exactly zero._
In other words, \(\mathbf{M}\) is disentangled with respect to \(\mathbf{G}\) if, for every \(M_{j}\) there exists a \(G_{i}\) such that fixing the latter _insulates_\(M_{j}\) from changes to the other generative factors \(\mathbf{G}_{-i}\). In section 4, we will build on both types of disentanglement to derive our notion of alignment between representations.
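As a rough illustration of how \(\mathsf{PIDA}\)/\(\mathsf{EMPIDA}\) and definition 3 can be checked in practice, the sketch below (our own toy example; the two encoders, the grid of interventional values, and the mean-difference choice of divergence are all illustrative assumptions) compares an encoder that mixes two generative factors against one that keeps them separate.

```python
import numpy as np

rng = np.random.default_rng(0)
G_VALUES = [-1.0, 0.0, 1.0]          # grid of interventional values for each factor

def encoder_mixed(g1, g2):           # M_1 blends G_1 and G_2: entangled
    return np.stack([g1 + 0.5 * g2, g2], axis=-1)

def encoder_clean(g1, g2):           # each M_j depends on a single G_i: disentangled
    return np.stack([2.0 * g1, -g2], axis=-1)

def pida(encoder, i, j, g_i, g_rest, n=5_000):
    """Mean-difference version of PIDA(G_i, M_j | g_i, g_rest) as in eq. (3)."""
    noise = 0.05 * rng.normal(size=n)
    g = [None, None]
    g[i] = np.full(n, g_i)                       # do(G_i <- g_i)
    g[1 - i] = rng.choice(G_VALUES, size=n)      # other factor varies freely...
    m_free = encoder(g[0], g[1])[:, j] + noise
    g[1 - i] = np.full(n, g_rest)                # ...vs. pinned by a second intervention
    m_pinned = encoder(g[0], g[1])[:, j] + noise
    return abs(m_free.mean() - m_pinned.mean())

def empida(encoder, i, j):                       # eq. (4): average over g_i of worst case over g_rest
    return np.mean([max(pida(encoder, i, j, gi, gr) for gr in G_VALUES) for gi in G_VALUES])

for name, enc in [("mixed", encoder_mixed), ("clean", encoder_clean)]:
    scores = [[empida(enc, i, j) for i in (0, 1)] for j in (0, 1)]
    print(name, "-> max_j min_i EMPIDA =", round(max(min(row) for row in scores), 3))
```

On this toy setup the "clean" encoder scores approximately zero (disentangled per definition 3), whereas the "mixed" one does not.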
Another important notion is that of _content-style separation_, which can be viewed as a special case of disentanglement of representations (von Kugelgen et al., 2021). Let the generative factors \(\mathbf{G}\) be partitioned into two disentangled sectors \(\mathbf{G}_{\mathcal{I}}\) and \(\mathbf{G}_{-\mathcal{I}}\), representing task-relevant information (content) and task-irrelevant factors of variation (style), respectively. Then, \(\mathbf{M}\) satisfies content-style separation if the following holds:
**Definition 4** (Content-style separation): _Let \((\mathbf{G}_{\mathcal{I}},\mathbf{G}_{-\mathcal{I}})\) be two disentangled sectors. Then, \(\mathbf{M}\) separates content from style iff it can be partitioned into \((\mathbf{M}_{\mathcal{J}},\mathbf{M}_{-\mathcal{J}})\) such that:_
\[\mathsf{EMPIDA}(\mathbf{G}_{\mathcal{I}},\mathbf{M}_{\mathcal{J}})=0 \tag{5}\]
This means that, if the content \(\mathbf{G}_{\mathcal{I}}\) is fixed, the machine representations \(\mathbf{M}_{\mathcal{J}}\) are isolated from changes to the style \(\mathbf{G}_{-\mathcal{I}}\). This property is asymmetrical: it holds even if \(\mathbf{M}_{-\mathcal{J}}\) _is_ affected by interventions to \(\mathbf{G}_{\mathcal{I}}\). Also, there is no requirement that the elements of \(\mathbf{M}_{\mathcal{J}}\) are disentangled with respect to \(\mathbf{G}_{\mathcal{I}}\).
## 3 Human Interpretable Representation Learning
We are concerned with acquiring interpretable machine representations. Our key intuition is that a representation is only interpretable as long as it can be _understood by the human at the receiving end_. Based on this, we formally state our learning problem as follows:
**Definition 5**: _Human-interpretable representation learning (hrl) is the problem of learning a (possibly stochastic) mapping between inputs \(\mathbf{x}\in\mathbb{R}^{d}\) and a set of machine representations \(\mathbf{z}\in\mathbb{R}^{k}\) that enables a machine and a specific human stakeholder to communicate using those representations._
This mapping can be modeled without loss of generality as a conditional distribution \(p_{\theta}(\mathbf{Z}\mid\mathbf{X})\), whose parameters \(\theta\) are estimated from data. While definition 5 encompasses both CBEs and CBMs, the meaning of \(\mathbf{Z}\) differs in the two cases, as we show next.
### Machine Representations: The Ante-hoc Case
CBMs are neural predictors that follow the generative process shown in fig. 2 (left). During inference, a CBM observes an input \(\mathbf{x}\), caused by generative factors \(\mathbf{G}\), and extracts a representation \(\mathbf{M}\) by performing MAP inference (Koller and Friedman, 2009) on a distribution \(p_{\theta}(\mathbf{M}\mid\mathbf{x})\) implemented as a neural network. This representation is partitioned into two subsets: \(\mathbf{M}_{\mathcal{J}}\) _are constrained to be interpretable, while \(\mathbf{M}_{-\mathcal{J}}\) are not_. As shown in fig. 2, only the interpretable subset is used for inferring a prediction \(\hat{y}\), while \(\mathbf{M}_{-\mathcal{J}}\) - if present - is used for other tasks, such as reconstruction (Marconato et al., 2022). Specifically, the predicted concepts \(\mathbf{M}_{\mathcal{J}}\) are fed to a simulatable top layer \(p_{\theta}(Y\mid\mathbf{M})\) - most often a sparse linear layer - from which an explanation can be easily derived. Assuming \(\mathbf{M}_{\mathcal{J}}\) is in fact interpretable, CBMs can provide local _explanations_ summarizing what concepts are responsible for a particular prediction in an _ante hoc_ fashion and essentially for free (Bontempelli et al., 2021; Schwalbe, 2022). For instance, if \(p_{\theta}(Y\mid\mathbf{M}_{\mathcal{J}})\) is a linear mapping with parameters \(w_{\hat{y}j}\), the explanation for predicting \(\hat{y}\) is given by (Alvarez-Melis and Jaakkola, 2018; Chen et al., 2019, 2020; Koh et al., 2020; Marconato et al., 2022; Zarlenga et al., 2022):
\[\mathcal{E}=\{(w_{\hat{y}j},m_{j})\;:\;j\in\mathcal{J}\} \tag{6}\]
where each concept activation \(m_{j}\) is associated with a "level of responsibility" inferred from the top layer's weights. Specific CBMs are outlined in section 6.
Summarizing, in the case of CBMs the concepts \(\mathbf{Z}\) used for communicating with users (cf. definition5) are embodied by the interpretable machine representation \(\mathbf{M}_{\mathcal{J}}\).
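For concreteness, here is a minimal sketch (ours; the weights are random and the dimensions and concept names are arbitrary, so this is not any published CBM implementation) of the pipeline just described: a concept extractor \(p_{\theta}(\mathbf{M}_{\mathcal{J}}\mid\mathbf{x})\), a sparse linear top layer, and the explanation of eq. (6).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (illustrative): 8 input features, 3 named concepts, 2 classes.
D, K, C = 8, 3, 2
CONCEPT_NAMES = ["red", "sporty", "car"]          # names the human attaches to M_J

# Concept extractor p_theta(M_J | x): here a single sigmoid layer with random weights.
W_enc, b_enc = rng.normal(size=(K, D)), np.zeros(K)
# Simulatable top layer p_theta(Y | M_J): a sparse linear layer.
W_top, b_top = rng.normal(size=(C, K)), np.zeros(C)
W_top[np.abs(W_top) < 0.5] = 0.0                  # crude sparsification, for readability

def predict_and_explain(x):
    m = 1.0 / (1.0 + np.exp(-(W_enc @ x + b_enc)))    # concept activations M_J in [0, 1]
    y_hat = int(np.argmax(W_top @ m + b_top))
    # Explanation in the spirit of eq. (6): one (weight, activation) pair per concept.
    explanation = [(CONCEPT_NAMES[j], float(W_top[y_hat, j]), float(m[j])) for j in range(K)]
    return y_hat, explanation

y_hat, expl = predict_and_explain(rng.normal(size=D))
print("predicted class:", y_hat)
for name, weight, activation in expl:
    print(f"  concept {name!r}: weight {weight:+.2f}, activation {activation:.2f}")
```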
### Machine Representations: The Post-hoc Case
For CBEs, the generative process is different, see fig. 2 (right). In this case, the internal representation \(\mathbf{M}\) of the model mapping from inputs \(\mathbf{X}\) to labels \(Y\) _is not required to be interpretable_. For instance, it might represent the state of all neurons in a neural network or that of the neurons in the second-to-last layer. CBEs explain the reasoning process in a _post hoc_ fashion by extracting the activations of high-level concepts \(\hat{\mathbf{H}}\) from \(\mathbf{M}\), and then inferring a concept-based explanation \(\mathcal{E}\) specifying the contribution of each \(\hat{H}_{i}\) to the model's prediction, often in the same form as eq. (6).
Figure 2: **Left**: generative process followed by concept-based models (CBMs). A prediction is inferred based on a subset of “interpretable” concepts \(\mathbf{M}_{\mathcal{J}}\subseteq\mathbf{M}\), so it is to \(\mathbf{M}_{\mathcal{J}}\) that our notion of alignment (section 4, in **red**) applies. **Right**: generative process followed by _concept-based explainers_ (CBEs). Here, the machine representation \(\mathbf{M}\) is _not_ required to be interpretable. Rather, the explainer maps it to extracted concepts \(\hat{\mathbf{H}}\) and then infers how these contribute to the prediction. Here it is \(\hat{\mathbf{H}}\) that alignment applies to.
Here, we are concerned with the interpretability of \(\hat{\mathbf{H}}\). Some approaches extract them by (indirectly) relying on concept annotations. For instance, TCAV (Kim et al., 2018) takes a set of linear classifiers, one for each concept, pre-trained on a densely annotated dataset, and then adapts them to work with machine representation \(\mathbf{M}\). Unsupervised approaches instead mine the concepts directly in the space of machine representations through a linear decomposition (Ghorbani et al., 2019; Zhang et al., 2021; Fel et al., 2023a,b). Specific examples are discussed in section 6. In general, there is no guarantee that the symbolic and sub-symbolic representations \(\hat{\mathbf{H}}\) and \(\mathbf{M}\) capture exactly the same information. This introduces a _faithfulness_ issue, meaning that CBE explanations may not portray a reliable picture of the model's inference process (Kim et al., 2018; Teso, 2019; Pfau et al., 2021; Fel et al., 2023b).
However, the issue we focus on is whether the representation \(\mathbf{Z}=\hat{\mathbf{H}}\) used by CBEs to communicate with users is in fact interpretable, regardless of whether it is also faithful.
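A simplified, purely illustrative sketch of the post-hoc route (inspired by, but not reproducing, TCAV-style probing; the data, concept names, and probe choice are all assumptions of ours) is given below: one linear probe per annotated concept is fit in the space of machine representations \(\mathbf{M}\), and the extracted concepts \(\hat{\mathbf{H}}\) are read off the probes.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Pretend M holds the internal activations of a frozen network on an annotated dataset,
# together with binary concept annotations; everything here is synthetic.
n, d_m = 500, 16
M = rng.normal(size=(n, d_m))
concept_labels = {
    "striped": (M[:, 0] + 0.1 * rng.normal(size=n) > 0).astype(int),
    "has_wheels": (M[:, 3] - M[:, 7] > 0).astype(int),
}

# One linear probe per concept, trained in the space of machine representations M.
probes = {name: LogisticRegression(max_iter=1000).fit(M, y)
          for name, y in concept_labels.items()}

def extract_concepts(m):
    """Map an internal representation m to extracted concept activations H_hat."""
    m = np.asarray(m).reshape(1, -1)
    return {name: float(probe.predict_proba(m)[0, 1]) for name, probe in probes.items()}

print(extract_concepts(M[0]))
```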
### From Symbolic Communication to Alignment
What makes symbolic communication possible? While a complete answer to this question is beyond the scope of this paper, we argue that communication becomes challenging unless the concepts \(\mathbf{Z}\) with which the machine and the human communicate are "_aligned_", in the sense that concepts having the same _name_ share the same (or similar enough) _semantics_. Other factors contributing to interpretability will be discussed in section 5.
In order to formalize this intuition, we focus on the generative process shown in fig. 3. In short, we assume observations \(\mathbf{x}\) - _e.g._, images or text observed during training and test - are obtained by mapping generative factors \(\mathbf{G}\sim p^{*}(\mathbf{G}\mid\mathbf{C})\) through a hidden ground-truth distribution \(p^{*}(\mathbf{X}\mid\mathbf{G})\).
The observations \(\mathbf{x}\) are then received by _two observers_: a machine and a human. The machine maps them to its own learned representation \(\mathbf{M}\), which may or may not be interpretable. The interpretable representations \(\mathbf{Z}\) - which correspond to \(\mathbf{M}_{\mathcal{J}}\) for CBMs (cf. section 3.1) and to \(\hat{\mathbf{H}}\) for CBEs (section 3.2) - is then derived from \(\mathbf{M}\).
At the same time, the human observer maps the same observations to its own vocabulary of concepts \(\mathbf{H}\). For instance, if \(\mathbf{x}\) is an image portraying a simple object on a black background, \(\mathbf{h}\) may encode the "color" or "shape" of that object, or any other properties deemed relevant by the human. The choice and semantics of these concepts depend on the background and expertise of the human observer and possibly on the downstream task the human may be concerned with (_e.g._, medical diagnosis or loan approval), and as such may vary between subjects. It is to _these_ concepts that the human associates names - like in fig. 4 - and it is these concepts that they would use for communicating the properties of \(\mathbf{x}\) to other people.
Notice that the human concepts \(\mathbf{H}\) may be arbitrarily different from the ground-truth factors \(\mathbf{G}\): whereas the latter include all information necessary to determine the observations, and as such may be complex and uninterpretable (Gabbay et al., 2021), the former are those aspects of the observation that matter _to the human observer_. A concrete example is that of color blindness: an observer may be unable to discriminate between certain wavelengths of visible light, despite these being causes of the generated image \(\mathbf{X}\). Another, more abstract, example are the generative factors that cause a particular apple to appear ripe, _e.g._, those biological processes occurring during the apple tree's reproductive cycle, which are beyond the understanding of most non-experts.5 In stark contrast, the concept of "redness" is not causally related to the apple's appearance, and yet easily understood by most human observers, precisely because it is a feature that is evolutionarily and culturally useful to those observers. In this sense, _the concepts \(\mathbf{H}\) are understandable by definition_.
Footnote 5: They are so opaque that a whole science had to be developed to identify and describe them.
We argue that symbolic communication is feasible whenever the names associated (by the human) to elements of \(\mathbf{H}\) can be transferred to the elements of \(\mathbf{Z}\) in a way that preserves semantics. That is, _concepts with the same name should have the same meaning_. In order to ensure information expressed in terms of
\(\mathbf{Z}\) - say, an explanation stating that \(Z_{1}\) is irrelevant for a certain prediction - is understood by the human observer, we need to make sure that \(\mathbf{Z}\) itself is somehow "aligned" with the human's representation \(\mathbf{H}\).
## 4 Alignment as Name Transfer
### Alignment: The Disentangled Case
What does it mean for two representations to be aligned? We start by looking at the simplest (but non-trivial) case in which the ground-truth factors \(\mathbf{G}\) are _disentangled_, cf. definition 1. For ease of exposition, let us also temporarily assume that some of the generative factors are inherently interpretable, as in [10]. Namely, we assume all factors in \(\mathbf{G}_{\mathcal{I}}\subseteq\mathbf{G}\), where \(\mathcal{I}\subseteq[n]\), can be understood by the human observer, while those in \(\mathbf{G}_{-\mathcal{I}}\) cannot. The corresponding data generation process is illustrated in fig. 4. Under these assumptions, we aim to recover machine representations \(\mathbf{M}\) that are aligned to the interpretable factors \(\mathbf{G}_{\mathcal{I}}\).
To this end, we generalize the notion of alignment introduced by Marconato et al. [2022].6 As anticipated, our definition revolves around the conditional distribution on \(\mathbf{M}\) given by \(\mathbf{G}\), or equivalently the stochastic map \(\alpha:\mathbf{g}\mapsto\mathbf{m}\) defined in eq. (2) and shown in **red** in fig. 4. The key intuition is that _two concept vocabularies \(\mathbf{G}\) and \(\mathbf{M}\) are aligned if and only if \(\alpha\) preserves the semantics of the interpretable generative factors \(\mathbf{G}_{\mathcal{I}}\)_.
Footnote 6: Our definition extends that of [10] to the general case in which the mapping \(\alpha\) – which is defined as a marginal distribution in eq. (2) – is stochastic rather than deterministic. Doing so allows us to cater to more realistic applications and to draw an explicit connection with PIDA in proposition 1.
More specifically, alignment holds if \(\alpha\) allows one to _transfer the names_ of the interpretable factors in a way that preserves semantics. If \(p_{\theta}(\mathbf{M}\mid\mathbf{X})\) is learned in an unsupervised fashion, names are generally transferred by collecting or constructing inputs annotated with the corresponding human concepts, feeding them to the concept extractor, and looking for matches between the annotations and the elements of \(\mathbf{M}_{\mathcal{J}}\).7 In a sense, this process is analogous to giving the human observer access to a set of "knobs", each one controlling the value of one \(G_{i}\in\mathbf{G}_{\mathcal{I}}\), and to a visualization of the machine representation \(\mathbf{M}_{\mathcal{J}}\). Turning a knob is akin to _intervening_ on the corresponding factor \(G_{i}\). If, by turning a knob, the user is able to figure out which \(G_{i}\) corresponds to which \(M_{j}\), then they will assign them the same name. Since we are assuming \(\mathbf{G}_{\mathcal{I}}\) is disentangled, turning one knob does not affect the others, which simplifies the process.
Footnote 7: If concept-level annotations are used, the names are automatically transferred along with them, but we still wish the user to be able to match the learned concepts with their own.
Figure 3: Graphical model of our data generation process. In words, \(n\) (correlated) generative factors exist in the world \(\mathbf{G}=(G_{1},\ldots,G_{n})\) that _cause_ an observed input \(\mathbf{X}\). The machine maps these to an internal representation \(\mathbf{M}=(M_{1},\ldots,M_{k})\), while the human observer maps them to its own internal concept vocabulary \(\mathbf{H}=(H_{1},\ldots,H_{\ell})\). Notice that the observer’s concepts \(\mathbf{H}\) may and often do differ from the ground-truth factors \(\mathbf{G}\). The concepts \(\mathbf{H}\) are what the human can understand and attach names to, _e.g._, the “color” and “shape” of an object appearing in \(\mathbf{X}\). The association between names and human concepts is denoted by dotted lines. We postulate that communication is possible if the machine and the human representations are _aligned_ according to definition 6.
The formal definition of alignment is as follows:
**Definition 6** (Alignment): _Given generative factors \(\mathbf{G}\) of which \(\mathbf{G}_{\mathcal{I}}\) are interpretable, a machine representation \(\mathbf{M}\) is aligned iff the map \(\alpha\) between \(\mathbf{G}\) and \(\mathbf{M}\) can be written as:_
\[\mathbf{M}_{\mathcal{J}}=\alpha(\mathbf{G},\mathbf{N})_{\mathcal{J}}=(\mu_{j}( G_{\pi(j)},N_{j})\::\:j\in\mathcal{J}) \tag{7}\]
_where \(\mathbf{M}_{\mathcal{J}}\subseteq\mathbf{M}\) are the machine representations that ought to be interpretable, \(\mathbf{N}\) are independent noise variables, and \(\pi\) and \(\mu\) satisfy the following properties:_
* **D1**: _The index map_ \(\pi:\mathcal{J}\mapsto\mathcal{I}\) _is surjective and, for all_ \(j\in\mathcal{J}\)_, it holds that, as long as_ \(G_{\pi(j)}\) _is kept fixed,_ \(M_{j}\) _remains unchanged even when the other generative factors_ \(\mathbf{G}\setminus\{G_{\pi(j)}\}\) _are forcibly modified._
* **D2**: _Each element-wise transformation_ \(\mu_{j}\)_, for_ \(j\in\mathcal{J}\)_, is monotonic in expectation over_ \(N_{j}\)_:_ \[\exists\bowtie\in\{>,<\}\text{ such that }\forall g^{\prime}_{\pi(j)}>g_{\pi(j)},\:\left(\mathbb{E}_{N_{j}}[\mu_{j}(g_{\pi(j)},N_{j})]-\mathbb{E}_{N_{j}}[\mu_{j}(g^{\prime}_{\pi(j)},N_{j})]\right)\bowtie 0 \tag{8}\]
Let us motivate our two desiderata. In line with prior work on disentangled representations (Bengio et al., 2013; Higgins et al., 2018), **D1** requires that \(\alpha\) should not "mix" multiple \(G_{i}\)'s into a single \(M_{j}\), regardless of whether the former belong to \(\mathbf{G}_{\mathcal{I}}\) or not. For instance, if \(M_{j}\) blends together information about both color and shape, or about color and some uninterpretable factor, human observers would have trouble pinning down which one of their concepts it matches. If it does not, then turning the \(G_{\pi(j)}\) knob only affects \(M_{j}\), facilitating name transfer.8 We will show in section 4.2 that this is equivalent to disentanglement.
Footnote 8: The converse is not true: as we will see in section 4.4, interpretable concepts with “compatible semantics” can in principle be blended together without compromising interpretability.
**D2** is also related to name transfer. Specifically, it aims to ensure that, whenever the user turns a knob \(G_{\pi(j)}\), they can easily understand _what_ happens to \(M_{j}\) and thus figure out the two variables encode the same information. To build intuition, notice that both **D1** and **D2** hold for the _identity_ function, as well as for those maps \(\alpha\) that _reorder_ or _rescale_ the elements of \(\mathbf{G}_{\mathcal{I}}\), which clearly preserve semantics and naturally support name transfer. Monotonicity captures all of these cases and also more expressive _non-linear_ element-wise functions, while _conservatively_ guaranteeing a human would be able to perform name transfer. Notice also that **D2** can be constrained further based on the application.
A couple of remarks are in order. Most importantly, notice that _our definition of alignment immediately applies also to the mapping between \(\mathbf{Z}\) and the human vocabulary \(\mathbf{H}\)_. In this case, \(\alpha\) is the map between human concepts \(\mathbf{h}\) and machine representations \(\mathbf{z}\), obtained by marginalizing over \(\mathbf{X}\), \(\mathbf{G}\), and \(\mathbf{C}\) (see fig. 3), and it is only aligned if it satisfies **D1** and **D2**. More generally, alignment can hold for _any_ mapping between representations. We also observe that, since \(\pi\) maps \(\mathcal{J}\) exclusively into \(\mathcal{I}\), alignment entails a form of _content-style separation_ (definition 4), in that \(\mathbf{M}_{\mathcal{J}}\) does not encode any information about \(\mathbf{G}_{-\mathcal{I}}\). We will show in section 4.3 that representations that do not satisfy this condition can be affected by _concept leakage_, while aligned representations cannot. Finally, we note that \(\mathbf{M}\) can be aligned and still contain multiple transformations of the same \(G_{i}\in\mathbf{G}_{\mathcal{I}}\). This does not compromise interpretability in that all "copies" can always be traced back to the same \(G_{i}\).
Figure 4: Simplified generative process with a single observer, adapted from (Marconato et al., 2022). Here, \(\mathbf{C}\) are unobserved confounding variables influencing the generative factors \(\mathbf{G}\), and \(\mathbf{M}\) is the latent representation learned by the machine. The **red** arrow represents the map \(\alpha\).
### Disentanglement Does Not Entail Alignment
Next, we clarify the relationship between alignment and disentanglement of representations by showing that the latter is exactly equivalent to **D1**:
**Proposition 1**: _Assuming noise terms are independent, as per section 2, **D1** holds if and only if the representations are disentangled in \((\mathbf{G}_{\mathcal{I}},\mathbf{M}_{\mathcal{J}})\) (cf. definition 3.)_
All proofs can be found in appendix A. The equivalence between disentanglement of representations and **D1** implies that _disentanglement is insufficient for interpretability_: even if \(\mathbf{M}\) is disentangled, _i.e._, each \(M_{j}\) encodes information about at most one \(G_{i}\in\mathbf{G}_{\mathcal{I}}\), nothing prevents the transformation from \(G_{i}\) to its associated \(M_{j}\) from being arbitrarily complex, complicating name transfer. In the most extreme case, \(\alpha(\cdot)_{j}\) may not be _injective_, making it impossible to distinguish between different \(g_{i}\)'s, or could be an arbitrary shuffling of the continuous line: this would clearly obfuscate any information present about \(G_{i}\). This means that, during name transfer, a user would be unable to determine what value of \(M_{j}\) corresponds to what value of \(G_{i}\) or to anticipate how changes to the latter affect the former.
This is why **D2** in definition 6 requires the map between each \(G_{i}\in\mathbf{G}_{\mathcal{I}}\) and its associated \(M_{j}\) to be "simple". This extra desideratum makes alignment _strictly stronger_ than disentanglement.
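To see the difference operationally, consider the toy check below (ours; the two maps, the noise level, and the knob grid are invented for illustration). Both maps depend on a single factor, so both are disentangled, but only the first is monotonic in expectation as required by **D2**; the second scrambles the factor's values and would frustrate name transfer.

```python
import numpy as np

rng = np.random.default_rng(0)
knob_values = np.linspace(-2.0, 2.0, 9)        # values the user dials in for G_i

def mu_monotone(g, noise):                     # order-preserving element-wise map: D2 holds
    return 3.0 * g + 1.0 + noise

def mu_scrambled(g, noise):                    # still depends on G_i alone, but not monotonic
    return np.sin(7.0 * g) + noise

def passes_d2(mu, n=20_000):
    """Check monotonicity in expectation of E_N[mu(g, N)] over a grid of knob values."""
    means = [mu(np.full(n, g), 0.1 * rng.normal(size=n)).mean() for g in knob_values]
    diffs = np.diff(means)
    return bool(np.all(diffs > 0) or np.all(diffs < 0))

print("monotone map satisfies D2:", passes_d2(mu_monotone))    # True
print("scrambled map satisfies D2:", passes_d2(mu_scrambled))  # False
```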
### Alignment Entails No Concept Leakage
_Concept leakage_ is a recently discovered phenomenon whereby the "interpretable" concepts \(\mathbf{M}_{\mathcal{J}}\) unintentionally end up encoding information about extraneous concepts [16]. Empirically, leaky concepts are predictive for inference tasks that - in principle - do not depend on them. Situations like the following occur in practice, even if full concept supervision is used [13, 14, 15]:
**Example 1**: _Let \(\mathbf{X}\) be a \(\mathtt{dSprites}\) image [14] picturing a white sprite, determined by generative factors including "position", "shape", and "size", on a black background. Now imagine training a concept extractor \(p_{\theta}(\mathbf{M}\mid\mathbf{X})\) so that \(\mathbf{M}_{\mathcal{J}}\) encodes \(\mathtt{shape}\) and \(\mathtt{size}\) - but not \(\mathtt{position}\) - by using full concept-level annotations for \(\mathtt{shape}\) and \(\mathtt{size}\). The concept extractor is then frozen. During inference, the goal is to classify sprites as either positive (\(Y=1\)) or negative (\(Y=0\)) depending on whether they are closer to the top-right corner or the bottom-left corner. When concept leakage occurs, the label - which clearly depends only on \(\mathtt{position}\) - can be predicted with above random accuracy from \(\mathbf{M}_{\mathcal{J}}\), meaning these concepts somehow encode information about \(\mathtt{position}\), which they are not supposed to._
The only existing formal account of concept leakage was provided by Marconato et al. [15], who view it in terms of (lack of) out-of-distribution (OOD) generalization. Other works instead focus on in-distribution behavior and argue that concept leakage is due to encoding discrete generative factors using a continuous representation [11, 14]. We go beyond these works by providing the first general formulation of concept leakage and showing that it is related to alignment. Specifically, we propose to view _concept leakage as a (lack of) content-style separation_, and show that this explains how concept leakage can arise both in- _and_ out-of-distribution.
We start by formalizing the intuition that concept leakage is excess prediction accuracy - gained by leveraging leaky concepts - compared to a leak-free baseline [13, 14]. The corresponding generative process is reported in fig. 5. We assume the generative factors \(\mathbf{G}\) are partitioned as \((\mathbf{G}_{\mathcal{I}},\mathbf{G}_{-\mathcal{I}})\) such that _only_\(\mathbf{G}_{-\mathcal{I}}\) are _informative_ for predicting a label \(Y\), mediated by the conditional distribution \(p(Y\mid\mathbf{G}_{-\mathcal{I}})\). This implies that their mutual information is positive, that is, \(I(\mathbf{G}_{-\mathcal{I}},Y)>0\).9 Now, fix a concept encoder \(p_{\theta}(\mathbf{M}_{\mathcal{J}}\mid\mathbf{X})\) and let \(q_{\lambda}(Y\mid\mathbf{M}_{\mathcal{J}})\) be a predictor learned on top of it (in orange in the figure). _To quantify concept leakage, we look at how well the best possible such predictor can infer the label \(Y\) using \(\mathbf{M}_{\mathcal{J}}\) after intervening on \(\mathbf{G}_{-\mathcal{I}}\). Analogously to \(\mathtt{EMPIDA}\) (definition 2), the intervention detaches \(\mathbf{G}_{-\mathcal{I}}\) from \(\mathbf{C}\), thus ensuring the label \(Y\) cannot be influenced by the irrelevant factors \(\mathbf{G}_{\mathcal{I}}\). The resulting manipulated distribution on \(\mathbf{G}\) is:_
Footnote 9: In this section we are mostly concerned with the non-informativeness of \(\mathbf{G}_{\mathcal{I}}\), hence we allow \(\mathbf{G}_{-\mathcal{I}}\) to potentially contain also interpretable factors.
\[p^{\prime}(\mathbf{G})=p(\mathbf{G}\mid do(\mathbf{G}_{-\mathcal{I}}\leftarrow \mathbf{g}_{-\mathcal{I}}))q(\mathbf{g}_{-\mathcal{I}}):=\mathbb{E}_{\mathbf{C} }[p(\mathbf{G}_{\mathcal{I}}\mid\mathbf{C})]\mathbb{I}\left\{\mathbf{G}_{- \mathcal{I}}=\mathbf{g}_{-\mathcal{I}}\right\}q(\mathbf{g}_{-\mathcal{I}}) \tag{9}\]
where \(q(\mathbf{g}_{-\mathcal{I}})\) is a distribution over possible interventions. This can be _any_ distribution, with the only requirement that under any intervention \(do(\mathbf{G}_{-\mathcal{I}}\leftarrow\mathbf{g}_{-\mathcal{I}})\) the model observes different variations of \(Y\).10
Footnote 10: If \(Y\) is constant, leakage is impossible, since \(I(\mathbf{G}_{-\mathcal{I}},Y)=0\).
From the causal factorization in fig. 5, the joint probability of \((\mathbf{X},Y)\) resulting from the post-interventional distribution \(p^{\prime}(\mathbf{G})\) is given by:
\[p(\mathbf{X},Y)=\mathbb{E}_{\mathbf{g}\sim p^{\prime}(\mathbf{G}|do(\mathbf{G }_{-\mathcal{I}}\leftarrow\mathbf{g}_{-\mathcal{I}}))}[p(Y\mid\mathbf{g}_{- \mathcal{I}})p(\mathbf{X}|\mathbf{g})] \tag{10}\]
Data of this kind appear, for example, in the dSprites experiment [10] outlined in example 1. Here, during training the "position" of the sprite is fixed (_i.e._, \(\mathbf{G}_{pos}=\mathbf{G}_{-\mathcal{I}}\) are fixed to the center), while at test time the data contains different interventions over the position \(\mathbf{G}_{pos}=\mathbf{G}_{-\mathcal{I}}\), and free variations of the other factors \(\mathbf{G}_{\mathcal{I}}\) (_e.g._, "shape" and "size"). Essentially, these interventions move the sprite around the top-right and bottom-left borders, where the factors \(\mathbf{G}_{pos}\) are extremely informative for the label \(Y\).
In order to measure the degree of concept leakage in \(p_{\theta}(\mathbf{M}_{\mathcal{J}}\mid\mathbf{X})\), _we compare the prediction performance of the best possible predictor \(q_{\lambda}(Y\mid\mathbf{M}_{\mathcal{J}})\) with that of the best possible predictor \(r_{\gamma}(Y)\) that does not depend on \(\mathbf{M}_{\mathcal{J}}\) at all_. This is equivalent to comparing the behavior of two Bayes optimal predictors, one of which has access to the learned (possibly leaky) concepts whereas the other does not. In the following, we assume the distributions \(q_{\lambda}\) and \(r_{\gamma}\) to be sufficiently expressive, _i.e._, they can encode any sufficiently well-behaved stochastic function. This is the case, for instance, when they are implemented as deep neural networks. We are now ready to define concept leakage:
**Definition 7** (Concept Leakage): _Given a classifier \(q_{\lambda}(y\mid\mathbf{z})\), an uninformed Bayes optimal predictor \(r_{\gamma}(y)\), and data samples \((\mathbf{x},y)\in\mathcal{D}\), concept leakage \(\Lambda\) is the difference:_
\[\Lambda=\max_{\lambda}[\mathcal{L}_{CL}(\lambda)]-\max_{\gamma}[\mathcal{L}_{ r}(\gamma)] \tag{11}\]
_where:_
\[\mathcal{L}_{CL}=\mathbb{E}_{(\mathbf{x},y)\sim p(\mathbf{X},Y)}\log q_{ \lambda,\theta}(y|\mathbf{x})\qquad\mathcal{L}_{r}=\mathbb{E}_{(\mathbf{x},y )\sim p(\mathbf{X},Y)}\log r_{\gamma}(y) \tag{12}\]
_are the average log-likelihoods of the classifier \(q_{\lambda,\theta}(Y|\mathbf{X}):=\mathbb{E}_{\mathbf{m}_{\mathcal{J}}\sim p_{\theta}(\mathbf{M}_{\mathcal{J}}|\mathbf{X})}q_{\lambda}(Y\mid\mathbf{m}_{\mathcal{J}})\) and of the uninformed Bayes optimal classifier, respectively._
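Definition 7 suggests a direct empirical recipe: fit the best predictor of \(Y\) from the frozen concepts, fit the best uninformed predictor, and compare average log-likelihoods. The sketch below (ours; the generative factors, the deliberately leaky encoder, and the use of logistic regression and the empirical class prior as stand-ins for the Bayes optimal predictors are all illustrative assumptions) follows this recipe on synthetic data in the spirit of example 1.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 4_000

# Generative factors: G_I = (shape, size) are the "interpretable" ones; position plays the
# role of G_-I and is the only factor determining the label Y (as in example 1).
shape, size = rng.normal(size=n), rng.normal(size=n)
position = rng.choice([-1.0, 1.0], size=n)            # interventional values of G_-I
y = (position > 0).astype(int)                        # Y depends on G_-I only

# Two concept extractors for M_J = (shape-ish, size-ish): one leaks position, one does not.
M_leaky = np.stack([shape + 0.8 * position, size], axis=1)
M_clean = np.stack([shape, size], axis=1)

def leakage(M, y):
    q = LogisticRegression(max_iter=1000).fit(M, y)   # stand-in for the best q_lambda(Y | M_J)
    ll_q = np.mean(np.log(q.predict_proba(M)[np.arange(len(y)), y]))
    prior = np.clip(y.mean(), 1e-6, 1 - 1e-6)         # stand-in for the uninformed r_gamma(Y)
    ll_r = np.mean(y * np.log(prior) + (1 - y) * np.log(1 - prior))
    return ll_q - ll_r                                # empirical estimate of Lambda in eq. (11)

print("leaky encoder :", round(leakage(M_leaky, y), 3))   # clearly positive
print("clean encoder :", round(leakage(M_clean, y), 3))   # close to zero
```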
By definition 7, concept leakage occurs if and only if there exists a \(\lambda\) that makes it possible to predict \(Y\) better than the best uninformed predictor. In the following analysis, we characterize concept leakage evaluated on the ground-truth distribution \(p(\mathbf{X},Y)\). We proceed to show that this quantity is bounded by two terms:
Figure 5: Generative process for Concept Leakage. A predictor observes examples \((\mathbf{X},Y)\) and infers \(Y\) from its interpretable representation \(\mathbf{M}_{\mathcal{J}}\) using a learnable conditional distribution \(q_{\lambda}(Y\mid\mathbf{m}_{\mathcal{J}})\), indicated in orange. Since the label \(Y\) depends solely on \(\mathbf{G}_{-\mathcal{I}}\), we would expect that it _cannot_ be predicted better than at random: intuitively, if this occurs it means that information from \(\mathbf{G}_{-\mathcal{I}}\) has leaked into the interpretable concepts \(\mathbf{M}_{\mathcal{J}}\). Any intervention \(do(\mathbf{G}_{-\mathcal{I}}\leftarrow\mathbf{g}_{-\mathcal{I}})\) on the uninterpretable/unobserved concepts detaches these from \(\mathbf{C}\), meaning that the label truly only depends on \(\mathbf{G}_{-\mathcal{I}}\).
**Proposition 2**: _Assuming the causal factorization in fig. 5, it holds that:_
\[I(\mathbf{M}_{\mathcal{J}},Y)\leq\Lambda\leq I(\mathbf{G}_{-\mathcal{I}},Y) \tag{13}\]
_where \(I(\mathbf{A},\mathbf{B})\) denotes the mutual information between \(\mathbf{A}\) and \(\mathbf{B}\)._
The bounds in eq. (13) are useful for understanding how concept leakage behaves. They show, for instance, that \(\Lambda\) cannot exceed the mutual information between \(\mathbf{G}_{-\mathcal{I}}\) and \(Y\). Second, applying the data-processing inequality (Cover, 1999) to the lower bound yields \(I(\mathbf{M}_{\mathcal{J}},Y)\geq I(\mathbf{M}_{\mathcal{J}},\mathbf{G}_{- \mathcal{I}})\). The latter quantifies the information contained in \(\mathbf{M}_{\mathcal{J}}\) about \(\mathbf{G}_{-\mathcal{I}}\). In other words, concept leakage can only be zero if indeed the machine concepts \(\mathbf{M}_{\mathcal{J}}\) contain no information about them, because \(I(\mathbf{M}_{\mathcal{J}},\mathbf{G}_{-\mathcal{I}})\leq\Lambda=0\). Next, we also show that if \(\mathbf{M}_{\mathcal{J}}\) does not encode information about \(\mathbf{G}_{-\mathcal{I}}\) - or equivalently, it satisfies content-style separation (definition 4) - then it has zero concept leakage.
**Proposition 3**: _Suppose that \(\mathbf{M}_{\mathcal{J}}\) does not encode any information about \(\mathbf{G}_{-\mathcal{I}}\), consistently with content-style separation (definition 4). Then, \(\Lambda\) is zero._
This result leads to two consequences. Let us start by looking at the _out-of-distribution case_ investigated in (Marconato et al., 2022). Here, the concept extractor is trained only on some fixed variations of \(\mathbf{G}_{-\mathcal{I}}\). However, when the support of \(\mathbf{G}_{-\mathcal{I}}\) changes drastically, the model is not likely to ensure content-style separation outside of the support of the training distribution, even if **D1** holds in-distribution. Consider the dSprites example: during training, sprites are located in the dead center of the background, and when observing sprites on the borders of the image, far away from the support of the training set, the concept encoder fails to ensure their representations are disentangled. This failure of disentanglement techniques to ensure disentanglement for out-of-distribution inputs was also observed - in the context of combinatorial generalization - by Montero et al. (2020, 2022). Our results show that if content-style separation does not hold, concept leakage may be non-zero, meaning that techniques like open-set recognition (Sun et al., 2020) must be adopted to detect OOD inputs and process them separately.
Next, we look at concept leakage for _in-distribution_ scenarios. Following Havasi et al. (2022), consider a model leveraging two concepts - presence of "tail" and "fur" - and the task of distinguishing between images of cats and dogs using these (clearly non-discriminative) concepts. According to (Havasi et al., 2022), concept leakage can occur when binary concepts like these are modelled using continuous variables, meaning the concept extractor can unintentionally encode "spurious" discriminative information. In light of our analysis, we argue that concept leakage is instead due to lack of content-style separation, and thus of alignment. To see this, suppose there exists a concept \(G_{k}\in\mathbf{G}_{-\mathcal{I}}\) useful for distinguishing cats from dogs and that it is disentangled as in definition 1 from the concepts of fur \(G_{fur}\) and of tail \(G_{tail}\). Then, by content-style separation, any representation \(\mathbf{M}_{\mathcal{J}}\) that is aligned to \(G_{fur}\) and \(G_{tail}\) does not encode any information about \(G_{k}\), leading to zero concept leakage.
In both cases, concept leakage arises as a failure in content-style separation between relevant and irrelevant generative factors, and as such it can be used as a proxy for measuring the latter. Moreover, since alignment implies content-style separation, aligned representations cannot suffer from concept leakage.11
Footnote 11: Note that the converse is not true: while alignment entails content-style separation, the latter can hold independently from alignment.
An extension of this result includes the case where \(G_{k}\in\mathbf{G}_{\mathcal{I}}\), _i.e._, the ground-truth factor \(G_{k}\) is relevant for in-distribution predictions \(Y\) and representations \(\mathbf{M}_{\mathcal{J}}\) encode it somewhere. In this case, concept leakage is evaluated among the elements of \(\mathbf{M}_{\mathcal{J}}\) that should not encode \(G_{k}\). That is, if a subset of \(\mathbf{M}_{\mathcal{J}}\) encodes only the concepts \(G_{fur}\) and \(G_{tail}\) it must not be discriminative for \(Y\). Without loss of generality,12 we suppose that only a single \(M_{j^{\prime}}\) is aligned to \(G_{k}\), that is \(\pi(j^{\prime})=k\), whereas other \(\mathbf{M}_{\mathcal{J}}\setminus M_{j^{\prime}}\) are aligned to other concepts, among which \(G_{fur}\) and \(G_{tail}\). Then, the following holds:
Footnote 12: The general case includes all representations \(\mathbf{M}_{\mathcal{J}}\), where \(\pi^{-1}\) is the pre-image of the map \(\pi\).
**Corollary 1**: _Consider a representation \(\mathbf{M}_{\mathcal{J}}\) that is aligned to a set of disentangled concepts \(\mathbf{G}_{\mathcal{I}}\), among which only \(G_{k}\) is discriminative for the label \(Y\). Then, all \(M_{j}\in\mathbf{M}_{\mathcal{J}}\) that are not associated by \(\alpha\) to \(G_{k}\), i.e., \(\pi(j)\neq k\), do not suffer from concept leakage._
Ultimately, an aligned representation prevents concept leakage among the encoded concepts. If some of the representations \(\mathbf{M}_{\mathcal{J}}\) are aligned only to the concepts \(G_{fur}\) and \(G_{tail}\), they cannot be used to discriminate between _cats_ and _dogs_.
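To make the connection between leakage and (mis)alignment concrete, the following is a minimal sketch - under purely synthetic assumptions about the data and the concept extractor - that estimates leakage as the information a simple probe can recover about the label \(Y\) from concept activations that, as in the cats-versus-dogs example above, should not be discriminative. All names and distributions are illustrative, not the setup of any cited work.

```python
# Minimal sketch: estimating concept leakage as the information a probe can
# extract about the label Y from concept activations that should NOT be
# discriminative (synthetic "fur"/"tail" activations for cats vs dogs).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 4000
y = rng.integers(0, 2, size=n)                      # 0 = cat, 1 = dog

# Leaky extractor: continuous activations carry a class-dependent shift even
# though both concepts are present for cats and dogs alike.
leak = 0.8
m_leaky = np.stack([1.0 + leak * y + rng.normal(0, 1, n),
                    1.0 - leak * y + rng.normal(0, 1, n)], axis=1)
# Aligned extractor: activations depend only on the non-discriminative concepts.
m_aligned = np.stack([1.0 + rng.normal(0, 1, n),
                      1.0 + rng.normal(0, 1, n)], axis=1)

def leakage_nats(m, y):
    """Lower bound on I(M; Y): entropy of Y minus the probe's conditional log-loss."""
    m_tr, m_te, y_tr, y_te = train_test_split(m, y, test_size=0.5, random_state=0)
    probe = LogisticRegression().fit(m_tr, y_tr)
    h_y = log_loss(y_te, np.full((len(y_te), 2), 0.5))   # entropy of a balanced Y
    h_y_given_m = log_loss(y_te, probe.predict_proba(m_te))
    return max(0.0, h_y - h_y_given_m)

print(f"leakage (leaky extractor):   {leakage_nats(m_leaky, y):.3f} nats")
print(f"leakage (aligned extractor): {leakage_nats(m_aligned, y):.3f} nats")
```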
### Alignment: The Block-wise Case
So far, we assumed the generative factors \(\mathbf{G}_{\mathcal{I}}\) - or, equivalently, the human concepts \(\mathbf{H}\) - are disentangled. We now extend alignment to more complex cases in which the human concepts _can_ be mixed together without compromising interpretability. This covers situations in which, for instance, the machine captures a single categorical generative factor using multiple variables via one-hot encoding, or uses polar coordinates to represent the 2D position of an object.
To formalize this setting, we assume \(\mathbf{G}_{\mathcal{I}}\) and \(\mathbf{M}_{\mathcal{J}}\) are partitioned into non-overlapping "blocks" of variables \(\mathbf{G}_{\mathcal{I}^{\prime}}\subseteq\mathbf{G}_{\mathcal{I}}\) and \(\mathbf{M}_{\mathcal{J}^{\prime}}\subseteq\mathbf{M}_{\mathcal{J}}\), respectively. The idea is that each block \(\mathbf{M}_{\mathcal{J}^{\prime}}\) captures information about only a single block \(\mathbf{G}_{\mathcal{I}^{\prime}}\), and that while mixing across blocks is not allowed, mixing the variables within each block _is_. From the human's perspective, this means that name transfer is now done block by block. With this in mind, we define _block alignment_ as follows:
**Definition 8** (Block-wise Alignment): _A machine representation \(\mathbf{M}\) is block-wise aligned to \(\mathbf{G}_{\mathcal{I}}\) if and only if there exists a subset \(\mathbf{M}_{\mathcal{J}}\subseteq\mathbf{M}\), a partition \(\mathcal{P}_{\mathbf{M}}\) of \(\mathcal{J}\), and a mapping \(\alpha:(\mathbf{g},\mathbf{N})\mapsto\mathbf{m}\) such that:_
\[\mathbf{M}_{\mathcal{J}^{\prime}}=\alpha(\mathbf{G},\mathbf{N})_{\mathcal{J} ^{\prime}}:=\mu_{\mathcal{J}^{\prime}}(\mathbf{G}_{\Pi(\mathcal{J}^{\prime})},\mathbf{N}_{\mathcal{J}^{\prime}})\qquad\forall\mathcal{J}^{\prime}\in \mathcal{P}_{\mathbf{M}} \tag{14}\]
_where the maps \(\Pi\) and \(\mu\) satisfy the following properties._
* _There exists a partition_ \(\mathcal{P}_{\mathbf{G}}\) _of_ \(\mathcal{I}\)13 _such that_ \(\Pi:\mathcal{P}_{\mathbf{M}}\rightarrow\mathcal{P}_{\mathbf{G}}\)_. We call this condition block-wise disentanglement._
* _Each map_ \(\mu_{\mathcal{J}^{\prime}}\) _is simulatable and invertible_14 _on the first statistical moment, that is, there exists a unique pre-image_ \(\alpha^{-1}\) _defined as:_
Footnote 13: In principle, we can extend this notion to a family of subsets \(\mathcal{P}_{\mathbf{G}}\) of \(\mathcal{I}\). As an example, for \(xyz\) positions, one can consider the blocks \(\{xy,yz,xz\}\), each mapped to a corresponding block-aligned representation.
Footnote 14: For continuous variables, we require it to be a diffeomorphism.
\[\mathbf{G}_{\Pi(\mathcal{J}^{\prime})}=\alpha^{-1}(\mathbb{E}[\mathbf{M}_{\mathcal{J}}])_{\mathcal{J}^{\prime}}:=\left(\mathbb{E}_{\mathbf{N}_{\mathcal{J}^{\prime}}}[\mu_{\mathcal{J}^{\prime}}(\cdot,\mathbf{N}_{\mathcal{J}^{\prime}})]\right)^{-1}(\mathbb{E}[\mathbf{M}_{\mathcal{J}^{\prime}}]) \tag{15}\]
By **D1**, changes to any block of human concepts only impact a single block of machine concepts, and by **D2** the change can be anticipated by the human observer, that is, the human interacting with the machine grasps the general mechanism behind the transformation between \(\mathbf{G}\) (or \(\mathbf{H}\)) and \(\mathbf{M}\) (and vice versa). Both properties support name transfer.
A priori, it is not easy to say what transformations are simulatable (Lipton, 2018), as this property depends crucially on the human's cognitive limitations and knowledge. In practice, however, simulatability can be assessed via user studies. We remark that **D2** implicitly constrains the variables within each block to be "semantically compatible", because this property impacts simulatability. In the context of image recognition, for instance, placing concepts such as "nose shape" and "sky color" in the same block is likely to make name transfer substantially more complicated, as changes to "nose shape" might end up affecting the representation of "sky color". Semantic compatibility is fundamentally a psychological issue. An example of a semantically compatible transformation is a rototranslation of the coordinates followed by element-wise rescaling15. A counterexample would be a map \(\alpha\) given by a conformal map of the 2D position of an object in a scene: albeit invertible, it may not be simple at all to simulate.
Footnote 15: This condition is identical to “weak identifiability” in representation learning Hyvarinen and Morioka (2017); Khemakhem et al. (2020).
Notice that with this definition we include two possible scenarios: (i) the case where some of the ground-truth concepts belonging to the same block are transformed into a single block, and (ii) the case where semantically compatible, but disentangled, concepts \(\mathbf{G}_{\mathcal{I}}\) are mixed together in \(\mathbf{M}_{\mathcal{J}}\), which is often neglected in the current disentanglement literature. The latter includes and extends the special case of alignment for disentangled \(\mathbf{G}\).
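As an illustration of definition 8, the following sketch shows a hypothetical block-aligned map in which the 2D-position block is internally mixed into polar coordinates while the color block is only rescaled; the per-block inverses instantiate **D2**. The map and the factor values are assumptions made for this example only.

```python
# Minimal sketch of a block-wise aligned map: the 2D-position block (x, y) is
# mixed internally into polar coordinates, the color block is rescaled, and no
# information crosses between blocks. Illustrative only.
import numpy as np

def alpha(g_pos, g_color, noise_scale=0.0, rng=None):
    """Block map: (x, y) -> (r, theta); color -> 2*color (+ optional noise)."""
    rng = rng or np.random.default_rng(0)
    x, y = g_pos
    r = np.hypot(x, y)
    theta = np.arctan2(y, x)
    m_pos = np.array([r, theta]) + noise_scale * rng.normal(size=2)
    m_color = np.array([2.0 * g_color]) + noise_scale * rng.normal(size=1)
    return m_pos, m_color

def alpha_inv(m_pos, m_color):
    """Block-wise inverse (on the noiseless mean): recover (x, y) and color."""
    r, theta = m_pos
    return np.array([r * np.cos(theta), r * np.sin(theta)]), m_color / 2.0

g_pos, g_color = np.array([0.3, -0.4]), 0.7
m_pos, m_color = alpha(g_pos, g_color)

# D1 (block-wise disentanglement): changing the color block leaves m_pos untouched.
m_pos_2, _ = alpha(g_pos, g_color + 0.2)
assert np.allclose(m_pos, m_pos_2)

# D2 (invertibility of each block map): the pre-image recovers the factors.
g_pos_rec, g_color_rec = alpha_inv(m_pos, m_color)
assert np.allclose(g_pos_rec, g_pos) and np.allclose(g_color_rec, g_color)
print("block-wise alignment checks passed")
```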
A limitation of definition 8 is that accounting for the user's actual grasp of the representation is not straightforward, which makes it challenging to provide a universally accepted definition that incorporates the human factor.
### Alignment: The General Case
In the most general case the generative factors \(\mathbf{G}\) are causally related to each other according to an arbitrary ground-truth SCM \(\mathfrak{C}_{\mathbf{G}}\). This entails that \(\mathbf{G}_{\mathcal{I}}\) is no longer (block) disentangled. Hence, during name transfer turning a "knob" (_i.e._, a variable \(G_{i}\)) affects those knobs that are causally dependent on it.
Naturally, the semantics of the user's concepts comprise the causal relations between them. To see this, let \(G_{1}\) be the temperature and \(G_{2}\) the color of a metal object: the user knows that temperature affects color, and would not assign the same name to a representation of temperature that does not have a similar effect on the representation of color. In order to ensure preservation of these semantics, we say that a machine representation \(\mathbf{M}\) is _aligned_ to \(\mathbf{G}\) if, whenever the human intervenes on one \(G_{i}\), affecting those generative factors \(\mathbf{G}_{\mathcal{I}^{\prime}}\) that depend on it, an analogous change occurs in the machine representation.
We now show that _block-alignment_ is sufficient to satisfy this desideratum. Even in this more general case, the distribution over the representation can be obtained by marginalizing over the input variables:
\[p(\mathbf{M}):=\mathbb{E}_{\mathbf{x}\sim p(\mathbf{X})}\,p_{\theta}(\mathbf{M}\mid\mathbf{x})\equiv\mathbb{E}_{\mathbf{g}\sim p(\mathbf{G})}\,p_{\theta}(\mathbf{M}\mid\mathbf{g}) \tag{16}\]
Notice that the definition of block alignment does not make any assumption about absence or presence of causal relations between blocks of generative factors, meaning that it is still well-defined in this more general setting.16 Specifically, a map \(\alpha\) can be block aligned if the variables \(G_{i}\) within each block are disentangled from each other, although there may exist causal relations across blocks.
Footnote 16: This generalizes block alignment beyond disentangled factors [Suter et al., 2019].
Now, imagine having a stochastic map \(\alpha\) between \(\mathbf{G}\) and \(\mathbf{M}\) that does satisfy block alignment, and also that there exist causal relations between the blocks \(\mathbf{G}_{\mathcal{I}^{\prime}}\). Whenever the user turns a "knob" corresponding to a ground-truth block, this yields an interventional distribution \(p(\mathbf{G}\mid do(\mathbf{G}_{\mathcal{I}^{\prime}}\leftarrow\mathbf{g}_{ \mathcal{I}^{\prime}}))\). Through \(\alpha\), this determines a new interventional distribution on the machine representations, namely:
\[p(\mathbf{M}\mid do(\mathbf{G}_{\mathcal{I}^{\prime}}\leftarrow\mathbf{g}_{\mathcal{I}^{\prime}}))=\mathbb{E}_{\mathbf{g}\sim p(\mathbf{G}\mid do(\mathbf{G}_{\mathcal{I}^{\prime}}\leftarrow\mathbf{g}_{\mathcal{I}^{\prime}}))}\,p_{\theta}(\mathbf{M}\mid\mathbf{g}) \tag{17}\]
This implies a representation \(\mathbf{M}\) where the (interventional) distribution is obtained by mapping the state \(\mathbf{g}_{\mathcal{I}^{\prime}}\) through \(\alpha\). The same operation can be performed to obtain the state of all other machine representations aligned with the blocks that are causally related to \(\mathbf{G}_{\mathcal{I}^{\prime}}\) and affected by the intervention.
Note that this distribution _automatically takes causal relations between generative factors into account and treats them as causal relations between machine representations_. To see this, consider the following example:
**Example 2**: _Consider two generative factors \(G_{1}\) and \(G_{2}\) causally connected via a structural assignment \(G_{2}\gets f(G_{1},N_{2})\), as in fig. 6. As before, \(G_{1}\) could be the temperature and \(G_{2}\) the color of a metal solid. Correspondingly, the aligned representation \(\mathbf{M}\) encodes the temperature in two distinct variables, \(M_{1}\) and \(M_{3}\), corresponding to the temperature measured, say, in Celsius and Fahrenheit degrees. \(M_{2}\) encodes the color variable._
_The consequence of block-wise alignment is sketched in fig. 6 for three distinct cases: (left) intervening on the temperature \(G_{1}\) affects both the aligned variables \((M_{1},M_{3})\) and the color \(G_{2}\); correspondingly, this also has an effect on \(M_{2}\), which changes according to \(G_{2}\). (center) An intervention on \(G_{2}\) influences only \(M_{2}\) through \(\alpha\) and does not affect \(M_{1}\) and \(M_{3}\). (right) The effect of an intervention on the whole set of variables \(\mathbf{G}\) is localized, such that interventions on the temperature factor \(G_{1}\) affect \(M_{1}\) and \(M_{3}\), while interventions on \(G_{2}\) only affect \(M_{2}\), isolating it from the intervention on \(G_{1}\)._
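The behaviour described in example 2 can be reproduced with a small simulation, sketched below under illustrative structural assignments (the actual form of \(f\) and of the aligned map is an assumption): intervening on the color \(G_{2}\) leaves \(M_{1}\) and \(M_{3}\) untouched, whereas intervening on the temperature \(G_{1}\) shifts all three representations.

```python
# Minimal simulation of example 2 under illustrative structural assignments:
# G1 = temperature, G2 = color with G2 <- f(G1, N2); the block-aligned map
# encodes temperature twice (Celsius in M1, Fahrenheit in M3) and color in M2.
import numpy as np

rng = np.random.default_rng(0)

def sample_G(do_g1=None, do_g2=None, n=10_000):
    g1 = np.full(n, do_g1) if do_g1 is not None else rng.uniform(0, 100, n)
    n2 = rng.normal(0, 0.1, n)
    g2 = np.full(n, do_g2) if do_g2 is not None else 0.01 * g1 + n2  # color from temperature
    return g1, g2

def alpha(g1, g2):
    m1 = g1                      # Celsius
    m3 = 9 / 5 * g1 + 32         # Fahrenheit
    m2 = g2                      # color
    return m1, m2, m3

# Intervening on G2 (color) only moves M2; M1 and M3 keep their distribution.
m1_a, m2_a, m3_a = alpha(*sample_G())
m1_b, m2_b, m3_b = alpha(*sample_G(do_g2=0.9))
print("E[M1] unchanged:", np.isclose(m1_a.mean(), m1_b.mean(), rtol=0.05))
print("E[M2] shifted:  ", not np.isclose(m2_a.mean(), m2_b.mean(), rtol=0.05))

# Intervening on G1 (temperature) moves M1 and M3, and M2 through G2 <- f(G1, N2).
m1_c, m2_c, m3_c = alpha(*sample_G(do_g1=90.0))
print("E[M1], E[M3], E[M2] all shifted:",
      not np.isclose(m1_a.mean(), m1_c.mean(), rtol=0.05),
      not np.isclose(m3_a.mean(), m3_c.mean(), rtol=0.05),
      not np.isclose(m2_a.mean(), m2_c.mean(), rtol=0.05))
```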
Next, we formalize our observation that, thanks to _block alignment_, interventions on \(\mathbf{G}\) are automatically mirrored on \(\mathbf{M}\):
**Proposition 4**: _Given a block-wise aligned representation \(\mathbf{M}\) to \(\mathbf{G}\), it holds that for each distinct block \(\mathbf{M}_{\mathcal{K}}\) of representations \(\mathbf{M}\), an intervention on \(\mathbf{G}_{\Pi(\mathcal{K})}\) isolates \(\mathbf{M}_{\mathcal{K}}\) from interventions on other ground-truth blocks \(\mathbf{G}_{-\Pi(\mathcal{K})}\). Moreover, distinct interventions on \(\mathbf{G}_{\Pi(\mathcal{K})}\) correspond on average to different interventions on \(\mathbf{M}_{\mathcal{K}}\)._
Importantly, this means that the effect of an intervention on the whole \(\mathbf{G}\) isolates each block in \(\mathbf{M}\) from the others, _i.e._, there is no explicit causal relation appearing in the learned representation. This matches the intuition that an intervention on a specific generative factor affects the corresponding block and removes the dependencies on other blocks of the representation.
Summarizing, block alignment entails that interventions on the ground-truth concepts are mapped properly. At the same time, alignment between blocks ensures the transformation \(\alpha\) is simulatable, meaning that users can understand changes happening to all of the variables involved. This is sufficient to guarantee name transfer can be completed successfully in the general case, assuming not too many factors are changed at a time.
### Alignment and Causal Abstractions
One important observation is that the form of name transfer we have considered is _asymmetrical_, in the sense that the user intervenes on their own representation \(\mathbf{H}\) only, and then checks how this impacts \(\mathbf{M}\). The other direction is not considered: it is not necessary to consider how intervening on \(\mathbf{M}\) impacts \(\mathbf{H}\). This leads to the setup depicted in fig. 7 (right) in which, given \(\mathfrak{C}_{\mathbf{H}}\), the effects of interventions on \(H_{i}\) are propagated to \(\mathbf{M}\) via a map \(\beta:\mathbf{H}\mapsto\mathbf{M}\), which may or may not be block-aligned.
We now consider a scenario in which the SCM of the representation \(\mathfrak{C}_{\mathbf{M}}\) is also provided17 and the effects of interventions on \(\mathbf{M}\) can be propagated leveraging its structural assignments.
Footnote 17: In practice, \(\mathfrak{C}_{\mathbf{M}}\) can be uncovered from data via causal discovery (Pearl, 2009).
Ideally, we would expect that, as long as \(\mathbf{M}\) is block-aligned to \(\mathbf{H}\), we can always find analogous post-interventional effects when intervening on \(H_{i}\) and on its aligned variable \(M_{j}\). This underlies a consistency condition between the two "worlds" that are described with \(\mathfrak{C}_{\mathbf{H}}\) and \(\mathfrak{C}_{\mathbf{M}}\), respectively, by requiring that they both lead to similar conclusions when intervened in an equivalent manner. Clearly, this does not depend solely on the nature of the map \(\beta\) but also on the structure of the machine SCM \(\mathfrak{C}_{\mathbf{M}}\).
The presence of a consistency property between \(\mathfrak{C}_{\mathbf{H}}\) and \(\mathfrak{C}_{\mathbf{M}}\) is what defines a _causal abstraction_(Beckers and Halpern, 2019; Beckers et al., 2020; Rubenstein et al., 2017), cf. (Zennaro, 2022) for an overview. Causal abstractions have been proposed to define (approximate) equivalence between causal graphs and have recently been employed in the context of explainable AI (Geiger et al., 2023, 2020). The existence of a causal abstraction ensures two systems are _interventionally equivariant_: interventions on one system can always be mapped (modulo approximations) to equivalent interventions in the other and lead to the same interventional distribution.
All causal abstractions check the consistency between two maps under the same intervention \(do(\mathbf{H}_{\mathcal{I}})\): one is defined by the post-interventional distribution of \(\mathfrak{C}_{\mathbf{H}}\) that is mapped on \(\mathbf{M}\) via \(\beta\), the other one consists of first matching on \(\mathbf{M}\) the correspondent action \(do(\mathbf{M}_{\mathcal{J}})\) and propagate it via \(\mathfrak{C}_{\mathbf{M}}\). Intuitively, this means that, under \(\beta\), interventions on \(\mathbf{H}\) lead to the same conclusion as interventions on \(\mathbf{M}\). We formalize this idea from constructive causal abstractions18(Geiger et al., 2023) by adapting it to the case where \(\mathbf{H}\) and \(\mathbf{M}\) are connected by block-alignment:
Figure 6: **Block-aligned representation** when \(\mathfrak{C}_{\mathbf{G}}\) has causal connections. (left) An intervention on \(G_{1}\) affects all representations (displayed in blue), since \((M_{1},M_{3})\) are block-aligned to \(G_{1}\) and \(M_{2}\) is aligned to \(G_{2}\). (center) Conversely, an intervention on \(G_{2}\) only affects \(M_{2}\), leaving the remaining representations untouched. (right) Intervening on all \(\mathbf{G}\) has the effect of isolating the corresponding aligned representations from other interventions. In this case, intervening on \(G_{2}\) removes the causal connection with \(G_{1}\), so that \(M_{2}\) does not depend on the intervention on \(G_{1}\). Refer to example 2 for further details.
**Definition 9** (\(\beta\)-Aligned Causal Abstraction): _The \(\mathfrak{C}_{\mathbf{M}}\) is a causal abstraction of \(\mathfrak{C}_{\mathbf{H}}\) under block-alignment \(\beta\) if, for all possible interventions \(do(\mathbf{H}_{\mathcal{I}}\leftarrow\mathbf{h}_{\mathcal{I}})\) with \(\mathbf{H}_{\mathcal{I}}\subseteq\mathbf{H}\), the following diagram commutes:_
\[\begin{CD}do(\mathbf{H}_{\mathcal{I}}\leftarrow\mathbf{h}_{\mathcal{I}})@>{\mathfrak{C}_{\mathbf{H}}}>{}>p(\mathbf{H}\mid do(\mathbf{H}_{\mathcal{I}}\leftarrow\mathbf{h}_{\mathcal{I}}))\\ @V{}V{\beta}V@V{}V{\beta_{*}}V\\ do(\mathbf{M}_{\mathcal{J}}\leftarrow\mathbf{m}_{\mathcal{J}})@>{\mathfrak{C}_{\mathbf{M}}}>{}>p(\mathbf{M}\mid do(\mathbf{M}_{\mathcal{J}}\leftarrow\mathbf{m}_{\mathcal{J}}))\end{CD} \tag{18}\]
_where \(\beta_{*}\) denotes the push-forward operation applied to the probability \(p(\mathbf{H}\mid do(\mathbf{H}_{\mathcal{I}}\leftarrow\mathbf{h}_{\mathcal{I}}))\), and \(\mathcal{J}=\Pi^{-1}(\mathcal{I})\) is the pre-image of \(\mathcal{I}\) under \(\Pi\)._
In other words, aligned causal abstractions extend block alignment by enforcing a _symmetrical_ consistency condition over interventions when both SCMs \(\mathfrak{C}_{\mathbf{H}}\) and \(\mathfrak{C}_{\mathbf{M}}\) are known: interventions on \(\mathbf{M}\) have analogues on \(\mathbf{H}\) and vice-versa. This becomes relevant in situations where the user cannot parse the effect of an intervention on \(H_{i}\) on the input \(\mathbf{X}\), _i.e._, they do not have access to \(p(\mathbf{X}\mid do(H_{i}\gets h_{i}))\), and they are left to validate the effects of their actions through \(\beta\). In this case, leveraging on the SCM \(\mathfrak{C}_{\mathbf{M}}\), the user can check how the mirrored intervention on \(M_{j}\) spreads in the machine representations, and compare it with the corresponding representations given by \(\beta\) when the intervention is propagated on the user's factors \(\mathbf{H}_{-i}\).
Therefore, while a map \(\beta\) being aligned is a necessary condition, it is not sufficient to guarantee a successful _name transfer_ if \(\mathfrak{C}_{\mathbf{M}}\) is highly dissimilar from \(\mathfrak{C}_{\mathbf{H}}\). We show this situation explicitly in the following example, where, despite having alignment between the user and the machine, the consistency condition in eq. (18) does not hold.
**Example 3**: _We consider two SCMs, one over user variables \(\mathfrak{C}_{\mathbf{H}}\) and one over the machine ones \(\mathfrak{C}_{\mathbf{M}}\). As shown in fig.7 (left), the two SCMs have a different structure and for ease of reference we refer to \(H_{1}\) and \(M_{1}\) as the temperature variable and to \(H_{2}\) and \(M_{2}\) as the color variable. Despite the different structure, we suppose \(M_{1}\) and \(M_{2}\) are aligned to \(H_{1}\) and \(H_{2}\), respectively, via an aligned map \(\beta\). We indicate the overall causal graph as \(\mathfrak{C}_{\mathbf{H}\rightarrow\mathbf{M}}\), see fig.7 (right)._
_We can now check that \(\mathfrak{C}_{\mathbf{M}}\) is not an aligned abstraction of \(\mathfrak{C}_{\mathbf{H}}\) under \(\beta\). In fact, intervening on \(H_{1}\) leads to different results on \(\mathfrak{C}_{\mathbf{M}}\) and \(\mathfrak{C}_{\mathbf{H}\rightarrow\mathbf{M}}\). For the former, changing the temperature amounts to modify only the corresponding variable \(M_{1}\) and does not affect \(M_{2}\), as evident in fig.7 (left). Conversely, a change in the temperature under alignment corresponds also to a change in color for the variable \(M_{2}\), as depicted in fig.7 (right). The two interventional effects, hence, do not coincide and \(\mathfrak{C}_{\mathbf{M}}\) is not an aligned causal abstraction of \(\mathbf{H}\)._
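The failure described in example 3 can also be checked numerically. The sketch below compares the two paths of the diagram in eq. (18) for assumed structural assignments: propagating \(do(H_{1})\) through \(\mathfrak{C}_{\mathbf{H}}\) and pushing forward via \(\beta\), versus mirroring the intervention on \(M_{1}\) and propagating it through an edge-less \(\mathfrak{C}_{\mathbf{M}}\). The mismatch in the resulting distribution of \(M_{2}\) signals the absence of an aligned causal abstraction.

```python
# Minimal numerical check of the consistency condition in definition 9 for a
# setting like example 3: C_H has an edge H1 -> H2, C_M has no edges, and beta
# maps H1 -> M1, H2 -> M2 element-wise. Structural assignments are assumptions.
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

def sample_H(do_h1=None):
    h1 = np.full(N, do_h1) if do_h1 is not None else rng.normal(0, 1, N)
    h2 = 0.8 * h1 + rng.normal(0, 0.1, N)        # C_H: H1 -> H2
    return h1, h2

def beta(h1, h2):
    """Aligned, element-wise map: M1 = 2*H1, M2 = H2 + 1."""
    return 2.0 * h1, h2 + 1.0

# Path 1: intervene on H1, propagate through C_H, then push forward via beta.
m1_a, m2_a = beta(*sample_H(do_h1=1.5))

# Path 2: mirror the intervention on M1 and propagate through C_M, which has no
# causal edges, so M2 keeps its observational distribution.
h1_obs, h2_obs = sample_H()
_, m2_b = beta(h1_obs, h2_obs)                    # M2 unaffected under C_M

print("E[M2] via C_H + beta:", m2_a.mean())       # ~ 0.8 * 1.5 + 1.0 = 2.2
print("E[M2] via C_M:       ", m2_b.mean())       # ~ 0.0 + 1.0 = 1.0
print("diagram commutes:", np.isclose(m2_a.mean(), m2_b.mean(), atol=0.05))
```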
## 5 Discussion and Limitations
Our work provides a crisp requirement that machine representations should satisfy to ensure interpretability, namely _alignment_ with the human's concept vocabulary. Next, we address important issues arising from this requirement.
Figure 7: **Absence of Aligned Causal Abstraction.** (left) The user’s \(\mathfrak{C}_{\mathbf{H}}\) incorporates a causal connection from \(H_{1}\) to \(H_{2}\), while the machine one \(\mathfrak{C}_{\mathbf{M}}\) presents no causal connections. (right) The total SCM \(\mathfrak{C}_{\mathbf{H}\rightarrow\mathbf{M}}\) of user’s and machine’s concepts resulting from an aligned map \(\beta:\mathbf{H}\rightarrow\mathbf{M}\) (in blue). Refer to example 3 for further discussion.
### Is Perfect Alignment Sufficient and Necessary?
It is natural to ask whether perfect alignment is a _sufficient_ and _necessary_ condition for interpretability of machine concepts. Recall that alignment is born out of two desiderata. The first one is that of _subjectivity_: a concept is understandable _to_ a particular human observer, with different observers having different expertise and knowledge. This is captured by the human's vocabulary \(\mathbf{H}\) in our definition. The second one is that of guaranteeing that _machine and human concepts sharing the same name also share the same semantics_, translated into the desideratum that whenever a human concept changes the human can anticipate how this will change the machine representation. For instance, if the human and the machine see the same picture of a dog, the _human_ can easily figure out what concept encodes the notion of "dog" and how it would change if they were to delete the dog from the picture.19
Footnote 19: This last point takes into account, at least partially, the limited cognitive processing abilities of human agents.
_Is alignment sufficient?_ Simply ensuring that two agents share aligned representations does not automatically entail that symbolic communication will be successful. For instance, a human observer may misinterpret a machine explanation built out of aligned concepts simply due to inattention, confusion, or information overload. These are all important elements in the equation of interpretability, and we do not intend to dismiss them. The way in which information is _presented_ is about as important as the _contents_ of the information being conveyed. The problem of designing interfaces that ensure the presentation is engaging and easy to understand is however beyond the scope of this paper. This does not impact our core message, that is, that _lack_ of alignment can severely hamper communication and that therefore approaches for learning and evaluating conceptual representations should be designed with this requirement in mind.
_Is alignment necessary?_ We also point out that perfect alignment is not strictly _necessary_, for two reasons. First, it is enough that alignment holds only _approximately_. Slight differences in semantics between machine and human concepts are unlikely to have major effects on communication. This is compatible with the empirical observation that people can often successfully communicate even without fully agreeing on the semantics of the words they exchange [10]. In practice, the degree of misalignment, and its impact on the communication, can be defined and measured, at which point the maximum allowed misalignment becomes an application-specific variable. Second, it may not be necessary that alignment holds _everywhere_. If two agents exchange only a subset of possible messages (_e.g._, explanations), concepts not appearing in those messages need not be aligned. For instance, ensuring a CBM classifying apples as ripe or not to be interpretable only requires the concepts appearing in its explanations to be aligned, and possibly only those values that actually occur in the explanations (_e.g._, \(\texttt{color}=\texttt{red}\) but not \(\texttt{color}=\texttt{blue}\)). This can be understood as a more lax form of alignment applying only to a certain subset of (values of) the generative factors \(\mathbf{g}_{\mathcal{I}}\), _e.g._, those related to apples. It is straightforward to relax definition 6 in this sense by restricting it to a subset of the support of \(p^{*}(\mathbf{G}_{\mathcal{I}})\) from which the inputs \(\mathbf{X}\) are generated, as these constrain the messages that the two agents can exchange.
### Measuring Alignment
While there exist several metrics for measuring interpretability of concepts (discussed in section 6.4), here we are concerned with techniques for assessing _alignment_.
Considering the relation between alignment and disentanglement (**D1**), one option is to leverage one of the many measures of disentanglement proposed in the literature [10]. The main issue is that most of them provide little information about how simple the map \(\alpha\) (**D2**) is, and as such they cannot be reused as-is. However, for the disentangled case (cf. section 4.1), Marconato et al. (2022) noted that one can measure alignment using the linear DCI (Eastwood and Williams, 2018). Essentially, this metric checks whether there exists a _linear regressor_ that, given \(\mathbf{m}_{\mathcal{J}}\), can predict \(\mathbf{g}_{\mathcal{I}}\) with high accuracy, such that each \(M_{j}\) is predictive for at most one \(G_{i}\). In practice, doing so involves collecting a set of annotated pairs \(\{(\mathbf{m}_{\mathcal{J}},\mathbf{g}_{\mathcal{I}})\}\), where the \(m_{j}\)'s and \(g_{i}\)'s are rescaled in \([0,1]\), and fitting a linear regressor on top of them using \(L_{1}\) regularization. DCI then considers the (absolute values of the) regressor coefficients \(B\in\mathbb{R}^{|\mathcal{J}|\times|\mathcal{I}|}\) and evaluates the average dispersion of the rows \(B_{j}\), one for each machine representation \(M_{j}\). In short, if each \(M_{j}\) predicts only a single \(G_{i}\), and with high accuracy, then linear DCI is maximal. The key insight is that the existence of such a linear map implies both disentanglement (**D1**) _and_ monotonicity (**D2**), and therefore also alignment. The main downside is that the converse does not hold, that is, linear DCI cannot account for non-linear monotonic relationships.
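The following is a minimal sketch of this linear-DCI style check on synthetic data: we fit one \(L_{1}\)-regularized linear regressor per factor and inspect how concentrated each row of the resulting coefficient matrix \(B\) is. The data-generating map and the regularization strength are assumptions made for illustration.

```python
# Minimal sketch of the linear-DCI style check: fit an L1-regularized linear
# regressor from machine representations m to (rescaled) factors g and inspect
# how concentrated each row of the coefficient matrix B is. Synthetic data.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(0)
n, d = 5000, 3
g = rng.uniform(size=(n, d))                              # ground-truth factors in [0, 1]
m = np.stack([2.0 * g[:, 0], 0.5 * g[:, 1] + 0.1, 0.8 * g[:, 2] + 0.05], axis=1)
m += rng.normal(0, 0.01, size=m.shape)                    # aligned, monotone, noisy

m01 = MinMaxScaler().fit_transform(m)
B = np.abs(np.stack([Lasso(alpha=1e-3).fit(m01, g[:, i]).coef_ for i in range(d)], axis=1))
# B[j, i] ~ how much representation M_j contributes to predicting factor G_i.

def disentanglement_score(B, eps=1e-12):
    """1 minus the normalized entropy of each row of B, averaged (DCI 'D' term)."""
    P = B / (B.sum(axis=1, keepdims=True) + eps)
    H = -(P * np.log(P + eps)).sum(axis=1) / np.log(B.shape[1])
    return float((1.0 - H).mean())

print("coefficient matrix B (rows = M_j, cols = G_i):\n", B.round(2))
print("disentanglement score:", round(disentanglement_score(B), 3))
```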
The alternative we advocate is that of _decoupling_ the measurement of **D1** and **D2**, and to leverage causal notions for the former. **D1** can for instance be measured using the _interventional robustness score_ (IRS) [Suter et al., 2019], an empirical version of EMPIDA (definition 2) that - essentially - measures the average effect of interventions on \(\mathbf{G}_{\mathcal{I}}\) on the machine representation. Alternatives include, for instance, DCI-ES Eastwood et al. (2022), which can better capture the degree by which factors are mixed and the mutual information gap (MIG) (Chen et al., 2018). These metrics allow to establish an empirical map \(\pi\) between indices of the human and machine representations, using which it is possible to evaluate **D2** separately. One option is that of evaluating Spearman's rank correlation between the distances:
\[|g_{i}-g_{i}^{\prime}|^{2}\quad\text{and}\quad\|\mathbb{E}[\mathbf{M}_{ \mathcal{J}}\mid do(G_{i}\gets g_{i})]-\mathbb{E}[\mathbf{M}_{\mathcal{J} }\mid do(G_{i}\gets g_{i}^{\prime})]\|_{2}^{2} \tag{19}\]
for interventions \(g_{i}\) and \(g_{i}^{\prime}\), leaving \(\mathbf{G}_{-i}\) fixed, for each \(i\in\mathcal{I}\) and multiple traversals \((g_{i},g_{i}^{\prime})\).
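A minimal sketch of this **D2** check is given below, assuming access to interventional means \(\mathbb{E}[\mathbf{M}_{\mathcal{J}}\mid do(G_{i}\gets g_{i})]\) (here simulated with an assumed monotone encoder) and an index map \(\pi\) obtained from a **D1** metric such as IRS.

```python
# Minimal sketch of the D2 check in eq. (19): for a factor G_i, compare squared
# distances between intervention values with squared distances between the
# induced mean representations, via Spearman's rank correlation. Synthetic data.
import numpy as np
from itertools import combinations
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

def mean_M_given_do_gi(g_i, n=2000):
    """E[M_J | do(G_i <- g_i)] under an assumed monotone (but non-linear) encoder."""
    noise = rng.normal(0, 0.05, size=(n, 2))
    m = np.stack([np.tanh(2.0 * g_i) * np.ones(n), 0.5 * np.ones(n)], axis=1) + noise
    return m.mean(axis=0)

traversal = np.linspace(-2.0, 2.0, 9)                 # intervention values g_i, g_i'
means = {g: mean_M_given_do_gi(g) for g in traversal}

d_g, d_m = [], []
for g, g_prime in combinations(traversal, 2):
    d_g.append((g - g_prime) ** 2)
    d_m.append(np.sum((means[g] - means[g_prime]) ** 2))

rho, _ = spearmanr(d_g, d_m)
print(f"Spearman rank correlation (D2 proxy): {rho:.3f}")
```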
Unfortunately, none of the existing metrics are suited for non-disentangled generative factors \(\mathbf{G}_{\mathcal{I}}\) or human representations \(\mathbf{H}\), which are central for alignment in the block-wise (section 4.4) and general (section 4.5) cases. We leave an in-depth study of more generally applicable metrics to future work.
### Consequences for Concept-based Explainers
Recall that CBEs explain the predictions of black-box models by extracting interpretable concepts \(\hat{\mathbf{H}}\) from the model's internal representation \(\mathbf{M}\) and then evaluating their contribution to the prediction (cf. section 3.1). In this case, the requirement is that \(\hat{\mathbf{H}}\) is aligned to the human's concept vocabulary \(\mathbf{H}\) - irrespective of how the former is extracted. Notice that _alignment_ is orthogonal to _faithfulness_, in the sense that an aligned representation can be unfaithful to the model, and a faithful representation misaligned with the human. In other words, _alignment is a property of the map from \(\mathbf{H}\) to \(\hat{\mathbf{H}}\), while faithfulness is a property of the map between \(\mathbf{M}\) and \(\hat{\mathbf{H}}\)_.
If the mapping from \(\mathbf{M}\) to \(\hat{\mathbf{H}}\) is _invertible_, then it is always possible to map back and forth - in a lossless manner - from the machine representations \(\mathbf{M}\) to the surrogate \(\hat{\mathbf{H}}\). This is a solid basis for faithfulness: whatever information is conveyed by an explanation built on \(\hat{\mathbf{H}}\) can always be cast in terms of the machine representation itself,20 and whatever relation the latter has with the prediction can be mapped in terms of human concepts.
Footnote 20: The resulting explanation may no longer be simple or understandable, but it still contains all the information of the original message.
In the general case, however, it is non-trivial to find a suitable invertible function. Suppose the user provides the machine with annotated examples \((\mathbf{x}_{i},\mathbf{h}_{i})\) and that these are used - as is common with supervised CBEs, see section 6.2 - to learn the mapping from \(\mathbf{M}\) to \(\hat{\mathbf{H}}\). Ensuring that this is invertible requires potentially an enormous amount of examples. To see this, consider a simple case in which the human concepts \(\mathbf{H}\) are binary and disentangled and that \(\mathbf{M}\) and \(\mathbf{H}\) are related by a (possibly complex) invertible mapping that is not an alignment. Even in this ideal case, it might take up to \(2^{\ell}\) examples - where \(\ell\) is the dimension of \(\mathbf{H}\) - to align the two representations, as this involves finding the correct permutation from \(\mathbf{M}\) to \(\mathbf{H}\). Alignment can help in this regard. In fact, if \(\mathbf{M}\) is aligned to \(\mathbf{H}\), the number of required examples scales as \(\mathcal{O}(\ell)\), because a single intervention to each user concept \(H_{i}\) is sufficient to find the corresponding aligned element \(M_{j}\).
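The \(\mathcal{O}(\ell)\) argument can be made concrete with the following sketch, which assumes we can intervene on each human concept individually and that \(\mathbf{M}\) is aligned to \(\mathbf{H}\) through an unknown permutation (with sign flips); one probe per concept then recovers the correspondence. The map and the probing procedure are illustrative assumptions.

```python
# Minimal sketch of the O(l) argument: when M is aligned to H (here via an
# unknown permutation with sign flips), a single intervention per human concept
# H_i reveals which machine variable responds, so l probes recover the mapping.
import numpy as np

rng = np.random.default_rng(0)
l = 6
perm = rng.permutation(l)                 # hidden alignment: M_j encodes H_{perm[j]}
signs = rng.choice([-1, 1], size=l)

def machine_repr(h):
    """Aligned but unknown-to-us map: M_j = signs[j] * H_{perm[j]}."""
    return signs * h[perm]

def recover_alignment():
    """One intervention per concept: set H_i = 1 and see which M_j changes."""
    base_h = np.zeros(l)
    base_m = machine_repr(base_h)
    pi = np.empty(l, dtype=int)
    for i in range(l):
        h = base_h.copy()
        h[i] = 1.0                        # do(H_i <- 1)
        changed = np.flatnonzero(~np.isclose(machine_repr(h), base_m))
        pi[changed[0]] = i                # exactly one M_j responds to H_i
    return pi

pi = recover_alignment()
assert np.array_equal(pi, perm), "each M_j should be matched to its H_i"
print("recovered mapping pi(j) -> i:", pi)
```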
In summary, not only do unaligned (black-box) models imply that CBEs require more supervision on the user concepts to acquire an invertible transformation ensuring faithfulness, but it is also likely that the representation \(\mathbf{M}\) mixes together the interpretable factors \(\mathbf{G}_{\mathcal{I}}\) with the non-interpretable ones \(\mathbf{G}_{-\mathcal{I}}\), making it more difficult to extract concepts \(\hat{\mathbf{H}}\) aligned to \(\mathbf{H}\).
### Consequences for Concept-based Models
As discussed in section 6.1, most CBMs acquire concepts using a variety of heuristics that do not guarantee alignment. To the best of our knowledge, GlanceNets (Marconato et al., 2022) are the only CBM that _explicitly_ optimizes for alignment, and as such avoids concept leakage. They do so by combining a variational auto-encoder, mapping the input \(\mathbf{X}\) to a machine representation \(\mathbf{M}=(\mathbf{M}_{\mathcal{J}},\mathbf{M}_{-\mathcal{J}})\) of which only the first partition is used for prediction, with a simple linear prediction layer, as is customary. The variational auto-encoder is trained with some amount of concept-level annotations. This encourages both disentanglement (Locatello et al., 2019) _and_ monotonicity - and hence alignment
- for _in-distribution_ data. In turn, this also prevents concept leakage. In order to avoid leakage for _out-of-distribution_ data, GlanceNets also implement an _open-set recognition_ step (Sun et al., 2020). This is responsible for detecting inputs encoding concepts that have never been observed during training. Whenever these are detected, GlanceNets refuse to output a prediction for them, thus avoiding leakage altogether.
From our perspective, GlanceNets have two major downsides. First, they are designed to seek alignment with respect to the generative factors underlying the observations. As we argued, however, interpretability requires alignment with respect to the human's concept vocabulary. Second, GlanceNets require a moderate but non-trivial number of annotations. How to acquire them from the human observer remains an open problem, discussed in section 5.5.
Summarizing, GlanceNets could be repurposed for solving alignment in the disentangled case discussed in section 4.1 by combining them with a suitable annotation elicitation procedure. They are however insufficient when the ground-truth concepts are not disentangled, and new solutions will be necessary to tackle these more complex and realistic settings.
### Collecting Human Annotations
Both metrics and learning strategies for alignment require some amount of annotations for the human factors \(\mathbf{H}\). This is a core requirement related to the subjective nature of interpretability. One option is that of distributing the annotation effort among crowd-workers, which however is impractical for prediction tasks that require specific types of expertise, like medical diagnosis. An alternative is that of gathering together annotations from different online resources or large language models (Oikarinen et al., 2022). Doing so, however, can lead to a lack of completeness (a necessary concept might be missing) and ambiguity (concept annotations might mix together different views or meanings). This kind of supervision cannot guarantee alignment to a specific human observer.
Reducing the annotation effort for personalized supervision is challenging. One option is that of leveraging generic concept annotations obtained using the above methods to pre-train the concept extractor, and then fine-tuning the resulting model using a small amount of personalized annotations. This strategy can save annotation effort as long as the generic annotations contain most of the information necessary to retrieve the observer's concepts. An alternative is to leverage concept-level interactive learning (Lage and Doshi-Velez, 2020; Chauhan et al., 2023), to request annotations only for those concepts that are less _aligned_. Naturally, one might also consider combining these two strategies, that is, interleaving fine-tuning with interactive learning, for additional gains. How to estimate alignment (or some lower bound thereof) in absence of full concept annotations is however an open research question and left to future work.
## 6 Related Work
While concepts lie at the heart of AI (Muggleton and De Raedt, 1994), the problem of acquiring _interpretable_ concepts has historically been neglected in representation learning (Bengio et al., 2013). Recently concepts have regained popularity in many areas of research, including explainable AI (Guidotti et al., 2018), neuro-symbolic AI (De Raedt et al., 2020), and causality (Pearl, 2009; Scholkopf et al., 2021), yet most concept acquisition strategies developed in these areas are only concerned with task accuracy, rather than interpretability. Next, we briefly overview strategies for acquiring interpretable representations and highlight their shortcomings for properly solving human-interpretable representation learning.
### Unsupervised Approaches
A first group of strategies learns concepts directly from unlabeled data. Well-known theoretical results in deep latent variable models cast doubts on the possibility of acquiring representations satisfying _any_ property of interest - including _disentanglement_ and _interpretability_ - in a fully unsupervised manner in absence of a strong architectural bias (Locatello et al., 2019; Khemakhem et al., 2020). This stems from the fact that, as long as the concept extraction layers are "flexible enough" (_i.e_., have no strong architectural bias), predictors relying on interpretable and uninterpretable concepts can achieve the very same accuracy (or likelihood) on both the training and test sets. As a consequence, _unsupervised strategies that
only maximize for accuracy cannot guarantee interpretability unless they are guided by an appropriate bias._ The main challenge is determining what this bias should be.
Several, mutually incompatible alternatives have been proposed. Unsupervised CBEs discover concepts in the space of neuron activations of a target model. One common bias is that concepts can be retrieved by performing a _linear decomposition_ of the machine's representation (Fel et al., 2023b). Specific techniques include k-means (Ghorbani et al., 2019), principal component analysis (Graziani et al., 2023), and non-negative matrix factorization (Zhang et al., 2021; Fel et al., 2023a). Concept responsibility is then established via feature attribution methods.
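As a purely illustrative sketch of this linear-decomposition family (and not the implementation of any of the cited methods), one can factorize a matrix of non-negative activations with NMF to obtain candidate concept directions and per-example concept activations:

```python
# Minimal sketch of the "linear decomposition" family of unsupervised CBEs:
# factor a matrix of (non-negative) activations into concept directions and
# per-example concept coefficients with NMF. Purely illustrative.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
A = np.abs(rng.normal(size=(1000, 256)))     # stand-in for ReLU activations (samples x units)

k = 10                                       # number of candidate concepts
nmf = NMF(n_components=k, init="nndsvda", max_iter=500, random_state=0)
U = nmf.fit_transform(A)                     # per-example concept activations (samples x k)
V = nmf.components_                          # concept directions in activation space (k x units)

# A prediction can then be attributed to concepts, e.g., by feeding the
# reconstruction U @ V through the remaining layers and applying a
# feature-attribution method to the k concept coefficients.
print("reconstruction error:", round(nmf.reconstruction_err_, 2))
```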
Two common biases used in CBMs are _sparsity_ and _orthonormality_. Self-Explainable Neural Networks (Alvarez-Melis and Jaakkola, 2018) encourage the former by pairing an autoencoder architecture for extracting concepts from the input together with a (simulatable (Lipton, 2018)) task-specific prediction head, and then combining a cross-entropy loss with a penalty term encouraging concepts to have sparse activation patterns. Concept Whitening (Chen et al., 2020) implements a special bottleneck layer that ensures learned concepts are _orthogonal_, so as to minimize mutual information between them and facilitate acquiring concepts with disjoint semantics, as well as _normalized_ within comparable activation ranges. The relationship between sparsity, orthonormality, and interpretability is however unclear.
Based on the observation that humans tend to reason in terms of concrete past cases (Kim et al., 2016), other CBMs constrain concepts to capture salient training examples or parts thereof, _i.e._, _prototypes_. Methods in this group include Prototype Classification Networks (Li et al., 2018), Part-Prototype Networks (Chen et al., 2019), and many others (Rymarczyk et al., 2021; Nauta et al., 2021; Singh and Yow, 2021; Davoudi and Komeili, 2021). At a high level, they all memorize one or more prototypes (_i.e._, points in latent space) that match training examples of their associated class only. Predictions are based on the presence or absence of a match with the learned prototypes. The interpretability of this setup has however been called into question (Hoffmann et al., 2021; Xu-Darme et al., 2023). The key issue is the matching step, which is carried out in latent space. The latter is generally underconstrained, meaning that prototypes can end up matching parts of training examples that carry no useful semantics (_e.g._, arbitrary combinations of foreground and background) as long as doing so yields high training accuracy.
None of these approaches takes the human's own concept vocabulary \(\mathbf{H}\) into account.
### Supervised Strategies
A second family of approaches leverages concept _annotations_ (or some form of weak supervision). Among supervised CBEs, Net2vec (Fong and Vedaldi, 2018) defines linear combinations of convolutional filters, and fits a linear model to decide whether their denoised saliency maps encode a given concept or not, yielding a binary segmentation mask. TCAV (Kim et al., 2018) defines concepts as directions - or concept-activation vectors (CAVs) - in latent space. These are obtained by adapting the parameters of per-concept linear classifiers trained on a separate densely annotated data set to the machine's embedding space. Concept attributions are proportional to the degree by which changing their activations affects the prediction. Zhou et al. (2018) also relies on CAVs, but computes explanations by solving an optimization problem. A second group of supervised CBEs makes use of non-linear maps instead (Kazhdan et al., 2020; Gu and Tresp, 2019; Esser et al., 2020). For instance, CME (Kazhdan et al., 2020) uses all activations of the model to learn categorical concepts via semi-supervised multi-task learning, while INN (Esser et al., 2020) fits a normalizing flow from the machine representation to the concepts so as to guarantee their relationship is bijective. Similarly, supervised CBMs like Concept-Bottleneck Models (Koh et al., 2020), Concept Whitening (Chen et al., 2020), and GlanceNets (Marconato et al., 2022), among others (Yuksekgouni et al., 2022; Sawada and Nakamura, 2022; Zarlenga et al., 2022), define a training penalty, for instance a cross-entropy loss, encouraging the extracted concepts to predict the annotations.
This solution seems straightforward: there is no more direct way than concept supervision to guide the model toward acquiring representations with the intended semantics. It also circumvents the negative theoretical results outlined in section 6.1.
However, models that accurately match the supervision do not necessarily satisfy content-style separation or yield disentangled representations, which - as discussed in section 4.3 - would lead to a non-negligible amount of _concept leakage_ (Margeloiu et al., 2021; Mahinpei et al., 2021). In contrast, alignment explicitly takes both properties into account. Another major issue is the supervision itself, which is frequently obtained from general sources rather than from the human observer themselves, meaning the learned concepts may not be aligned to the concept vocabulary of the latter. Two notable exceptions are the interactive concept learning approaches of Lage and Doshi-Velez (2020) and of Erculiani et al. (2023), which are however unconcerned with concept leakage.
To the best of our knowledge, GlanceNets (Marconato et al., 2022) are the only CBM that explicitly optimizes for alignment, and as such avoid leakage, yet they do so with respect to generative factors rather than human concepts. As discussed in section 5.4, however, GlanceNets can in principle be adapted to solve human-interpretable representation learning by combining them with a suitable annotation acquisition strategy. We plan to pursue this possibility in future work.
### Disentanglement
Another relevant area of research is that on learning disentangled representations. Here, the goal is to uncover "meaningful", independent factors of variation underlying the data (Higgins et al., 2016, 2018; Locatello et al., 2019), with the hope that these are also interpretable (Bengio et al., 2013). Most current learning strategies rely on extensions of variational auto-encoders (VAEs) (Kingma and Welling, 2014; Higgins et al., 2016; Kim and Mnih, 2018; Chen et al., 2018; Esmaeili et al., 2019; Rhodes and Lee, 2021). As anticipated in section 6.1, unless suitable architectural bias is provided, unsupervised learning cannot guarantee the learned representations are disentangled. Motivated by this, follow-up works seek disentanglement via concept supervision (Locatello et al., 2020), weak supervision (Shu et al., 2020; Locatello et al., 2020), or other techniques (Lachapelle et al., 2022; Horan et al., 2021; Stammer et al., 2022). Disentanglement however is unconcerned with the human's concept vocabulary, and furthermore it is weaker than alignment, in that it does not readily support name transfer.
Independent component analysis (ICA) also seeks to acquire independent factors of variation (Comon, 1994; Hyvarinen et al., 2001; Naik and Kumar, 2011). These methods assume the generative factors are independent from each other and determine an observation via an injective or invertible map. The objective of ICA is to recover the generative factors from the observations. While the linear case is well understood (Comon, 1994), the non-linear case is arguably more difficult. It was shown that _identifying_ the ground-truth factors is impossible in the unsupervised setting (Hyvarinen and Pajunen, 1999). This is analogous to the results mentioned in section 6.1, and in fact a formal link between deep latent variable models and identifiability has recently been established (Khemakhem et al., 2020). On the positive side, it is possible to show that providing auxiliary supervision on the factors guarantees identification up to permutation and negation, a property known as _strong identifiability_. _Weak identifiability_ (Buchholz et al., 2022) relaxes this requirement: the generative factors are recovered up to a transformation of the form \(A\mathbf{g}+\mathbf{b}\), where \(rank(A)\geq\min(\dim\mathbf{G},\dim\mathbf{M})\), \(\mathbf{M}\) is the machine representation, and \(\mathbf{b}\) is an offset. Hyvarinen and Morioka (2017) also contemplate _identifiability up to element-wise non-linearities_, that is, identifiability up to the class of transformations \(A\sigma[\mathbf{g}]+\mathbf{b}\), where \(\sigma\) can be non-linear. If \(\sigma\) is restricted to be monotonic and \(A\) is an element-wise transformation, according to condition **D1** in definition 6, then this form of identifiability matches that of alignment in the disentangled case. However, this formulation refers to identification of the generative factors, while alignment is defined specifically in terms of human concepts. Moreover, we do not assume the map from human to machine concepts to be injective, nor to be exact.
### Metrics of Concept Quality
Several metrics have been proposed for assessing the quality of extracted concepts and of explanations built on them. Standard measures include accuracy and surrogates thereof (Kim et al., 2018), Jaccard similarity (Fong and Vedaldi, 2018), sparsity, stability, and the ability to reconstruct the model's internal representation (Fel et al., 2023), and the degree by which concepts constitute a sufficient statistic for the prediction (Yeh et al., 2020). We refer to (Schwalbe, 2022) for an overview. These metrics, however, either entirely neglect the role of the human observer - in that concept annotations are either not used or not obtained from the observer themselves - or fail to account for disentanglement and concept leakage. Alignment fills these gaps. Recently, two new metrics have been proposed to measure the concept impurity across individual learnt concepts and among sets of representations (Zarlenga et al., 2023), but the relation with alignment has not been uncovered yet.
There also exist a number of metrics for measuring disentanglement, such as \(\beta\)-VAE score (Higgins et al., 2016), Factor-VAE score (Kim and Mnih, 2018), mutual information gap (Chen et al., 2018), DCI (Eastwood and Williams, 2018), and IRS (Suter et al., 2019). DCI provides also information about the
informativeness of its estimate and, following (Marconato et al., 2022), it can be repurposed to measure a form of alignment in which the \(\mu\) transformations of definition 6 are linear. Suter et al. (2019) propose EMPIDA to analyze disentanglement from a causal perspective, upon which we base the construction of alignment. As mentioned in section 5.2, these metrics can be used to evaluate **D1** in the definition of alignment, and therefore alignment itself when paired with a metric for measuring the complexity of \(\alpha\) (**D2**). Their properties are extensively discussed in Zaidi et al. (2020).
### Neuro-Symbolic Architectures
The decomposition between low-level perception - that is, mapping inputs to concepts, also known as _neural predicates_ in this setting - and high-level inference outlined in section 3.1 applies also to many neuro-symbolic (NeSy) models. Examples include DeepProblog (Manhaeve et al., 2018), Logic Tensor Networks (Donadello et al., 2017), and related architectures (Diligenti et al., 2017; Fischer et al., 2019; Giunchiglia and Lukasiewicz, 2020; Yang et al., 2020; Huang et al., 2021; Marra and Kuzelka, 2021; Ahmed et al., 2022; Misino et al., 2022; Winters et al., 2022; van Krieken et al., 2022; Ciravegna et al., 2023). The biggest difference between CBMs and NeSy architectures is how they implement the top layer: the former rely on simulatable layers, while the latter on reasoning layers that take prior symbolic knowledge into account and are not necessarily simulatable.
Recent works (Marconato et al., 2023a,b) showed that learning a NeSy model consistent with prior knowledge using only label supervision is insufficient to guarantee the neural predicates capture the intended semantics. For instance, it is not uncommon that NeSy architectures attain high prediction accuracy by acquiring neural predicates that encode information about distinct and unrelated concepts. Interpretability of the _neural predicates_ however also requires alignment, meaning that our results apply to these NeSy architectures as well.
## 7 Conclusion
Motivated by the growing importance of interpretable representations for both _post-hoc_ and _ante-hoc_ explainability, we have introduced and studied the problem of _human-interpretable representation learning_. Our key intuition is that concepts are interpretable only as long as they support symbolic communication with an interested human observer. Based on this, we developed a formal notion of alignment between distributions, rooted in causality, that ensures concepts can support symbolic communication and that applies to both _post-hoc_ concept-based explainers and concept-based models. In addition, we clarified the relationship between alignment and the well-known notions of disentanglement, illustrating why the latter is not enough for interpretability, and uncovered a previously unknown link between alignment and concept leakage. Finally, looking at alignment in the most general case, we also unearthed its link to causal abstractions, which further cements the link between interpretability and causality and that we plan to expand on in future work. With this paper, our aim is that of bridging the gap between the human and the algorithmic sides of interpretability, with the hope of providing a solid, mathematical ground on which new research on human-interpretable representation learning can build.
## Acknowledgements
We acknowledge the support of the MUR PNRR project FAIR - Future AI Research (PE00000013) funded by the NextGenerationEU. The research of ST and AP was partially supported by TAILOR, a project funded by EU Horizon 2020 research and innovation programme under GA No 952215.
## Appendix A Proofs
### Proof of Proposition 1
The proof requires averaging over the confounds \(\mathbf{C}\), encompassing the general case where different \(G\)'s may be correlated. To this end, we define the distributions \(p(\mathbf{g})=\mathbb{E}_{\mathbf{C}}[p(\mathbf{G}\mid\mathbf{C})]\) and \(p(\mathbf{G}\mid do(G_{i}\gets g_{i}))=\mathbbm{1}\{G_{i}=g_{i}\}\,\mathbb{E}_{\mathbf{C}}[p(\mathbf{G}_{-i}\mid\mathbf{C})]\).
The proof is split into two parts: (_i_) proving that **D1** implies disentanglement, and (_ii_) the other way around.
(\(i\)) Assume that **D1** holds. Then, the conditional distribution of \(\mathbf{M}\) can be written as:
\[p_{\theta}(\mathbf{m}_{\mathcal{J}}\mid\mathbf{g})=\prod_{j\in\mathcal{J}}p_{ \theta}(m_{j}\mid g_{\pi(j)}) \tag{20}\]
We proceed to show that eq. (20) is disentangled in \((\mathbf{G}_{\mathcal{I}},\mathbf{M}_{\mathcal{J}})\). For each \(j\in\mathcal{J}\), it holds that the minimum value of \(\mathsf{EMPIDA}(G_{i},M_{j})\) is obtained when \(i=\pi(j)\). That is because:
\[\begin{split} p_{\theta}(M_{j}\mid do(G_{\pi(j)}\gets g_{ \pi(j)}))&=\mathbb{E}_{\mathbf{g}_{-\pi(j)}}[p_{\theta}(M_{j} \mid g_{\pi(j)})]\\ p_{\theta}(M_{j}\mid do(G_{\pi(j)}\gets g_{\pi(j)},\mathbf{G} _{-\pi(j)}\leftarrow\mathbf{g}_{-\pi(j)}))&=p_{\theta}(M_{j} \mid g_{\pi(j)})\end{split} \tag{21}\]
Note that the first distribution is independent of \(\mathbf{g}_{-\pi(j)}\), so it is equivalent to the latter. Hence, \(\mathsf{EMPIDA}(G_{\pi(j)},M_{j})\) vanishes \(\forall j\in\mathcal{J}\), yielding the claim.
(\(ii\)) Let now \(\mathbf{M}_{\mathcal{J}}\) be disentangled with respect to \(\mathbf{G}_{\mathcal{I}}\), that is:
\[\max_{j\in\mathcal{J}}\min_{i\in\mathcal{I}}\mathsf{EMPIDA}(G_{i},M_{j})=0 \tag{22}\]
which is verified _iff_ it holds that \(\min_{i\in\mathcal{I}}\mathsf{EMPIDA}(G_{i},M_{j})=0\) for all \(j\). We now proceed by contradiction to show that vanishing \(\mathsf{EMPIDA}\) is only consistent with **D1**. Suppose there exists at least one \(j\in\mathcal{J}\) such that:
\[\alpha(\mathbf{g},\mathbf{N})_{j}=\mu_{j}(\mathbf{g}_{\mathcal{K}},N_{j}) \tag{23}\]
where \(\mathcal{K}\subseteq\mathcal{I}\) contains at least two elements. Therefore, the probability distribution for \(M_{j}\) can be written in general as \(p(m_{j}\mid\mathbf{g}_{\mathcal{K}})\). Plugging this condition into the evaluation of \(\mathsf{EMPIDA}\) we obtain, for every \(k\in\mathcal{K}\):
\[\begin{split} p(M_{j}\mid do(G_{k}\gets g_{k}))&=\mathbb{E}_{\mathbf{g}_{\mathcal{K}\setminus\{k\}}}[p_{\theta}(m_{j}\mid g_{k},\mathbf{g}_{\mathcal{K}\setminus\{k\}})]\\ p(M_{j}\mid do(G_{k}\gets g_{k},\mathbf{G}_{-k}\leftarrow\mathbf{g}_{-k}^{\prime}))&=p_{\theta}(m_{j}\mid g_{k},\mathbf{g}_{\mathcal{K}\setminus\{k\}}^{\prime})\end{split} \tag{24}\]
Then, the two distributions coincide, and \(\mathsf{EMPIDA}\) is zero, _iff_ there exists a \(k\in\mathcal{K}\) such that all possible interventions \(\mathbf{G}_{\mathcal{K}\setminus\{k\}}\leftarrow\mathbf{g}_{\mathcal{K} \setminus\{k\}}^{\prime}\) do not deviate from the expected distribution, formally:
\[\forall\mathbf{g}_{\mathcal{K}\setminus\{k\}}^{\prime}\quad p(m_{j}\mid g_{ k},\mathbf{g}_{\mathcal{K}\setminus\{k\}}^{\prime})=\mathbb{E}_{\mathbf{g}_{ \mathcal{K}\setminus\{k\}}}p_{\theta}(m_{j}\mid\mathbf{g}_{\mathcal{K}}) \tag{25}\]
which holds _iff_ \(p_{\theta}(m_{j}\mid g_{k},\mathbf{g}_{\mathcal{K}\setminus\{k\}})\) does not actually depend on \(\mathbf{g}_{\mathcal{K}\setminus\{k\}}\), that is, _iff_ \(M_{j}\) depends on \(G_{k}\) alone, which contradicts the assumption that \(\mathcal{K}\) contains at least two elements. This proves the claim.
### Proof of Proposition 2
In the following, we adopt the shorthand \(\mathbf{m}=\mathbf{m}_{\mathcal{J}}\), and reintroduce the dependency on \(\mathbf{m}_{-\mathcal{J}}\) at the end. First, we show that the maximum of the second term in \(\Lambda\) in eq. (12) coincides with the Shannon entropy of \(Y\):
\[\begin{split}\mathcal{L}_{r}(\gamma)&=\mathbb{E}_{p (\mathbf{x},y)}\big{[}\log r_{\gamma}(y)\big{]}\\ &=\int p(\mathbf{x},y)\log r_{\gamma}(y)\,\mathrm{d}\mathrm{x} \mathrm{d}y\\ &=\int p(y)\log\frac{r_{\gamma}(y)p(y)}{p(y)}\,\mathrm{d}y\\ &=-H(Y)-\mathsf{KL}(p(Y)\mid\mid r_{\gamma}(Y))\end{split} \tag{26}\]
where \(p(Y)\) denotes the marginal distribution of \(Y\), \(H(Y)\) is the Shannon entropy given by \(p(Y)\), and \(\mathsf{KL}\) is the Kullback-Leibler divergence. Since the KL is always non-negative, the previous equation yields the upper bound:
\[\max_{\gamma}[\mathcal{L}_{r}(\gamma)]=-H(Y) \tag{27}\]
We proceed similarly to obtain a lower-bound:
\[\mathcal{L}_{CL}(\lambda) =\int p(\mathbf{x},y)\log\Big{(}\int q_{\lambda}(y\mid\mathbf{m})p_{ \theta}(\mathbf{m}\mid\mathbf{x})\,\mathrm{d}\mathbf{m}\Big{)}\mathrm{d} \mathbf{x}\mathrm{d}y \tag{28}\] \[\geq\int p(\mathbf{x})p_{\theta}(\mathbf{m}\mid\mathbf{x})p(y\mid \mathbf{x})\log q_{\lambda}(y\mid\mathbf{m})\,\mathrm{d}\mathbf{x}\,\mathrm{d} \mathbf{m}\,\mathrm{d}y\] \[=\int p_{\theta}(\mathbf{m},y)\log q_{\lambda}(y\mid\mathbf{m}) \,\mathrm{d}\mathbf{m}\mathrm{d}y\] \[=\int p_{\theta}(\mathbf{m},y)\log\frac{q_{\lambda}(y\mid\mathbf{ m})p_{\theta}(\mathbf{m})p(y)p_{\theta}(\mathbf{m},y)}{p_{\theta}(\mathbf{m})p(y)p_{ \theta}(\mathbf{m},y)}\,\mathrm{d}\mathbf{m}\mathrm{d}y\] \[=-H(Y)-\mathsf{KL}(p_{\theta}(\mathbf{M},Y)\mid\mid q_{\lambda, \theta}(\mathbf{M},Y))+\mathrm{I}(\mathbf{M},Y)\]
where \(p_{\theta}(\mathbf{m},y)=\int p(\mathbf{x})p_{\theta}(\mathbf{m}\mid\mathbf{x })p(y\mid\mathbf{x})\,\mathrm{d}\mathbf{x}\), \(p_{\theta}(\mathbf{m})\) is the posterior of the encoding distribution, \(q_{\lambda,\theta}(\mathbf{m},y):=q_{\lambda}(y\mid\mathbf{m})p_{\theta}( \mathbf{m})\) denotes the joint probability, and \(\mathrm{I}(\mathbf{M},Y)\) is the mutual information for the random variables \(\mathbf{M}\) and \(Y\), distributed according to \(p_{\theta}(\mathbf{M},y)\). Maximizing the lower-bound implies learning a predictor \(q_{\lambda}(y\mid\mathbf{m})\) that minimizes the KL term. By the previous equation this happens _iff_\(q_{\lambda}(y,\mathbf{m})\) matches \(p_{\theta}(\mathbf{m},y)\). The lower-bound for the first term of \(\Lambda\) hence becomes:
\[\max_{\lambda}[\mathcal{L}_{CL}(\lambda)]\geq-H(Y)+\mathrm{I}(\mathbf{M},Y) \tag{29}\]
Adding this term to the second one retrieves the definition of concept leakage and shows that it is lower-bounded by:
\[\Lambda\geq\mathrm{I}(\mathbf{M}_{\mathcal{J}},Y) \tag{30}\]
We now proceed deriving the upper-bound for the first term:
\[\begin{split}\mathcal{L}_{CL}(\lambda)&=\int p(\mathbf{x},y)\log\Big{(}\int q_{\lambda}(y\mid\mathbf{m})p_{\theta}(\mathbf{m}\mid\mathbf{x})\,\mathrm{d}\mathbf{m}\Big{)}\mathrm{d}\mathbf{x}\mathrm{d}y\\ &=\int p(\mathbf{g}_{\mathcal{I}})q(\mathbf{g}_{-\mathcal{I}})\Big{[}\int p(\mathbf{x}\mid\mathbf{g})p(y\mid\mathbf{g}_{-\mathcal{I}})\log\Big{(}\int q_{\lambda}(y\mid\mathbf{m})p_{\theta}(\mathbf{m}\mid\mathbf{x})\,\mathrm{d}\mathbf{m}\Big{)}\mathrm{d}\mathbf{x}\,\mathrm{d}y\Big{]}\,\mathrm{d}\mathbf{g}_{\mathcal{I}}\,\mathrm{d}\mathbf{g}_{-\mathcal{I}}\\ &\leq\int q(\mathbf{g}_{-\mathcal{I}})\Big{[}\int p(y\mid\mathbf{g}_{-\mathcal{I}})\log q_{\lambda,\theta}(y\mid\mathbf{g}_{-\mathcal{I}})\,\mathrm{d}y\Big{]}\,\mathrm{d}\mathbf{g}_{-\mathcal{I}}\\ &=\int q(\mathbf{g}_{-\mathcal{I}})\Big{[}\int p(y\mid\mathbf{g}_{-\mathcal{I}})\log\frac{q_{\lambda,\theta}(y\mid\mathbf{g}_{-\mathcal{I}})\,p(y)\,p(y\mid\mathbf{g}_{-\mathcal{I}})\,q(\mathbf{g}_{-\mathcal{I}})}{p(y)\,p(y\mid\mathbf{g}_{-\mathcal{I}})\,q(\mathbf{g}_{-\mathcal{I}})}\,\mathrm{d}y\Big{]}\,\mathrm{d}\mathbf{g}_{-\mathcal{I}}\\ &=\int p(y)\log p(y)\,\mathrm{d}y+\int q(\mathbf{g}_{-\mathcal{I}})p(y\mid\mathbf{g}_{-\mathcal{I}})\log\Big{[}\frac{q_{\lambda,\theta}(y\mid\mathbf{g}_{-\mathcal{I}})}{p(y\mid\mathbf{g}_{-\mathcal{I}})}\cdot\frac{p(y,\mathbf{g}_{-\mathcal{I}})}{p(y)q(\mathbf{g}_{-\mathcal{I}})}\Big{]}\,\mathrm{d}y\,\mathrm{d}\mathbf{g}_{-\mathcal{I}}\\ &=-H(Y)-\mathbb{E}_{\mathbf{g}_{-\mathcal{I}}\sim q(\mathbf{g}_{-\mathcal{I}})}[\mathsf{KL}(p(Y\mid\mathbf{G}_{-\mathcal{I}})\mid\mid q_{\lambda,\theta}(Y\mid\mathbf{G}_{-\mathcal{I}}))]+I(\mathbf{G}_{-\mathcal{I}},Y)\end{split} \tag{31}\]
where in the second line we decomposed \(p(\mathbf{x},y)\) with the data generation process, and in the third line we made use of Jensen inequality when bringing \(\int p(\mathbf{x}\mid\mathbf{g}_{\mathcal{I}})p(\mathbf{g}_{I})\mathrm{d} \mathbf{x}\,\mathrm{d}\mathbf{g}_{\mathcal{I}}\) in the logarithm, and we denoted with \(q_{\lambda,\theta}(y\mid\mathbf{g}_{-\mathcal{I}})\) the conditional distribution obtained by marginalizing over all expectations in the logarithm. Overall, the only part depending on \(\lambda\) appears in the KL term and \(I(\mathbf{G}_{-\mathcal{I}},Y)\) is the mutual information for the probability distribution \(p(y\mid\mathbf{g}_{-\mathcal{I}})q(\mathbf{g}_{-\mathcal{I}})\). Notice that the maximum of the upper-bound for \(\mathcal{L}_{CL}(\lambda)\) corresponds to a vanishing KL term and hence the upper-bound for \(\Lambda\) results in:
\[\Lambda\leq I(\mathbf{G}_{-\mathcal{I}},Y) \tag{32}\]
Finally, we arrive at the claim:
\[I(\mathbf{M}_{\mathcal{J}},Y)\leq\Lambda\leq I(\mathbf{G}_{-\mathcal{I}},Y) \tag{33}\]
which concludes the proof.
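As a side illustration (ours) of the mechanism behind the sandwich in eq. (33): when a representation depends on the inputs only through the generative factors, the data-processing inequality forces its mutual information with \(Y\) to stay below that of the factors themselves. The toy discrete distributions below are invented for the example and do not reproduce the paper's exact setting.

```python
# Hypothetical toy check (ours): if M is a (noisy) readout of G alone and Y depends on G
# only, then I(M;Y) <= I(G;Y) by the data-processing inequality for the chain Y - G - M.
import numpy as np

def mutual_info(pxy):
    """I(X;Y) in nats from a joint probability table pxy."""
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    mask = pxy > 0
    return np.sum(pxy[mask] * np.log(pxy[mask] / (px @ py)[mask]))

p_g = np.array([0.5, 0.5])                    # binary generative factor G
p_y_given_g = np.array([[0.9, 0.1],           # label Y depends on G only
                        [0.2, 0.8]])
p_m_given_g = np.array([[0.8, 0.2],           # M: noisy readout of G
                        [0.3, 0.7]])

p_gy = p_g[:, None] * p_y_given_g             # joint p(G, Y)
p_gmy = p_g[:, None, None] * p_m_given_g[:, :, None] * p_y_given_g[:, None, :]
p_my = p_gmy.sum(axis=0)                      # joint p(M, Y)

assert mutual_info(p_my) <= mutual_info(p_gy) + 1e-12   # data-processing inequality
```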
### Proof of Proposition 3
**D1** in definition 6 entails that the conditional probability of \(\mathbf{M}_{\mathcal{J}}\) can be written in general as:
\[p_{\theta}(\mathbf{m}_{\mathcal{J}}\mid\mathbf{g})=p_{\theta}(\mathbf{m}_{ \mathcal{J}}\mid\mathbf{g}_{\mathcal{I}}) \tag{34}\]
The same holds for **D1** in definition 8. We make use of this fact for deriving a different upper-bound for \(\Lambda\). We focus only on the first term of eq. (11); the analysis of the second one does not change.
\[\mathcal{L}_{CL}(\lambda) =\int p(\mathbf{x},y)\log\Big{(}\int q_{\lambda}(y\mid\mathbf{m}_{ \mathcal{J}})p_{\theta}(\mathbf{m}_{\mathcal{J}}\mid\mathbf{x})\,\mathrm{d} \mathbf{m}_{\mathcal{J}}\Big{)}\,\mathrm{d}\mathbf{x}\mathrm{d}y\] \[=\int p^{\prime}(\mathbf{g})\Big{[}\int p(\mathbf{x}\mid\mathbf{ g})p(y\mid\mathbf{g}_{-\mathcal{I}})\log\Big{(}\int q_{\lambda}(y\mid\mathbf{m}_{ \mathcal{J}})p_{\theta}(\mathbf{m}_{\mathcal{J}}\mid\mathbf{x})\,\mathrm{d} \mathbf{m}_{\mathcal{J}}\Big{)}\,\mathrm{d}\mathbf{x}\mathrm{d}y\Big{]}\, \mathrm{d}\mathbf{g}\] \[\leq\int q(\mathbf{g}_{-\mathcal{I}})\Big{[}\int p(y\mid\mathbf{ g}_{-\mathcal{I}})\log\Big{(}\int q_{\lambda}(y\mid\mathbf{m}_{\mathcal{J}})p_{ \theta}(\mathbf{m}_{\mathcal{J}}\mid\mathbf{g}_{\mathcal{I}})p(\mathbf{g}_{ \mathcal{I}})\,\mathrm{d}\mathbf{m}_{\mathcal{J}}\mathrm{d}\mathbf{g}_{ \mathcal{I}}\Big{)}\mathrm{d}y\Big{]}\,\mathrm{d}\mathbf{g}_{-\mathcal{I}}\] \[=\int p(y)\log p_{\lambda,\theta}(y)\,\mathrm{d}y\] \[=\int p(y)\log\frac{p_{\lambda,\theta}(y)p(y)}{p(y)}\,\mathrm{d}y\] \[=-H(Y)-\mathsf{KL}(p(Y)\mid\mid p_{\lambda,\theta}(Y)) \tag{35}\]
In the second line we decomposed the data generation process, in the third line we made use of Jensen's inequality to introduce in the logarithm the term \(\int p(\mathbf{g}_{\mathcal{I}})\,\mathrm{d}\mathbf{g}_{\mathcal{I}}\,\int p( \mathbf{x}\mid\mathbf{g})\mathrm{d}\mathbf{x}\). The marginalization of \(p_{\theta}(\mathbf{m}_{\mathcal{J}}\mid\mathbf{x})\) with \(p(\mathbf{x}\mid\mathbf{g})\) gives \(p_{\theta}(\mathbf{m}_{\mathcal{J}}\mid\mathbf{g})\), that by **D1** reduces to \(p_{\theta}(\mathbf{m}_{\mathcal{J}}\mid\mathbf{g}_{\mathcal{I}})\), hence the term appearing in the third line. In the fourth line, we denoted with \(p_{\lambda,\theta}(y)=\int q_{\lambda}(y\mid\mathbf{m}_{\mathcal{J}})p_{ \theta}(\mathbf{m}_{\mathcal{J}}\mid\mathbf{g}_{\mathcal{I}})p(\mathbf{g}_{ \mathcal{I}})\,\mathrm{d}\mathbf{m}_{\mathcal{J}}\mathrm{d}\mathbf{g}_{ \mathcal{I}}\) and reduced the first integral in \(p(y)\). Finally, we obtain the upper bound for the first term of \(\Lambda\), where the maximum implies having a vanishing \(\mathsf{KL}\) term. Therefore, we have that:
\[\Lambda\leq 0 \tag{36}\]
Now, since \(\Lambda\) is lower bounded by the mutual information \(I(\mathbf{M}_{\mathcal{J}},Y)\), it cannot be negative and hence must be zero. This concludes the proof.
### Proof of Corollary 1
The result of the corollary follows from proposition 3 by considering only the subset of representations \(\mathbf{M}_{\mathcal{J}}\) that are not aligned to \(G_{k}\). Denote them with \(\mathbf{M}_{\mathcal{K}}\), where \(\mathcal{K}=\{j:\pi(j)\neq k\}\) and set \(\mathbf{\bar{G}}_{-\mathcal{I}}=\mathbf{G}_{-\mathcal{I}}\cup G_{k}\). Then, we have:
\[p_{\theta}(\mathbf{m}_{\mathcal{K}}\mid\mathbf{g})=p_{\theta}(\mathbf{m}_{ \mathcal{K}}\mid\mathbf{g}_{\mathcal{I}\setminus\{g_{k}\}}) \tag{37}\]
Similarly to proposition 3, we then obtain that \(\Lambda=0\), _i.e._, concept leakage vanishes. This proves the claim.
### Proof of Proposition 4
For a given block \(\mathbf{M}_{\mathcal{K}}\) aligned to \(\mathbf{G}_{\Pi(\mathcal{K})}\), recall that by **D1** in definition 8 it holds that:
\[\mathbf{M}_{\mathcal{K}}=\mu_{\mathcal{K}}(\mathbf{G}_{\Pi(\mathcal{K})}, \mathbf{N}_{\mathcal{K}}) \tag{38}\]
To prove the first claim, we have to show that after intervening on \(\mathbf{G}_{\Pi(\mathcal{K})}\) interventions on distinct \(\mathbf{G}_{-\Pi(\mathcal{K})}\) do not affect \(\mathbf{M}_{\mathcal{K}}\). Fix \(do(\mathbf{G}_{\Pi(\mathcal{K})}\leftarrow\mathbf{g}_{\Pi(\mathcal{K})})\). Upon performing a second intervention on the remaining variables \(do(\mathbf{G}_{-\Pi(\mathcal{K})}\leftarrow\mathbf{g}_{-\Pi(\mathcal{K})})\), we get:
\[p(\mathbf{G}|do(\mathbf{G}_{\Pi(\mathcal{K})}\leftarrow\mathbf{g}_{\Pi( \mathcal{K})},\mathbf{G}_{-\Pi(\mathcal{K})}\leftarrow\mathbf{g}_{-\Pi( \mathcal{K})}))=\mathbb{1}\Big{\{}(\mathbf{G}_{\Pi(\mathcal{K})},\mathbf{G}_{- \Pi(\mathcal{K})})=(\mathbf{g}_{\Pi(\mathcal{K})},\mathbf{g}_{-\Pi(\mathcal{ K})})\Big{\}} \tag{39}\]
By **D1** of definition 8, it holds that the corresponding probability distribution on \(\mathbf{M}_{\mathcal{K}}\) can be written as:
\[p(\mathbf{M}_{\mathcal{K}}\mid do(\mathbf{G}_{\Pi(\mathcal{K})}\leftarrow \mathbf{g}_{\Pi(\mathcal{K})}))=p(\mathbf{M}_{\mathcal{K}}\mid\mathbf{g}_{\Pi( \mathcal{K})}) \tag{40}\]
which by a similar argument to proposition 1 leads to a vanishing \(\mathsf{PIDA}(\mathbf{G}_{\Pi(\mathcal{K})},\mathbf{M}_{\mathcal{K}}\mid\mathbf{ g}_{\Pi(\mathcal{K})},\mathbf{g}_{-\Pi(\mathcal{K})})\), for all possible interventions \(do(\mathbf{G}_{-\Pi(\mathcal{K})}\leftarrow\mathbf{g}_{-\Pi(\mathcal{K})})\). This proves that after intervening on \(\mathbf{G}_{\Pi(\mathcal{K})}\), arbitrary interventions on \(\mathbf{G}_{-\Pi(\mathcal{K})}\) do not affect \(\mathbf{M}_{\mathcal{K}}\).
For the second claim, we consider two different intervened values \(\mathbf{g}_{\Pi(\mathcal{K})}^{\prime}\) and \(\mathbf{g}_{\Pi(\mathcal{K})}^{\prime\prime}\) for \(\mathbf{G}_{\Pi(\mathcal{K})}\). Recall that by **D2** in definition 8 it holds that the mean value of \(\mathbf{M}_{\mathcal{K}}\) is connected to \(\mathbf{G}_{\Pi(\mathcal{K})}\) by an invertible map. Therefore, it holds that:
\[\mathbf{g}_{\Pi(\mathcal{K})}^{\prime}\neq\mathbf{g}_{\Pi(\mathcal{K})}^{\prime\prime}\implies\mathbb{E}_{\mathbf{N}_{\mathcal{K}}}[\mu_{\mathcal{K}}(\mathbf{g}_{\Pi(\mathcal{K})}^{\prime},\mathbf{N}_{\mathcal{K}})]\neq\mathbb{E}_{\mathbf{N}_{\mathcal{K}}}[\mu_{\mathcal{K}}(\mathbf{g}_{\Pi(\mathcal{K})}^{\prime\prime},\mathbf{N}_{\mathcal{K}})] \tag{41}\]
by invertibility. This concludes the proof. |
2309.07756 | Study and evaluation of the Ronen Method accuracy at material interfaces | The Ronen method (RM) demands for successive resolutions of the diffusion
equation where local diffusion constants are modified to reproduce more
accurate estimates of the currents by a transport operator. The methodology is
currently formulated by using the formalism of the collision probability method
(CPM) for the current evaluation and RM was recently tested on a complete suite
of one-dimensional multigroup benchmark problems. Small differences in the flux
(less than 2%) were reported at material interfaces and close to the vacuum
boundary with respect to the reference solution from transport (CPM). In this
work, a verification check is first set to prove an equivalence between
diffusion and transport when optimal diffusion coefficients are computed by the
transport solution itself and employed in a standard diffusion calculation. 1G
and 2G criticality problems from the same criticality benchmark test suite of
previous publications are tested. Then, the accuracy of the flux distribution
near the vacuum boundary and material interfaces is computed using the RM for
different approximations of the vacuum boundary and with respect to decreasing
values of the RM convergence criterion set in its iterative scheme. Indeed, the
RM calculates more accurate flux distribution at all material interfaces,
regardless of the initial values used for the diffusion coefficient and the
extrapolated distance at the beginning of the iterative process. Maximal flux
deviations fall everywhere around 0.01% when the RM convergence criterion is
set to ten significant digits, leading to two orders of magnitude improvement
in the flux deviation. | Johan Cufe, Daniele Tomatis, Erez Gilad | 2023-09-14T14:42:48Z | http://arxiv.org/abs/2309.07756v1 | # Study and evaluation of the Ronen Method
###### Abstract
The Ronen method (RM) demands for successive resolutions of the diffusion equation where local diffusion constants are modified to reproduce more accurate estimates of the currents by a transport operator. The methodology is currently formulated by using the formalism of the collision probability method (CPM) for the current evaluation and RM was recently tested on a complete suite of one-dimensional multigroup benchmark problems. Small differences in the flux (less than 2%) were reported at material interfaces and close to the vacuum boundary with respect to the reference solution from transport (CPM). In this work, a verification check is first set to prove an equivalence between diffusion and transport when optimal diffusion coefficients are computed by the transport solution itself and employed in a standard diffusion calculation. 1G and 2G criticality problems from the same criticality benchmark test suite of previous publications are tested. Then, the accuracy of the flux distribution near the vacuum boundary and material interfaces is computed using the RM for different approximations of the vacuum boundary and with respect to decreasing values of the RM convergence criterion set in its iterative scheme. Indeed, the RM calculates more accurate flux distribution at all material interfaces, regardless of the initial values used for the diffusion coefficient and the extrapolated distance at the beginning of the iterative process. Maximal flux deviations fall everywhere around 0.01% when the RM convergence criterion is set to ten significant digits, leading to two orders of magnitude improvement in the flux deviation.
keywords: Ronen method, neutron transport, diffusion coefficient, vacuum boundary condition
###### Contents
* 1 Introduction
* 2 Theoretical background
* 2.1 Iterative redefinition of the diffusion coefficient
* 2.2 The drift current implementation
* 2.3 Boundary conditions
* 2.4 Inherent limitation of the \(P_{1}\) approximation at vacuum boundary
* 3 Equivalence and customized diffusion
* 3.1 Optimal diffusion coefficients
* 3.2 Analytical diffusion coefficients
* 4 Status and analysis of the RM performance
* 5 Results
* 6 Conclusion
## 1 Introduction
Full-core calculations aim at obtaining accurate quantities like the neutron flux and reaction rates over the complete reactor. These calculations consist of solving the diffusion equation or some other second-order approximation of transport using a few energy groups G. Two energy groups are sufficient for thermal reactors, while more are necessary for fast reactors. The current trend in the design of nuclear reactors favours heterogeneous loading patterns where transport effects are more evident, so that classic diffusion quickly reaches its limits in providing reliable results. Indeed, diffusion theory assumes smooth flux variations, small absorption compared to scattering and low scattering anisotropy, all of which become questionable for modern core configurations.
The limitations implied by diffusion can be remedied by introducing transport corrections though, as done by the Ronen method (RM), which belongs to the class of transport approximation methods employing diffusion solvers [4, 12]. In the RM, an integral transport equation expresses the net current used in an iterative and nonlinear scheme to force the solution from diffusion to fulfil the same integral equation. The RM was recently tested on a complete suite of one-dimensional multigroup benchmark problems [5, 12]. Although an excellent agreement was always obtained in eigenvalue problems on integral quantities like the neutron multiplication factor \(k_{\text{eff}}\), differences in the flux distribution were observed with respect to the reference solution from transport at positions where transport effects are more pronounced, namely near vacuum and material interfaces.
Further investigations on the method are needed to determine whether the RM can reproduce the transport solution with sufficient accuracy at every spatial position. In this work, we first determine optimal values for the diffusion coefficient and the extrapolated distance, aiming to check and enforce an equivalence between diffusion and transport everywhere in the domain. Tests are carried out in 1G-2G homogeneous and 1G heterogeneous slab problems from Sood's critical benchmark test suite [9]. In addition, for the homogeneous case, analytical expressions for the diffusion coefficient and the extrapolated distance are derived by solving the neutron transport equation using Case's method, following the approach by Mitsis [7]. Standard diffusion is tested using these coefficients as input data. Once this check is verified, the RM accuracy is investigated with decreasing convergence tolerance set in its iterative loop, providing an update on RM accuracy estimation.
The theoretical background of the RM is presented in Section 2. The equivalence procedure between diffusion and transport is described in Section 3. The status of the
RM performance is summarized in Section 4. The results section including the equivalence check and RM accuracy evaluation is given in Section 5. The article ends with the conclusion in Section 6.
## 2 Theoretical background
In the RM, successive solutions of the diffusion equation are performed with local corrections to the diffusion coefficients that are introduced to reproduce new estimates of the currents obtained by an integral transport operator [12]. The neutron source, used in the integral expression for the current, is calculated with the scalar flux obtained by diffusion. Convergence on the scalar flux and net current distributions is sought through non-linear iterations alternating the diffusive solver and the evaluation of the integral expression. Two main different implementations are currently available, depending on how the current correction is implemented in the streaming term of the neutron balance equation. One is based on the drift-like current while the other redefines _online_ the diffusion coefficient using Fick's law. What mainly differs between these two implementations is the way physics is reproduced by the local corrections in the diffusive solver.
In 1D slab geometry with standard finite differences formalism, using integer and rational subscripts for cell-averaged and interface quantities, the neutron current using Fick's law is expressed as
\[J_{g,i+1/2}^{D}\cong-2D_{g,i+1/2}\frac{\Phi_{g,i+1}-\Phi_{g,i}}{\Delta_{i+1}+ \Delta_{i}},\quad D_{g,i+1/2}=\frac{\Delta_{i}+\Delta_{i+1}}{\Delta_{i}/D_{g,i }+\Delta_{i+1}/D_{g,i+1}} \tag{1}\]
where \(\Delta_{i}=(x_{i+1/2}-x_{i-1/2})\), the interface diffusion coefficient is approximated by first-order Taylor expansions and \(D_{g,i}=1/3\sigma_{g,i}\). The spatially discretized integral neutron current with vacuum boundary conditions, also accounting for linearly anisotropic scattering sources, is formulated as
\[\begin{split}& J_{g,i+1/2}=\frac{1}{2}\sum_{j=0}^{I-1}\frac{q_{0,g, j}}{\sigma_{g,j}}\Big{(}E_{3}\left[\tau_{g}(x_{j+1/2},x_{i+1/2})\right]-E_{3} \left[\tau_{g}(x_{j-1/2},x_{i+1/2})\right]\Big{)}+\\ &\text{Sgn}(i-j)+\frac{3}{2}\sum_{j=0}^{I-1}\frac{q_{1,g,j}}{ \sigma_{g,j}}\Big{(}E_{4}\left[\tau_{g}(x_{j+1/2},x_{i+1/2})\right]-E_{4}\left[ \tau_{g}(x_{j-1/2},x_{i+1/2}\right]\Big{)}\end{split} \tag{2}\]
with the introduction of the spatially dependent exponential integral functions \(E_{n}\), functions of the optical path length \(\tau_{g}\), for orders \(n=3,4\)[5]. The sources \(q_{0,g,j}\) and \(q_{1,g,j}\) denote the isotropic and linearly anisotropic scattering sources, respectively defined as
\[q_{0,g,j}=\sum_{g^{\prime}=1}^{G}(\sigma_{s_{0},g^{\prime}\to g,j}+\frac{\chi_{g}}{k_{\text{eff}}}\nu\sigma_{f,g^{\prime},j})\Phi_{g^{\prime},j},\quad q_{1,g,j}=\sum_{g^{\prime}=1}^{G}\sigma_{s_{1},g^{\prime}\to g,j}J_{g^{\prime},j} \tag{3}\]
with the current \(J_{g^{\prime},j}\) appearing in the linearly anisotropic scattering source as averaged in the cell \(j\). The sums involved in Eq. (2), keeping the isotropic scattering sources only, can be reduced by half if we consider only contributions coming from \(i>j\) with the others being just opposite in sign. Thus, Eq. (2) can be also reformulated for the partial currents. Assuming vacuum boundary conditions, we can define the following
\[J_{g,i+1/2}^{\pm}=\sum_{j=1}^{I}q_{0,g,j}\Delta_{i}\tilde{e}_{g,i+1/2,j}^{\pm} \tag{4}\]
which makes explicit use of the theory of the collision probability method [6], where \(\tilde{e}\) represents the escape probability. In case of reflective boundary conditions, the term \(J_{g,-1/2}^{\pm}\tilde{t}_{g,i+1/2,j}\) must be added to Eq. (4), where we made use of the transmission probability \(\tilde{t}\). Details of this formulation can be found in [12]. The expression of Eq. (2) is used to correct the "diffusive" current of Eq. (1), leading to a more accurate current estimation.
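To make the use of the exponential-integral kernels concrete, the sketch below (ours) evaluates the isotropic and linearly anisotropic contributions of Eq. (2) at a single interface of a homogeneous slab with SciPy's `expn`. The mesh, cross section and source profiles are illustrative stand-ins, and the sign and indexing conventions are simplified with respect to the paper: the optical path is taken as the absolute optical distance, so the sign of each cell contribution is carried by the monotonicity of \(E_{n}\).

```python
# Illustrative evaluation (ours) of the E3/E4 kernels entering Eq. (2) at one interface
# of a homogeneous 1-group slab.  q0 and q1 are stand-in source profiles; in the RM they
# come from the diffusion flux and current of the previous iteration.
import numpy as np
from scipy.special import expn   # expn(n, x) = E_n(x), n >= 0

I, a, sigma = 80, 8.0, 1.0                   # cells, slab width (cm), total xs (1/cm)
x_edges = np.linspace(0.0, a, I + 1)
x_mid = 0.5 * (x_edges[:-1] + x_edges[1:])
q0 = sigma * np.cos(np.pi * (x_mid - a / 2) / a)      # isotropic source stand-in
q1 = 1e-2 * np.sin(np.pi * (x_mid - a / 2) / a)       # anisotropic source stand-in

def current_at(i):
    """Integral current at interface x_{i+1/2}; absolute optical distances are used,
    so the sign of each contribution follows from E_n being a decreasing function."""
    xI = x_edges[i + 1]
    tau_right = sigma * np.abs(x_edges[1:] - xI)      # from cell right edges x_{j+1/2}
    tau_left = sigma * np.abs(x_edges[:-1] - xI)      # from cell left edges  x_{j-1/2}
    iso = 0.5 * np.sum(q0 / sigma * (expn(3, tau_right) - expn(3, tau_left)))
    ani = 1.5 * np.sum(q1 / sigma * (expn(4, tau_right) - expn(4, tau_left)))
    return iso + ani

print([current_at(i) for i in (5, 20, 60)])
```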
### Iterative redefinition of the diffusion coefficient
This implementation was originally suggested by Ronen [8]. The diffusion coefficient is redefined according to Fick's law, taking the current from the integral expression and the scalar flux used for the source at its integrand
\[\tilde{D}_{g,i+1/2}=-\frac{J_{g,i+1/2}}{2\frac{\Phi_{g,i+1}-\Phi_{g,i}}{\Delta x _{i+1}+\Delta x_{i}}}. \tag{5}\]
The corrected diffusion coefficient can be recast as \(\tilde{D}_{g,i+1/2}=D_{g,i+1/2}+\delta D_{g,i+1/2}\) with \(D_{g,i+1/2}\) obtained by Eq. (1). The term \(\delta D_{g,i+1/2}\) accounts for the correction to the diffusion coefficient. Eq. (5) can show indeterminate division by zero in case of flat flux, requiring proper numerical fix-up [5]. In the RM, a new generalized eigenvalue problem arises at each r-th iteration. Using the operator form, we have 1
Footnote 1: \(\mathcal{A}\) is a three diagonal banded matrix whose entries are the diffusion coefficient, plus removal terms for nuclear events. \(\mathcal{P}\) is the neutron generator operator.
\[\Phi^{(r+1)}=\frac{{\mathcal{A}^{(r+1)}}^{-1}(\Phi^{(r)})}{k_{\text{eff}}^{(r) }}\mathcal{P}\Phi^{(r)},\quad k_{\text{eff}}^{(r+1)}=k_{\text{eff}}^{(r)} \frac{\langle\Phi^{(r+1)},\mathcal{P}\Phi^{(r+1)}\rangle}{\langle\Phi^{(r+1)}, \mathcal{P}\Phi^{(r)}\rangle} \tag{6}\]
which is solved iteratively by power iterations. At every iteration, the removal matrix \(\mathcal{A}\) contains the corrections given by the new estimate of the current by the integral expression. Hence, it can be written as \(\mathcal{A}_{0}\) reproducing standard diffusion plus \(\delta\mathcal{A}(\Phi)\) that contains the transport corrections. As mentioned, RM iterations must take into account the ordinary iteration schemes, like outers-inners. They can be implemented outside or inside the outers for instance, or even merge into a single iteration level. The inside-outer scheme is here employed.
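The overall scheme can be summarized by the minimal sketch below (ours): a finite-difference diffusion eigenproblem solved by power iterations, an E3-kernel estimate of the interface currents, and the Fick's-law redefinition of the interface diffusion coefficients of Eq. (5). Cross sections, mesh, tolerances and the fix-up threshold are illustrative and do not correspond to the benchmark cases of the paper.

```python
# Schematic 1-group RM loop (ours) on a bare homogeneous slab, for illustration only.
import numpy as np
from scipy.special import expn

I, a = 100, 10.0                              # cells, slab width (cm)
sig_t, sig_a, nu_sig_f = 1.0, 0.30, 0.36      # illustrative 1-group cross sections (1/cm)
sig_s = sig_t - sig_a
dx = a / I
x_edges = np.linspace(0.0, a, I + 1)
zeta = 2.13                                   # extrapolation coefficient, kept fixed

def diffusion_eigensolve(D_face):
    """Power iterations for the FD diffusion eigenproblem with given interface D's."""
    A = np.zeros((I, I))
    beta = D_face[1:-1] / dx                  # interior coupling 2D/(dx_i + dx_{i+1})
    for i in range(I):
        A[i, i] = sig_a
        if i > 0:
            A[i, i] += beta[i - 1] / dx
            A[i, i - 1] = -beta[i - 1] / dx
        if i < I - 1:
            A[i, i] += beta[i] / dx
            A[i, i + 1] = -beta[i] / dx
    for i, Db in ((0, D_face[0]), (I - 1, D_face[-1])):   # Robin (vacuum) rows, Eq. (8)
        A[i, i] += Db / (dx / 2.0 + zeta * Db) / dx
    phi, k = np.ones(I), 1.0
    for _ in range(200):                      # fixed number of power iterations (sketch)
        phi_new = np.linalg.solve(A, nu_sig_f * phi / k)
        k *= np.sum(nu_sig_f * phi_new) / np.sum(nu_sig_f * phi)
        phi = phi_new
    return phi / phi.max(), k

def integral_current(phi, k):
    """Isotropic E3-kernel current at the I-1 interior interfaces (cf. Eq. (2))."""
    q0 = (sig_s + nu_sig_f / k) * phi
    J = np.empty(I - 1)
    for m in range(1, I):
        tau_r = sig_t * np.abs(x_edges[1:] - x_edges[m])
        tau_l = sig_t * np.abs(x_edges[:-1] - x_edges[m])
        J[m - 1] = 0.5 * np.sum(q0 / sig_t * (expn(3, tau_r) - expn(3, tau_l)))
    return J

D_face = np.full(I + 1, 1.0 / (3.0 * sig_t))  # start from D = 1/(3 sigma_t)
for it in range(30):                          # non-linear RM iterations
    phi, k = diffusion_eigensolve(D_face)
    grad = (phi[1:] - phi[:-1]) / dx
    ok = np.abs(grad) > 1e-8                  # crude fix-up against nearly flat flux
    D_new = D_face.copy()
    D_new[1:-1][ok] = -integral_current(phi, k)[ok] / grad[ok]   # Eq. (5), equal widths
    if np.max(np.abs(D_new - D_face)) < 1e-8:
        break
    D_face = D_new
print(f"k_eff ~ {k:.5f} after {it + 1} RM iterations")
```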
### The drift current implementation
The computational scheme of this algorithm is based on the introduction of the transport-corrected currents in the numerical scheme using drift terms
\[\delta J_{g,i+1/2}=J_{g,i+1/2}-J_{g,i+1/2}^{D}=-\delta D_{g,i+1/2}\frac{\Phi_ {g,i+1}+\Phi_{g,i}}{(\Delta x_{i+1}+\Delta x_{i})/2}. \tag{7}\]
In this case, the current acquires a contribution proportional to the flux, which is physically different from being proportional to the gradient of the flux, as in the previous implementation. The sum of the cell fluxes is proportional to the average local flux, representing a drift-advection term for neutron leakage. This option avoids indeterminate division by zero and it was originally introduced by the coarse mesh finite difference scheme (CMFD) in nodal diffusion codes [13]. For this implementation, we also need to specify the form for the current correction at the boundary, which has been defined as \(\delta J_{g,-1/2}=-\delta D_{g,-1/2}\Phi_{g,0}\), without any division for the spatial width since no physical meaning is attributed to the correction itself [4]. The solving system is still expressed by
Eq. (6). A flow chart of the RM implementations is presented in Figure 1. The iterative algorithm, representative of both the implementations, is shown along with the Anderson acceleration performed through the DAAREM algorithm, which is crucial in finding the fixed point solution throughout the non-linear RM iterations [12].
The convergence criterion is based on the residuals of the relative flux differences between two successive iterations below a set threshold. Specifically, \(\omega_{\Phi}^{(r,r+1)}=\max\big{|}(\Phi^{(r+1)}-\Phi^{(r)})\big{/}\Phi^{(r+1)} \big{|}\leq\epsilon_{RM}\). The computational drift scheme is also available through the implementation of the partial currents correction [5]. The correction terms in this scheme are two degrees of freedom (per interface per energy group) instead of one as done in the original drift implementation although no actual calculation benefit has been reported [5].
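For concreteness, a minimal sketch (ours) of the bookkeeping implied by Eq. (7) and of the convergence residual \(\omega_{\Phi}\) used to stop the RM iterations; all numerical values are invented for the example.

```python
# Drift-term bookkeeping of Eq. (7) and RM convergence residual (illustrative values, ours).
import numpy as np

dx_i, dx_ip1 = 0.1, 0.1                  # cell widths around interface i+1/2 (cm)
phi_i, phi_ip1 = 1.00, 0.95              # cell-averaged fluxes
J_transport, J_fick = 2.1e-2, 1.8e-2     # currents from Eq. (2) and Eq. (1)

deltaJ = J_transport - J_fick
# Eq. (7): deltaJ = -deltaD * (phi_{i+1} + phi_i) / ((dx_{i+1} + dx_i)/2)
deltaD = -deltaJ * (dx_ip1 + dx_i) / (2.0 * (phi_ip1 + phi_i))
J_check = J_fick - deltaD * (phi_ip1 + phi_i) / ((dx_ip1 + dx_i) / 2.0)
assert np.isclose(J_check, J_transport)  # corrected current reproduces the transport one

phi_old = np.array([1.00, 0.95, 0.80])
phi_new = np.array([1.01, 0.95, 0.79])
omega = np.max(np.abs((phi_new - phi_old) / phi_new))   # residual used as stopping test
print(deltaD, omega)
```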
### Boundary conditions
A generalized form of the boundary condition of (homogeneous) Robin type based on the use of the extrapolated distance \(d_{ext,g}\) follows as \(-d_{x}\Phi_{g}=\Phi_{g}/d_{ext,g}\). After the multiplication of the diffusion coefficient at both sides, this yields at the left boundary [12]
\[J_{g,\to 1/2}^{D}\simeq-\frac{D_{g,0}\Phi_{g,0}}{(\Delta_{0}/2+d_{ext,g})} \tag{8}\]
where \(\Phi_{g,0}\) and \(D_{g,0}\) are respectively the averaged flux and diffusion coefficient at the left border cell (\(i=0\)), \(\Delta_{0}/2\) is the half-width cell and \(d_{ext,g}=\zeta_{g}D_{g,0}\) the extrapolated distance in the case of vacuum; usually, \(\zeta_{g}=2.13\) is the recommended value in homogeneous slab geometry [10]. Different boundary conditions can also be reproduced by this equation; for instance, reflection can be simulated by \(\zeta_{g}\rightarrow\infty\) while the zero-flux comes with \(\zeta_{g}=0\). Focusing on Eq. (8), the first cell-averaged flux \(\Phi_{g,0}\) has been used as the boundary flux. The boundary value is not available according to the finite difference numerical scheme and hence an approximation is introduced. Studies related to a better approximation of
Figure 1: Flow chart of the RM.
the boundary flux have been investigated; specifically, we focused on a quadratic fit of the flux in the proximity of the boundary. The implementation is reported in Appendix A. This was an attempt to reduce the flux errors due to the discretization of the finite difference scheme. However, no major improvement over previous results is found when using higher-order numerical approximations of the boundary equation.
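The limiting behaviour of the boundary term in Eq. (8) as a function of the extrapolation coefficient can be illustrated with a few lines (ours); the numbers are arbitrary.

```python
# Small numerical check (ours) of how the Robin-type boundary term of Eq. (8) interpolates
# between boundary conditions: zeta = 0 (zero flux), zeta = 2.13 (vacuum), zeta -> inf
# (reflection).
import numpy as np

D, dx = 1.0 / 3.0, 0.01                        # boundary-cell D (cm) and width (cm)
phi0 = 1.0                                     # boundary-cell averaged flux
for zeta in (0.0, 2.13, 1.0e9):
    J_boundary = -D * phi0 / (dx / 2.0 + zeta * D)   # Eq. (8), leftward leakage
    print(f"zeta={zeta:>10.2e}  |J| = {abs(J_boundary):.4e}")
# zeta = 0 gives the largest leakage (zero-flux-like), zeta -> inf gives |J| -> 0 (reflective).
```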
### Inherent limitation of the \(P_{1}\) approximation at vacuum boundary
When dealing with vacuum interfaces, it is worth recalling that an intrinsic limitation of diffusion theory in approximating the vacuum boundary exists. This is crucial for understanding the actual limitations of diffusion in modelling the vacuum boundary. Let us consider a finite 1D slab system of width \(a\) with vacuum at the right boundaries. The actual transport boundary condition implies that the angular flux, \(\psi(a,\mu)\), for all entering directions, is zero. In elementary diffusion theory, it is intuitive that we cannot satisfy this condition rigorously since we only deal with the first two flux moments. In diffusion, we accordingly require that the inwardly directed partial current \(J_{-}(a)\) is vanishing for instance on the right surface. Specifically, for \(\mu<0\), we have the following
\[J_{-}(a)=\int_{-1}^{0}d\mu\mu\psi(a,\mu)=0. \tag{9}\]
Diffusion theory can only yield an approximate description of the angular flux and therefore of the corresponding boundary condition. Recalling the \(P_{n}\) theory, we can express the angular flux at the boundary as [3]
\[\psi(a,\mu)=\sum_{l=0}^{n}\bigg{(}\frac{2l+1}{2}\bigg{)}\Phi_{l}(a)P_{l}(\mu) \tag{10}\]
which for \(n=1\) leads to the diffusion approximation; hence, we can write
\[\psi(a,\mu)\simeq\frac{1}{2}\Phi(a)+\frac{3}{2}J(a)\mu, \tag{11}\]
where we identify the first two moments of the flux expansion as the scalar flux and the neutron current, respectively. Substituting the previous to the zero incoming current condition of Eq. (9) and performing the integration on \(\mu\) leads to the following
\[\Phi(a)=2J(a), \tag{12}\]
which is the well-known Marshak condition [3]. If substituted to Eq.(11), the following approximated expression of the angular flux at the boundary is expressed as
\[\psi(a,\mu)\simeq\frac{1}{2}\Phi(a)\bigg{(}1+\frac{3}{2}\mu\bigg{)}. \tag{13}\]
This relation gives negative values of \(\psi(a,\mu)\) for \(-1\leq\mu<-2/3\), which is of course unphysical. This proves that we cannot reproduce the transport boundary condition \(\psi(a,\mu)=0\) with the \(P_{1}\) approximation. A popular mathematical expedient is to introduce the extrapolated distance, as done in Eq. (8). It must be stressed that with this approximation, the flux does not vanish outside the boundary and that diffusion gives a poor representation of the true flux near the boundary. With the RM, a more accurate flux distribution to transport at the boundary can be obtained through the non-linear iterations but the intrinsic limitation of the approximated boundary condition persists.
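The two steps above can be reproduced symbolically; the short check below (ours) only re-derives eqs. (12)-(13) and the \(\mu=-2/3\) threshold.

```python
# Symbolic check (ours): the zero incoming partial current of eq. (9) applied to the
# linear angular flux of eq. (11) yields Phi(a) = 2 J(a), and eq. (13) changes sign
# at mu = -2/3.
import sympy as sp

mu, Phi, J = sp.symbols("mu Phi J", real=True)
psi = Phi / 2 + sp.Rational(3, 2) * J * mu          # eq. (11)
J_minus = sp.integrate(mu * psi, (mu, -1, 0))       # eq. (9)
print(sp.solve(sp.Eq(J_minus, 0), Phi))             # -> [2*J], i.e. eq. (12)
psi_marshak = psi.subs(J, Phi / 2)                  # eq. (13)
print(sp.solve(sp.Eq(psi_marshak, 0), mu))          # -> [-2/3]
```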
## 3 Equivalence and customized diffusion
The RM allows having new estimates of the diffusion coefficient using Fick's law with a more accurate estimation of the currents by a transport operator. In practice, the best diffusion coefficient would be the one satisfying exactly Fick's law with the same flux and current provided by transport. An optimal diffusion coefficient can be defined using Fick's law with the flux distribution obtained from a transport solver. Moreover, an optimal extrapolated distance can also be redefined with this distribution to correctly reproduce the vacuum boundary condition. These new quantities are only meant to verify equivalence between diffusion and transport in terms of flux distribution calculations since usually the solution is neither known _a priori_ nor available. The goal is to overcome the limitations of diffusion theory and prove an equivalence between the two physical models. Indeed, this is among the goals of the RM.
### Optimal diffusion coefficients
Considering for simplicity a multi-group homogeneous slab with isotropic scattering only, the optimal diffusion coefficient \(D_{g}^{opt}\), according to Fick's law, can be defined as
\[D_{g}^{opt}(x)=-\frac{J_{g}(x)}{d_{x}[\Phi_{g}(x)]}=-\frac{\int_{0}^{x}dx^{ \prime}E_{2}[\tau_{g}(x,x^{\prime})]q_{0,g}(x^{\prime})-\int_{x}^{a}dx^{\prime }E_{2}[\tau_{g}(x^{\prime},x)]q_{0,g}(x^{\prime})}{\sigma_{g}(\int_{0}^{x}dx^{ \prime}E_{0}[\tau_{g}(x,x^{\prime})]q_{0,g}(x^{\prime})-\int_{x}^{a}dx^{\prime }E_{0}[\tau_{g}(x^{\prime},x)]q_{0,g}(x^{\prime}))} \tag{14}\]
where both the neutron current \(J_{g}(x)\) and flux derivative \(d_{x}[\Phi_{g}(x)]\) are expressed by transport operators with well-known integrand quantities. The first-order flux derivative in Eq. (14) is computed using Leibniz integral rule and the transport kernel is written by employing the exponential integral function with \(\tau_{g}(x^{\prime},x)\) the optical path length and \(\sigma_{g}\) the total cross section [1]. We also notice that the denominator of Eq. (14) involves integrals with singular kernels if \(x=x^{\prime}\) since \(E_{0}[\tau_{g}(x,x)]=e^{-\tau_{g}(x,x)}/\tau_{g}(x,x)\rightarrow\infty\)[1]. However, it can still be proved that those integrals are finite, as demonstrated in Appendix C. Although the expressions for the current and the flux derivative are obtained for the continuum, here we use a numerical solution for the scalar flux when calculating the neutron source in the above integrals. The scalar flux is obtained by resolving the 1D multi-group problems with the CPM. The full slab is considered because of the intrinsic limitation of the CPM in treating reflection at the centre of the slab [12]. Using the standard notation for the spatial discretization of the numerical solution, Eq. (14) can be discretized as follows
\[D_{g,i+1/2}^{opt}=-\frac{\sum_{j=0}^{i}q_{0,g,j}I_{g,i,j,2}-\sum_{j=i+1}^{I-1}q_{0,g,j}I_{g,i,j,2}}{\sum_{j=0}^{i}\sigma_{g,j}q_{0,g,j}I_{g,i,j,0}-\sum_{j=i+1}^{I-1}\sigma_{g,j}q_{0,g,j}I_{g,i,j,0}} \tag{15}\]
where
\[I_{g,i,j,n}=\int_{x_{j-1/2}}^{x_{j+1/2}}dx^{\prime}E_{n}[\tau_{g}(x_{i+1/2},x ^{\prime})]\text{ for }n=0,2. \tag{16}\]
The previous integrals over the cells are computed by quadrature formulae (e.g., the midpoint rule). The extension to heterogeneous media is straightforward for the current, using the expressions of Eq. (2). For the first-order flux derivative, instead, although an analytical expression is possible, it is rather awkward, and we have approximated it by central finite differences with a very fine spatial discretization. For test cases with no steep flux gradients at material interfaces, this is an acceptable approximation.
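As an illustration of the discretization just described, the sketch below (ours) assembles \(D^{opt}\) at a few interfaces of a homogeneous 1-group slab using SciPy's `expn` and the midpoint rule; the flux shape is an assumed stand-in for the CPM solution, and the midpoint rule is admittedly crude for the near-singular \(E_{0}\) kernel, whose integrals remain finite as recalled above.

```python
# Sketch (ours) of the discretized optimal diffusion coefficient of Eqs. (14)-(15) for a
# homogeneous 1-group slab, with midpoint-rule cell integrals and an assumed flux shape.
import numpy as np
from scipy.special import expn

I, a, sigma = 200, 10.0, 1.0
dx = a / I
x_edges = np.linspace(0.0, a, I + 1)
x_mid = 0.5 * (x_edges[:-1] + x_edges[1:])
phi = np.cos(np.pi * (x_mid - a / 2) / (a + 2 * 0.7104 / sigma))   # stand-in flux shape
q0 = sigma * phi                                                   # stand-in source q0_j

def D_opt(i):
    """Optimal D at interface x_{i+1/2}; with the absolute optical distance used here
    the ratio comes out positive (the overall sign depends on the kernel convention)."""
    xI = x_edges[i + 1]
    tau = sigma * np.abs(x_mid - xI)          # midpoints never coincide with the interface
    I2 = dx * expn(2, tau)                    # I_{i,j,2} by the midpoint rule
    I0 = dx * expn(0, tau)                    # I_{i,j,0}; E_0 is large but finite here
    left, right = slice(0, i + 1), slice(i + 1, I)
    num = np.sum(q0[left] * I2[left]) - np.sum(q0[right] * I2[right])
    den = sigma * (np.sum(q0[left] * I0[left]) - np.sum(q0[right] * I0[right]))
    return num / den

print({i: round(D_opt(i), 4) for i in (10, 50, 150)})
```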
Supposing again that the solution from transport is known, we can also define an optimal extrapolated coefficient, called \(\zeta_{g}^{opt}\), using its definition
\[\zeta_{g}^{opt}=-\frac{\Phi_{g,-1/2}}{J_{g,-1/2}}, \tag{17}\]
where the index \(i=-1/2\) marks the quantities at the left boundary. A similar definition holds for the right one. We also note that the determination of the extrapolated distance is case-dependent because of the scalar flux. Moreover, for small problems (few mean free paths or less), this distance is expected to have a stronger effect not only in the proximity of the boundary. These new optimal quantities are tested in Section 5.1 as input data with a standard diffusion solver without requiring RM iterations.
### Analytical diffusion coefficients
Alternative formulations of the diffusion coefficient and extrapolated distance can be introduced, based on the exact analytical solution of the transport equation obtained by Case's method. This method leads to exact expressions for the neutron distribution and criticality conditions. These expressions depend on expansion coefficients which are shown to satisfy a Fredholm integral equation. However, here we only focus on the results of diffusion theory with the exact Milne-problem extrapolated distance, which corresponds to the zeroth-order approximation of the Neumann series solution to the Fredholm equation. In fact, no diffusion coefficient can be derived for higher-order approximations. The complete derivation, including also transport corrections, was carried out by George J. Mitsis [7]. The goal of introducing alternative definitions is to understand if different initial diffusion constants, obtained by physical assumptions or analytical methods, can affect the solution despite the intrinsic iterative procedure of the RM. The derivation of the diffusion coefficient and extrapolated distance, indicated as \(D_{\nu_{0}}\) and \(\zeta_{\nu_{0}}\), is provided in Appendix C.
## 4 Status and analysis of the RM performance
The RM has been recently tested with a complete test suite of 1G and 2G homogeneous and heterogeneous slab problems with isotropic and linearly-anisotropic scattering [5]. These cases belong to a test set of analytical benchmarks for code verification [9]. Reference flux distributions have been obtained using both reference solutions from Sood, a CPM solver for isotropic problems and \(S_{n}\) solver for anisotropic ones. For the sake of brevity, RM performances compared to diffusion are summarized in Table 1, with test cases labelled according to the energy group, media property and scattering source. Details on results for each of the criticality problems can be found in [5]. Using the RM, the maximal deviation in criticality is less than ten pcm, and the maximal deviation in the spatial distribution of the flux is less than 2% and located mostly at the boundary with vacuum. Results are representative of both the RM implementations. A convergence criterion is applied in the iterative solving scheme of the RM, as highlighted in Figure 1. Indeed, previous results were obtained with a value of \(\epsilon_{RM}=10^{-6}\). However, the effects of the convergence criterion have not been investigated in previous research. Lowering this threshold will eventually affect the convergence rate and require in theory more RM iterations but no effects on the flux distribution can be predicted. Results related to this investigation are shown in Section 5.2.
## 5 Results
We here present the results of the investigations reported in Section 3 and Section 4. Results are shown for homogeneous and heterogeneous problems from Sood's analytical benchmarks test set [9]. Problem specifications are available in D.
### Transport-diffusion equivalence check
The numerical solution of the diffusion equation is presented for homogeneous and heterogeneous problems2 using different diffusion coefficients as inputs, as shown in Figure 3. A schematic representation of the slab configurations with reference flux positions is given in Figure 2. Standard diffusion coefficients are denoted with \(D_{0}=1/3\sigma_{0}\) and \(\zeta_{0}=2.13\). The analytical diffusion coefficients for 1G homogeneous cases only are referred to as \(D_{\nu_{0}}\) and \(\zeta_{\nu_{0}}\). Finally, the optimal coefficients are labelled as \(D^{opt}\) and \(\zeta^{opt}\).
Footnote 2: The heterogeneous test case Ue-Fe-Na-1-0-SL is only employed for evaluating the second part of the results. No equivalence is provided for this case due to very steep flux gradients in the computation of \(D^{opt}\), invalidating the procedure.
A very fine spatial discretization, \(\Delta=0.001\) mfp, has been used in all calculations. The solution by standard diffusion in these particular test cases is very different from the reference transport solution provided by the CPM. All flux deviations are computed as \(\Delta\Phi\%=(1-\Phi/\Phi_{CPM})*100\), whether the flux is computed by a standard diffusion calculation or the RM.
\begin{table}
\begin{tabular}{l c c} \hline Test case & (\(\approx\)) max\(|(1-k)|\) (pcm) & (\(\approx\)) max\(|(\Delta\Phi^{(RM)})|\) \% \\ \hline \(\epsilon_{RM}=10^{-6}\), \(\Delta=0.01\) mfp & **Diffusion / RM** & **Diffusion / RM** \\ \hline \multicolumn{3}{l}{_Isotropic cases_} \\
1G homogeneous & **12,000 / 4** & **30 / 1.2 (v.b.)** \\
2G homogeneous & **14,000 / 7** & **80 / 1.1 (v.b.)** \\
1G heterogeneous & **15,000 / 5** & **16 / 0.8 (m.i.)** \\
2G heterogeneous & **300 / 3** & **25 / 1.5 (v.b)** \\ \hline \multicolumn{3}{l}{_Linearly-anisotropic cases_} \\
1G homogeneous & **10,000 / 2** & **30 / 1.4 (v.b.)** \\
2G homogeneous & **18,000 / 3** & **75 / 1.3 (v.b.)** \\ \hline \end{tabular}
\end{table}
Table 1: Summary of RM performances (with standard \(D_{0},\zeta_{0}\)): results for Sood test cases in slab geometry.
Figure 2: Geometrical specifications for 1G-2G homogeneous and 1G heterogeneous test cases.
The use of standard diffusion coefficients \(D_{0}\), \(\zeta_{0}\) results in flux deviations of around tens of percent as we approach the vacuum boundaries and other material interfaces (e.g., reflector). For the 1G homogeneous problem, the use of the analytical diffusion constants results in a better agreement with transport, but considerably large flux deviations are still present in the proximity of the vacuum boundary. A similar trend is obtained when the optimal diffusion coefficient \(D^{opt}\) is used along with \(\zeta_{0}\). This behaviour is due to the improper extrapolated coefficient used to represent the vacuum boundary condition. The solution achieves an accuracy of at least four decimal places with respect to the CPM solution at all reference positions when \(\zeta^{opt}\) is employed. These results prove that by employing optimal coefficients an equivalence is set between the transport and diffusion models. In Figures 4 and 5, the flux deviations at the boundary with vacuum are shown using \(D^{opt}\) with a range
Figure 3: Flux deviation (%) using diffusion with customized diffusion constants.
of extrapolated coefficients. The error is minimized when \(\zeta^{opt}\) is employed, which means that the vacuum has been correctly modelled by the extrapolated distance \(\zeta^{opt}D^{opt}\). The optimal extrapolated distance is also unique in minimizing the flux error. Small deviations from this value cause a significant increase in the flux deviation. Modelling correctly the vacuum boundary is essential in obtaining an equivalence between the two at all reference positions with a standard diffusion calculation.
### On the convergence criterion in RM iterations
Both RM implementations are employed and compared using standard diffusion coefficients as input data with decreasing convergence criterion for all the test cases introduced
Figure 4: Flux deviation (%) trends with extrapolated coefficient using \(D^{opt}\) for 1G test cases.
Figure 5: Flux deviation (%) trends with extrapolated coefficient using \(D^{opt}\) for 2G test case.
in section 5.1. The CPM is used again as the reference solution. Flux percentage deviations at the vacuum boundary and material interfaces are presented in Figures 6, 7 and 8. A spatial mesh of 0.001 mfp has been used. In addition, the analytical coefficients derived in section 3.2 are also employed for the 1G homogeneous problem.
Tightening the convergence criterion monotonically reduces the flux deviations, which are lowered down to \(\sim 0.1\%\) and \(\sim 0.01\%\) for the \(RM^{net}\) and \(RM^{D}\) implementations with a minimum tolerance of \(10^{-10}\), respectively. The 2G problem's flux deviation reaches even lower values. Hence, an improvement of at least one and two orders of magnitude in the flux deviation at the vacuum boundary is obtained with respect to previous results [5]. The \(RM^{D}\) implementation has generally shown a more accurate flux distribution evaluation at
Figure 6: Flux deviation (%) trends using RM with decreasing \(\epsilon_{RM}\) (1G test cases).
Figure 7: Flux deviation (%) trends using RM with decreasing \(\epsilon_{RM}\) (2G test case).
the vacuum boundary. At material interfaces other than the boundaries, no difference between the two implementations has been noticed, as highlighted in Figure 8.
In addition, with this implementation, no differences in the flux deviations are found by using different input diffusion coefficients (e.g, \(D_{0},\zeta_{0}\); \(D_{\nu_{0}},\zeta_{\nu_{0}}\)) at the beginning of the iterative scheme after a value of \(\epsilon_{RM}=10^{-8}\), as reported in Figure 6. Indeed, the right diffusion coefficients are in practice not needed at the beginning of the diffusion calculation if tighter tolerances are used. This confirms and completes the intuition by Tomatis and Dall'Osso on the advantage of such an iterative scheme in the first numerical application of RM [11]. The number of iterations to achieve the previous flux deviations is reported in Table 2. The increase in the number of RM iterations is relatively acceptable compared to the gain in flux accuracy. Once again, the use of the DAAREM algorithm3 as an acceleration scheme applied to RM iterations is crucial in obtaining fast convergence, especially with the use of a small convergence criterion.
Footnote 3: An improved version of the DAAREM algorithm has been used to generate these results.
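DAAREM (damped Anderson acceleration with restarts and epsilon-monotonicity) is not reimplemented here; as a rough, generic illustration of Anderson mixing applied to a fixed-point map such as the RM sweep, one can apply SciPy's root-finding variant to \(F(\xi)=G(\xi)-\xi\). The map \(G\) below is a toy contraction, not an RM sweep.

```python
# Generic Anderson-mixing illustration (ours); G is a toy contraction standing in for one
# RM sweep, and scipy's `anderson` root solver is applied to the residual F(x) = G(x) - x.
import numpy as np
from scipy.optimize import anderson

def G(x):
    """Toy contraction playing the role of one RM iteration."""
    return 0.5 * np.cos(x) + 0.8

x0 = np.zeros(4)
x_star = anderson(lambda x: G(x) - x, x0, f_tol=1e-12)
print(x_star, np.max(np.abs(G(x_star) - x_star)))
```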
\begin{table}
\begin{tabular}{c|c|c c c c c} \hline \hline Test case & Diff. coeffs (RM\({}^{*}\)) & \multicolumn{5}{c}{\(\epsilon_{RM}\)} \\ \cline{3-7} & & \(1e-6\) & \(1e-7\) & \(1e-8\) & \(1e-9\) & \(1e-10\) \\ \hline PUb-1-0-SL & \(D_{0},\zeta_{0}\) (RM\({}^{net}\)) & 123 & 220 & 238 & 427 & 569 \\ & \(D_{\nu_{0}},\zeta_{\nu_{0}}\) (RM\({}^{net}\)) & 102 & 223 & 230 & 323 & 682 \\ & \(D_{0},\zeta_{0}\) (RM\({}^{D}\)) & 155 & 308 & 497 & 572 & 746 \\ & \(D_{\nu_{0}},\zeta_{\nu_{0}}\) (RM\({}^{D}\)) & 81 & 320 & 432 & 557 & 658 \\ \hline PUa-H2O(0.5)-1-0-SL & \(D_{0},\zeta_{0}\) (RM\({}^{net}\)) & 79 & 459 & 545 & 572 & 1120 \\ & \(D_{0},\zeta_{0}\) (RM\({}^{D}\)) & 138 & 458 & 545 & 572 & 1141 \\ \hline PU-2-0-SL & \(D_{0},\zeta_{0}\) (RM\({}^{net}\)) & 184 & 722 & 723 & 1195 & 1296 \\ & \(D_{0},\zeta_{0}\) (RM\({}^{D}\)) & 242 & 334 & 868 & 1238 & 1442 \\ \hline Ue-Fe-Na-1-0-SL & \(D_{0},\zeta_{0}\) (RM\({}^{net}\)) & 181 & 188 & 488 & 995 & 1078 \\ & \(D_{0},\zeta_{0}\) (RM\({}^{D}\)) & 177 & 380 & 572 & 626 & 1046 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Number of RM iterations with convergence criteria for 1G-2G test cases.
Figure 8: Flux deviation (%) trends at material interfaces using RM with decreasing \(\epsilon_{RM}\) (Ue-Fe-Na-1-0-SL).
### Diffusion coefficient trends in the proximity of material interfaces
The optimal diffusion quantities defined in section 3.1 were computed using the flux distribution obtained from the CPM. However, the flux distribution is usually not known and this check was intentionally set to verify the equivalence between diffusion and transport for very simple test cases. Nevertheless, those coefficients can be considered the reference ones by which comparisons can be made with the corrected diffusion coefficients from RM calculations. In section 5.1, we also saw how the extrapolated distance can affect the flux distribution close to the vacuum boundary. In the RM, the extrapolated coefficient \(\zeta\) is kept constant throughout the iterations and set equal to the suggested value of \(\zeta_{0}=2.13\)[10]. However, a new flux distribution is available at every iteration and the extrapolated distance can be redefined accordingly to its definition using the latest flux distribution. At the end of every RM iteration, \(\mathrm{d}_{ext}\) can be redefined using the redefinition of the diffusion coefficient implementation since a corrected diffusion coefficient is needed.
To get some additional insight regarding this possibility, it is interesting to visualize the diffusion coefficient trends at the end of Ronen iterations in the proximity of vacuum. Figure 9 shows the trends of \(D_{0}\) with and without RM in comparisons with \(D^{opt}\) for a 1G homogeneous case. Diffusion coefficient trends at the material interface (reflector) are also shown for a 1G heterogeneous test case in Figure 10. Similar trends are obtained for the other test cases without providing further information. When the RM is employed, trends are also shown with respect to decreasing values of the convergence criterion. The vacuum boundary condition has been modelled with two extrapolated coefficients: \(\zeta_{0}\) and \(\zeta^{opt}\). The way we model the vacuum boundary does affect the diffusion coefficient at this interface, leading to negative and positive values of the diffusion coefficient for \(\zeta_{0}\) and \(\zeta^{opt}\), respectively. Using \(\zeta_{0}\) results in having a negative extrapolated distance \(\mathrm{d}_{ext}\), which is unphysical and prevents the redefinition of extrapolated distance itself. However, having negative corrections at the vacuum boundary does not prevent obtaining very accurate flux distributions. Using \(\zeta^{opt}\) instead leads to positive diffusion coefficients everywhere but its value is usually unknown. The error trends at the vacuum boundary using the three extrapolated coefficients for the RM\({}^{D}\) implementation are shown in Table 3. The last table clearly shows that the solution does converge to the same distribution for tight convergence criteria regardless of the way we model the vacuum boundary. Hence, we conclude that for very tight tolerances, the value of the diffusion coefficient at the boundary is practically irrelevant, whether it is physical or not. This also shows that the reference solution can be achieved by RM with multiple pairs of \(D\) and \(\zeta\) when tight convergence tolerances are set. As mentioned above, redefining the extrapolated coefficient although correct in theory, brings in some numerical problems related to negative extrapolated distances and is practically irrelevant for tight convergence criteria.
\begin{table}
\begin{tabular}{c|c|c|c} \hline \hline \(\epsilon_{RM}\) & \(\Delta\Phi_{L_{4}}\%\) (\(D_{0},\zeta_{0}=2.13\)) & \(\Delta\Phi_{L_{4}}\%\) (\(D_{0},\zeta_{\nu_{0}}=1.57\)) & \(\Delta\Phi_{L_{4}}\%\) (\(D_{0},\zeta^{opt}=1.98\)) \\ \hline
1.e-6 & 0.769833 & 0.490860 & 0.519612 \\
1.e-7 & 0.212684 & 0.054975 & 0.040269 \\
1.e-8 & 0.07456 & 0.064199 & 0.058997 \\
1.e-9 & 0.03247 & 0.064199 & 0.025689 \\
1.e-10 & **0.02247** & **0.019988** & **0.023171** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Flux deviation (%) at vacuum boundary with extrapolated coefficients using RM for PUb-1-0-SL test case.
## 6 Conclusion
The RM has been recently tested on a wide range of problems in 1D plane geometry, including anisotropic scattering test cases [5]. Some discrepancies in the scalar flux at the material interfaces (especially with vacuum) have been observed, as previously reported for simple cases in [5; 12]. The latest implementation also includes the use of the Anderson acceleration through the DAAREM algorithm, providing fast calculations. However, these results need further numerical and physical investigations.
In this article, we provide investigations carried out to explain these flux deviations. The use of higher-order numerical approximations of the discretized equation at the bound
Figure 10: Diffusion coefficient (cm) trends for PUa-H2O(0.5)-1-0-SL test case.
Figure 9: Diffusion coefficient (cm) trends for PUb-1-0-SL test case. |
2309.08465 | Cyclic Higgs bundles, subharmonic functions, and the Dirichlet problem | We demonstrate the existence and uniqueness of the solution to the Dirichlet
problem for a generalization of Hitchin's equation for diagonal harmonic
metrics on cyclic Higgs bundles. The generalized equations are formulated using
subharmonic functions. In this generalization, the coefficient exhibits worse
regularity than that in the original equation. | Natsuo Miyatake | 2023-09-15T15:15:04Z | http://arxiv.org/abs/2309.08465v2 | # Cyclic Higgs bundles, subharmonic functions, and the Dirichlet problem
###### Abstract
We demonstrate the existence and uniqueness of the solution to the Dirichlet problem for a generalization of Hitchin's equation for diagonal harmonic metrics on cyclic Higgs bundles. The generalized equations are formulated using subharmonic functions. In this generalization, the coefficient exhibits worse regularity than that in the original equation.
## 1 Introduction
Let \(X\) be a connected, possibly non-compact Riemann surface equipped with a Kahler metric \(g_{X}\). We denote by \(h_{X}\) the Hermitian metric on the canonical bundle \(K_{X}\to X\), by \(\omega_{X}\) the Kahler form, and by \(\Lambda_{\omega_{X}}\) the adjoint of \(\omega_{X}\wedge\). We choose a square root \(K_{X}^{1/2}\) of the canonical bundle \(K_{X}\). We define a vector bundle \(E\) of rank \(r\) as \(E\coloneqq K_{X}^{(r-1)/2}\oplus K_{X}^{(r-3)/2}\oplus\cdots\oplus K_{X}^{-(r- 3)/2}\oplus K_{X}^{-(r-1)/2}\). Let \(h=(h_{1},\ldots,h_{r})\) be a smooth diagonal Hermitian metric on \(E\) with curvature \(F_{h}=(F_{h_{1}},\ldots,F_{h_{r}})\). We assume that \(\det(h)\) is flat. Let \(H_{j}\coloneqq h_{j}^{-1}\otimes h_{j+1}\otimes h_{X}\) be a Hermitian metric on the trivial bundle for each \(j=1,\ldots,r-1\), and \(H_{r}=h_{1}\otimes h_{r}^{-1}\otimes h_{X}\) a Hermitian metric on \(K_{X}^{r}\). For each \(j=1,\ldots,r\), we denote by \(F_{H_{j}}\) the curvature associated with the metric \(H_{j}\). Let \(\varphi:X\rightarrow[-\infty,\infty)\) be a quasi-subharmonic function, i.e., a locally integrable function that is locally a sum of a subharmonic function and a smooth function (cf. [7]). Note that we omit the function which is identically \(-\infty\) from the definition of the quasi-subharmonic function. The quasi-subharmonic function \(\varphi\) is said to be an \(F_{H_{r}}\)_-subharmonic function_ if
the following holds in the sense of the distribution (cf. [7, Section 8]):
\[\sqrt{-1}\partial\bar{\partial}\varphi+\sqrt{-1}F_{H_{r}}\geq 0.\]
As we can easily see from the definition, for each \(N\in\mathbb{Z}_{\geq 1}\) and each \(q_{N}\in H^{0}((K_{X}^{r})^{N})\), \(\frac{1}{N}\log|q_{N}|_{H_{r}}^{2}\) is an \(F_{H_{r}}\)-subharmonic function, where \(|q_{N}|_{H_{r}}^{2}\) is a square of the norm of \(q_{N}\) measured by \(H_{r}\). We consider the following PDE on \(X\) defined by using an \(F_{H_{r}}\)-subharmonic function \(\varphi\):
\[\Delta_{\omega_{X}}\xi+\sum_{j=1}^{r}4k_{j}^{\prime}e^{(v_{j},\xi)}v_{j}=-2 \sqrt{-1}\Lambda_{\omega_{X}}F_{h}, \tag{1}\]
where each symbol is defined as follows:
* Let \(V\) be a vector space defined as \(V\coloneqq\{x=(x_{1},\ldots,x_{r})\mid x_{1}+\cdots+x_{r}=0\}\), which is identified with the set of trace-free diagonal matrices of size \(r\). Then for each \(j=1,\ldots,r\), \(v_{j}\in V\) is a vector defined as \(v_{j}\coloneqq u_{j+1}-u_{j}\), where \(u_{1},\ldots,u_{r}\) is the canonical basis of \(\mathbb{R}^{r}\) and the index is understood cyclically, i.e. \(u_{r+1}\coloneqq u_{1}\), so that \(v_{r}=u_{1}-u_{r}\).
* We denote by \((\cdot,\cdot)\) the standard inner product on \(\mathbb{R}^{r}\).
* We denote by \(\xi:X\to V\) a \(V\)-valued function which is a solution of equation (1) in some sense.
* We define \(k_{1}^{\prime},\ldots,k_{r}^{\prime}\) as \(k_{j}^{\prime}\coloneqq|1|_{H_{j}}\) (\(j=1,\ldots,r-1\)) and \(k_{r}^{\prime}\coloneqq e^{\varphi}\), where \(1\) is the canonical section of the trivial bundle, and \(|1|_{H_{j}}\) is the norm measured by \(H_{j}\).
* We denote by \(\Delta_{\omega_{X}}=-2\sqrt{-1}\Lambda_{\omega_{X}}\partial\bar{\partial}\) the geometric Laplacian.
Suppose that \(X\) is a non-compact Riemann surface. Let \(f_{X}:X\to\mathbb{R}\) be a smooth strictly subharmonic function such that \(\{x\in X\mid f_{X}(x)\leq c\}\) is a compact subset for each \(c\in\mathbb{R}\). We take a \(c\in\mathbb{R}\) and set
\[Y \coloneqq\{x\in X\mid f_{X}(x)<c\},\] \[\partial Y \coloneqq f_{X}^{-1}(c),\] \[\overline{Y} \coloneqq Y\cup\partial Y.\]
Our main theorem is as follows:
**Theorem 1**.: _For each \(V\)-valued continuous function \(\eta=(\eta_{1},\ldots,\eta_{r}):\partial Y\to V\), there exists a \(V\)-valued function \(\xi=(\xi_{1},\ldots,\xi_{r}):Y\to V\) that satisfies the following:_
1. \(\xi\) _is a_ \(C^{1,\alpha}\)_-function for any_ \(\alpha\in(0,1)\) _and solves equation (_1_) in the sense of the distribution._
2. _The following boundary condition holds:_ \[\lim_{z\to\zeta}\xi(z)=\eta(\zeta)\text{ for all }\zeta\in\partial Y.\] _Moreover, any_ \(V\)_-valued function that satisfies conditions_ \((a)\) _and_ \((b)\) _is unique._
**Remark 2**.: Let \(\xi:U\to V\) be a \(V\)-valued locally \(L^{\infty}\)-function defined on an open subset \(U\subseteq X\) of \(X\). We say that \(\xi\)_solves equation (1) in the sense of the distribution_, or that \(\xi\)_is a weak solution to equation (1)_ if the following holds:
\[\int_{U}\{(\xi,\Delta_{\omega_{X}}\phi)+(\sum_{j=1}^{r}4k_{j}^{\prime}e^{(v_{ j},\xi)}v_{j}+2\sqrt{-1}\Lambda_{\omega_{X}}F_{h},\phi)\}=0\text{ for all }\phi\in C_{c}^{\infty}(U,V),\]
where we denote by \(C_{c}^{\infty}(U,V)\) the space of all smooth \(V\)-valued functions with compact support, and the integral is taken with respect to the Kahler form \(\omega_{X}\). Throughout the paper, we use the terms "in the sense of the distribution" and "weak solution" for equations or inequalities including the Laplace operator, not limited to equation (1), in the sense described above. Note that for the Poisson equation, the definition of a weak solution is not unique (cf. [9]).
**Remark 3**.: We use the notion of the \(F_{H_{r}}\)-subharmonic function even when \(X\) is a non-compact Riemann surface. Note that on such a surface, the \(F_{H_{r}}\)-subharmonic function can globally be expressed as the sum of a subharmonic function and a smooth function related to the curvature \(F_{H_{r}}\). Specifically, if \(X\) is not compact, we can take a global holomorphic frame \(e:X\to K_{X}^{r}\) (cf. [6, Section 30]) for the holomorphic line bundle \(K_{X}^{r}\to X\). Then \(\tilde{\varphi}\coloneqq\varphi-\log H_{r}(e,e)\) is a subharmonic function on \(X\) since we have \(\bar{\partial}\partial\log H_{r}(e,e)=F_{H_{r}}\).
**Remark 4**.: The coefficient \(e^{\varphi}\) is a bounded function on any compact subset \(K\subseteq X\). Proving this assertion is straightforward even if \(X\) itself is compact, but to avoid redundancy, we proceed by assuming that \(X\) is a non-compact Riemann surface. Since \(X\) is not compact, the \(F_{H_{r}}\)-subharmonic function \(\varphi\) decomposes into the sum of a subharmonic function \(\tilde{\varphi}\) and a smooth function \(-\log H_{r}(e,e)\), as explained in Remark 3. This implies that \(\varphi\) is an upper semicontinuous function and, therefore, attains its maximum on \(K\) (see [20, Chapter 2.1]). Consequently, the coefficient \(e^{\varphi}\) is bounded on \(K\). In particular, for every \(V\)-valued locally \(L^{\infty}\)-function \(\xi:X\to V\), meaning that the function is a \(V\)-valued \(L^{\infty}\)-function on every compact subset \(K\subseteq X\), the function \(e^{\varphi}e^{(v_{r},\xi)}\) is also a locally \(L^{\infty}\)-function. This ensures its well-definedness as a distribution.
**Remark 5**.: Let \(E^{\vee}\) be the dual vector bundle of \(E\). The vector bundle \(E\) is equipped with an isomorphism \(S_{E}:E\to E^{\vee}\) defined as follows (cf. [12, 14, 16]):
\[S_{E}\coloneqq\left(\begin{array}{ccc}&&1\\ &\iddots&\\ 1&&\end{array}\right):E\to E^{\vee}.\]
A Hermitian metric \(h_{E}\) on \(E\) is said to be _real_ (cf. [12, 14, 16]) if the above \(S_{E}\) is isometric with respect to \(h_{E}\) and \(h_{E}^{\vee}\), where \(h_{E}^{\vee}\) is the natural Hermitian metric on \(E^{\vee}\) induced from \(h_{E}\). From the arguments based on the uniqueness of the solution (cf. [12, Section 7], [14, Corollary 3.24], [16, Section 2.3.5]), we can show that if the metric \((e^{\eta_{1}}h_{1}\mid_{\partial Y},\ldots,e^{\eta_{r}}h_{r}\mid_{\partial Y})\) on the boundary is real, then the metric \((e^{\xi_{1}}h_{1}\mid_{Y},\ldots,e^{\xi_{r}}h_{r}\mid_{Y})\) induced from the solution \(\xi=(\xi_{1},\ldots,\xi_{r})\) of the Dirichlet problem in Theorem 1 is also real.
Equation (1) is a generalization of Hitchin's equation [11] for diagonal harmonic metrics on cyclic Higgs bundles [2, 3], which was introduced in [18, Example 1]. It should be noted that in [18, Example 1], only the case where \(Y\) is a domain of \(\mathbb{C}\) and \(h\) is the metric induced by the standard metric on \(\mathbb{C}\) is discussed. In Section 2, we explain the motivation behind introducing equation (1) and solving its corresponding Dirichlet problem. In Section 3, we establish some fundamental a priori estimates to the solution of equation (1) by slightly modifying the proofs of [18, Theorem 2 and Theorem 3]. In Section 4, we give a proof of Theorem 1.
## 2 Cyclic Higgs bundles with multi-valued Higgs fields
We first briefly recall the definition of cyclic Higgs bundles. Let \(X\) be a connected Riemann surface. We carry over the symbols used in Section 1. We take a \(q\in H^{0}(K_{X}^{r})\). For each \(j=1,\ldots,r-1\), we set \(\Phi(q)_{j+1,j}=1\) and \(\Phi(q)_{1,r}=q\). We define \(\Phi(q)\in H^{0}(\mathrm{End}E\otimes K_{X})\) as \(\Phi(q)\coloneqq\sum_{j=1}^{r-1}\Phi(q)_{j+1,j}+\Phi(q)_{1,r}\), where \(\Phi(q)_{i,j}\) is considered to be the \((i,j)\)-component of \(\Phi(q)\), and \(1\) (resp. \(q\)) is considered to be a \(K_{X}^{-1}\) (resp. \(K_{X}^{r-1}\))-valued holomorphic \(1\)-form. We call \((E,\Phi(q))\) a cyclic Higgs bundle (cf. [2, 3, 14, 15]). Cyclic Higgs bundles are examples of the cyclotomic Higgs bundles which were introduced in [26]. We set \(k_{1},\ldots,k_{r}\) as \(k_{j}=k_{j}^{\prime}=|1|_{H_{j}}\) (\(j=1,\ldots,r-1\)), \(k_{r}=|q|_{H_{r}}\). The corresponding Hitchin's equation [11] for a diagonal harmonic metric \((e^{f_{1}}h_{1},\ldots,e^{f_{r}}h_{r})\) is then given by:
\[\Delta_{\omega_{X}}\xi+\sum_{j=1}^{r}4k_{j}e^{(v_{j},\xi)}v_{j}=-2\sqrt{-1} \Lambda_{\omega_{X}}F_{h}, \tag{2}\]
where \(\xi\) is defined as \(\xi\coloneqq(f_{1},\ldots,f_{r})\). Equation (2) is also called Toda lattice with opposite sign (see [8]). As we noted in Section 1, \(\log|q|_{H_{r}}\) is an \(F_{H_{r}}\)-subharmonic function, and thus, equation (2) is a special case of equation (1) if we impose the condition \(f_{1}+\cdots+f_{r}=0\) on \(f_{1},\ldots,f_{r}\).
Let \(N\in\mathbb{Z}_{\geq 2}\) and \(q_{N}\in H^{0}((K_{X}^{r})^{N})\). We next consider a cyclic Higgs bundle \((E,\Phi(q_{N}^{1/N}))\) with the following multi-valued Higgs field:
\[\Phi(q_{N}^{1/N})\coloneqq\left(\begin{array}{cccc}0&&&q_{N}^{1/N}\\ 1&\ddots&&&\\ &\ddots&\ddots&&\\ &&&1&0\end{array}\right).\]
It can be observed that Hitchin's equation for diagonal harmonic metrics on cyclic Higgs bundles depends only on the absolute value of \(q\). Therefore, although the Higgs field \(\Phi(q_{N}^{1/N})\) is multi-valued, Hitchin's equation for a diagonal harmonic metric on \((E,\Phi(q_{N}^{1/N}))\) is well-defined, and the equation for diagonal harmonic metrics on a cyclic Higgs bundle with a multi-valued Higgs field \((E,\Phi(q_{N}^{1/N}))\) coincides with equation (1) with an \(F_{H_{r}}\)-subharmonic function \(\frac{1}{N}\log|q_{N}|_{H_{r}}^{2}\). Let \(h_{N}\) be a solution to Hitchin's equation for a cyclic
Higgs bundle \((E,\Phi(q_{N}^{1/N}))\) with a multi-valued Higgs field \(\Phi(q_{N}^{1/N})\). If we choose a well-defined local section \(q_{N}^{1/N}\) on an open subset \(U\subseteq X\), then \((E,\Phi(q_{N}^{1/N}),h_{N})\) is a harmonic bundle on \(U\). Alternatively, if we choose a ramified covering \(\pi:Z_{N}\to X\) where \(q_{N}^{1/N}\) is a well-defined section of \(\pi^{*}(K_{X}^{r-1})\otimes K_{Z_{N}}\to Z_{N}\), then the triplet \((\pi^{*}E,\pi^{*}\Phi(q_{N}^{1/N}),\pi^{*}h_{N})\) becomes a harmonic bundle over \(Z_{N}\).
As we noted above, Hitchin's equation for a diagonal harmonic metric on \((E,\Phi(q_{N}^{1/N}))\) is well-defined, although the Higgs field \(\Phi(q_{N}^{1/N})\) is multi-valued. Additionally, it is evident that any functions constructed from a solution \(h_{N}\) to Hitchin's equation will be well-defined over \(X\), if they depend solely on the absolute value of \(q_{N}\). For example, the norm \(|\Phi(q_{N}^{1/N})|_{h_{N},h_{X}}^{2}\) and the bracket \(\Lambda_{\omega_{X}}[\Phi(q_{N}^{1/N})\wedge\Phi(q_{N}^{1/N})^{*h_{N}}]\) of the Higgs field \(\Phi(q_{N}^{1/N})\) are well-defined functions. Furthermore, quantities that can be written as constant multiples or combinations of them, such as the energy density \(e(h_{N})\) of harmonic maps and the sectional curvature \(\kappa(h_{N})\) of the image of harmonic maps (cf. [3, 13]), are also well-defined functions on \(X\).
In this paper, we pose the problem of considering what might happen when \(N\) approaches infinity. More specifically, we consider the asymptotic behavior, as \(N\) tends to infinity, of a sequence of Hermitian metrics \((h_{N})_{N\in\mathbb{N}}\) such that for each \(N\), \(h_{N}\) is a diagonal solution to Hitchin's equation for the cyclic Higgs bundle \((E,\Phi(q_{N}^{1/N}))\) with a multi-valued Higgs field. To specify what we mean by "consider the asymptotic behavior", we pose the following specific questions:
* Does a sequence \((h_{N})_{N\in\mathbb{N}}\) converge to a Hermitian metric in some topology? Also, how fast will it converge?
* How do sequences of well-defined functions, such as \((e(h_{N}))_{N\in\mathbb{N}},(\kappa(h_{N}))_{N\in\mathbb{N}},\ldots\) behave as \(N\) approaches infinity? For example, is it possible to choose a non-trivial sequence \((h_{N})_{N\in\mathbb{N}}\) so that the averages \(\int_{X}e(h_{N}),\int_{X}\kappa(h_{N}),...\) decrease or increase monotonically? How fast will they decay or increase? Also, what if we looked at the behavior at each point of \(X\) rather than the average?
* Is it possible to evaluate the value of the average of well-defined functions such as \(\int_{X}e(h_{N}),\int_{X}\kappa(h_{N}),...\) in the limit when \(N\) tends to infinity? Also, what if we looked at the behavior at each point of \(X\) rather than the average?
* The difference of Hermitian metrics \(h_{N,j}^{-1}\otimes h_{N,j+1}\) (\(j=1,\ldots,r-1\)) defines a metric \(d_{N,j}\) on \(X\) (see [14]), where we denote by \(h_{N,j}\) the \(j\)-th component of the Hermitian metric: \(h_{N}=(h_{N,1},\ldots,h_{N,r})\). Moreover, the Hermitian metric \(h_{N,r}^{-1}\otimes h_{N,1}\) coupled with \(q_{N}^{1/N}\) defines a degenerate metric \(d_{N,r}\) on \(X\) (see [14]). How does the sequence of metric spaces \((X,d_{N,1},\ldots,d_{N,r-1},d_{N,r})_{N\in\mathbb{N}}\) behave? For example, is the completeness of the metrics (cf. [14]) preserved in the limit?
Let \(SH(X,F_{H_{r}})\) be the set of all \(F_{H_{r}}\)-subharmonic functions. As we noted in Section 1, for each positive integer \(N\) and each \(q_{N}\in H^{0}((K_{X}^{r})^{N})\), \(\frac{1}{N}\log|q_{N}|_{H_{r}}^{2}\) is \(F_{H_{r}}\)-subharmonic, and moreover, all elements of such form are dense in \(SH(X,F_{H_{r}})\) with respect to the \(L^{1}_{loc}\)-topology (see [7]). Therefore, any \(F_{H_{r}}\)-subharmonic function \(\varphi\in SH(X,F_{H_{r}})\) is a limit of a sequence of \((\frac{1}{N}\log|q_{N}|_{H_{r}}^{2})_{N\in\mathbb{N}}\), where \(q_{N}\in H^{0}((K_{X}^{r})^{N})\), at least for \(L^{1}_{loc}\)-topology. In considering the above problems, we introduce equation (1) as a limit of Hitchin's equation for diagonal harmonic metrics on cyclic Higgs bundles \((E,\Phi(q_{N}^{1/N}))\) with a multi-valued Higgs field when \(N\) tends to infinity.
If the coefficient \(e^{\varphi}\) is smooth, we can directly apply methods from [4, 5, 24] either to find a solution for equation (1) or for its evolution equation (cf. [18], to which we refer the reader for further explanations). More specifically, we can construct a time-global solution to the evolution equation of (1) on a compact manifold with a possibly empty boundary with the Dirichlet boundary condition, by using the techniques for the evolution equation of the Hermitian-Einstein equation [4, 24]. If the manifold has a non-empty boundary, we can demonstrate, by using Donaldson's argument [5], that the time-global solution to the evolution equation converges to a solution of equation (1) that satisfies a Dirichlet boundary condition. For compact manifolds without boundary, by using the functional of equation (1) (cf. [17]), we can show the convergence of the time-global solution of the evolution equation to a solution of equation (1). We can also solve equation (1) by directly applying [17, Theorem 1] when the coefficient \(e^{\varphi}\) is smooth. In this paper, we extend the Dirichlet problem to a more general case. The Dirichlet problem for the Hermitian-Einstein equation was first solved in [5]. This theorem holds significant utility, particularly when constructing a global solution to the Hermitian-Einstein equation on non-compact manifolds with compact exhaustions [14, 15, 19]. It is also worth noting that the Dirichlet problem for elliptic equations has been studied over the years, with specific emphasis on its link to potential theory (cf. [7, 20]). While there are multiple aspects to investigate in equation (1), our paper primarily focuses on the Dirichlet problem, considering its distinctive significance.
**Remark 6**.: The aforementioned problem focuses on the increasing number of zeros in the holomorphic \(r\)-differential. Conversely, one could conceive a dual problem that considers the consequences of an increasing number of poles in the \(r\)-differential. While this paper will not delve into this topic in depth, it will be discussed in a subsequent paper.
**Remark 7**.: The notion of cyclic Higgs bundles can be generalized to the case where the vector bundle is of the form \(E=L_{1}\oplus\cdots\oplus L_{r}\) with arbitrary holomorphic line bundles \(L_{1},\ldots,L_{r}\to X\) (cf. [3]). Equation (1) can also be generalized to such a case, and Theorem 1 can be extended to the more generalized equation.
**Remark 8**.: If the purpose is simply to solve equation (1), there is no need to explicitly state the existence and convergence of the time-global solution of the evolution equation. However, the heat equation itself is very interesting, so we have explicitly written the existence and convergence of the time-global solution as above. For example, an interesting question is whether it is possible to construct a time-global solution that is complete (cf. [14]) at each instant.
**Remark 9**.: The problems described above are both influenced by and motivated by the study of the asymptotic behavior of the complex polynomials and sections of holomorphic line bundles (see, e.g., [20, 22, 23] and the references therein).
## 3 Fundamental a priori estimates
Before beginning the proof of Theorem 1, we establish some fundamental a priori estimates for the solution of equation (1) by slightly modifying the proofs of [18, Theorem 2 and Theorem 3]. For the Hermitian-Einstein equation of Higgs bundles, the following estimates have been established in [24, Lemma 3.1 and Lemma 10.1] (see also [14, 15, 25]). We will use Proposition 10 below in Section 4 to prove the uniqueness of the boundary value problem in Theorem 1. We carry over the notation from Section 1. The following holds:
**Proposition 10**.: _Let \(\xi=(\xi_{1},\ldots,\xi_{r}),\xi^{\prime}=(\xi^{\prime}_{1},\ldots,\xi^{\prime}_{r }):X\to V\) be locally \(L^{\infty}\)-functions. Suppose that there exist \(V\)-valued locally \(L^{1}\)-functions \(\tilde{\xi},\tilde{\xi}^{\prime}:X\to V\) such that \(\Delta_{\omega_{X}}\xi=\tilde{\xi}\) and \(\Delta_{\omega_{X}}\xi^{\prime}=\tilde{\xi}^{\prime}\) as distributions on \(X\). Then the following holds in the sense of the distribution:_
\[\Delta_{\omega_{X}}\log|\sum_{j=1}^{r}e^{(\xi_{j}-\xi^{\prime}_{j })}|\] \[\leq |\Delta_{\omega_{X}}\xi+\sum_{j=1}^{r}4k^{\prime}_{j}e^{(v_{j}, \xi)}v_{j}+2\sqrt{-1}\Lambda_{\omega_{X}}F_{h}|+|\Delta_{\omega_{X}}\xi^{ \prime}+\sum_{j=1}^{r}4k^{\prime}_{j}e^{(v_{j},\xi^{\prime})}v_{j}+2\sqrt{-1} \Lambda_{\omega_{X}}F_{h}|. \tag{3}\]
_In particular, if \(\xi\) and \(\xi^{\prime}\) solve equation (1) in the sense of the distribution, then we have_
\[\Delta_{\omega_{X}}\log|\sum_{j=1}^{r}e^{(\xi_{j}-\xi^{\prime}_{j })}|\leq 0.\]
**Proposition 11**.: _Suppose that \(\xi=(\xi_{1},\ldots,\xi_{r})\) is an \(L^{\infty}\)-weak solution to equation (1). Then the following holds in the sense of the distribution:_
\[\Delta_{\omega_{X}}\log\bigl{(}\sum_{j=1}^{r}4k^{\prime}_{j}e^{(v_ {j},\xi)}\bigr{)}\leq-\frac{\left|\sum_{j=1}^{r}4k^{\prime}_{j}e^{(v_{j},\xi) }v_{j}\right|^{2}}{\left|\sum_{j=1}^{r}4k^{\prime}_{j}e^{(v_{j},\xi)}\right|}+ 2\sqrt{-1}\Lambda_{\omega_{X}}F_{h_{X}}, \tag{4}\]
_where \(F_{h_{X}}\) is the curvature of the metric \(h_{X}\)._
**Remark 12**.: Let \(f_{1},\ldots,f_{r}:X\to\mathbb{R}\) be locally integrable functions. Then the following inequality holds:
\[\max\{f_{1},\ldots,f_{r}\}\leq\log(\sum_{j=1}^{r}e^{f_{j}})\leq \max\{f_{1},\ldots,f_{r}\}+\log(r).\]
Consequently, \(\log(\sum_{j=1}^{r}e^{f_{j}})\) is also a locally integrable function. Therefore, the distribution \(\Delta_{\omega_{X}}\log(\sum_{j=1}^{r}e^{f_{j}})\) is well-defined. In particular, the left-hand sides of inequalities (3) and (4) are both well-defined. Furthermore, as highlighted in Remark 4, the coefficient \(e^{\varphi}\) is a locally bounded function, ensuring that the right-hand sides of (3) and (4) are both unambiguously defined as distributions.
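For the reader's convenience, the elementary estimate behind Remark 12 can be spelled out in one line: writing \(M\coloneqq\max\{f_{1},\ldots,f_{r}\}\), the largest term alone gives the lower bound, while bounding every term by \(e^{M}\) gives the upper bound,

\[e^{M}\leq\sum_{j=1}^{r}e^{f_{j}}\leq r\,e^{M},\qquad\text{hence}\qquad M\leq\log\Bigl(\sum_{j=1}^{r}e^{f_{j}}\Bigr)\leq M+\log(r).\]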
While the following concept is generally familiar, we provide a specific definition for the sake of clarity:
**Definition 13** (cf. [7, 20]).: Let \(B(0,1)\coloneqq\{z\in\mathbb{C}\mid|z|<1\}\) be the unit open ball in the complex plane \(\mathbb{C}\). We call a function \(\chi:\mathbb{C}\to\mathbb{R}\) a _mollifier_ if it satisfies the following conditions (cf. [20, p.49, Theorem 2.7.2]):
\[\chi\in C^{\infty}(\mathbb{C},\mathbb{R}),\ \chi\geq 0,\ \chi(z)=\chi(|z|),\ \text{supp}\chi \subseteq B(0,1),\ \int_{\mathbb{C}}\chi=1,\]
where we denote by \(\text{supp}\chi\) the support of the function \(\chi\). Let \(\chi\) be a mollifier. In the same way as [20, p.49, Theorem 2.7.2], for \(\epsilon>0\) we define a function \(\chi_{\epsilon}\) as follows:
\[\chi_{\epsilon}(z)\coloneqq\frac{1}{\epsilon^{2}}\chi\left(\frac{z}{\epsilon }\right)\ \text{for}\ z\in\mathbb{C}.\]
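For concreteness, one standard choice (by no means the only one) satisfying Definition 13 is the radially symmetric bump function supported in the ball of radius \(1/2\),

\[\chi(z)\coloneqq\begin{cases}C\exp\Bigl(-\dfrac{1}{1-4|z|^{2}}\Bigr)&|z|<1/2,\\ 0&|z|\geq 1/2,\end{cases}\]

where the constant \(C>0\) is chosen so that \(\int_{\mathbb{C}}\chi=1\); it is smooth, non-negative, depends only on \(|z|\), and its support is contained in \(B(0,1)\).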
**Remark 14**.: The notion of a mollifier is usually defined for a class of functions broader than the one above.
We prepare the following lemma:
**Lemma 15**.: _Let \(G=(f_{1},\ldots,f_{r}):B(0,1)\to\mathbb{R}^{r}\) be an \(\mathbb{R}^{r}\)-valued locally \(L^{1}\)-function with respect to the Euclidean metric. We set \(b_{1}\coloneqq\sum_{j=1}^{r}e^{f_{j}/2}u_{j},b_{2}\coloneqq\sum_{j=1}^{r}e^{f _{j}}u_{j}\). We assume that there exists an \(\mathbb{R}^{r}\)-valued locally integrable function \(\tilde{G}=(\tilde{f}_{1},\ldots,\tilde{f}_{r})\) such that_
\[\Delta f_{j}\leq\tilde{f}_{j}\ \text{in the sense of the distribution for all}\ j=1,\ldots,r, \tag{5}\]
_where we denote by \(\Delta\) the Laplacian \(\left(-4\frac{\partial^{2}}{\partial z\partial\bar{z}}\right)\) for the Euclidean metric. Then the following inequality holds in the sense of the distribution:_
\[\Delta\log|b_{1}|^{2}\leq\left(\tilde{G},b_{2}/|b_{1}|^{2}\right). \tag{6}\]
**Remark 16**.: The right hand side of inequality (6) is locally integrable since \(b_{2}/|b_{1}|^{2}\) is a bounded function.
**Remark 17**.: As we remarked in Remark 12, \(\log|b_{1}|^{2}\) is locally integrable, and thus the left-hand side of inequality (6) is well-defined as a distribution.
Proof of Lemma 15.: Let \(\chi:\mathbb{C}\to\mathbb{R}\) be a mollifier and let \((\chi_{\epsilon})_{\epsilon>0}\) denote the associated family of functions (see Definition 13 above). We define \(G_{\epsilon}=(f_{1,\epsilon},\ldots,f_{r,\epsilon})\) and \(\tilde{G}_{\epsilon}=(\tilde{f}_{1,\epsilon},\ldots,\tilde{f}_{r,\epsilon})\) as the convolutions
\[G_{\epsilon} \coloneqq G\ast\chi_{\epsilon}=(f_{1}\ast\chi_{\epsilon},\ldots,f_{r}\ast\chi_{\epsilon}),\] \[\tilde{G}_{\epsilon} \coloneqq\tilde{G}\ast\chi_{\epsilon}=(\tilde{f}_{1}\ast\chi_{\epsilon},\ldots,\tilde{f}_{r}\ast\chi_{\epsilon}).\]
These are defined on the open ball \(B(0,1-\epsilon)\coloneqq\{z\in\mathbb{C}\mid|z|<1-\epsilon\}\) of \(B(0,1)\) (see [20, Definition 2.7.1]). We set \(b_{1,\epsilon}\) and \(b_{2,\epsilon}\) as follows:
\[b_{1,\epsilon} \coloneqq\sum_{j=1}^{r}e^{f_{j,\epsilon}/2}u_{j},\] \[b_{2,\epsilon} \coloneqq\sum_{j=1}^{r}e^{f_{j,\epsilon}}u_{j}.\]
From [18, Proof of Theorem 2], the following inequality holds for \(G_{\epsilon}\), \(b_{1,\epsilon}\) and \(b_{2,\epsilon}\):
\[\Delta\log|b_{1,\epsilon}|^{2}\leq\left(\Delta G_{\epsilon},b_{2,\epsilon}/|b_ {1,\epsilon}|^{2}\right). \tag{7}\]
Therefore, inequality (6) holds for \(\tilde{G}_{\epsilon}\), \(b_{1,\epsilon}\), and \(b_{2,\epsilon}\):
\[\Delta\log|b_{1,\epsilon}|^{2}\leq\left(\tilde{G}_{\epsilon},b_{2,\epsilon}/|b _{1,\epsilon}|^{2}\right). \tag{8}\]
We then show that as \(\epsilon\to 0\), \(\Delta\log|b_{1,\epsilon}|^{2}\) and \(\left(\tilde{G}_{\epsilon},b_{2,\epsilon}/|b_{1,\epsilon}|^{2}\right)\) converge weakly to \(\Delta\log|b_{1}|^{2}\) and \(\left(\tilde{G},b_{2}/|b_{1}|^{2}\right)\) in the sense of distributions, respectively. Let \(\phi:B(0,1)\to\mathbb{R}\) be a smooth function with compact support. By the properties of the mollifier, as \(\epsilon\to 0\), \(G_{\epsilon}\) (resp. \(\tilde{G}_{\epsilon}\)) converges strongly to \(G\) (resp. \(\tilde{G}\)) in the \(L^{1}\)-topology on each compact subset of \(B(0,1)\). Moreover, \(G_{\epsilon}\) (resp. \(\tilde{G}_{\epsilon}\)) converges to \(G\) (resp. \(\tilde{G}\)) almost everywhere on each compact subset. Thus, by Lebesgue's dominated convergence theorem, we find that
\[\int_{B(0,1)}\log|b_{1,\epsilon}|^{2}\Delta\phi\to\int_{B(0,1)}\log|b_{1}|^{2}\Delta\phi\]
and
\[\int_{B(0,1)}\left(\tilde{G}_{\epsilon},b_{2,\epsilon}/|b_{1,\epsilon}|^{2} \right)\phi\to\int_{B(0,1)}\left(\tilde{G},b_{2}/|b_{1}|^{2}\right)\phi\]
as \(\epsilon\to 0\), respectively. This establishes the desired claim.
Proof of Proposition 10.: It is enough to consider the case where \(X\) is the open ball \(B(0,1)\). Also, we can assume that the Kahler metric is the Euclidean metric. We define \(\mathbb{R}^{r}\)-valued functions \(G\) and \(\tilde{G}\) as follows:
\[G \coloneqq\xi-\xi^{\prime},\] \[\tilde{G} \coloneqq\tilde{\xi}-\tilde{\xi}^{\prime}.\]
Then by applying Lemma 15 to the above \(G\) and \(\tilde{G}\), and by combining the calculation in [18, Proof of Theorem 2], we have the desired inequality.
Proof of Proposition 11.: We set \(I(\xi):X\to V\) as follows:
\[I(\xi)\coloneqq-\sum_{j=1}^{r}4k_{j}^{\prime}e^{(v_{j},\xi)}v_{j}-2\sqrt{-1}\Lambda_{\omega_{X}}F_{h}.\]
We define \(\mathbb{R}^{r}\)-valued functions \(G\) and \(\tilde{G}\) as follows:
\[G \coloneqq((\xi,v_{1})+\log(4k_{1}^{\prime}),\ldots,(\xi,v_{r})+ \log(4k_{r}^{\prime})),\] \[\tilde{G} \coloneqq((I(\xi),v_{1})+2\sqrt{-1}\Lambda_{\omega_{X}}F_{H_{1}},\ldots,(I(\xi),v_{r})+2\sqrt{-1}\Lambda_{\omega_{X}}F_{H_{r}}).\]
Then it can be verified that the following holds in the sense of the distribution:
\[\Delta_{\omega_{X}}((\xi,v_{j})+\log(4k_{j}^{\prime}))\leq(I(\xi),v_{j})+2 \sqrt{-1}\Lambda_{\omega_{X}}F_{H_{j}}\text{ for }j=1,\ldots,r. \tag{9}\]
As in the proof of Proposition 10, it can be supposed that \(X\) is the open ball \(B(0,1)\), and the Kahler metric is the Euclidean metric. From (9), we can apply Lemma 15 to the above \(G\) and \(\tilde{G}\). Then by the same calculation as in [18, Proof of Theorem 3], we have the desired inequality.
## 4 Proof of Theorem 1
In order to prove the existence of a solution to equation (1), we adopt the method based on the Schauder fixed point theorem (cf. [7, Section 5.4.4] and [21, p.143, Theorem 5.28]). Let \(L^{\infty}(Y,V)\) denote the set of all \(V\)-valued \(L^{\infty}\)-functions over \(Y\). To avoid potential misunderstandings, we use \(L^{\infty}(Y,V)\) to denote the set of all \(V\)-valued \(L^{\infty}\)-functions, rather than the set of equivalence classes. We first prove the following lemma:
**Lemma 18**.: There exist \(\xi^{(-)}=(\xi_{1}^{(-)},\ldots,\xi_{r}^{(-)}),\ \xi^{(+)}=(\xi_{1}^{(+)},\ldots,\xi_{r}^{(+)})\in L ^{\infty}(Y,V)\) such that
1. For each \(j=1,\ldots,r-1\), \(\xi_{j}^{(-)}-\xi_{j}^{(+)}\) is a subharmonic function.
2. For each \(j=1,\ldots,r-1\), \(\xi_{j}^{(-)}\) and \(\xi_{j}^{(+)}\) are quasi-subharmonic functions.
3. The following holds in the sense of the distribution: \[\Delta_{\omega_{X}}\xi_{1}^{(-)} \leq-4k_{r}^{\prime}e^{\xi_{1}^{(+)}-\xi_{r}^{(+)}}-2\sqrt{-1} \Lambda_{\omega_{X}}F_{h_{1}},\] (10) \[\Delta_{\omega_{X}}\xi_{j}^{(-)} \leq-4k_{j-1}^{\prime}e^{\xi_{j}^{(+)}-\xi_{j-1}^{(-)}}-2\sqrt{- 1}\Lambda_{\omega_{X}}F_{h_{j}}\text{ for }j=2,\ldots,r-1,\] (11) \[\Delta_{\omega_{X}}\xi_{j}^{(+)} \geq 4k_{j}^{\prime}e^{\xi_{j+1}^{(+)}-\xi_{j}^{(-)}}-2\sqrt{-1} \Lambda_{\omega_{X}}F_{h_{j}}\text{ for }j=1,\ldots,r-1.\] (12)
4. It holds that \(\xi_{j}^{(-)}\leq\xi_{j}^{(+)}\) for all \(j=1,\ldots,r-1\).
5. \(\xi^{(-)}\) and \(\xi^{(+)}\) satisfy the following boundary conditions: \[\lim_{z\to\zeta}\xi^{(-)}(z) =\ \eta(\zeta)\text{ for all }\zeta\in\partial Y,\] \[\lim_{z\to\zeta}\xi^{(+)}(z) =\ \eta(\zeta)\text{ for all }\zeta\in\partial Y.\]
Proof.: We first note that in Lemma 18, it does not matter what the initial metric \(h=(h_{1},\ldots,h_{r})\) is. If we can prove the above lemma for some initial metric \(h=(h_{1},\ldots,h_{r})\) and arbitrary \(\eta\), then by appropriately transforming \(\xi^{(+)}\), \(\xi^{(-)}\), and \(\eta\) we can easily prove the above lemma for any initial metric. Therefore, for simplicity, from the beginning, we assume that the curvature \(F_{h}\) of the initial metric \(h\) is zero. Let \(\phi_{1},\ldots,\phi_{r}\) be harmonic functions over \(Y\) such that for each \(j=1,\ldots,r\), it holds that \(\lim_{z\to\zeta}\phi_{j}(z)=\eta_{j}(\zeta)\) for all \(\zeta\in\partial Y\) (cf. [7, 20]). For an \(L^{\infty}\)-subharmonic function \(\rho:Y\to[-\infty,\infty)\) satisfying \(\lim_{z\to\zeta}\rho(z)=0\) for all \(\zeta\in\partial Y\), we set
\[\xi_{j}^{(-)} =\rho+\phi_{j}, \tag{13}\] \[\xi_{j}^{(+)} =-\rho+\phi_{j}, \tag{14}\]
for \(j=1,\ldots,r-1\). Then we define
\[\xi^{(-)} =(\xi_{1}^{(-)},\ldots,\xi_{r-1}^{(-)},-(\xi_{1}^{(-)}+\cdots+ \xi_{r-1}^{(-)})), \tag{15}\] \[\xi^{(+)} =(\xi_{1}^{(+)},\ldots,\xi_{r-1}^{(+)},-(\xi_{1}^{(+)}+\cdots+ \xi_{r-1}^{(+)})). \tag{16}\]
For the above \(\xi^{(-)}\) and \(\xi^{(+)}\), it can easily be checked that conditions (i), (ii), and (v) are satisfied. We can also check that condition (iv) is satisfied since a subharmonic function \(\rho\) satisfying \(\lim_{z\to\zeta}\rho(z)=0\) for all \(\zeta\in\partial Y\) is a non-positive function (see [7, p.12, Corollary 1.17]). We shall choose such a \(\rho\) appropriately so that condition (iii) is satisfied for \(\xi^{(-)}\) and \(\xi^{(+)}\) defined as above. We set
\[f_{1}\coloneqq\min\left\{-4k_{j-1}e^{\phi_{j}-\phi_{j-1}}\ |\ j=1, \ldots,r\right\},\] \[f_{2}\coloneqq\min\left\{-4k_{j}e^{\phi_{j+1}-\phi_{j}}\ |\ j=1, \ldots,r\right\},\] \[f\coloneqq\min\{f_{1},f_{2}\},\]
where in the definition of \(f_{1}\), \(k_{0}\) and \(\phi_{0}\) are interpreted as \(k_{r}\) and \(\phi_{r}\), respectively. Let \(\rho\) be the unique \(L^{\infty}\)-subharmonic function that solves the following non-linear elliptic boundary problem:
\[-\Delta_{\omega_{X}}\rho=-fe^{-r\rho}\ \text{in the sense of the distribution}, \tag{17}\] \[\lim_{z\to\zeta}\rho(z)=0\ \text{for all}\ \zeta\in\partial Y. \tag{18}\]
The existence and the uniqueness of the solution \(\rho\) to the above boundary value problem is guaranteed by [7, p.154, Theorem 5.24] with a trivial variable transformation that replaces \(r\rho\) with another variable. We will now demonstrate that for such a \(\rho\), \(\xi^{(-)}\) and \(\xi^{(+)}\) defined as in (13), (14), (15), and (16) satisfy (iii) of Lemma 18. For \(\xi^{(-)}\) and \(\xi^{(+)}\) defined as (13), (14), (15), and (16), condition (10), (11), and (12) can be rewritten as follows:
\[\Delta_{\omega_{X}}\rho\leq-4k_{r}e^{-r\rho+\phi_{1}-\phi_{r}}, \tag{19}\] \[\Delta_{\omega_{X}}\rho\leq-4k_{j-1}e^{-2\rho+\phi_{j}-\phi_{j-1 }}\ \text{for}\ j=2,\ldots,r-1,\] (20) \[\Delta_{\omega_{X}}\rho\leq-4k_{j}e^{-2\rho+\phi_{j+1}-\phi_{j}} \ \text{for}\ j=1,\ldots,r-1. \tag{21}\]
Since the subharmonic function \(\rho\) is a non-positive function, the following conditions (22) and (23) are stronger than the above (20) and (21):
\[\Delta_{\omega_{X}}\rho\leq-4k_{j-1}e^{-r\rho+\phi_{j}-\phi_{j-1 }}\ \text{for}\ j=1,\ldots,r-1, \tag{22}\] \[\Delta_{\omega_{X}}\rho\leq-4k_{j}e^{-r\rho+\phi_{j+1}-\phi_{j}} \ \text{for}\ j=1,\ldots,r-1. \tag{23}\]
It is easy to see that a subharmonic function \(\rho\) which is a solution to the elliptic equation (17) satisfies (19), (22) and (23). Then we have the desired claim.
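Explicitly, this last verification amounts to the following: since \(f\leq f_{1}\) and \(f\leq f_{2}\) by construction, \(e^{-r\rho}>0\), and \(\Delta_{\omega_{X}}\rho=f\,e^{-r\rho}\) by (17), we have, in the sense of the distribution,

\[\Delta_{\omega_{X}}\rho=f\,e^{-r\rho}\leq-4k_{j-1}\,e^{\phi_{j}-\phi_{j-1}}\,e^{-r\rho}=-4k_{j-1}\,e^{-r\rho+\phi_{j}-\phi_{j-1}}\quad\text{for }j=1,\ldots,r,\]

which contains (19) (the case \(j=1\), with \(k_{0}\) and \(\phi_{0}\) read as \(k_{r}\) and \(\phi_{r}\)) and (22); the same argument with \(f\leq f_{2}\) gives (23).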
**Remark 19**.: From conditions (ii) and (iii) in Lemma 18, for each \(j=1,\ldots,r-1\), \(\xi^{(-)}_{j}\) is a \(-F_{h_{j}}\)-subharmonic function, and \(-\xi^{(+)}_{j}\) is an \(F_{h_{j}}\)-subharmonic function.
**Remark 20**.: From the proof of Lemma 18, we can construct \(\xi^{(-)}\) and \(\xi^{(+)}\) so that the following conditions hold in addition to (i), (ii), (iii), (iv), and (v) in Lemma 18:
\[\Delta_{\omega_{X}}\xi^{(-)}_{j}\geq-C,\ \Delta_{\omega_{X}}\xi^{(+)}_{j}\leq C \ \text{for}\ j=1,\ldots,r-1 \tag{24}\]
in the sense of the distribution, with \(C\) being some positive constant. Indeed, for the \(\xi^{(-)}\) and \(\xi^{(+)}\) constructed in the proof of Lemma 18, the following inequality holds for each \(j=1,\ldots,r-1\):
\[\Delta_{\omega_{X}}\xi^{(-)}_{j} =\Delta_{\omega_{X}}\rho=fe^{-r\rho}\geq f,\] \[\Delta_{\omega_{X}}\xi^{(+)}_{j} =-\Delta_{\omega_{X}}\rho=-fe^{-r\rho}\leq-f.\]
Since \(f\) is an \(L^{\infty}\)-function, we have (24) for the \(\xi^{(-)}\) and \(\xi^{(+)}\). If we impose condition (24) to \(\xi^{(-)}\) and \(\xi^{(+)}\), then we can show that the sequence \((\xi^{\prime}_{(k)})\) in Lemma 27 converges to a \(\xi^{\prime}\) in capacity, as we will further discuss in Remark 28.
We fix \(\xi^{(-)},\xi^{(+)}\in L^{\infty}(Y,V)\) satisfying the conditions of Lemma 18. For a \(V\)-valued function \(\xi=(\xi_{1},\ldots,\xi_{r}):Y\to V\), we consider the following condition:
* For each \(j=1,\ldots,r-1\), \(\xi^{(-)}_{j}-\xi_{j}\) and \(\xi_{j}-\xi^{(+)}_{j}\) are subharmonic functions.
We introduce the following notation:
**Definition 21**.: We denote by \(SH(Y,\xi^{(-)},\xi^{(+)})\) the set of all \(V\)-valued functions that satisfy the above condition \((*)\):
\[SH(Y,\xi^{(-)},\xi^{(+)})\coloneqq\{\xi:Y\to V\mid\xi\ \text{satisfies condition}\ (*)\}.\]
**Definition 22**.: For two vectors \(v=(v_{1},\ldots,v_{r}),v^{\prime}=(v^{\prime}_{1},\ldots,v^{\prime}_{r})\in V\), we denote by \(v\leq v^{\prime}\) if \(v_{j}\leq v^{\prime}_{j}\) holds for all \(j=1,\ldots,r-1\).
**Definition 23**.: We define a set of \(V\)-valued functions \(\mathcal{C}\) as follows:
\[\mathcal{C}\coloneqq\{\xi\in SH(Y,\xi^{(-)},\xi^{(+)})\cap L^{\infty}(Y,V)\mid \xi^{(-)}\leq\xi\leq\xi^{(+)}\}.\]
Clearly, \(\xi^{(-)}\) and \(\xi^{(+)}\) are contained in \(\mathcal{C}\). Then we prove the following:
**Lemma 24**.: \(\mathcal{C}\) _is a compact set for the \(L^{1}\)-topology, where the \(L^{1}\)-norm is with respect to the Kahler metric \(g_{X}\mid_{Y}\)._
Proof.: Let \((\xi_{(k)})_{k\in\mathbb{N}}\in\mathcal{C}^{\mathbb{N}}\) be a sequence. We denote by \(\xi_{(k),j}\) the \(j\)-th component of \(\xi_{(k)}\) for each \(k\in\mathbb{N}\). From [7, Theorem 1.46], by extracting and relabeling, we can assume that \((\xi_{j}^{(-)}-\xi_{(k),j})_{k\in\mathbb{N}}\) converges to a subharmonic function \(U_{j}\) in the \(L^{1}_{loc}\)-topology for all \(j=1,\ldots,r-1\). We set \(\xi_{j}\coloneqq\xi_{j}^{(-)}-U_{j}\). We can also assume, by extracting and relabeling, that \((\xi_{(k),j}-\xi_{j}^{(+)})_{k\in\mathbb{N}}\) converges to a subharmonic function \(U_{j}^{\prime}\) in the \(L^{1}_{loc}\)-topology for all \(j=1,\ldots,r-1\). We set \(\xi_{j}^{\prime}\coloneqq U_{j}^{\prime}-\xi_{j}^{(+)}\). From the uniqueness of the convergence point, \(\xi_{j}\) and \(\xi_{j}^{\prime}\) coincide almost everywhere. We see that they coincide everywhere since the right-hand side of \(\xi_{j}^{(-)}-\xi_{j}^{(+)}=\xi_{j}^{(-)}-\xi_{(k),j}+\xi_{(k),j}-\xi_{j}^{(+)}\) converges to \(\xi_{j}^{(-)}-\xi_{j}+\xi_{j}^{\prime}-\xi_{j}^{(+)}\) as \(k\to\infty\). We set \(\xi\coloneqq(\xi_{1},\ldots,\xi_{r-1},-(\xi_{1}+\cdots+\xi_{r-1}))\). Obviously, \(\xi\in\mathcal{C}\). Therefore all that remains is to prove \((\xi_{(k)})_{k\in\mathbb{N}}\) converges to \(\xi\) in the \(L^{1}\)-topology. This can be done by the following observation for any compact subset \(K\):
\[\int_{Y}|\xi-\xi_{(k)}|=\int_{K}|\xi-\xi_{(k)}|+\int_{Y\setminus K}|\xi-\xi_{ (k)}|.\]
The second term on the right-hand side can be made arbitrarily small, uniformly in \(k\), by choosing \(K\) appropriately, since there exists a compact exhaustion of \(Y\) and \(\xi_{(k)}\) and \(\xi\) are bounded by \(\xi^{(-)}\) and \(\xi^{(+)}\).
**Lemma 25**.: _For each \(\xi\in\mathcal{C}\), there uniquely exists a \(\xi^{\prime}\in\mathcal{C}\) that satisfies the following equation in the sense of the distribution:_
\[\Delta_{\omega_{X}}\xi^{\prime}+\sum_{j=1}^{r}4k_{j}^{\prime}e^{(v_{j},\xi)}v _{j}=-2\sqrt{-1}\Lambda_{\omega_{X}}F_{h}. \tag{25}\]
_Moreover, the unique solution \(\xi^{\prime}\) to equation (25) is a \(C^{1,\alpha}\)-function for any \(\alpha\in(0,1)\)._
Proof.: We first construct a weak solution \(\xi^{\prime}\) to equation (25). Similar to the proof of Lemma 18, we assume that \(F_{h}=0\) for simplicity. For each
\(j=1,\ldots,r-1\), let \(\xi^{\prime}_{j,+}\) be the unique \(L^{\infty}\)-weak solution of the following elliptic boundary value problem:
\[\Delta_{\omega_{X}}\xi^{\prime}_{j,+}-4k^{\prime}_{j}e^{(v_{j}, \xi)}=0\text{ in the sense of the distribution,}\] \[-\xi^{\prime}_{j,+}\in SH(Y),\] \[\lim_{z\to\zeta}\xi^{\prime}_{j,+}(z)=\frac{1}{2}\eta_{j}(\zeta) \text{ for all }\zeta\in\partial Y,\]
where we denote by \(SH(Y)\) the set of all subharmonic functions over \(Y\). The existence and uniqueness of the \(L^{\infty}\)-solution to the above boundary value problem are guaranteed by [7, p.150, Theorem 5.17]. We also denote by \(\xi^{\prime}_{j,-}\) the unique \(L^{\infty}\)-solution of the following:
\[\Delta_{\omega_{X}}\xi^{\prime}_{j,-}+4k^{\prime}_{j-1}e^{(v_{j- 1},\xi)}=0\text{ in the sense of the distribution,}\] \[\xi^{\prime}_{j,-}\in SH(Y),\] \[\lim_{z\to\zeta}\xi^{\prime}_{j,-}(z)=\frac{1}{2}\eta_{j}(\zeta) \text{ for all }\zeta\in\partial Y.\]
We set \(\xi^{\prime}_{j}\coloneqq\xi^{\prime}_{j,+}+\xi^{\prime}_{j,-}\) for each \(j=1,\ldots,r-1\). Then we define \(\xi^{\prime}\) as \(\xi^{\prime}\coloneqq(\xi^{\prime}_{1},\ldots,\xi^{\prime}_{r-1},-(\xi^{ \prime}_{1}+\cdots+\xi^{\prime}_{r-1}))\). From the construction, clearly, \(\xi^{\prime}\) satisfies the elliptic equation (25) in the weak sense. From [1, Theorem 6.2] (see also [9, Theorem 1]), it can be observed that \(\xi^{\prime}_{j}\in W^{1,2}_{loc}(Y)\) for each \(j=1,\ldots,r\) (see [1, Chapter 1] for the definition of the local Sobolev space). Therefore by [10, Proposition 2.18], it follows that \(\xi^{\prime}\) is a \(V\)-valued \(C^{1,\alpha}\)-function for any \(\alpha\in(0,1)\). We next prove that \(\xi^{\prime}\) is contained in \(\mathcal{C}\). We show that for each \(j=1,\ldots,r-1\), \(\xi^{(-)}_{j}-\xi^{\prime}_{j}\) is a subharmonic function. From (10), the following holds in the sense of the distribution:
\[\Delta_{\omega_{X}}(\xi^{(-)}_{1}-\xi^{\prime}_{1})\leq -4k^{\prime}_{r}e^{\xi^{(+)}_{1}-\xi^{(+)}_{r}}-2\sqrt{-1}\Lambda_{\omega_{X}}F_{h_{1}}\] \[-4k^{\prime}_{1}e^{(v_{1},\xi)}+4k^{\prime}_{r}e^{(v_{r},\xi)}+2\sqrt{-1}\Lambda_{\omega_{X}}F_{h_{1}}\] \[\leq -4k^{\prime}_{r}(e^{\xi^{(+)}_{1}-\xi^{(+)}_{r}}-e^{\xi_{1}-\xi_{r}})\] \[\leq 0,\]
where the final inequality follows from the assumption \(\xi_{1}\leq\xi^{(+)}_{1}\) and the fact \(\xi^{(+)}_{r}=-(\xi^{(+)}_{1}+\cdots+\xi^{(+)}_{r-1})\leq-(\xi_{1}+\cdots+\xi_{r-1})=\xi_{r}\), which follows immediately from the assumption \(\xi_{j}\leq\xi^{(+)}_{j}\) for all \(j=1,\ldots,r-1\). Similarly, from (11), the following holds for each \(j=2,\ldots,r-1\):
\[\Delta_{\omega_{X}}(\xi_{j}^{(-)}-\xi_{j}^{\prime})\leq -4k_{j-1}^{\prime}e^{\xi_{j}^{(+)}-\xi_{j-1}^{(-)}}-2\sqrt{-1}\Lambda_{\omega_{X}}F_{h_{j}}\] \[-4k_{j}^{\prime}e^{(v_{j},\xi)}+4k_{j-1}^{\prime}e^{(v_{j-1},\xi)}+2\sqrt{-1}\Lambda_{\omega_{X}}F_{h_{j}}\] \[\leq -4k_{j-1}^{\prime}(e^{\xi_{j}^{(+)}-\xi_{j-1}^{(-)}}-e^{\xi_{j}-\xi_{j-1}})\] \[\leq 0.\]
Since \(\xi^{\prime}\) is a \(V\)-valued \(C^{1,\alpha}\)-function, \(\xi_{j}^{(-)}-\xi_{j}^{\prime}\) is an upper semicontinuous function for each \(j=1,\ldots,r-1\). Therefore, combining this with the above inequality, we conclude that \(\xi_{j}^{(-)}-\xi_{j}^{\prime}\) is a subharmonic function for each \(j=1,\ldots,r-1\). Then by applying the maximum principle [7, Corollary 1.16] to \(\xi_{j}^{(-)}-\xi_{j}^{\prime}\), we have \(\xi_{j}^{(-)}-\xi_{j}^{\prime}\leq 0\) for each \(j=1,\ldots,r-1\). Therefore, in order to show that \(\xi^{\prime}\) is contained in \(\mathcal{C}\), all that remains is to prove that, for each \(j=1,\ldots,r-1\), \(\xi_{j}^{\prime}-\xi_{j}^{(+)}\) is a subharmonic function satisfying \(\xi_{j}^{\prime}-\xi_{j}^{(+)}\leq 0\). From (12), the following holds in the sense of the distribution:
\[\Delta_{\omega_{X}}(\xi_{j}^{\prime}-\xi_{j}^{(+)})\leq \ 4k_{j}^{\prime}e^{(v_{j},\xi)}-4k_{j-1}^{\prime}e^{(v_{j-1},\xi)}-2\sqrt{-1}\Lambda_{\omega_{X}}F_{h_{j}}\] \[-4k_{j}^{\prime}e^{\xi_{j+1}^{(+)}-\xi_{j}^{(-)}}+2\sqrt{-1}\Lambda_{\omega_{X}}F_{h_{j}}\] \[\leq -4k_{j}^{\prime}(e^{\xi_{j+1}^{(+)}-\xi_{j}^{(-)}}-e^{\xi_{j+1}-\xi_{j}})\] \[\leq 0.\]
Then by the same argument as above, we conclude that \(\xi_{j}^{\prime}-\xi_{j}^{(+)}\) is a subharmonic function. Again, by the maximum principle, we have \(\xi_{j}^{\prime}-\xi_{j}^{(+)}\leq 0\). Finally, we show the uniqueness of the weak solution to equation (25) that is contained in \(\mathcal{C}\). This follows from the maximum principle: let \(\xi^{\prime}=(\xi_{1}^{\prime},\ldots,\xi_{r}^{\prime}),\xi^{\prime\prime}=(\xi_{1}^{\prime\prime},\ldots,\xi_{r}^{\prime\prime})\in\mathcal{C}\) be weak solutions to equation (25). As we noted above, \(\xi^{\prime}\) and \(\xi^{\prime\prime}\) are \(V\)-valued \(C^{1,\alpha}\)-functions. Also, we have \(\Delta_{\omega_{X}}(\xi^{\prime}-\xi^{\prime\prime})=0\). Since we have
\[\xi^{(-)}-\xi^{(+)}\leq\xi^{\prime}-\xi^{\prime\prime}\leq\xi^{(+)}-\xi^{(-)}, \tag{26}\]
it holds that \(\lim_{z\to\zeta}(\xi^{\prime}(z)-\xi^{\prime\prime}(z))=0\) for all \(\zeta\in\partial Y\). Consequently, by the maximum principle (cf. [7, Section 1.2.2]), we have \(\xi_{j}^{\prime}=\xi_{j}^{\prime\prime}\) for all \(j=1,\ldots,r\). This establishes the desired result.
We introduce the following notation:
**Definition 26**.: We denote by \(S:\mathcal{C}\to\mathcal{C}\) the map mapping a \(\xi\in\mathcal{C}\) to the unique \(\xi^{\prime}\eqqcolon S(\xi)\) in Lemma 25.
The following holds:
**Lemma 27**.: _The map \(S\) defined in Definition 26 is a continuous map in the \(L^{1}\)-topology._
Proof.: Let \((\xi_{(k)})_{k\in\mathbb{N}}\in\mathcal{C}^{\mathbb{N}}\) be a sequence that converges to a \(\xi=(\xi_{1},\dots,\xi_{r})\in\mathcal{C}\) in the \(L^{1}\)-topology. We set \(\xi^{\prime}_{(k)}\coloneqq S(\xi_{(k)})\) for each \(k=1,2,\dots\). From Lemma 24, by extracting and relabeling, we can assume that \((\xi^{\prime}_{(k)})_{k\in\mathbb{N}}\) converges to a \(\xi^{\prime}=(\xi^{\prime}_{1},\dots,\xi^{\prime}_{r})\in\mathcal{C}\) in the \(L^{1}\)-topology. The assertion then follows from the continuity of the Laplace operator with respect to the \(L^{1}_{loc}\)-topology: passing to the limit in equation (25) (the nonlinear terms converge in \(L^{1}_{loc}\) since all the \(\xi_{(k)}\) are bounded between \(\xi^{(-)}\) and \(\xi^{(+)}\)) shows that \(\xi^{\prime}\) solves equation (25) associated with \(\xi\), so \(\xi^{\prime}=S(\xi)\) by the uniqueness in Lemma 25. Since every subsequence of \((S(\xi_{(k)}))_{k\in\mathbb{N}}\) therefore admits a further subsequence converging to \(S(\xi)\), the whole sequence converges to \(S(\xi)\) in the \(L^{1}\)-topology, which is the continuity of \(S\).
**Remark 28**.: Unlike the case of the higher-dimensional Monge-Ampere operator (see [7, pp. 154-155]), there is no need to show that \((\xi^{\prime}_{(k)})_{k\in\mathbb{N}}\), which has already been extracted and relabeled appropriately, converges to \(\xi^{\prime}\) in capacity (see [7, p.112, Definition 4.23]). However, if we additionally impose condition (24) on \(\xi^{(-)}\) and \(\xi^{(+)}\), then it is not difficult to show that \((\xi^{\prime}_{(k)})_{k\in\mathbb{N}}\) converges to \(\xi^{\prime}\) in capacity, as shown below: Let \(\xi^{\prime}_{(k),j}\) be the \(j\)-th component of \(\xi^{\prime}_{(k)}\) for each \(j=1,\dots,r\) and each \(k=1,2,\dots\). We show that for each \(j=1,\dots,r-1\), \((\xi^{\prime}_{(k),j})_{k\in\mathbb{N}}\) converges to \(\xi^{\prime}_{j}\) in capacity. For each Borel subset \(E\subseteq Y\), we denote by \(\operatorname{Cap}_{Y}(E)\) the capacity of \(E\) (see [7, p.108, Definition 4.16]). By using [7, Lemma 5.18], for each \(\delta>0\), we have
\[\operatorname{Cap}_{Y}(\{\xi^{\prime}_{j}-\xi^{\prime}_{(k),j} \geq 2\delta\}) =\operatorname{Cap}_{Y}(\{\xi^{\prime}_{j}-\xi^{(+)}_{j}-(\xi^{ \prime}_{(k),j}-\xi^{(+)}_{j})\geq 2\delta\})\] \[\leq\delta^{-1}\int_{\{\xi^{\prime}_{j}-\xi^{\prime}_{(k),j}\geq \delta\}}dd^{c}(\xi^{\prime}_{(k),j}-\xi^{(+)}_{j})\] \[=(2\pi\delta)^{-1}\int_{\{\xi^{\prime}_{j}-\xi^{\prime}_{(k),j} \geq\delta\}}(-\Delta_{\omega_{X}})(\xi^{\prime}_{(k),j}-\xi^{(+)}_{j})\omega _{X}\] \[\leq(2\pi\delta)^{-1}\int_{\{\xi^{\prime}_{j}-\xi^{\prime}_{(k),j} \geq\delta\}}(-\Delta_{\omega_{X}})(\xi^{(-)}_{j}-\xi^{(+)}_{j})\omega_{X}\] \[\leq(2\pi\delta)^{-1}\int_{\{\xi^{\prime}_{j}-\xi^{\prime}_{(k),j }\geq\delta\}}2C\omega_{X}\] \[\leq(2\pi\delta^{2})^{-1}\int\max\,\{\xi^{\prime}_{j}-\xi^{ \prime}_{(k),j},0\}\ 2C\omega_{X} \tag{27}\]
where \(C\) is a constant in (24), and we have used the inequality \(\Delta_{\omega_{X}}(\xi_{j}^{(-)}-\xi_{(k),j}^{\prime})\leq 0\). (27) converges to \(0\) as \(k\to\infty\). Therefore we have \(\lim_{k\to\infty}\operatorname{Cap}_{Y}(\{\xi_{j}^{\prime}-\xi_{(k),j}^{\prime} \geq 2\delta\})=0\). By swapping the roles of \(\xi_{j}^{\prime}\) and \(\xi_{(k),j}^{\prime}\), we can also show that \(\lim_{k\to\infty}\operatorname{Cap}_{Y}(\{\xi_{(k),j}^{\prime}-\xi_{j}^{\prime }\geq 2\delta\})=0\). Therefore, \((\xi_{(k)}^{\prime})_{k\in\mathbb{N}}\) converges to \(\xi^{\prime}\) in capacity.
Then we have the following:
**Lemma 29**.: _There exists a \(V\)-valued function \(\xi=(\xi_{1},\ldots,\xi_{r}):Y\to V\) that satisfies (a) and (b) in Theorem 1._
Proof.: It is easy to observe that \(\mathcal{C}\) is a convex set. From Lemma 24, \(\mathcal{C}\) is also a compact set with respect to the \(L^{1}\)-topology. Then by the Schauder fixed point theorem (cf. [21, p.143, Theorem 5.28]), the map \(S\) defined in Definition 26 has a fixed point \(\xi\) since \(S\) is a continuous map as shown in Lemma 27. From Lemma 25, the fixed point \(\xi\) is a \(V\)-valued \(C^{1,\alpha}\)-function. The \(V\)-valued function \(\xi\) solves equation (1) in the sense of the distribution since \(\xi\) is a fixed point of the map \(S\). It can also be verified that the fixed point \(\xi\) satisfies the boundary condition (b) since we have \(\xi^{(-)}\leq\xi\leq\xi^{(+)}\) and \(\lim_{z\to\zeta}\xi^{(-)}(z)=\lim_{z\to\zeta}\xi^{(+)}(z)=\eta(\zeta)\) for all \(\zeta\in\partial Y\).
The uniqueness of a solution \(\xi\) in Theorem 1 follows from the maximum principle:
**Lemma 30**.: _Let \(\xi=(\xi_{1},\ldots,\xi_{r})\) and \(\xi^{\prime}=(\xi_{1}^{\prime},\ldots,\xi_{r}^{\prime})\) be \(V\)-valued functions that satisfy (a) and (b) in Theorem 1. Then we have \(\xi=\xi^{\prime}\)._
Proof.: It follows from Proposition 10 that \(\log(\sum_{j=1}^{r}e^{\xi_{j}-\xi_{j}^{\prime}})\) is a subharmonic function. By applying the maximum principle [7, Corollary 1.16] to the subharmonic function \(\log(\sum_{j=1}^{r}e^{\xi_{j}-\xi_{j}^{\prime}})-\log r\), we conclude that \(\log(\sum_{j=1}^{r}e^{\xi_{j}-\xi_{j}^{\prime}})-\log r\) is a non-positive function. Since \(\sum_{j=1}^{r}(\xi_{j}-\xi_{j}^{\prime})=0\), it can be verified that \(\log(\sum_{j=1}^{r}e^{\xi_{j}-\xi_{j}^{\prime}})-\log r\) is also a non-negative function, so it vanishes identically. The equality case of the arithmetic-geometric mean inequality then forces the differences \(\xi_{j}-\xi_{j}^{\prime}\) to be independent of \(j\); since they sum to zero, \(\xi_{j}=\xi_{j}^{\prime}\) for all \(j=1,\ldots,r\), and thus we have the result.
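For clarity, the non-negativity used in the last step is the arithmetic-geometric mean inequality applied with \(x_{j}\coloneqq\xi_{j}-\xi_{j}^{\prime}\), so that \(\sum_{j=1}^{r}x_{j}=0\):

\[\frac{1}{r}\sum_{j=1}^{r}e^{x_{j}}\geq\Bigl(\prod_{j=1}^{r}e^{x_{j}}\Bigr)^{1/r}=e^{\frac{1}{r}\sum_{j=1}^{r}x_{j}}=1,\qquad\text{so}\qquad\log\Bigl(\sum_{j=1}^{r}e^{x_{j}}\Bigr)-\log r\geq 0,\]

with equality precisely when all the \(x_{j}\) coincide.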
From Lemma 29 and Lemma 30, we have Theorem 1.
**Acknowledgements.** I am grateful to Takahiro Aoi and Ryushi Goto for interesting discussions and stimulating conversations. I am also grateful to Tatsuya Tate for interesting discussions and stimulating conversations, and for helpful comments on the manuscript. I wish to express my gratitude to Hisashi Kasuya and Takashi Ono for their discussions on Simpson's papers.
I am indebted to Toshiaki Yachimura for valuable discussions and for providing invaluable information on the regularity theory of elliptic PDEs. It was through his assistance that I was able to prove Lemma 25. I would like to thank Yoshinori Hashimoto for interesting discussions, many stimulating conversations, and for answering my many fundamental questions about the Monge-Ampere equation, potential theory, and related topics. I wish to thank Takuro Mochizuki for his guidance on cyclic Higgs bundles and harmonic bundles, as well as for answering my many questions about them when I was beginning my research on cyclic Higgs bundles.
|
2305.19865 | Proof-of-work consensus by quantum sampling | Since its advent in 2011, boson sampling has been a preferred candidate for
demonstrating quantum advantage because of its simplicity and near-term
requirements compared to other quantum algorithms. We propose to use a variant,
called coarse-grained boson-sampling (CGBS), as a quantum Proof-of-Work (PoW)
scheme for blockchain consensus. The users perform boson sampling using input
states that depend on the current block information and commit their samples to
the network. Afterwards, CGBS strategies are determined which can be used to
both validate samples and reward successful miners. By combining rewards for
miners committing honest samples together with penalties for miners committing
dishonest samples, a Nash equilibrium is found that incentivizes honest nodes.
We provide numerical evidence that these validation tests are hard to spoof
classically without knowing the binning scheme ahead of time and show the
robustness of our protocol to small partial distinguishability of photons. The
scheme works for both Fock state boson sampling and Gaussian boson sampling and
provides dramatic speedup and energy savings relative to computation by
classical hardware. | Deepesh Singh, Gopikrishnan Muraleedharan, Boxiang Fu, Chen-Mou Cheng, Nicolas Roussy Newton, Peter P. Rohde, Gavin K. Brennen | 2023-05-31T13:58:40Z | http://arxiv.org/abs/2305.19865v3 | # Proof-of-work consensus by quantum sampling
###### Abstract
Since its advent in 2011, boson-sampling has been a preferred candidate for demonstrating quantum advantage because of its simplicity and near-term requirements compared to other quantum algorithms. We propose to use a variant, called coarse-grained boson-sampling (CGBS), as a quantum Proof-of-Work (PoW) scheme for blockchain consensus. The users perform boson-sampling using input states that depend on the current block information, and commit their samples to the network. Afterward, CGBS strategies are determined which can be used to both validate samples and to reward successful miners. By combining rewards to miners committing honest samples together with penalties to miners committing dishonest samples, a Nash equilibrium is found that incentivizes honest nodes. The scheme works for both Fock state boson sampling and Gaussian boson sampling and provides dramatic speedup and energy savings relative to computation by classical hardware.
## I Introduction
Blockchain technology relies on the ability of a network of non-cooperating participants to reach consensus on validating and verifying a new set of block-bundled transactions, in a setting without centralized authority. A consensus algorithm is a procedure through which all the peers of the blockchain network reach a common agreement about the present state of the distributed ledger. One of the best-tested consensus algorithms, which has demonstrated robustness and security, is Proof-of-Work (PoW) [1]. PoW relies on validating a proposed block of new transactions to be added to the blockchain by selecting and rewarding a successful "miner" who is the first to solve a computational puzzle. This puzzle involves a one-way function, i.e. a function that is easy to compute, and hence easy to verify, but hard to invert. Traditionally the chosen function is the inverse hashing problem, which by its structure makes the parameters of the problem dependent on the current block information, thus making pre-computation infeasible. Additionally, the problem is progress-free, meaning the probability of successfully mining a block at any given instant is independent of prior mining attempts. This means a miner's success probability essentially grows linearly with the time spent, or equivalently work expended, solving the problem. The latter feature ensures that the mining advantage is proportionate to a miner's hashing power.
There are, however, two issues that threaten to compromise continued usage of PoW consensus in a scalable manner. The first is energy consumption. Problems like inverse hashing admit fast processing, now at speeds of THash/s, by application-specific integrated circuits (ASICs). Unfortunately, the tremendous speed of these devices comes at the cost of large power consumption, and as the hashing power of the network grows, so does the energy cost per transaction. The reason is that for asset-based cryptocurrencies like Bitcoin, as the overall network hashing power grows, the difficulty of the one-way function is increased to maintain a constant transaction speed. Since new Bitcoin are introduced through the mining process, a constant transaction speed is desirable to maintain stability and to avoid inflationary pressures. As of May 2023, a single Bitcoin transaction had the equivalent energy consumption of an average U.S. household over 19.1 days (Digiconomist).
The energy consumption of PoW blockchains has several negative consequences. It contributes to climate change by generating large amounts of carbon emissions when the source of energy is non-renewable. Additionally, it creates a significant financial burden for miners, who must pay for the electricity and equipment required to mine blocks effectively. This can lead to the centralization of mining power in the hands of a few large mining pools, potentially compromising the network's security and decentralization.
Moreover, the energy consumption of PoW blockchains can be seen as wasteful and unnecessary, given that there are alternative consensus mechanisms, such as Proof-of-Stake (PoS), that require significantly less energy to operate. However, PoS has some other liabilities such as the plutocratic feature of mining power being dependent on the number of coins held by a miner, and vulnerability to other attack vectors like "long range" and "nothing at stake" attacks. As a result, there have been growing calls for the development of more sustainable and environmentally friendly blockchain technologies.
The second issue is that PoW assumes only classical computers are available as mining resources. Quantum computing technology, while only at the prototype stage now, is rapidly developing. Quantum computers running Grover's search algorithm [2] can achieve a quadratic speedup in solving unstructured problems like inverting one-way functions. This means that if they were integrated into PoW, the progress-free condition would no longer apply and the probability of solving the problem would grow super-linearly with the computational time spent¹. An adversarial network of future quantum computers performing PoW consensus will have radically different behaviour, such as probabilistic computing strategies and large fluctuations in the time to solve [3]. Workarounds can be found, such as using random beacons which interrupt the search progress of quantum computers by periodically announcing new puzzles to be solved, as suggested in Ref. [4]. However, as quantum computers speed up and are parallelized, the frequency of beacons will need to increase to avoid distortions in the consensus dynamics. A future-proofed consensus algorithm should take quantum processing into account as a core resource.
Footnote 1: Specifically the probability to solve in time \(t\) grows like \(p(t)=\sin^{2}(ct)\), where \(c=O(\sqrt{D/H})\), \(H\) is the size of the search domain for the one-way function, and \(D\) is the number of satisfying arguments.
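To illustrate the contrast between progress-free classical search and Grover-style quantum search described above, the following toy sketch (plain Python; the domain size \(H\), number of solutions \(D\), and the choice of setting the \(O(1)\) constant in \(c\) to one are all hypothetical) compares the two success probabilities as a function of the number of attempts.

```python
import math

# Toy comparison of mining-success probability versus time (hypothetical numbers).
# Classical brute-force search is progress-free: each attempt succeeds with probability D/H,
# so after t attempts p_classical(t) = 1 - (1 - D/H)**t, roughly linear for small t.
# A Grover-style quantum searcher instead has p_quantum(t) = sin^2(c*t) with c = O(sqrt(D/H)),
# so its success probability grows quadratically at early times.

H = 2**20               # size of the search domain (hypothetical, small for illustration)
D = 4                   # number of satisfying arguments (hypothetical)
c = math.sqrt(D / H)    # Grover angular rate, with the O(1) constant set to one

for t in [100, 200, 400, 800]:
    p_classical = 1 - (1 - D / H) ** t
    p_quantum = math.sin(c * t) ** 2
    print(f"t={t:4d}  classical={p_classical:.4f}  quantum={p_quantum:.4f}")
```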
As mentioned above, for unstructured search problems Grover's algorithm provides a quadratic advantage over the best-known classical algorithms. This gap between the classical and quantum algorithms' runtimes can be increased in more specific tasks like prime factorisation and discrete logarithm to provide exponential speedups [5; 6]. Achieving these exponential speedups, however, requires large-scale fault-tolerant quantum computers that will not be available for some time. Moreover, as discussed, such algorithms would violate the progress-free condition. This motivates the search for practical applications of the advantages provided by noisy intermediate-scale quantum (NISQ) devices [7].
We propose a new PoW consensus protocol based on boson-sampling. Boson-sampling was originally developed to demonstrate _quantum supremacy_, owing to its reduced resource requirements compared to other quantum algorithms [8]. Boson-samplers are specialized photonic devices that are restricted in the sense that they are neither capable of universal quantum computing nor error correctable, though proposals have been made to find practical applications in chemistry, many-body physics, and computer science [7]. We formulate a practical application of a boson-sampling variant called coarse-grained boson-sampling (CGBS) [9; 10]. This scheme involves the equal-size grouping of the output statistics of a boson-sampler into a fixed number of bins according to some given binning strategy.
The advantage provided by binning the output probability distribution is the polynomial number of samples required to verify a fundamental property of the distribution, as opposed to the exponentially many samples required when no binning is performed. While boson-samplers are not arbitrarily scalable owing to lack of error correction, we argue that the speedup provided is nonetheless dramatic enough to warrant their use for PoW consensus.
Photonic-based blockchains have been investigated before. Optical PoW [11] uses HeavyHash, a slight modification of the Bitcoin protocol, where a photonic mesh-based matrix-vector product is inserted in the middle of mining. This has already been integrated into the cryptocurrencies optical Bitcoin and Kaspa. Recently, a more time- and energy-efficient variant named LightHash has been tested on networks of up to 4 photons [12]. Both of these protocols use passive linear optics networks acting upon coherent state inputs which implement matrix multiplication on the vector of coherent amplitudes. It is conjectured that the photonic implementation of this matrix multiplication can achieve an order-of-magnitude speedup over traditional CPU hardware. They exploit the _classical_ speedup associated with photonic implementation of this operation and do not exploit any quantum advantage. While that method uses a multi-mode interferometer similar to what we describe in this work, it does not use intrinsically quantum states of light and in fact is a different form of classical computing using light. In contrast, our boson sampling method uses quantum resources with processes that become exponentially harder, in the number of photons, to simulate with classical hardware, whether photonic or not.
## II Background
### Blockchains
A blockchain is a decentralized and distributed ledger that stores transactions in a secure and transparent manner. The ledger consists of a chain of fixed-length blocks, each of which is verified by every node in the network. The network is decentralized, meaning no central authority exerts control; instead, a network of nodes maintains its integrity. Each block is then added to the blockchain once a decentralized consensus is reached. The whole process is illustrated in Fig. 1 and can be described as follows:
1. Transaction Verification: Transactions are sent to the network. Before a transaction can be included in a block, it must be validated by nodes on the network. Each node checks that the transaction is legitimate and that the sender has sufficient funds to complete the transaction.
2. Block Creation: Once a group of transactions is verified, they are bundled together into a block. The block contains a header, which includes the previous block's hash, a timestamp, and a _nonce_ (a random number).
3. Proof-of-Work: To mine the block, miners compete to solve a complex mathematical puzzle, known as Proof-of-Work (PoW). The first miner to solve the puzzle broadcasts their solution to the network, and the other nodes verify the solution. If the solution is correct, the miner is rewarded with newly minted cryptocurrency, and the block is added to the blockchain.
4. Consensus Mechanism: To maintain the integrity of the blockchain, the network must reach a consensus on the state of the ledger. In a decentralized blockchain network, this is achieved through a consensus mechanism, such as PoW or Proof-of-Stake (PoS). PoW requires miners to compete to solve a mathematical puzzle, while PoS relies on validators who hold a stake in the network to verify transactions.
5. Block Confirmation: Once a block is added to the blockchain, it cannot be altered or deleted. Other nodes on the network can confirm the block by verifying the hash of the previous block, ensuring that the chain is continuous and secure.
#### ii.1.1 One-way functions
Blockchain technology relies heavily on one-way functions as a critical component of its security infrastructure. One-way functions are mathematical functions that are easy to compute in one direction but difficult to reverse. The public-key cryptography used in blockchains today relies on pairs of related keys (public and private) generated by one-way functions. While it is easy to compute a public key from a private key, the reverse operation is computationally intractable. This makes private keys extremely difficult to guess or brute-force, thus ensuring the security of blockchain networks. Hash functions are another example of one-way functions with widespread cryptographic utility.
More precisely, one-way functions are easy to compute for all inputs in their domain, but hard to invert given the image of any unknown input. That is, given a function,
\[f(x)=y, \tag{1}\]
\(y\) is easy to compute for all inputs \(x\); however, computing \(x\) for a given \(y\) is hard. Computationally speaking, the notions of 'easy' and 'hard' refer to polynomial-time and super-polynomial-time algorithms in the input size, respectively. Therefore, in general, the inversion of one-way functions resides within the computational complexity class \(\mathbf{NP}\), since the verification of any pre-image is possible in polynomial time, unlike its explicit computation.
These one-way functions are of importance in various applications including cryptography and authentication protocols. Their existence is still an open conjecture and, if proven, would have serious computational-complexity-theoretic implications, including \(\mathbf{P}\neq\mathbf{NP}\); hence the interest in their discovery. Nevertheless, there are many favourable candidates for one-way functions, i.e. functions for which no polynomial-time inversion algorithms are known. It is important to note that no rigorous proof of the non-existence of these inversion algorithms exists.
#### ii.1.2 Hash functions
A general hash function is a one-way function that satisfies three main properties:
* Its input can be of any size.
* Its output is always of a fixed size.
* It should be easy to compute.
A cryptographic hash function has several additional requirements [13]:
* Collision-free: A hash function \(H\) is said to be collision resistant if it is infeasible to find two values, \(x\) and \(y\), where \(H(x)=H(y)\) and \(x\neq y\).
* Hiding: A hash function \(H\) is hiding if it is infeasible to find \(x\), given \(H(r\|x)\), where \(r\) is a secret value that is chosen from a probability distribution with high min-entropy (see the commitment-scheme sketch after this list).
* Puzzle friendliness: A hash function \(H\) is said to be puzzle-friendly if for every possible \(n\)-bit output value \(y\), if \(k\) is chosen from a distribution with high min-entropy, then it is infeasible to find \(x\) such that \(H(k\|x)=y\) in time significantly less than \(2^{n}\).

Figure 1: Blockchain architecture and the addition of new blocks.
In some existing classical blockchain implementations, notably Bitcoin, partial inverse hashing is employed for the purposes of PoW. Here the miners compete to find bitstrings that hash to an output string with some number of leading zeros. The number of required leading zeroes translates to the difficulty of solving this problem. Since hash functions are highly unstructured, the best classical approach to finding such solutions is using brute force to hash random input strings until by chance a satisfying output is found. Once found, it is trivial for other nodes to verify the solution by simply hashing it.
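To make the brute-force nature of this search concrete, the following sketch hashes successive nonces until the digest has the required number of leading zeros. This is only a simplified illustration (it is not Bitcoin's actual double-SHA-256 block format, and the `header` and `difficulty` values are arbitrary):

```python
import hashlib
import itertools

def mine(header: bytes, difficulty: int) -> int:
    """Brute-force a nonce so that SHA-256(header || nonce) starts with
    `difficulty` leading zero hex characters (a toy stand-in for PoW)."""
    target = "0" * difficulty
    for nonce in itertools.count():
        digest = hashlib.sha256(header + str(nonce).encode()).hexdigest()
        if digest.startswith(target):
            return nonce  # trivial for any other node to verify by re-hashing once

nonce = mine(b"example block header", difficulty=5)
print("found nonce:", nonce)
```

Verification is a single hash evaluation, whereas finding the nonce requires on average \(16^{\text{difficulty}}\) attempts, which is the asymmetry PoW exploits.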
State-binned boson-sampling (see Sec. II.2.3) was motivated as an attempt to construct a hash function - a one-way decision function - from the boson-sampling problem [10]. Note that such a definition for a hash function differs from conventional hash functions, as it is not in **NP**, since a classical verifier cannot efficiently verify the output to the hash given the input state.
Here we do not employ this full hash function construction directly, but taking inspiration from it employ the peak bin probability as a signature of the operation of a boson-sampling device. While a classical verifier is unable to verify the peak bin probability given the input state, independent quantum boson samplers will converge upon the same estimated peak bin probability. This is sufficient for the purposes of consensus, where samples provided by different parties can be cross-checked for convergence upon the same estimate, despite it not being classically efficient to determine whether that estimate is correct.
#### ii.1.3 Hash pointers and data structures
A regular pointer stores the memory address of data, making it easy to access. A hash pointer, on the other hand, stores the cryptographic hash of the data along with its memory address. Thus, a hash pointer points to data while also enabling verification of that data.
Moreover, a _linked list_ is a linear collection of data elements where each element contains both data and a pointer to the previous element; the order of the elements is not given by their physical placement in memory. A blockchain then is a linked list with a hash pointer to the previous element, which assists in the verification of the previous element's data.
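As a minimal illustration of such a hash-pointer chain (a toy sketch, not any production blockchain format), each block below stores its data together with the hash of the previous block, so tampering with any earlier block breaks every later link:

```python
import hashlib
from dataclasses import dataclass

@dataclass
class Block:
    data: str
    prev_hash: str  # hash pointer to the previous block

    def hash(self) -> str:
        return hashlib.sha256((self.data + self.prev_hash).encode()).hexdigest()

def append(chain: list, data: str) -> None:
    prev = chain[-1].hash() if chain else "0" * 64  # genesis convention
    chain.append(Block(data, prev))

def verify(chain: list) -> bool:
    """Check that every stored hash pointer matches the previous block's hash."""
    return all(chain[i].prev_hash == chain[i - 1].hash() for i in range(1, len(chain)))

chain: list = []
for tx in ["tx1", "tx2", "tx3"]:
    append(chain, tx)
print(verify(chain))          # True
chain[0].data = "tampered"    # altering old data...
print(verify(chain))          # ...is detected: False
```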
### Boson-sampling
Boson-sampling [8; 14] is the problem of sampling multi-mode photo-statistics at the output of a randomised optical interferometer. This problem constitutes a noisy intermediate scale quantum (NISQ) [15] protocol, naturally suited to photonic implementation. Like other NISQ protocols, boson sampling is not believed to be universal for quantum computation, nor does it rely on error correction, thereby limiting scalability. Nonetheless, it has been shown2 to be a classically inefficient yet quantum mechanically efficient protocol, making it suitable for demonstrating _quantum supremacy_, which is now believed to have been achieved [16; 17].
Footnote 2: Under reasonable complexity-theoretic assumptions.
Unlike _decision problems_, which provide a definitive answer to a question, boson-sampling is a _sampling problem_ where the goal is to take measurement samples from the large superposition state exiting the device.
Since boson-sampling is not an **NP** problem [18], the full problem cannot be efficiently verified by classical or quantum computers. Indeed, even another identical boson sampler cannot be used for verification since results are probabilistic and in general unique, ruling out a direct comparison of results as a means of verification. Nonetheless, restricted versions of the problem such as coarse-grained boson sampling, described below, can be used for verification.
#### ii.2.1 Fundamentals
The general setup for the boson sampling problem is illustrated in Fig. 2. We take \(M\) optical modes of which \(N\) are initialised with the single-photon state and \(M-N\) with the vacuum state at the input,
\[|S\rangle =|1\rangle^{\otimes N}\otimes|0\rangle^{\otimes M-N}\] \[=\hat{a}_{1}^{\dagger}\ldots\hat{a}_{N}^{\dagger}|0\rangle^{ \otimes M}, \tag{2}\]
where \(\hat{a}_{i}^{\dagger}\) is the photonic creation operator on the \(i\)th mode. Choosing \(M\geq O(N^{2})\) ensures that with a high likelihood the output state remains in the anti-bunched regime whereby modes are occupied by at most one photon. Hence, such samples may be represented as \(M\)-bit binary strings.
The input state is evolved via passive linear optics comprising beamsplitters and phase-shifters, implementing the Heisenberg transformation on the photonic creation operators,
\[\hat{U}\hat{a}_{i}^{\dagger}\hat{U}^{\dagger}\rightarrow\sum_{j=1}^{M}U_{i,j} \hat{a}_{j}^{\dagger}, \tag{3}\]
where \(U\) is the \(M\times M\) unitary matrix representing the multi-mode linear optics transformation 3. That is, each
input photonic creation operator is mapped to a linear combination of creation operators over the output modes.
The linear optics transformation \(U\) is chosen uniformly at random from the Haar measure, essential to the underlying theoretical complexity proof. It was shown by [19] that any \(M\times M\) linear optics transformation of the form shown in Eq. 3 can be decomposed into a network of at most \(O(M^{2})\) beamsplitters and phase-shifters, ensuring that efficient physical implementation is always possible. As presented in Fig. 2, the number of detectors equals the number of modes \(M\). In practice, the number of detectors can be reduced by exploiting multiplexing in other degrees of freedom, such as the temporal degree of freedom. For example, in the architecture presented in Ref. [20], where modes are encoded temporally, a single time-resolved detector is sufficient for detecting and distinguishing between all modes.
The output state takes the general form,
\[|\psi\rangle_{\text{out}} =\left[\prod_{i=1}^{N}\sum_{j=1}^{M}U_{i,j}\hat{a}_{j}^{\dagger} \right]|0\rangle^{\otimes M} \tag{4}\] \[=\sum_{k=1}^{|Y|}\alpha_{k}|Y_{k}\rangle,\]
where \(|Y_{k}\rangle=|y_{1}^{(k)},\ldots,y_{M}^{(k)}\rangle\) denotes the occupation number representation of the \(k\)th term in the superposition with \(y_{i}^{(k)}\) photons in the \(i\)th mode, and \(\alpha_{k}\) is the respective quantum amplitude, where for normalisation,
\[\sum_{k=1}^{|Y|}|\alpha_{k}|^{2}=1. \tag{5}\]
The number of terms in the superposition is given by,
\[|Y|=\binom{M+N-1}{N}, \tag{6}\]
which grows super-exponentially with \(N\) in the \(M\geq O(N^{2})\) regime. Since we are restricted to measuring a number of samples polynomial in \(N\) from an exponentially large sample space, we are effectively guaranteed to never measure the same output configuration multiple times. Hence, the boson-sampling problem is _not_ to reconstruct the full photon-number distribution given in Eq. 4, but rather to incompletely sample from it.
In the lossless case, the total photon number is conserved. Hence,
\[\sum_{i=1}^{M}x_{i}=\sum_{i=1}^{M}y_{i}^{(k)}=N\ \forall\ X,Y,k, \tag{7}\]
where \(\left|X\right\rangle=\left|x_{1},\ldots,x_{M}\right\rangle\) represents the occupation number representation of the input state.
The amplitudes in the output superposition state are given by,
\[\alpha_{k}=\langle Y_{k}|\hat{U}|X\rangle=\frac{\text{Per}(U_{X,Y_{k}})}{ \sqrt{\prod_{i=1}^{M}x_{i}!y_{i}^{(k)}!}}, \tag{8}\]
where \(\text{Per}(\cdot)\) denotes the matrix permanent, and \(U_{X,Y}\) is an \(N\times N\) sub-matrix of \(U\) composed by taking \(x_{i}\) copies of each row and \(y_{i}^{(k)}\) copies of each column of \(U\). The permanent arises from the combinatorics associated with the multinomial expansion of Eq. 4, which effectively sums the amplitudes over all possible paths input photons \(X\) may take to arrive at a given output configuration \(Y_{k}\).
The probability of measuring a given output configuration \(Y_{k}\) is simply,
\[\text{Pr}(Y_{k})=|\alpha_{k}|^{2}. \tag{9}\]
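For a very small instance these quantities can be evaluated by brute force. The sketch below is illustrative only: it assumes SciPy is available for drawing a Haar-random unitary and uses a naive \(O(N!)\) permanent, so it is limited to a handful of photons. It computes \(\alpha_k\) and \(\Pr(Y_k)\) directly from Eqs. 8-9:

```python
import numpy as np
from itertools import permutations
from math import factorial
from scipy.stats import unitary_group

def permanent(a: np.ndarray) -> complex:
    """Naive O(n!) permanent, fine for the tiny examples used here."""
    n = a.shape[0]
    return sum(np.prod([a[i, p[i]] for i in range(n)]) for p in permutations(range(n)))

def output_probability(U, x, y):
    """Pr(Y) for input occupations x and output occupations y (Eqs. 8-9)."""
    rows = [i for i, xi in enumerate(x) for _ in range(xi)]
    cols = [j for j, yj in enumerate(y) for _ in range(yj)]
    U_xy = U[np.ix_(rows, cols)]
    norm = np.prod([factorial(v) for v in list(x) + list(y)])
    alpha = permanent(U_xy) / np.sqrt(norm)
    return abs(alpha) ** 2

M, N = 4, 2
U = unitary_group.rvs(M)      # Haar-random interferometer
x = [1, 1, 0, 0]              # two photons injected in the first two modes
y = [0, 1, 1, 0]              # one possible collision-free output configuration
print(output_probability(U, x, y))
```

Summing this probability over all output configurations \(Y_k\) recovers 1 in the lossless case, which is a useful sanity check for such toy instances.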
In lossy systems with uniform per-photon loss \(\eta\), all probabilities acquire an additional factor of \(\eta^{N}\) upon post-selecting on a total of \(N\) measured photons,
\[\text{Pr}(Y_{k})=\eta^{N}|\alpha_{k}|^{2}. \tag{10}\]
The overall success probability of the device is similarly,
\[\text{Pr}_{\text{success}}=\eta^{N}. \tag{11}\]
Calculating matrix permanents is **#P**-hard in general, a complexity class even harder than **NP**-hard, from which the classical hardness of this sampling problem arises. It should however be noted that boson-sampling does not let us efficiently _calculate_ matrix permanents as this would require knowing individual amplitudes \(\alpha_{k}\). The \(\alpha_{k}\) amplitudes cannot be efficiently measured since we are only able to sample a polynomial subset of an exponentially large sample space, effectively imposing binary accuracy as any output configuration is unlikely to be measured more than once.
Figure 2: Illustration of the use of a boson-sampling device for blockchain consensus. Initially, \(N\) photons are incident in the first \(N\) modes, with the remaining \(M-N\) modes in the vacuum state. The modes then undergo a permutation \(\Pi\) dependent on the block header information, which in practice would be accomplished by simply permuting the locations of the single-photon inputs. The photons then pass through an interferometer circuit of depth \(M\) described by unitary \(U\). Finally, the photons are detected at the \(M\) output ports providing a measurement record of the sample.
The class of sampling problems that can be efficiently solved on a universal quantum computer is defined as \(\mathbf{SampBQP}\). Boson sampling is not universal and is defined by its own complexity class \(\mathbf{BosonSampP}\), which is (likely strictly) contained in \(\mathbf{SampBQP}\). Thus, \(\mathbf{BosonSampP}\subseteq\mathbf{SampBQP}\). Boson sampling is also not believed to be universal for classical sampling problems, \(\mathbf{SampP}\); the two classes are believed to be incomparable.
The complexity proof of boson-sampling presented in [8] is not a direct proof per se, but rather a proof that if boson-sampling were efficiently classically simulatable this would have complexity theoretic implications considered highly unlikely, although not proven. This effectively reduces the argument to one that has been well-studied. Specifically, it was shown using the results in [21] and other arguments that efficient classical simulation of the boson-sampling problem, including approximate boson-sampling, would imply a collapse of the polynomial hierarchy, \(\mathbf{PH}\), to the third level. It is important to note that, for the case of approximate boson-sampling problem, there are additional conjectures that are assumed to be true for the complexity results [8]. The polynomial hierarchy is an oracle-based generalisation of the complexity classes \(\mathbf{P}\) and \(\mathbf{NP}\), where an _oracle_ is a theoretical device that can be queried to spontaneously provide solutions to problems in a given complexity class. \(\mathbf{P}\) and \(\mathbf{NP}\) are contained in the zeroth and first levels of \(\mathbf{PH}\) respectively. An \(\mathbf{NP}\) device with access to an \(\mathbf{NP}\) oracle is denoted \(\mathbf{NP}^{\mathbf{NP}}\), which is contained in the second level of \(\mathbf{PH}\). This oracle-based definition generalises to form the full polynomial hierarchy. In the same way that it is strongly believed, but not proven, that \(\mathbf{P}\neq\mathbf{NP}\), it is firmly believed, but not proven, that all levels of \(\mathbf{PH}\) are distinct. The boson-sampling complexity proof shows that if boson-sampling could be efficiently classically simulated, this would imply a _collapse_ in \(\mathbf{PH}\), whereby levels are not distinct. Thus, if it is the case the levels of \(\mathbf{PH}\)_are_ distinct -- strongly believed to be the case -- boson-sampling is a classically hard problem.
#### ii.2.2 Mode-binned boson-sampling
Consider an \(N\)-photon, \(M\)-mode boson-sampling experiment where the output modes are arranged in \(d^{(\mathsf{mb})}\) bins labelled \(\mathsf{bin}_{1}^{(\mathsf{mb})},\mathsf{bin}_{2}^{(\mathsf{mb})},\ldots,\mathsf{bin}_{d^{(\mathsf{mb})}}^{(\mathsf{mb})}\). Given a linear optical unitary \(\hat{U}\) on \(M\) modes, let \(P(\mathbf{n})\) be the probability of measuring the multi-photon binned number output described by the output vector \(\mathbf{n}=(n_{1},n_{2},\ldots,n_{d^{(\mathsf{mb})}})\), with \(n_{i}\) photons in \(\mathsf{bin}_{i}\). It was shown in [22] that this distribution can be expressed as the discrete Fourier transform over the characteristic function,
\[P^{(\mathsf{mb})}(\mathbf{n})=\frac{1}{(N+1)^{d^{(\mathsf{mb})}}}\sum_{\mathbf{c}\in\mathbb{Z}_{N+1}^{d^{(\mathsf{mb})}}}\chi\left(\frac{2\pi\mathbf{c}}{N+1}\right)e^{-i\frac{2\pi\mathbf{c}\cdot\mathbf{n}}{N+1}}, \tag{12}\]
where,
\[\chi(\mathbf{s})=\langle\Psi_{\mathrm{in}}|\hat{U}^{\dagger}e^{i2\pi\mathbf{ s}\cdot\hat{\mathbf{N}}_{d^{(\mathsf{nb})}}}\hat{U}|\Psi_{\mathrm{in}}\rangle, \tag{13}\]
and the vector of binned number operators is,
\[\hat{\mathbf{N}}_{d^{(\mathsf{mb})}}=\left(\sum_{j_{1}\in\mathsf{bin}_{1}^{(\mathsf{mb})}}\hat{n}_{j_{1}},\ldots,\sum_{j_{d^{(\mathsf{mb})}}\in\mathsf{bin}_{d^{(\mathsf{mb})}}^{(\mathsf{mb})}}\hat{n}_{j_{d^{(\mathsf{mb})}}}\right). \tag{14}\]
The characteristic function can be computed directly as a matrix permanent,
\[\chi(\mathbf{s})=\mathrm{Per}(V_{N}(\mathbf{s})), \tag{15}\]
with,
\[V(\mathbf{s})=U^{\dagger}D(\mathbf{s})U, \tag{16}\]
where the diagonal matrix \(D(\mathbf{s})=\prod_{j=1}^{d^{(\mathsf{mb})}}D^{(j)}(s_{j})\) and
\[[D^{(j)}(s_{j})]_{u,v}=\left\{\begin{array}{ccc}1&\text{if}&u=v\text{ and }u\not\in\mathsf{bin}_{j}^{(\mathsf{mb})}\\ e^{is_{j}}&\text{if}&u=v\text{ and }u\in\mathsf{bin}_{j}^{(\mathsf{mb})}\\ 0&\text{if}&u\neq v\end{array}\right.. \tag{17}\]
Here \(V_{N}(\mathbf{s})\) means taking the \(N\times N\) matrix formed from the \(N\) rows and \(N\) columns of the \(M\times M\) matrix \(V\) according to the mode location of single-photon inputs in the input vector \(|\Psi_{\mathrm{in}}\rangle\).
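As an illustration of Eqs. 15-17, the characteristic function can be evaluated directly for a tiny instance by building \(D(\mathbf{s})\), \(V(\mathbf{s})\) and the sub-matrix \(V_N(\mathbf{s})\). This is a brute-force sketch only: the `permanent` helper is naive, SciPy is assumed for the Haar-random unitary, and the bin assignment and phase values are arbitrary choices:

```python
import numpy as np
from itertools import permutations
from scipy.stats import unitary_group

def permanent(a):
    n = a.shape[0]
    return sum(np.prod([a[i, p[i]] for i in range(n)]) for p in permutations(range(n)))

def characteristic_fn(U, bins, input_modes, s):
    """chi(s) = Per(V_N(s)) with V(s) = U^dag D(s) U (Eqs. 15-17)."""
    M = U.shape[0]
    d = np.ones(M, dtype=complex)
    for j, bin_j in enumerate(bins):          # phase e^{i s_j} on every mode in bin_j
        for mode in bin_j:
            d[mode] = np.exp(1j * s[j])
    V = U.conj().T @ np.diag(d) @ U
    VN = V[np.ix_(input_modes, input_modes)]  # rows/cols of the single-photon inputs
    return permanent(VN)

M, N = 4, 2
U = unitary_group.rvs(M)
bins = [[0, 1], [2, 3]]                       # two mode bins (arbitrary choice)
input_modes = [0, 1]                          # photons injected in the first two modes
print(characteristic_fn(U, bins, input_modes, s=[0.3, 1.1]))
```

Evaluating this function on the grid of phases \(2\pi\mathbf{c}/(N+1)\) and applying the discrete Fourier transform of Eq. 12 then yields the exact mode-binned distribution for such small examples.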
By Eq. 12, the mode-binned probability distribution can be computed by evaluating \((N+1)^{d^{(\mathsf{mb})}}\) permanents. To exactly compute the permanent of an \(N\times N\) matrix requires \(O(N2^{N})\) elementary operations using Ryser's algorithm, but if we only demand a polynomial additive approximation then a cheaper computational method is available. We can use the Gurvits approximation which allows for classical estimation of the permanent of a complex \(N\times N\) matrix to within additive error \(\delta\) in \(O(N^{2}/\delta^{2})\) operations. The algorithm works by sampling random binary vectors and computing a Glynn estimator (Appendix A). The number of random samples \(m\) needed to approximate \(\chi(\mathbf{s})\) to within \(\delta\) with probability at least \(p\) is,
\[m=\frac{2}{\delta^{2}}\ln(2/(1-p)), \tag{18}\]
and each Glynn estimator can be computed in \(N^{2}\) elementary steps. We use now the definition of total variation distance between two distributions with support in
some domain \(D\) as,
\[\mathcal{D}^{\rm(tv)}(P,Q)\equiv\frac{1}{2}\sum_{\mathbf{x}\in D}|P(\mathbf{x})-Q (\mathbf{x})|. \tag{19}\]
It is shown in Ref. [22] that by choosing
\[\delta\leq\frac{\beta}{(N+1)^{d^{\rm(mb)}/2}}, \tag{20}\]
an estimate \(\widehat{P^{\rm(mb)}}(\mathbf{n})\) of the mode-binned distribution can be obtained such that \(\mathcal{D}^{\rm(tv)}(\widehat{P^{\rm(mb)}},P^{\rm(mb)})\leq\beta\). The number of elementary operations to compute this estimate is5,
Footnote 5: We ignore the cost to compute the \(M\times M\) matrices \(V(\mathbf{s})\) as this could be pre-computed for all \(\mathbf{s}\) since we assume a fixed unitary \(U\) in the protocol to follow.
\[\frac{2\ln(2/(1-p))N^{2d^{\rm(mb)}+2}\log(N)}{\beta^{2}}. \tag{21}\]
For a fixed \(d^{\rm(mb)}\), this provides a classical polynomial time in \(N\) approximation to the mode-binned distribution. Regarding the number of quantum samples needed, it has been shown [23] that if one has the means to draw samples from a distribution \(Q\), the number of samples \(N_{\rm tot}\) needed to distinguish \(Q\) from another distribution \(P\) is,
\[\frac{c\sqrt{|D|}}{\mathcal{D}^{\rm(tv)}(Q,P)^{2}}. \tag{22}\]
Here, choosing the constant \(c=2^{16}\) assures that the test succeeds with probability at least \(3/4\). For the mode-binned boson-sampling distribution, we can choose \(Q\) to be the distribution \(P^{\rm(mb)}_{\tt BS}(\mathbf{n})\) from which the nodes are sampling, and \(P\) to be the estimate of the true distribution \(\widehat{P^{\rm(mb)}}(\mathbf{n})\). The dimension \(|D|\) will be the total number of ways \(N\) photons can be put in \(d^{\rm(mb)}\) bins. This is given by,
\[|D|=\binom{N+d^{\rm(mb)}-1}{N}. \tag{23}\]
We want to guarantee that the following cases are rejected,
\[\mathcal{D}^{\rm(tv)}(P^{\rm(mb)}(\mathbf{n}),P^{\rm(mb)}_{\tt BS}(\mathbf{n}))\geq\beta. \tag{24}\]
Since the total variation distance is a distance metric, we can write,
\[\mathcal{D}^{\rm(tv)}(P^{\rm(mb)}(\mathbf{n}),P^{\rm(mb)}_{\tt BS}(\mathbf{n})) \tag{25}\] \[\geq\mathcal{D}^{\rm(tv)}(P^{\rm(mb)}_{\tt BS}(\mathbf{n}),\widehat{P^{\rm(mb)}}(\mathbf{n}))-\mathcal{D}^{\rm(tv)}(P^{\rm(mb)}(\mathbf{n}),\widehat{P^{\rm(mb)}}(\mathbf{n}))\] \[\geq\mathcal{D}^{\rm(tv)}(P^{\rm(mb)}_{\tt BS}(\mathbf{n}),\widehat{P^{\rm(mb)}}(\mathbf{n}))-\beta,\]
where we have used the fact that \(\mathcal{D}^{\rm(tv)}(P^{\rm(mb)},\widehat{P^{\rm(mb)}}(\mathbf{n}))\leq\beta\). So in order to reject the cases in Eq. 24, the following has to be true,
\[\mathcal{D}^{\rm(tv)}(\widehat{P^{\rm(mb)}}(\mathbf{n}),P^{\rm(mb)}_{\tt BS}(\mathbf{n}))\geq 2\beta. \tag{26}\]
The number of samples needed to distinguish the estimate \(\widehat{P^{\rm(mb)}}\) from a distribution \(P^{\rm(mb)}_{\tt BS}\) that is more than \(2\beta\) away in total variation distance is,
\[N^{\rm(mb)}_{\rm tot}=2^{14}\frac{\sqrt{\binom{N+d^{\rm(mb)}-1}{N}}}{\beta^{2}}. \tag{27}\]
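The Gurvits estimation step underlying this validation bound can be sketched as follows. This is a minimal illustration of the Glynn-estimator averaging described above (the precise conventions are those of Appendix A, which we only approximate here): averaging the Glynn quantity over random \(\pm 1\) vectors gives an unbiased estimate of the permanent, with additive error shrinking as the number of samples grows.

```python
import numpy as np
from itertools import permutations

def glynn_estimate(A: np.ndarray, num_samples: int, rng=None) -> complex:
    """Gurvits-style additive estimate of Per(A): average the Glynn quantity
    prod_i(x_i) * prod_i(sum_j A_ij x_j) over random x in {-1,+1}^N."""
    rng = rng or np.random.default_rng()
    n = A.shape[0]
    total = 0.0 + 0.0j
    for _ in range(num_samples):
        x = rng.choice([-1.0, 1.0], size=n)
        total += np.prod(x) * np.prod(A @ x)
    return total / num_samples

def permanent(a):
    """Exact permanent (naive), used only to check the estimate on a tiny matrix."""
    n = a.shape[0]
    return sum(np.prod([a[i, p[i]] for i in range(n)]) for p in permutations(range(n)))

A = (np.random.randn(4, 4) + 1j * np.random.randn(4, 4)) / 2
print("exact   :", permanent(A))
print("estimate:", glynn_estimate(A, num_samples=200000))
```

Each sample costs \(O(N^{2})\) operations, so the total cost matches the \(O(N^{2}/\delta^{2})\) scaling quoted above.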
#### ii.2.3 State-binned boson-sampling
An alternative to the above procedure where bins are defined by sets of output modes is to bin according to sets of multimode Fock states. For an \(N\)-photon input state in an \(M\)-mode unitary \(U\), the number of possible output configurations is given by \(|Y|\) as defined in Eq. 6. State-binned boson sampling then concerns the binning of this \(|Y|\)-dimensional Hilbert space into \(d^{\rm(sb)}\) bins.
For a given boson-sampling experiment, the output samples are essentially the \(|Y_{k}\rangle\) configuration vectors as defined in Eq. 4, where \(1\leq k\leq\binom{N+M-1}{N}\). However, for the same boson-sampling experiment, the state-binned samples into \(d^{\rm(sb)}\) bins are given by the \(|\mathrm{bin}^{\rm(sb)}_{l}\rangle\) configuration vectors, where
\[|\mathrm{bin}^{\rm(sb)}_{l}\rangle=\bigcup_{j}|Y_{j}\rangle, \tag{28}\]
and the union over \(j\) can be chosen according to any agreed-upon strategy such that \(1\leq l\leq d^{\rm(sb)}\). In this paper, we consider the case where all bins contain an equal number of configuration vectors.
Given any binning strategy, the bin with the maximum probability is defined as \(\mathrm{bin}^{\rm(sb)}_{\tt true}\), and the corresponding peak bin probability (PBP) is defined as \(\mu_{\tt true}\). If the complete output bin probability distribution is unknown, the PBP \(\mu_{\tt net}\) of the incomplete probability distribution serves as an estimate of \(\mu_{\tt true}\). That is, assuming that the honest nodes on the blockchain network provide enough samples for the same boson-sampling experiment, the PBP \(\mu_{\tt net}\) will be a close approximation to the PBP \(\mu_{\tt true}\) of the binned boson-sampling problem.
Specifically, we wish to ensure that,
\[\Pr[\mu_{\tt net}-\epsilon/2<\mu_{\tt true}<\mu_{\tt net}+\epsilon/2]>1-\gamma, \tag{29}\]
for some accuracy \(\epsilon<1/d^{\rm(sb)}\ll 1\) where \(\gamma\ll 1\) determines the \(100(1-\gamma)\%\) confidence interval for \(\mu_{\tt true}\). It was shown in Ref. [10] that this can be achieved for perfect boson sampling using a sample size of at least
\[N^{\rm(sb)}_{\rm tot}=\frac{12d^{\rm(sb)}}{\epsilon^{2}}\ln(2\gamma^{-1}). \tag{30}\]
Using a bootstrap technique obtained by resampling provided samples from the boson-sampling distribution, it is shown [10] that the required accuracy can be obtained when \(2d^{(\texttt{sb})}\epsilon^{0.8}\lesssim 0.1\), in which case, if we demand a low uncertainty \(\gamma=10^{-4}\), the number of required samples is
\[N_{\text{tot}}^{(\texttt{sb})}=1.8\times 10^{5}d^{(\texttt{sb})^{7/2}}. \tag{2.31}\]
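To make the PBP estimate concrete, the following sketch bins a collection of output configurations into \(d^{(\mathrm{sb})}\) equal-sized state bins and reports the peak bin probability. It is illustrative only: the samples and the binning permutation are randomly generated here rather than drawn from a real boson sampler or a beacon.

```python
import numpy as np
from itertools import combinations_with_replacement

def state_space(M: int, N: int):
    """All occupation vectors of N photons in M modes, in lexicographic order."""
    configs = set()
    for modes in combinations_with_replacement(range(M), N):
        occ = [0] * M
        for m in modes:
            occ[m] += 1
        configs.add(tuple(occ))
    return sorted(configs)

def peak_bin_probability(samples, M, N, d_sb, perm):
    """Bin samples into d_sb equal-sized state bins (after permuting the state
    space with `perm`) and return the peak bin probability."""
    Y = state_space(M, N)
    index = {cfg: i for i, cfg in enumerate(Y)}
    bin_size = len(Y) // d_sb
    counts = np.zeros(d_sb)
    for s in samples:
        k = perm[index[tuple(s)]]              # position after the beacon permutation
        counts[min(k // bin_size, d_sb - 1)] += 1
    return counts.max() / len(samples)

M, N, d_sb = 6, 2, 7
rng = np.random.default_rng(0)
Y = state_space(M, N)                          # 21 configurations for M=6, N=2
perm = rng.permutation(len(Y))
samples = [Y[i] for i in rng.integers(0, len(Y), size=500)]  # placeholder samples
print(peak_bin_probability(samples, M, N, d_sb, perm))
```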
### Variation of the protocol using Gaussian Boson-Sampling
While the original boson-sampling protocol described above is based on photon-number states, variants based on alternate types of input states have been described [24; 25]. Most notably, Gaussian boson-sampling [26], where inputs are squeezed vacuum states, has gained a lot of traction amongst experimental realisations owing to the relative ease and efficiency of preparing such states. Many of the protocols for photon generation were already making use of Gaussian states and post-selection, so the complexity of sampling from the output state when the input state is a Gaussian state was studied in detail [26]. Gaussian states can be characterised by their mean and variance. The simplest Gaussian states are coherent states. It is interesting to note that there is no quantum advantage in using coherent states as input states for boson sampling. In this variant of boson sampling, input states are taken to be squeezed vacuum states. The squeezing operator is given by,
\[\hat{S}(z)=\exp\left[\frac{1}{2}(z^{*}\hat{a}^{2}-z\hat{a}^{\dagger 2})\right],\ z =re^{i\theta}. \tag{2.32}\]
Let us assume a Gaussian Boson-Sampling setup with squeezed vacuum states in \(N\) of \(M\) modes and vacuum in the remaining \(M-N\) modes. The initial state is,
\[|\psi_{\text{in}}\rangle=\prod_{j=1}^{N}\hat{S}_{j}(r_{j})|0\rangle, \tag{2.33}\]
where \(r_{j}\) is the squeezing parameter for the \(j\)th mode, which is assumed to be real for simplicity. The symplectic transformation corresponding to the squeezing operations is
\[S=\begin{pmatrix}\oplus_{j=1}^{M}\cosh r_{j}&\oplus_{j=1}^{M}\sinh r_{j}\\ \oplus_{j=1}^{M}\sinh r_{j}&\oplus_{j=1}^{M}\cosh r_{j}\end{pmatrix}. \tag{2.34}\]
Then the covariance matrix for the output state after the input state passes through the interferometer described by \(U\) is
\[\sigma=\frac{1}{2}\begin{pmatrix}U&0\\ 0&U^{*}\end{pmatrix}SS^{\dagger}\begin{pmatrix}U^{\dagger}&0\\ 0&U^{T}\end{pmatrix}. \tag{2.35}\]
Figure 3: Plots showing the output probability distribution of a Haar random boson-sampling device with two photons in six modes, i.e. \(N=2\) and \(M=6\), for which a total of \(\binom{6+2-1}{2}\), i.e. 21 output photon configurations are possible. (a) BS distribution without any binning. The x-axis shows the different ways in which two photons can exit the six modes of the boson sampler and the corresponding probabilities of these configurations. (b) State-binned distribution of the same experiment where the 21-dimensional output Hilbert space is binned into \(d^{(\texttt{sb})}=7\) bins, each bin containing three configurations chosen by the colour code as visible in both (a) and (b). Note that \(\texttt{bin}_{\texttt{i}}^{(\texttt{sb})}\) has the maximum probability of \(\mu_{\texttt{true}}=0.26\). (c) Mode-binned distribution of the same experiment where the modes are grouped into \(d^{(\texttt{mb})}=3\) bins, each mode bin containing two consecutive modes. A total of \(\binom{3+2-1}{2}\), i.e. 6 output photon configurations are possible for this mode-binning.
Now let the particular measurement record of photon number counts be \(Y_{k}=(y_{1}^{(k)},\ldots,y_{M}^{(k)})\). Then the probability of finding that record is given by,
\[\Pr(\mathrm{Y}_{k}) =|\sigma_{Q}|^{-1/2}|\mathrm{Haf}(\mathrm{B}_{\mathrm{Y}_{k}})|^{2},\] \[\sigma_{Q} =\sigma+\frac{1}{2}\mathbbm{1}_{2M}. \tag{36}\]
Here the matrix \(B_{Y_{k}}\) is constructed from the matrix
\[B=U(\oplus_{j=1}^{M}\tanh r_{j})U^{T}, \tag{37}\]
and is determined as follows. If \(y_{i}=0\) then rows and columns \(i\) of matrix \(B\) are deleted, otherwise the rows and columns are repeated \(y_{i}\) times. \(\mathrm{Haf}(\cdot)\) denotes the matrix Hafnian. Similar to the permanent, the Hafnian of a general matrix is also **#P**-hard to calculate. It has been shown that sampling from the output state is also hard in the case of Gaussian boson sampling.
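For intuition, the Hafnian can be computed by brute force for very small matrices by summing over perfect matchings. The following naive sketch is exponentially slow and is only meant to unpack the definition used in Eq. 36:

```python
import numpy as np

def hafnian(A: np.ndarray) -> complex:
    """Naive Hafnian of a symmetric 2n x 2n matrix: sum over perfect matchings."""
    n = A.shape[0]
    if n == 0:
        return 1.0
    idx = list(range(1, n))
    total = 0.0 + 0.0j
    for j in idx:
        # pair vertex 0 with vertex j, then recurse on the remaining vertices
        rest = [k for k in idx if k != j]
        total += A[0, j] * hafnian(A[np.ix_(rest, rest)])
    return total

A = np.array([[0, 1, 2, 3],
              [1, 0, 4, 5],
              [2, 4, 0, 6],
              [3, 5, 6, 0]], dtype=complex)
# Perfect matchings of 4 vertices: (01)(23), (02)(13), (03)(12)
print(hafnian(A))   # 1*6 + 2*5 + 3*4 = 28
```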
We can think of analogous mode- and state-binned sampling for the Gaussian variant. For the mode-binned Gaussian boson sampling we will want to develop a validation scheme similar to the one described in [22]. Even though other methods exist for validating samples from Gaussian boson sampling [27], we would like to have a protocol similar to the one used for the original boson-sampling problem. The detailed study of the parameters involved, including the required number of samples, is beyond the scope of this paper.
The protocol is similar to Sec. II.2.2. We start with the input state defined in Eq. 33. The squeezing parameter is taken so that the total average number of photons is close to \(2N\). Then the probability, \(P^{(\mathtt{mb})}(\mathbf{n})\), of measuring the binned output configurations can be expressed as
\[P^{(\mathtt{mb})}(\mathbf{n})=\frac{1}{(N+1)^{d^{(\mathtt{mb})}}}\sum_{\mathbf{\tilde{c}}\in\mathbb{Z}_{N+1}^{d^{(\mathtt{mb})}}}\tilde{\chi}\left(\frac{2\pi\mathbf{\tilde{c}}}{N+1}\right)e^{-i\frac{2\pi\mathbf{\tilde{c}}\cdot\mathbf{n}}{N+1}}. \tag{38}\]
The calculation of the characteristic function is slightly different since now the input state does not have a fixed number of photons. It is as follows,
\[\chi(\mathbf{c})=\sum_{\mathbf{n}}P(\mathbf{n})e^{i\mathbf{c}\cdot\mathbf{n}}. \tag{39}\]
It was shown in Ref. [28] (see Eq. 25 within reference) that the characteristic function for GBS is,
\[\chi(\mathbf{c})=\frac{1}{\sqrt{\det\left(\mathbb{I}-Z(\mathbb{I}-\sigma_{Q}^{-1})\right)}}, \tag{40}\] \[Z=\bigoplus_{k=1}^{M}\begin{pmatrix}e^{i\frac{2\pi c_{k}}{N+1}}&0\\ 0&e^{i\frac{2\pi c_{k}}{N+1}}\end{pmatrix}. \tag{41}\]
Here \(\sigma_{Q}\) is related to the covariance matrix of the output state and is defined in Eq. 36, and \(\tilde{\chi}\left(\mathbf{\tilde{c}}\right)\) can be obtained from \(\chi(\mathbf{c})\) by replacing all \(c_{k}\)'s in \(i^{\mathrm{th}}\) bin to be \(\tilde{c}_{i}\) (see Appendix B for more details). This function can now be used in Eq. 38 and evaluated at a polynomial number of points to obtain the exact binned distribution (see also Ref. [29] for an alternative approach using classical sampling of the positive P distribution to obtain an approximation of the mode binned distribution). The rest of the protocol is similar to that of Fock state boson-sampling.
## III A quantum Pow consensus protocol
We consider a PoW consensus with two types of binning, one used for validation to catch out cheaters, and one to reward miners. The former can be estimated with classical computers efficiently, while the latter does not have a known classical computation though it does have an efficient quantum estimation. Upon successful mining of a block, the output of both binning distributions will be added to the blockchain, meaning one part can be verified efficiently by classical computers while another part cannot. This will incentivize nodes using boson-sampling devices to verify prior blocks in the blockchain. The protocol is illustrated in Fig. 4 and a detailed description follows below. See Table 10 for a description of the various parameters.
1. A transaction, or bundle of transactions, is created on the network. All nodes are aware of the following set of input parameters: \[\mathtt{Pm}=\{N,M,U,d^{(\mathtt{mb})},d^{(\mathtt{sb})},T_{\mathtt{mine}},\epsilon,\beta,R,P\},\] (42) which is assumed to be constant over many blocks but can be varied to adjust the difficulty of the problem.
2. A new block \(b_{j}\) representing this transaction is created. It has a header header\((b_{j})\) that contains summary information of the block including the parameter set \(\mathtt{Pm}\), a hash derived from transactions in the block, a hash of the previous block header together with its validation record \(\mathtt{Rec}(b_{j-1})\) (discussed in step 7), and a timestamp.
3. The new block is sent to every node in the network. All nodes stake tokens to participate. Note this is different from a proof-of-stake protocol since here all miners stake the same amount of tokens and the probability of successfully mining a block is independent of the staked amount.
4. Miners implement boson-sampling [8] using devices like those illustrated in Figure 2, using \(N\) photons input into \(M\) modes ordered \(\{1,2,\ldots,M\}\). A hash of the header is mapped to a permutation on the modes using a predetermined function \(a\), \[a:H(\mathtt{header}(b_{j}))\rightarrow\Pi\in S_{M}.\] (43)
This permutation, which depends on the current block, is used to determine the locations of the \(N\) input photons in the input state of the boson sampler. Each node \(i\) collects a set of samples denoted \(s_{i}\), of size \(|s_{i}|\), and commits each sample in the set by hashing that sample along with a timestamp and some private random bit string. The committed samples are broadcast to the network. The set of committed samples by node \(i\) is denoted \(\tilde{s}_{i}\). The purpose of broadcasting hashed versions of the samples is to prevent dishonest miners from simply copying honest miners' samples.
5. After some predetermined mining time, \[T_{\text{mine}}=\max\{N_{tot}^{(\texttt{mb})},N_{tot}^{(\texttt{sb})}\}/R_{q},\] (3.3) the mining is declared over and no new samples are accepted. All miners reveal their sample sets \(\{s_{i}\}\) as well as the random bit strings associated with each sample so that the sets can be verified against the committed sets \(\{\tilde{s}_{i}\}\). If for some node \(i\), the sets don't agree, that node is removed from further consideration of the mining round and they lose their stake. Let the set of remaining samples be \(W=\bigcup_{i}s_{i}\).
6. This stage consists of three steps: a validation step using mode binning to catch dishonest miners, a state binning step to determine the mining success criterion and a reward/penalty payoff step. 1. _Validation_. A mode-binned distribution \(P^{(\texttt{mb})}\) is used to validate each miner's sample set. Mode binning refers to grouping output modes into \(d^{(\texttt{mb})}\) bins so that for a given sample the number of photon counts in a bin is simply the total number of ones at all the bit locations contained in the bin. We assume the bins are of equal size, \[|\texttt{bin}_{j}^{(\texttt{mb})}|=M/d^{(\texttt{mb})}\ \forall j.\] (3.4)
Figure 4: Blockchain architecture with the inclusion of boson-sampling routine.
A random beacon in the form of a string \(\mathtt{beacon}^{(\mathtt{mb})}\) is announced to the network. Decentralized randomness beacons can be integrated into PoW consensus protocols in such a way that they are reliable, unpredictable, and verifiable. It would be advisable here to construct the beacons using post-quantum secure verifiable random functions [30; 31]. Using a predetermined function \(g\),
\[g:\mathtt{beacon}^{(\mathtt{mb})}\rightarrow\pi^{(\mathtt{mb})}\in S_{M}, \tag{10}\]
the beacon is mapped to a permutation on the modes such that the modes contained in \(\mathtt{bin}_{j}^{(\mathtt{mb})}\) are,
\[\{\pi^{(\mathtt{mb})}(k)\}_{k=(j-1)M/d^{(\mathtt{mb})}+1}^{jM/d^{(\mathtt{mb}) }}. \tag{11}\]
The mode-binned distribution for miner \(i\) is,
\[P^{(\mathtt{mb})}[i]=\frac{1}{N|s_{i}|}(m_{1}[i],m_{2}[i],\ldots,m_{d^{( \mathtt{mb})}}[i]), \tag{12}\]
where \(m_{j}[i]\) is the number of photon counts in \(\mathtt{bin}_{j}^{(\mathtt{mb})}\) over the sample set \(s_{i}\). The true mode binned distribution, \(P^{(\mathtt{mb})}\), that depends on \((\Pi,\pi^{(\mathtt{mb})},U)\), can be estimated as \(\widetilde{P^{(\mathtt{mb})}}\) using a polynomial time classical algorithm. If the total variation distance between the distributions \(\mathcal{D}^{(tv)}(\widetilde{P^{(\mathtt{mb})}},P^{(\mathtt{mb})}[i])\geq 2\beta\) for some predetermined \(0<\beta<1\) then the sample set \(s_{i}\) is invalidated and miner \(i\) loses their stake. Otherwise, the sample set is validated and labelled \(s_{i}^{(v)}\). Let the set of validated samples be,
\[W^{(v)}=\bigcup_{i}s_{i}^{(v)}. \tag{13}\]
2. _Determining success criterion_. At this step a state binned distribution \(P^{(\mathtt{sb})}\) is computed to determine which miners are successful. First, it is necessary to sort the samples in \(W^{(v)}\) into bins, a procedure referred to as state binning. The state space \(Y\) consists of \((N+1)\)-ary valued strings of length \(M\) and weight \(N\): \[Y=\{Y_{k}\}=\{(y_{1}^{(k)},\ldots,y_{M}^{(k)});\] \[y_{j}^{(k)}\in\mathbb{Z}_{N+1},\sum_{j=1}^{M}y_{j}^{(k)}=N\},\] (14) where the notation \(y_{i}^{(k)}\) means for the \(k^{th}\) element of the sample space \(y_{i}\) photons were measured in the \(i^{th}\) mode. The states in \(Y\) are ordered lexicographically6. A second beacon (\(\mathtt{beacon}^{(\mathtt{sb})}\)) is announced to the network and using a predetermined function \(f\),
Footnote 6: For example, for \(M=3,N=2\) the ordering would be \(\{(002),(011),(020),(101),(110),(200)\}\).
\[f:\mathtt{beacon}^{(\mathtt{sb})}\rightarrow\pi^{(\mathtt{sb})}\in S_{|Y|}.\] (15) the beacon is mapped to a permutation on the state space. The states are sorted into \(d^{(\mathtt{sb})}\) equal sized bins such that the states contained in \(\mathtt{bin}_{j}^{(\mathtt{sb})}\) are, \[\{Y_{\pi^{(\mathtt{sb})}(k)}\}_{k=(j-1)|Y|/d^{(\mathtt{sb})}+1}^{j|Y|/d^{(\mathtt{sb})}}.\] (16) All the publicly known samples in \(W^{(v)}\) are then sorted into the bins and the collective state binned distribution is, \[P^{(\mathtt{sb})}=\frac{1}{|W^{(v)}|}(h_{1},h_{2},\ldots,h_{d^{(\mathtt{sb})}}),\] (17) where \(h_{j}\) is the number of samples in \(\mathtt{bin}_{j}^{(\mathtt{sb})}\). The PBP across the validated miners in the network is, \[\mu_{\mathrm{net}}=\frac{\max_{j}\{h_{j}\}}{|W^{(v)}|}.\] (18) Similarly, the PBP for validated miner \(i\) is, \[\mu_{i}=\frac{\max_{j}\{|s_{i}^{(v)}\cap\mathtt{bin}_{j}^{(\mathtt{sb})}|\}}{|s_{i}^{(v)}|}.\] (19) 3. _Payoff_. Miners whose samples were validated have their stake returned and are awarded a payoff if \(|\mu_{i}-\mu_{\mathrm{net}}|\leq\epsilon\) for some predetermined precision \(\epsilon\). The amount of the payoff is dependent on the number of samples committed. (A minimal sketch of these validation and payoff checks is given after the protocol description below.)
7. The new block \(b_{j}\) is added to the blockchain with an appended record, \[\mathtt{Rec}(b_{j})=\{\Pi,\pi^{(\mathtt{mb})},\pi^{(\mathtt{sb})},\widehat{P^{ (\mathtt{mb})}},\mu_{\mathrm{net}}\}.\] (20) This record contains the information necessary to validate the block.
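As referenced in step 6, a minimal sketch of the two checks applied to each miner is given below. This is illustrative pseudo-logic only: `tvd` operates on binned probability vectors, the thresholds are the protocol parameters \(\beta\) and \(\epsilon\), and the numerical values are arbitrary toy inputs rather than outputs of a real mining round.

```python
import numpy as np

def tvd(p: np.ndarray, q: np.ndarray) -> float:
    """Total variation distance between two binned distributions (Eq. 19)."""
    return 0.5 * np.sum(np.abs(p - q))

def judge_miner(P_mb_i, P_mb_est, mu_i, mu_net, beta, eps, n_samples, R, P):
    """Return the miner's payoff: validation by mode binning, reward by state binning."""
    if tvd(P_mb_i, P_mb_est) >= 2 * beta:
        return -P * n_samples            # invalidated: stake lost (penalty)
    if abs(mu_i - mu_net) <= eps:
        return R * n_samples             # validated and within epsilon: rewarded
    return 0.0                           # validated but no reward

# Toy numbers: a 3-bin mode-binned estimate versus one miner's empirical distribution.
P_est = np.array([0.5, 0.3, 0.2])
P_i   = np.array([0.48, 0.33, 0.19])
print(judge_miner(P_i, P_est, mu_i=0.26, mu_net=0.25, beta=0.1, eps=0.02,
                  n_samples=1000, R=2.0, P=1.0))
```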
## IV Analysis of the protocol
### Robustness
The key to making this protocol work is that the miners don't have enough information ahead of time about the problem to be solved to be able to pre-compute it but their samples can be validated after they have been committed. The blockchain is tamper-proof because any attempt to alter a transaction in a verified block of the chain will alter that block header and hence the input permutation \(\Pi\) that determines the boson-sampling problem and the output record \(\mathtt{Rec}\). One could also use a protocol where the unitary \(U\) depends on the block header
but it is easier to change the locations of input state photons than to reconfigure the interferometer circuit itself. The number of input states using \(N\) single photons in \(M\) modes is \(\binom{M}{N}\) making precomputation infeasible.
The record \(\mathtt{Rec}(b_{j})\) can be verified since the output distribution \(P^{(\mathtt{mb})}\) can be checked in polynomial time (in the number of bins \(d^{(\mathtt{mb})}\) and \(N\)) on a classical computer. The peak probability \(\mu_{\mathrm{net}}\) can be checked in polynomial time (in the number of bins \(d^{(\mathtt{sb})}\)) on a quantum boson-sampler. The fact that the miners don't know the mode binning ahead of time, of which there are \(M!/(M/d^{(\mathtt{mb})})!^{d^{(\mathtt{mb})}}\) possibilities, means that even after the problem is specified, there is no advantage in using even classical supercomputers to estimate \(P^{(\mathtt{mb})}\). The probability of generating a random sample set that produces a correct mode-binned distribution within total variation distance \(\beta\) is no more than \(\beta^{d^{(\mathtt{mb})}-1}\), i.e. the probability to correctly guess to within \(\beta\) the probability in each bin (except the last which is given by normalization). Even if this probability were non-negligible, for example, because of a choice to use a small \(d^{(\mathtt{mb})}\) and large \(\beta\) to speed up the validation time, provided it is smaller than \(p^{\mathtt{cheat}}\), the protocol is robust. The reason is, as established in Sec. V, cheaters will be disincentivized since failure to pass the test incurs a penalty of lost staked tokens. Similarly, not knowing the state-binning means that they have no potential advantage in the payout.
The mining time is,
\[T_{\mathrm{mine}}=\frac{\max\{N_{tot}^{(\mathtt{mb})},N_{tot}^{(\mathtt{sb})}\}}{R_{q}}, \tag{10}\]
where \(R_{q}\) is based on publicly available knowledge of the boson sampling repetition rate at the time of the genesis block. This choice for mining time is made to ensure that honest miners with boson samplers will have produced enough samples to pass the validation test and even if there is only one honest node, that node will have produced enough samples to earn a reward. The repetition rate will of course increase with improvements in quantum technology but that can be accommodated by varying the other parameters in the problem, such as photon number, bin numbers, and prescribed accuracy, in order to maintain a stable block mining rate. For \(N=25\), \(d^{(\mathtt{mb})}=3\), and \(\beta=0.1\), assuming the boson sampling specs in Fig. 5, the minimum mining time would be 81.6s. The validation test sets a lower limit on the time to execute the entire protocol and add a new block. The classical computation involved during the validation step, while tractable, can be a long computation even for moderate sized bin numbers \(d^{(\mathtt{mb})}\) and photon numbers. Miners will be incentivized to use boson samplers to speed up this step of the consensus protocol.
The purpose of the state-binning step is twofold. It provides an independent way to tune the reward structure and hence moderate participation in the protocol. Second, it incentivizes nodes to have a quantum boson-sampling device in order to verify older blocks in the blockchain since there is no known efficient classical simulation of the state-binned distribution whereas there is for the counterpart mode-binned distribution under the assumption of a constant number of bins.
### Quantum vs. classical sampling rates
The time needed to successfully mine a block is determined by the inverse of the sampling(repetition) rate of the physical device. For a photonic boson sampler, the repetition rate is [32]
\[R_{q}=(\eta_{f}\eta_{t}^{M})^{N}R_{0}/Ne. \tag{11}\]
Here \(R_{0}\) is the single photon source rate and \(R_{0}/N\) is the rate at which \(N\) indistinguishable photons are produced, \(\eta_{f}\) is a parameter that doesn't scale with the number of modes and accounts for the preparation and detection efficiencies per photon. It can be written as the product \(\eta_{f}=\eta_{g}\eta_{c}\eta_{d}\), where \(\eta_{g}\) is the photon generation efficiency, \(\eta_{c}\) is the coupling efficiency, and \(\eta_{d}\) is the detector efficiency. Finally, \(\eta_{t}\) is the beamsplitter transmission probability. Since we are assuming a circuit of depth equal to the number of modes (which is sufficient to produce an arbitrary linear optical transformation), the overall transmission probability per photon through the circuit is \(\eta_{t}^{M}\). Finally, the factor of \(e\) is an approximation of the probability to obtain a collision-free event [33]. The experiment of Ref. [34] produced a single photon repetition rate of \(R_{0}=76\)MHz and the experiment of Ref. [35], reported a transmission probability per photon through a \(144\times 144\) optical circuit of 97% implying a per beamsplitter transmission probability of \(\eta_{t}=0.97^{1/144}\) as well as an average wavepacket overlap of 99.5%. A value of \(\eta_{g}=0.84\) was reported for quantum dot sources in Ref. [36] and efficiencies of \(\eta_{c}=0.9843\) have been demonstrated for coupling single photons from a quantum dot into a photonic crystal waveguide [37]. Finally, single photon detector efficiencies of up to \(\eta_{d}=0.98\) have been reported at telecom wavelengths [38]. All these numbers can reasonably be expected to improve as technology advances [39].
The state-of-the-art general-purpose method to perform classical exact boson sampling uses a hierarchical sampling method due to Clifford & Clifford [40]. The complexity is essentially that of computing two exact matrix permanents providing for a repetition rate 7
Footnote 7: We ignore the relatively small \(O(MN^{2})\) additive complexity to the classical scaling.
\[R_{c}=\frac{1}{\tilde{a}\cdot 2\cdot N\cdot 2^{N}}. \tag{12}\]
Here \(\tilde{a}\) refers to the scaling factor (in units of seconds \(s\)) in the time to perform the classical computation of the
matrix permanent of one complex matrix where Glynn's formula is used to exactly compute the permanent of a complex matrix in a number of steps \(O(N2^{N})\) using a Gray code ordering of bit strings. Recently an accelerated method for classical boson sampling has been found with an average case repetition rate scaling like \(R_{c}=O(1.69^{-N}/N)\)[41], however, this assumes a linear scaling of the modes with the number of photons, whereas we assume a quadratic scaling.
As shown in Fig. 5 the performance ratio, defined as the ratio of sampling rates for quantum to classical machines \(R_{q}/R_{c}\), is substantial even for a modest number of photons.
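The two rate expressions above can be compared directly. The sketch below reproduces the kind of ratio plotted in Fig. 5; the device parameters are the assumed values quoted in that figure caption (\(R_0=100\) MHz, \(\eta_f=0.90\), \(\eta_t=0.9999\), \(\tilde{a}=10^{-9.2}\) s for a single core), and the quadratic mode scaling \(M=N^2\) is the one assumed in the text.

```python
import numpy as np

def quantum_rate(N, M, R0=100e6, eta_f=0.90, eta_t=0.9999):
    """R_q = (eta_f * eta_t^M)^N * R0 / (N * e) for the photonic sampler."""
    return (eta_f * eta_t**M) ** N * R0 / (N * np.e)

def classical_rate(N, a_tilde=10**-9.2):
    """R_c = 1 / (a_tilde * 2 * N * 2^N): two exact permanents per sample."""
    return 1.0 / (a_tilde * 2 * N * 2.0**N)

for N in (10, 20, 25, 30):
    M = N**2                      # quadratic mode scaling assumed in the text
    ratio = quantum_rate(N, M) / classical_rate(N)
    print(f"N={N:2d}  R_q/R_c = {ratio:.3e}")
```

The exponential factor \(2^{N}\) in the classical rate dominates the polynomially decaying transmission factor in the quantum rate over this range, which is why the ratio grows so quickly with \(N\).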
### Quantum vs. classical energy cost
The energy cost to run boson samplers is dominated by the cost to cool the detectors since the cost to generate the photons and couple them into the device is negligible. Superconducting single-photon detectors of NbN type with reported efficiencies of \(\eta_{d}=0.95\) can operate at 2.1K [38] which is just below the superfluid transition temperature for Helium. Two-stage Gifford-McMahon cryocoolers can run continuously at a temperature of 2K with a power consumption of \(\sim 1.5\)kW [38]. To compare the energy cost of boson-samplers to classical samplers, note that the power consumption of the Tianhe-2 supercomputer is 24MW [43], and the power consumption of a single core processor at 3.5GHz is \(\sim 100\)W. Ultimately, the important metric is the energy cost per sample since it is the accumulation of valid samples that enables a consensus to be reached. As seen from Fig.6, quantum boson-samplers are significantly more energy efficient than classical computers. For example, at \(N=25\) photons the quantum boson-sampler costs \(6.77\times 10^{-2}\)J per sample which is 1563 times more energy efficient than a single core processor and 29569 times more efficient than a supercomputer.
While classical devices, such as ASICs, could be developed in the future that would speed up calculations of matrix permanents by some constant factor, any such device is fundamentally going to be limited by the exponential in \(N\) slowdown in the sampling rate (\(R_{c}\) in Eq. 4.3). Even as classical computers do speed up, one can increase the number of photons to maintain the same level of quantum advantage. Importantly, this would not require frequent upgrades on the boson sampler since the same device can accommodate a few more input photons as the number of modes was already assumed to be \(O(N^{2})\). Furthermore, as the quality of the physical components used for boson-sampling improves, the quantum
Figure 5: Sampling rate speed up \(R_{q}/R_{c}\) (log scale) for quantum boson-samplers relative to classical computers. Points above the red line indicate a quantum speedup. Orange dotted line: performance relative to a single-core Intel Xeon processor running at 3.5GHz and 128GB RAM with \(\tilde{a}=10^{-9.2}\)s [42]. Blue dotted line: performance relative to the Tianhe-2 supercomputer [43] with \(\tilde{a}=N\times 1.99\times 10^{-15}\)s. The photonic boson-sampler is assumed to have the following specifications: single photon source rate \(R_{0}=100\)MHz, a single photon joint preparation and detection probability of \(\eta_{f}=0.90\), and beam-splitter transmission probability of \(\eta_{t}=0.9999\).
Figure 6: Comparison of the energy cost per sample (log scale) for boson-sampling using: a quantum boson-sampler, a supercomputer, and a single core processor all with same specs as in Fig. 5.
repetition rates (\(R_{q}\) in Eq. 11) will increase, ultimately limited by the single photon source rate.
On the other hand, it is unlikely that much faster "quantum ASIC" devices will be developed for boson sampling. Fock state Boson sampling can be simulated fault tolerantly by universal quantum computers with polynomial overhead. One way to do this is to represent the state space as a truncated Fock space encoded in \(M\) qudits of local dimension \(N+1\) (or in \(M\times\lceil\log{(N+1)}\rceil\) qubits). The input state is a tensor product state of \(|0\rangle\) and \(|1\rangle\) states, the gates of the linear interferometer are two qudit gates which can be simulated in \(O(N^{4})\) elementary single and two qudit gates, and the measurement consists of local projectors such that the total simulation scales like \(O(N^{4}M^{2})\). Another approach using the symmetric space of qudits is given in [44]. Given the algorithmic penalty as well as the gate overheads for error correction, the quantum computer based simulation would be slower than a photonic based native boson sampler, except in the limit of very large \(N\) where the fault tolerance of the former enables a speedup. However, at that point the entire protocol would be too slow to be of use for consensus anyway.
The improvements in the quantum repetition rates will hinge on advances in materials and processes that most likely would impose a negligible increase in energy cost. In this sense, PoW by boson sampling offers a route to reach a consensus without incentivizing users to purchase ever more power-hungry mining rigs.
## V Payoff mechanism
To reward nodes for their work done in the boson-sampling subroutine, nodes are rewarded when their individual PBP \(\mu_{i}\) is sufficiently close to the net PBP \(\mu_{\texttt{net}}\). That is, a reward \(R_{i}=\mathcal{R}(\mu_{i},\mu_{\texttt{net}},|s_{i}|)\) is paid to \(node_{i}\) when \(f(|\mu_{i}-\mu_{\texttt{net}}|)<\epsilon\) is satisfied. To prevent cheating, a penalty term \(P_{i}=\mathcal{P}(\mu_{i},\mu_{\texttt{net}},|s_{i}|)\) is applied to \(node_{i}\) when their individual PBP \(\mu_{i}\) is far away compared to the net PBP \(\mu_{\texttt{net}}\) (i.e. \(f(|\mu_{i}-\mu_{\texttt{net}}|)\geq\epsilon\)). The function \(f\) should be monotonic and we can assume it is linear in the argument.
We now construct a reward and penalty mechanism where it is the player's unique dominant strategy to behave honestly in the boson-sampling subroutine and not cheat. We construct \(R_{i}\) and \(P_{i}\) so that it scales linearly with the number of samples provided by \(node_{i}\). Denote this as \(n\equiv|s_{i}|\). We also denote R to be the base rate reward for satisfying \(f(|\mu_{i}-\mu_{\texttt{net}}|)<\epsilon\) with \(n=1\) and let \(P\) be the base rate penalty for satisfying \(f(|\mu_{i}-\mu_{\texttt{net}}|)\geq\epsilon\) with \(n=1\). We also introduce a cutoff timestamp \(T_{\texttt{mine}}\) where only samples submitted prior to the cutoff time are considered for the payoffs. Finally, we denote the probability that an honest user satisfies the requirement \(f(|\mu_{i}-\mu_{\texttt{net}}|)<\epsilon\) as \(p_{i}^{\texttt{honest}}\) and the probability that a cheater satisfies the requirement \(f(|\mu_{i}-\mu_{\texttt{net}}|)<\epsilon\) as \(p_{i}^{\texttt{cheat}}\).
This gives the expected reward and payoff for \(node_{i}\) as,
\[\mathbb{E}[R_{i}] =\begin{cases}np_{i}R&\text{if }t_{i}<T_{\texttt{mine}}\\ 0&\text{otherwise}\end{cases},\] \[\mathbb{E}[P_{i}] =\begin{cases}n(1-p_{i})P&\text{if }t_{i}<T_{\texttt{mine}}\\ 0&\text{otherwise}\end{cases}, \tag{12}\]
where \(p_{i}\) is either \(p_{i}^{\texttt{honest}}\) or \(p_{i}^{\texttt{cheat}}\) depending on whether \(node_{i}\) is an honest player or a cheater. It is clearly sub-optimal to submit samples after the cutoff timestamp, thus the discussion going forward assumes that the player submits the samples prior to the cutoff time. There are 4 viable strategies for each player. They can:
* Submit an honest sample from a quantum boson sampler (denoted with an "honest" superscript)
* Exit the PoW scheme and submit nothing (denoted with a "nothing" superscript)
* Submit a cheating sample from any algorithm (denoted with a "cheat" superscript)
* Submit an honest sample from a classical algorithm (denoted with a "classical" superscript)
We now show that given some innocuous assumptions, a payoff mechanism can be constructed such that a unique pure strategy Nash equilibrium exists where each player's dominant strategy is to submit an honest sample from a quantum boson sampler. To show this, we assume the following:
* A player's utility is derived from the expected rewards minus the expected penalties and the costs incurred to generate a sample.
* An individual player's sample contribution is significantly smaller than the combined sample of all players (i.e. \(|s_{i}|\ll|s_{total}|\)) so that \(\mu_{\texttt{net}}\) remains unchanged irrespective of \(node_{i}\) being honest or cheating.
* The verification subroutine is fairly accurate for \(|s_{i}|\gg 1\) so that an honest player will satisfy \(f(|\mu_{i}-\mu_{\texttt{net}}|)<\epsilon\) with probability \(p_{i}^{\texttt{honest}}\in\mathbb{R}_{(0.75,1)}\) and a cheater will satisfy \(f(|\mu_{i}-\mu_{\texttt{net}}|)<\epsilon\) with probability \(p_{i}^{\texttt{cheat}}\in\mathbb{R}_{(0,0.25)}\).
* The cost to generate sample \(\{s_{i}\}\) (denoted \(C_{i}\)) scales linearly with \(|s_{i}|\). That is, \(C_{i}=kn\), where \(k\in\mathbb{R}\) and \(n\equiv|s_{i}|\). The \(k\) parameter includes costs such as energy consumption to generate one sample but should not include sunk costs [45]. This assumption will be relaxed later to cover heterogeneous costs between players.
* The cost to generate a cheating sample is 0. This assumption will be relaxed later to cover cheating samples with costs.
We will cover the classical player later. Focusing on the first 3 strategies, the utilities are:
\[u_{i}^{\texttt{honest}} =\mathbb{E}[R_{i}]-C_{i}-\mathbb{E}[P_{i}]\] \[=np_{i}^{\texttt{honest}}R-nk-n(1-p_{i}^{\texttt{honest}})P\] \[=n(p_{i}^{\texttt{honest}}R-k-(1-p_{i}^{\texttt{honest}})P)\] \[u_{i}^{\texttt{nothing}} =0\] \[u_{i}^{\texttt{cheat}} =\mathbb{E}[R_{i}]-\mathbb{E}[P_{i}]\] \[=np_{i}^{\texttt{cheat}}R-n(1-p_{i}^{\texttt{cheat}})P\] \[=n(p_{i}^{\texttt{cheat}}R-(1-p_{i}^{\texttt{cheat}})P) \tag{12}\]
To ensure that the dominant strategy is for players to behave honestly and for cheaters to exit the scheme we require that
\[u_{i}^{\texttt{honest}}>u_{i}^{\texttt{nothing}}>u_{i}^{\texttt{cheat}}. \tag{13}\]
So we require,
\[0<u_{i}^{\texttt{honest}}\] \[\implies 0<p_{i}^{\texttt{honest}}R-k-(1-p_{i}^{\texttt{honest}})P\] \[0>u_{i}^{\texttt{cheat}}\] \[\implies 0>p_{i}^{\texttt{cheat}}R-(1-p_{i}^{\texttt{cheat}})P \tag{14}\]
Solving this, we obtain,
\[\frac{p_{i}^{\texttt{cheat}}R}{1-p_{i}^{\texttt{cheat}}}<P<\frac{p_{i}^{ \texttt{honest}}R-k}{1-p_{i}^{\texttt{honest}}} \tag{15}\]
This inequality is not always defined. However, we note \(p_{i}^{\texttt{cheat}}<p_{i}^{\texttt{honest}}\) and \(\frac{1}{1-x}\) is increasing in \(x\in\mathbb{R}_{(0,1)}\). So we have,
\[\frac{1}{1-p_{i}^{\texttt{cheat}}}<\frac{1}{1-p_{i}^{\texttt{honest}}}, \tag{16}\]
and a sufficient condition for inequality is,
\[p_{i}^{\texttt{cheat}}R<p_{i}^{\texttt{honest}}R-k\] \[\implies \frac{k}{p_{i}^{\texttt{honest}}-p_{i}^{\texttt{cheat}}}<R. \tag{17}\]
Since,
\[1<\frac{1}{p_{i}^{\texttt{honest}}-p_{i}^{\texttt{cheat}}}<2, \tag{18}\]
a sufficient condition for \(R\) is,
\[\frac{k}{p_{i}^{\texttt{honest}}-p_{i}^{\texttt{cheat}}}<2k<R, \tag{19}\]
to ensure Eq. 15 is well-defined. Taking the tightest bounds for Eq. 15 and \(2k<R\), we can bound \(P\) by,
\[\frac{1}{3}R<P<R. \tag{20}\]
These bounds ensure that,
\[u_{i}^{\texttt{honest}}>u_{i}^{\texttt{nothing}}>u_{i}^{\texttt{cheat}}, \tag{21}\]
is satisfied and the dominant strategy for \(node_{i}\) is to be honest.
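The bounds derived above are easy to check numerically. The sketch below picks parameter values satisfying \(2k<R\) and \(R/3<P<R\) (the specific numbers are arbitrary) and confirms the ordering of the three utilities:

```python
def utilities(n, k, R, P, p_honest, p_cheat):
    """Expected utilities for honest play, doing nothing, and cheating."""
    u_honest = n * (p_honest * R - k - (1 - p_honest) * P)
    u_nothing = 0.0
    u_cheat = n * (p_cheat * R - (1 - p_cheat) * P)
    return u_honest, u_nothing, u_cheat

k = 1.0                     # cost per sample (arbitrary units)
R = 2.5 * k                 # satisfies 2k < R
P = 0.6 * R                 # satisfies R/3 < P < R
u_h, u_n, u_c = utilities(n=1000, k=k, R=R, P=P, p_honest=0.9, p_cheat=0.1)
print(u_h, u_n, u_c)        # honest > nothing > cheat
assert u_h > u_n > u_c
```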
### Classical Honest Players
To keep the PoW protocol quantum and to disincentivize classical players from submitting samples to the network would require the utility of classical players to be negative while keeping the utility of quantum players positive. From the construction above, we have already derived bounds for \(node_{i}\) to be honest. We will keep these bounds and derive an upper bound for \(R\) that ensures \(u_{i}^{\texttt{honest}}>0\) and \(u_{i}^{\texttt{classical}}<0\).
We work under the assumption that the utility of a classical player is analogous to the utility of an honest player. That is,
\[u_{i}^{\texttt{classical}}=n(p_{i}^{\texttt{classical}}R-k^{ \texttt{classical}}-(1-p_{i}^{\texttt{classical}})P) \tag{22}\]
Where \(p_{i}^{\texttt{classical}}=p_{i}^{\texttt{honest}}\) and \(k^{\texttt{classical}}\gg k\). It is reasonable to think of a classical player as performing the boson-sampling subroutine using a classical simulator instead of a true quantum boson-sampler. Letting \(N\) be the number of photons and \(M=N^{2}\) be the number of modes, the most efficient known classical boson-sampling simulator has a per sample cost proportional to the inverse of the repetition rate, \(R_{c}\), defined in Eq. 11, i.e. \(k^{\texttt{classical}}\in O(2^{N}N)\). In contrast, a quantum boson sampler has a per-sample cost proportional to the inverse of the repetition rate \(R_{q}\) (Eq. 10). In the ideal case (\(\eta_{f}=\eta_{t}=1\)), this cost is linear in \(N\), otherwise, it increases exponentially with \(N\) and \(M\). However, as shown in Fig. 5 there is a large region of \(N\)values where this cost is several orders of magnitude smaller than that for classical supercomputers. Hence we can safely assume \(k^{\texttt{classical}}\gg k\).
To have \(u_{i}^{\texttt{classical}}<0\), it is sufficient to have,
\[k^{\texttt{classical}}>R>p_{i}^{\texttt{classical}}R, \tag{23}\]
since \(p_{i}^{\texttt{classical}}\in\mathbb{R}_{(0.75,1)}\). Combined with the derived bounds for \(node_{i}\) to be honest, we have the bounds for \(R\) and \(P\) be,
\[2k<R<k^{\texttt{classical}}, \tag{24}\] \[\frac{1}{3}R<P<R, \tag{25}\]
This ensures that \(u_{i}^{\texttt{honest}}>0\), \(u_{i}^{\texttt{cheat}}\), \(u_{i}^{\texttt{classical}}<0\), and \(u_{i}^{\texttt{nothing}}=0\) and the dominant strategy of \(node_{i}\) is to submit an honest sample to the network using a quantum boson-sampler. This strategy is unique as strictly dominant Nash equilibria are unique [46].
### Non-Nash Equilibrium without Penalty Term
[47] showed that under certain assumptions, deterministic tests to check PoW can have a Nash equilibrium that is in line with the consensus protocol's best interests. In this section, we show that contrary to deterministic tests to check PoW (such as running double SHA-256
in Bitcoin), a penalty term is a necessity for statistical tests that check PoW to ensure it is a Nash equilibrium for players to remain honest. This is because statistical tests imply a non-zero probability of passing the test even though a player may have submitted a cheating sample. A penalty term ensures that it is not optimal for the cheater to submit cheating samples in this manner.
Without a penalty term, the utilities of the players are:
\[u_{i}^{\texttt{honest}} =\mathbb{E}[R_{i}]-C_{i}\] \[=np_{i}^{\texttt{honest}}R-nk\] \[=n(p_{i}^{\texttt{honest}}R-k),\] \[u_{i}^{\texttt{nothing}} =0\] \[u_{i}^{\texttt{cheat}} =\mathbb{E}[R_{i}]\] \[=Np_{i}^{\texttt{cheat}}R, \tag{5.16}\]
where \(n=|s_{i}^{\texttt{honest}}|\) is the number of samples committed by an honest player and \(N=|s_{i}^{\texttt{cheat}}|\) is the number of samples committed by a cheater. To show that the honest strategy is not a Nash equilibrium, it suffices to show that \(u_{i}^{\texttt{cheat}}>u_{i}^{\texttt{honest}}\). Let \(N=\frac{np_{i}^{\texttt{honest}}}{p_{i}^{\texttt{cheat}}}\). Then,
\[u_{i}^{\texttt{cheat}} =Np_{i}^{\texttt{cheat}}R\] \[=\frac{np_{i}^{\texttt{honest}}}{p_{i}^{\texttt{cheat}}}p_{i}^{ \texttt{cheat}}R\] \[=np_{i}^{\texttt{honest}}R\] \[>n(p_{i}^{\texttt{honest}}R-k)\] \[=u_{i}^{\texttt{honest}}. \tag{5.17}\]
In essence, when sample submission incurs negligible costs (i.e. \(k\approx 0\)) and there is no penalty term, cheaters can artificially inflate their sample size in the hope of receiving a large payoff by chance. This yields a higher utility for acting maliciously and destroys the original Nash equilibrium of being honest.
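The following toy calculation (illustrative numbers only, with a small but non-zero per-sample cost \(k\) for the honest player) makes the inflation argument of Eq. 5.17 concrete:

```python
# Without a penalty term and with negligible cheating costs, inflating the
# number of submitted samples N recovers (and exceeds) the honest payoff.
R = 1.0
p_honest, p_cheat = 0.9, 0.1     # assumed acceptance probabilities
n, k = 100, 0.01                 # honest sample count and per-sample cost

N = n * p_honest / p_cheat       # inflated cheating sample count, cf. Eq. 5.17
u_honest = n * (p_honest * R - k)
u_cheat = N * p_cheat * R
print(u_cheat, u_honest)         # 90.0 > 89.0: cheating dominates without a penalty
```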
### Heterogeneous Costs
We now relax the assumption that all players have the same cost factor \(k\) for generating one sample by a quantum boson-sampler and allow for a heterogeneous cost factor. That is, for player \(i\in\{1,2,...,p\}\) with cost function \(C_{i}=k_{i}n\), the cost factor \(k_{i}\in\mathbb{R}_{>0}\) may differ across players.
With heterogeneous costs, we set the cost factor \(k\) in Eq. 5.14 to the cost factor of the most efficient player (i.e. \(k=\min\{k_{1},k_{2},...,k_{p}\}\)). This ensures that there is at least one player (the most efficient player) such that,
\[u_{eff}^{\texttt{honest}}>u_{eff}^{\texttt{nothing}}>u_{eff}^{\texttt{cheat}}. \tag{5.18}\]
Since the sign of \(u_{i}^{\texttt{cheat}}\) is independent of the value of \(k\), this also ensures that \(u_{i}^{\texttt{cheat}}<u_{i}^{\texttt{nothing}}=0\) for \(i\in\{1,2,...,p\}\). For inefficient players with individual cost factors \(k_{i}>k\) such that \(u_{i}^{\texttt{honest}}<u_{i}^{\texttt{nothing}}\), the market mechanism will have the inefficient players leave the PoW scheme and submit nothing for verification.
If the variation in the individual cost factors is large enough that setting \(k\) to the most efficient cost factor would saturate the market, we can instead set \(k\) to the \(m\)th lower-percentile cost factor (i.e. \(k=\min_{m\%}\{k_{1},k_{2},...,k_{p}\}\)), which ensures that at least \(m\) per cent of the \(p\) players have a positive payoff from contributing samples to the network and do not exit the PoW scheme. A minimal numerical sketch of this selection follows.
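The sketch below uses purely illustrative cost values, and NumPy's percentile function is used as a stand-in for the \(m\)th lower-percentile cost factor.

```python
# Percentile-based selection of the reference cost factor k (illustrative values).
import numpy as np

k_players = np.array([0.8, 1.0, 1.1, 1.3, 2.5, 4.0])  # heterogeneous per-sample costs
m = 50                                                 # keep at least m per cent of players

k_ref = np.percentile(k_players, m)                    # reference cost factor k
can_stay = k_players <= k_ref                          # players whose cost does not exceed k_ref
print(k_ref, can_stay.sum() / len(k_players))          # 1.2, 0.5
```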
### Cheating with Costs
If players have non-zero costs for generating a cheating sample, then it is clearly sub-optimal for players to cheat since cheating without costs is already a dominant strategy. Additional costs associated with cheating just make the utility for cheaters lower.
### Block Reward vs. Split Reward
The derivations above assumed a split reward mechanism. That is, the reward for the addition of a new block is split between all players satisfying \(f(|\mu_{i}-\mu_{\texttt{net}}|)<\epsilon\) and each player receives \(nR\) for their \(n\) samples provided. Another reward mechanism that could be used is a block reward mechanism in which the entire reward is awarded to one player instead of splitting it between players (i.e. one player satisfying \(f(|\mu_{i}-\mu_{\texttt{net}}|)<\epsilon\) would randomly be chosen to receive the entire reward). While the expected reward would stay the same, there is now considerable variation in the payoff for the player. The initial assumption that the player's utility is risk-neutral and only depends on the expected rewards/penalties and costs would no longer be valid.
Conventional mean-variance utility theory in finance imposes a penalty term for risk-aversion due to the variability of the payoffs [48; 49]. Thus, for block reward mechanisms, it is more appropriate to use utility functions of the form
\[u_{i}=\mathbb{E}[R_{i}]-C_{i}-\mathbb{E}[P_{i}]-A_{i}\sigma^{2}. \tag{5.19}\]
where \(A_{i}\) is the coefficient of risk-aversion for \(node_{i}\) and \(\sigma^{2}\) is the variance of the reward. It is difficult, if not impossible, to estimate the parameter \(A_{i}\) for all the players in the PoW protocol, as it is intrinsically related to individual preferences for risk-aversion. We do not claim here that we can provide an estimate, empirical or theoretical, for its value. However, for implementation purposes, the reward \(R\) in Eq. 5.14 should be set higher for a block reward mechanism than for a split reward mechanism so that the additional expected reward \(\mathbb{E}[R_{i}]\) offsets the penalty from risk-aversion \(A_{i}\sigma^{2}\).
For implementation purposes of a block reward mechanism, it may also be prudent to consider safeguards
against selfish miners as proposed in [50]. In that paper, the authors described a mining strategy that deviates from the intended consensus protocol and whose revenue scales super-linearly with computational power once a threshold fraction of the network is controlled by one party (the authors upper-bound this threshold by \(1/3\)). This is particularly relevant to block reward mechanisms due to the formation of mining pools to reduce the variance of payoffs. As such, it may be prudent to implement the solution proposed in [50] that raises the threshold to \(1/4\). That is, whenever the blockchain forks and two branches of length one occur, instead of each node mining the branch it received first, the protocol should dictate that nodes randomly and uniformly choose one of the two branches to mine. This randomization safeguards against potential selfish miners that control less than \(1/4\) of the computational power of the network.
### Components of Costs (Variable \(k\)) and Cost to Entry
The cost variable \(k\) (or \(k_{i}\) for heterogeneous costs) is the amalgamation of all relevant costs to the generation of one sample. There is a distinction in this cost factor for players wishing to enter the boson-sampling scheme (prospective players) and for players already providing samples to the boson-sampling subroutine (current players).
For current players, or players using a subscription-based cloud boson-sampler, the cost factor \(k\) should only include the variable costs required to produce one sample for the sampling subroutine (e.g. subscription costs, electricity costs, boson preparation costs, measurement costs). That is, \(k=k_{variable}\). The fixed cost of the boson-sampling device is sunk and should not be taken into consideration for sampling decisions going forward [45].
For prospective players, however, the initial capital expenditure costs (e.g. source guides, detectors, machinery) must be taken into consideration for \(k\). If \(\tau\) is the expected number of samples the boson-sampler is expected to produce before obsolescence, then,
\[k=k_{variable}+\frac{k_{fixed}}{\tau}. \tag{5.20}\]
For the PoW protocol to be self-sustaining in the long run with consistent user renewal, the value for \(k\) in Eq. 5.14 must be above the \(k\) value for prospective players so that there are sufficient incentives for new players to overcome the cost to entry.
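As a small worked example of Eq. 5.20 (with purely illustrative cost and lifetime values), the effective per-sample cost for a prospective player can be computed as:

```python
# Effective per-sample cost for a prospective player, Eq. 5.20 (illustrative values).
k_variable = 0.02      # subscription/electricity/preparation/measurement cost per sample
k_fixed = 2.0e6        # capital expenditure for the boson-sampling device
tau = 1.0e9            # expected number of samples before obsolescence
k_prospective = k_variable + k_fixed / tau
print(k_prospective)   # 0.022
```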
Two comments are worth adding here on the adoption of this new PoW consensus protocol. First, in the early stages, before large-scale production and availability of boson-samplers, it could be expected that classical miners would dominate. This can be accommodated by initially setting the reward inequality in Eq. 5.14 to \(R>k^{\texttt{classical}}\) so that the utility of classical players is positive. A decision could then be made to gradually either (1) increase \(k^{\texttt{classical}}\) (e.g. by increasing the number of photons in the sampling problem and hence the difficulty) or (2) reduce \(R\). This pushes classical players out of the protocol as they no longer have positive utility. Second, the conditions on reward and penalty described above assume that the Nash equilibrium is already reached, since it is defined by the condition that no unilateral deviation will move the equilibrium. This will not be the case during the initialization stage of the protocol. During the genesis block and several blocks thereafter, additional mechanisms should be put in place by trustworthy players to ensure the initialization reaches this Nash equilibrium. The trustworthy players can then exit the market and the equilibrium will be retained, thus ensuring no "central authority" exists in the protocol.
## VI Conclusion
We have proposed a PoW consensus protocol that natively makes use of the quantum speedup afforded by boson-samplers. The method requires that miners perform full boson-sampling, where samples are post-processed as coarse-grained boson-sampling using a binning strategy only known after samples have been committed to the network. This allows efficient validation but resists pre-computation either classically or quantum mechanically. Whereas classical PoW schemes such as Bitcoin's are notoriously energy inefficient, our boson-sampling-based PoW scheme offers a far more energy-efficient alternative when implemented on quantum hardware. The quantum advantage has a compounding effect: as more quantum miners enter the network the difficulty of the problem will be increased to maintain consistent block mining time, further incentivizing the participation of quantum miners.
The quantum hardware required for the implementation of our protocol has already been experimentally demonstrated at a sufficient scale and is becoming commercially available (Xanadu Borealis). While we have focused our analysis primarily on conventional Fock state boson-sampling, the method extends to Gaussian boson-sampling, accommodating faster quantum sampling rates owing to the relative ease with which the required squeezed vacuum input states can be prepared. We leave the detailed study of number of samples required, error tolerances, and performance of Gaussian boson-samplers to future work.
Like the inverse hashing problem in classical PoW, the boson-sampling problem has no intrinsic use. It would be interesting to consider if samples contributed to the network over many rounds could be used for some practical purpose, enabling 'useful proof-of-work', something that has also been suggested in the context of conventional blockchains [51].
## Acknowledgements
We gratefully acknowledge discussions with Louis Tessler, Simon Devitt, and Peter Turner. GKB and GM received support from a BTQ-funded grant with Macquarie University. GKB and PPR receive support from the Australian Research Council through the Centre of Excellence for Engineered Quantum Systems (project CE170100009). DS is supported by the Australian Research Council (ARC) through the Centre of Excellence for Quantum Computation and Communication Technology (project CE170100012).
|
2309.11134 | GNSS/Multi-Sensor Fusion Using Continuous-Time Factor Graph Optimization
for Robust Localization | Accurate and robust vehicle localization in highly urbanized areas is
challenging. Sensors are often corrupted in those complicated and large-scale
environments. This paper introduces GNSS-FGO, an online and global trajectory
estimator that fuses GNSS observations alongside multiple sensor measurements
for robust vehicle localization. In GNSS-FGO, we fuse asynchronous sensor
measurements into the graph with a continuous-time trajectory representation
using Gaussian process regression. This enables querying states at arbitrary
timestamps so that sensor observations are fused without requiring strict state
and measurement synchronization. Thus, the proposed method presents a
generalized factor graph for multi-sensor fusion. To evaluate and study
different GNSS fusion strategies, we fuse GNSS measurements in loose and tight
coupling with a speed sensor, IMU, and lidar-odometry. We employed datasets
from measurement campaigns in Aachen, Duesseldorf, and Cologne in experimental
studies and presented comprehensive discussions on sensor observations,
smoother types, and hyperparameter tuning. Our results show that the proposed
approach enables robust trajectory estimation in dense urban areas, where the
classic multi-sensor fusion method fails due to sensor degradation. In a test
sequence containing a 17km route through Aachen, the proposed method results in
a mean 2D positioning error of 0.48m while fusing raw GNSS observations with
lidar odometry in a tight coupling. | Haoming Zhang, Chih-Chun Chen, Heike Vallery, Timothy D. Barfoot | 2023-09-20T08:30:53Z | http://arxiv.org/abs/2309.11134v3 | # GNSS/Multi-Sensor Fusion Using Continuous-Time Factor Graph Optimization for Robust Localization
###### Abstract
Accurate and robust vehicle localization in highly urbanized areas is challenging. Sensors are often corrupted in those complicated and large-scale environments. This paper introduces GNSS-FGO, an online and global trajectory estimator that fuses GNSS observations alongside multiple sensor measurements for robust vehicle localization. In GNSS-FGO, we fuse asynchronous sensor measurements into the graph with a continuous-time trajectory representation using Gaussian process regression. This enables querying states at arbitrary timestamps so that sensor observations are fused without requiring strict state and measurement synchronization. Thus, the proposed method presents a generalized factor graph for multi-sensor fusion. To evaluate and study different GNSS fusion strategies, we fuse GNSS measurements in loose and tight coupling with a speed sensor, IMU, and lidar-odometry. We employed datasets from measurement campaigns in Aachen, Dusseldorf, and Cologne in experimental studies and presented comprehensive discussions on sensor observations, smoother types, and hyperparameter tuning. Our results show that the proposed approach enables robust trajectory estimation in dense urban areas, where the classic multi-sensor fusion method fails due to sensor degradation. In a test sequence containing a \(17\,\mathrm{km}\) route through Aachen, the proposed method results in a mean 2D positioning error of \(0.19\,\mathrm{m}\) for loosely coupled GNSS fusion and \(0.48\,\mathrm{m}\) while fusing raw GNSS observations with lidar odometry in tight coupling.
GNSS, Factor Graph Optimization, Localization, Sensor Fusion, Autonomous Vehicle Navigation
## I Introduction
Safe and reliable autonomous driving operations in urban areas require accurate and consistent vehicle localization that infers a smooth trajectory estimate for planning and control tasks. Autonomous vehicles may use global navigation satellite systems (GNSS) to achieve global positioning in large-scale environments. However, the performance of GNSS is highly degraded when a vehicle passes through tunnels or urban canyons, where GNSS signal loss can be expected, greatly penalizing positioning availability. Moreover, the error dynamics of GNSS observations grow increasingly complex due to multipath and non-line-of-sight effects, resulting in inconsistent error models used in state estimation [1].
Many previous works fuse information from local optical sensors (e.g., lidars or cameras) for vehicle localization. They can typically be categorized into pose retrieval using a given map [3] and simultaneous location and mapping (SLAM) [4]. Generally, landmarks in sensor frames are extracted and associated to acquire either frame-to-map global pose constraints or frame-to-frame local motion increments. Lacking high-quality maps for vehicle pose retrieval in many areas, approaches relying on local sensors can often only achieve satisfactory localization if the ground is even and sufficient loop-closure constraints help eliminate drift. However, these requirements cannot always be met for long-term autonomous operations in large-scale environments [5].
In recent years, combining local sensors with GNSS has been investigated as a robust way to enable accurate and precise vehicle location in challenging areas. Incremental batch estimation implemented as factor graph optimization (FGO) is often superior to classic filtering-based algorithms in terms of localization performance and consistency [6, 7]. Unlike Bayesian filters, a factor graph fuses prior information and sensor measurements associated with the to-be-estimated state variables into probabilistic representations. A maximum-a-posterior problem (MAP) can be formulated from the factor graph and solved in a batch configuration using iterative Gauss-Newton-like algorithms [8]. In general, this optimization procedure is activated only if new sensor observations are available. Thus, many classic graph FGO approaches rely on a primary sensor that schedules the optimization procedure.
To fuse additional sensor modalities, asynchronous measurements must be synchronized with the primary sensor, leading to information loss and inefficient fusion mechanisms. Furthermore, classic FGO approaches degrade if the primary sensor is compromised or fails, which is likely in challenging environments. In this case, state variables cannot be effectively constrained by other sensor observations if the graph is not constructed in time. Fig. 1 exemplifies this problem, where a state-of-the-art lidar-centric SLAM approach diverges due to scan registration failures while driving in a tunnel1. In fact, as discussed in [9, 10, 11], commonly used sensors in localization deteriorate under challenging environmental conditions, complicating robust and long-term vehicle localization.
Footnote 1: Same factors, noise models, and smoother were used while benchmarking the lidar-centric approach with loosely coupled GNSS-FGO.
In this work, we address the degradation problem of GNSS-based localization approaches by translating classic FGO for multi-sensor fusion into an approach where the graph associated with all to-be-estimated state variables is constructed deterministically based on a priori chosen timestamps. It thus presents a time-centric factor graph construction that is independent of any particular reference sensor (e.g., GNSS). To achieve this, we represent the vehicle trajectory in continuous time using a Gaussian process (GP). This approach incorporates a motion prior using the white-noise-on-jerk (WNOJ) motion model, as originally proposed in [12]. The algorithm feeds new observations from each sensor independently into the factor graph without measurement-to-state synchronization. If a measurement cannot be temporally aligned with any state variable, we query a GP-interpolated state corresponding to the measurement that is used for the error evaluation within the optimization procedures.
To retrieve a robust global trajectory estimation while the GNSS measurements are strongly corrupted, we implemented the time-centric factor graph to fuse GNSS observations with measurements of an inertial measurement unit (IMU), optical speed sensor, and lidar for vehicle localization in challenging urban scenarios. We propose two factor graph structures for both loosely and tightly coupled fusion of GNSS observations alongside other local sensor measurements, demonstrating the flexibility of the proposed GNSS-FGO. For the graph that considers the GNSS positioning solution in the loose coupling, we fuse the pre-integrated IMU measurements, 2D velocity measurements, and lidar odometry. In tightly coupled fusion, we replace GNSS solution factors with GNSS pseudorange and deltarange factors, which are expected to provide more effective constraints compared to inconsistent GNSS positioning in urban areas [6].
We used raw data from measurement campaigns in the cities of Aachen, Dusseldorf, and Cologne to evaluate the proposed approach by benchmarking with a well-known lidar-centric SLAM approach [2, 13]. This lidar-centric SLAM has been shown to perform best for vehicle localization tasks in large-scale environments and can be configured to fuse GNSS measurements [14], which presents an equivalent fusion mechanism as our loosely coupled GNSS-FGO.
In contrast to our previous study [7], which focused only on trajectory smoothness using FGO with an offline evaluation, we now address online multi-sensor fusion for vehicle localization.
The contributions of this work are summarized as follows:
1. We propose a flexible, online, continuous-time factor graph optimization framework that can accommodate common multi-sensor fusion problems. The flexibility comes from the fact that (i) we can accommodate asynchronous measurements, and (ii) we choose estimation timestamps independently of any particular sensor frequency. This latter feature, together with the smoothing effect of a motion prior, provides robustness in the face of any particular sensor dropout.
2. We implement the proposed method for vehicle localization in challenging scenarios and conduct comprehensive studies on loosely coupled and tightly coupled fusion mechanisms to fuse GNSS measurements with other local sensors, with the aim of presenting extensive evaluations and discussions on accuracy, robustness, and run-time efficiency. Compared to other state-of-the-art methods, our method is shown to be more robust in our challenging test scenarios.
3. We evaluate the GP motion prior, which is implemented using the white-noise-on-acceleration (WNOA) and white-noise-on-jerk (WNOJ) models, to study the accuracy of the interpolated states.
4. We introduce a scalable and open-source software framework gnssFGO2 that can be extended for arbitrary robot localization using continuous-time factor graph optimization.
The rest of this article is organized as follows: we present a comprehensive review of the literature on multi-sensor fusion in the context of vehicle location using FGO in Sec. II. Sec. III introduces the proposed continuous-time FGO in detail. In Sec. IV, the mathematical background for factor formulations is presented, whereas the graph implementations for vehicle localization are introduced in Sec. V. We verify our method in Sec. VII and conduct further experiments and ablation studies on the precision and consistency of estimated trajectories in different scenarios. Finally, Sec. VIII summarizes results and limitations. We release our code and raw data used in our experiments from real-world measurement campaigns with different urban scenarios2. A demonstration video is also available3.
Footnote 2: [https://github.com/rwth-irt/gnssFGO](https://github.com/rwth-irt/gnssFGO)
Footnote 3: [https://youtu.be/9R5SuCCYNss](https://youtu.be/9R5SuCCYNss)
## II Related Work
### _Graph Optimization for GNSS-based Vehicle Localization_
In recent years, fusing GNSS observations using factor graph optimization for robust vehicle localization has drawn great attention. Compared with filtering-based approaches, FGO conducts batch optimization, where all measurement models are re-linearized and re-evaluated iteratively, resulting in a more robust state estimation even with measurement outliers. Previous work demonstrated robust localization in urban areas only by factoring pseudoranges with robust error models [15, 16]. Later, Wen et al., [6] and Zhang et al., [7] showed that FGO generally outperforms Kalman filters with respect to the precision and smoothness of the estimated trajectory.
GNSS data can be integrated into the graph using a loosely or tightly coupled schema [17]. While the loosely coupled fusion incorporates GNSS positioning solution into the graph, pre-processed raw GNSS observations such as code or carrier phase measurements can be fed into the estimator in a tight coupling as state constraints. As the to-be-estimated state variables can be directly observed in GNSS solutions, fusing GNSS data in a loose coupling enables quick convergence and elevated accuracy if high-quality Real-Time-Kinematic (RTK)-fixed GNSS solutions are available. In contrast, the integration of raw GNSS observations contributes to multiple state constraints associated with received satellites, which has been shown to be more robust than loose coupling [6, 18, 19]. Wen et al. [20] included double-differenced pseudorange (DDPR) and double-differenced carrier-phase measurements (DDCP) in FGO, resulting in performance improvement. Later, this work was extended to efficiently model carrier-phase constraints between multiple satellite measurement epochs within a time window [21]. In [22], time-differenced carrier-phase (TDCP) was integrated with cycle-slip estimation, which achieved accurate localization while presenting substantial availability compared to DDCP if satellites can be continuously tracked. Congram and Barfoot [23] also proposed a GPS odometry using TDCP with a more prominent cycle-slip detection and showed an effective drift reduction compared to visual odometry. However, since carrier-phase observations are also disturbed in deep urban areas, the robustness of state estimation cannot yet be guaranteed.
As factor graph optimization presents a convenient tool for robust error modeling [24], several works employ m-Estimators to reject faulty GNSS observations [7, 25, 26, 22, 16]. Recently, FGO has been explored in the context of vehicle location based on GNSS for noise distribution identification or adaptive rejection of outliers [27, 28], showing a positive impact on consistent trajectory estimation using FGO.
### _Graph Optimization for Multi-Sensor Fusion_
While the aforementioned works have particularly explored graph optimization for GNSS observations, they may still suffer from performance degeneration in complex scenarios if GNSS measurements are lost or present outliers. Therefore, another research domain focuses on fusing more sensor modalities (more than two) alongside GNSS observations into the graph, with applications predominantly in SLAM.
A pose graph that fuses GPS position measurements and lidar odometry with loop-closure constraints for outdoor scenarios improved both runtime efficiency and performance compared to lidar-only approaches [29]. In [2], feature-based lidar odometry and loop-closure constraints were merged into a factor graph with synchronized GPS position measurements to achieve a drift-free pose estimate, which was forwarded to another graph optimization with pre-integrated IMU measurements for high-frequency and real-time state estimation. In addition to integrating feature-based lidar odometry into FGO, the lidar map can also be used for GNSS visibility assessment [30]. Some works also introduce camera-centric sensor fusion, where other sensor observations are synchronized with camera data and fused on the graph [31, 32, 33]. In [34], camera, lidar, and wheel odometers were fused into the graph along with the GNSS positioning solution and IMU measurements, presenting consistent localization in featureless environments for long-term runs. Similar works also conduct multi-sensor fusion without GNSS and propose a carefully managed fusion architecture [35, 36]. However, these works still require well-handled data synchronization and careful graph construction to fuse heterogeneous sensor measurements.
Many recent approaches introduce multi-graph structures to achieve flexible and compact sensor fusion. In [37], IMU, GNSS, and lidar observations were separately integrated into multiple graphs in parallel with a switching mechanism. When the GNSS receiver lost its signal, the lidar-centric graph was activated. Another work aimed to confederate loosely and tightly coupled fusion schemes to ensure the estimation performance [38]. Each sensor modality is associated with a separate graph and proposes odometry factors to the IMU-centric graph that provides final estimated states in real-time.
Because the aforementioned methods introduce redundant and complex graph structures, other works exploit high-frequency IMU measurements to coordinate multi-sensor fusion. In [39] and [40], asynchronous global pose measurements (e.g., GPS measurements) are propagated into timestamps of visual-inertial factors using pre-integrated IMU measurements. The same concept has been extended to a forward-backward
IMU pre-integration mechanism in order to precisely associate asynchronous measurements with keyframes [41]. Nevertheless, these methods still depend on the noisy IMU sensor, which introduces uncertainty.
### _Continuous-Time Trajectory Representation_
One essential requirement for flexible graph-based multi-sensor fusion is the ability to query the states associated with the observations within the iterative optimization process. This requirement can be fulfilled if the trajectory is represented in continuous time. In [42], B-splines were proposed as a parametric approach to represent the trajectory in continuous time. This method was later used to propose stereo-inertial odometry [43]. Another approach utilizes exactly sparse Gaussian process (GP) regression by assuming that system dynamics follow a linear time-varying stochastic differential equation (LTV-SDE) [44]. The system dynamics are typically modeled as white-noise-on-acceleration (WNOA). This approach was verified in [45, 46, 47], where the reliability of this proposed surrogate dynamics model was demonstrated. Recently, Tang et al. [12] proposed an improved system dynamics model, which assumed a white-noise-on-jerk (WNOJ) model in LTV-SDE. They showed that the WNOJ could model the vehicle dynamics more accurately and thus, was appropriate for systems with more complicated dynamics. This work follows this aspect and adapts the GP-WNOJ model proposed in [12] as between-state motion constraints and state interpolator to fuse asynchronous measurements.
Although continuous-time trajectory representation is studied for localization and mapping problems by extending incremental smoothing using sparse GP interpolation to reduce computation time [46], fusing GNSS observations with multiple heterogeneous sensor measurements for online vehicle localization has not yet been presented or discussed. Inspired by the aforementioned methods, we address the problem of multi-sensor fusion for GNSS-based vehicle localization using continuous-time trajectory representation, which enables a fusion of asynchronous sensor observations in a single factor graph. Our hypotheses are: (i) factor graph construction in continuous time generalizes multi-sensor fusion and enables consistent trajectory estimation that incorporates effective state constraints from multiple sensor modalities in challenging scenarios, (ii) the GP-WNOJ motion model presents a larger capacity to represent complicated system dynamics, such as driving in urban areas.
## III Time-Centric Factor Graph Optimization
In this section, we introduce an implementation of continuous-time trajectory estimation, as proposed in [44, 12]. Generally, fusing multiple heterogeneous sensor observations into a state estimator incorporates different timestamps due to asynchronous measurements and unpredictable delays. In this work, we assume that the state estimator and all measurements have the same timing clock. Compared to the estimated states in continuous time, all sensor observations are sampled and processed in asynchronous timestamps, as illustrated in Fig. 2. We employ GP motion priors that enable a continuous-time trajectory representation. In this way, the construction of a factor graph can be deterministic and time-centric, bypassing asynchronous sensor frequencies and timing issues. We show the general structure of a time-centric factor graph in Fig. 3, where the to-be-estimated state variables \(\boldsymbol{x}_{t}\) are presented in solid line circles on a continuous-time trajectory. The queried states in dashed lines are not explicitly estimated in the optimization procedure, and thus are only queried between two successive state variables using the time offset \(\tau\) to the previous state variable.
Alg. 1 explains one optimization procedure from graph construction to iterative optimization. Assume that the time-centric factor graph is extended with \(n\) new to-be-estimated state variables in each procedure. We extend the graph with \(n\) state variables and create GP motion prior factors that constrain the relative state transitions between two successive state variables. In doing so, the timestamps of all state variables are chosen deterministically. While solving the iterative optimization problem, an initial prediction \(\boldsymbol{x}_{k}^{-}\in\boldsymbol{\mathcal{X}}^{-}\) must be provided for each state variable. These predictions can be acquired using prior motion models (e.g., GP state extrapolation [48]). In this work, we utilize state propagation using IMU measurements to calculate the initial estimate of future states at high frequency.
As new sensor observations are received at different timestamps in parallel to state estimation, we retrieve the cached \(m\) observations from each sensor \(s\in\mathcal{S}\) in a second loop. We define a time threshold \(t_{\mathrm{sync}}\) for state-observation alignment to query the index of related state variables. If state variables can be associated with sensor observations within this threshold, normal sensor factors are added to the graph. Otherwise, we construct the measurement factors by querying a GP interpolated state aligned with the measurement timestamp. In this case, two successive state variables \(\mathbf{x}_{i}\) and \(\mathbf{x}_{j},\ j=i+1\) are obtained with a time offset \(\tau\) between the measurement and the former state \(\mathbf{x}_{i}\).

Figure 2: Continuous-time state estimation with asynchronous measurements. A time offset \(\tau\) can be calculated with respect to a former state variable for each asynchronous measurement.

Figure 3: A general time-centric factor graph. The state variables \(\boldsymbol{x}_{t}\) are created and constrained with GP motion prior factors on time while all asynchronous measurements are fused by querying a state with a time offset \(\tau\) to a former state variable.
After graph construction, we employ a Gauss-Newton-like optimizer to solve the MAP problem [49]. The optimized state \(\mathbf{x}_{k}^{+}\) and marginalized uncertainties \(\mathbf{P}_{k}^{+}\) are returned for further state propagation, as introduced in Sec. V-E.
```
Input : last state id and timestamp pair (x_id, x_ts)
        propagated states x_k in X, k = 1...n
        cached observations o_k^s for each sensor s in S
Output: current state id and timestamp pair (x_id, x_ts)
        optimized states x_k^+ and marginalized uncertainties P_k^+

G <- initGraph(x_0, P_0)
P <- {}                                   // list of state id/timestamp pairs
for k = 1 : n do                          // extend the graph by n state variables
    x_id <- x_id + 1
    x_ts <- updateTimestamp(x_ts)
    G <- G + NewStateVariable(x_id, x_ts, X)
    G <- G + GPMotionFactor(x_id - 1, x_id)
    P <- P + {(x_id, x_ts)}
end for
for each sensor s in S do                 // fuse the cached observations
    for each observation o_k^s, k = 1 : m do
        (x_i_id, x_i_ts, tau, type) <- queryStateInfo(timestamp(o_k^s), P)
        if type is DROPPED then           // measurement lies in the past
            discardMeasurement(o_k^s)
        else if type is SYNCHRONIZED then
            G <- G + SensorFactor(x_i_id, o_k^s)
        else if type is INTERPOLATED then
            G <- G + GPSensorFactor(x_i_id, x_i_id + 1, tau, o_k^s)
        else if type is CACHED then       // measurement lies in the future
            cacheMeasurement(o_k^s)
        end if
    end for
end for
(x_k^+, P_k^+) <- doOptimizationAndMarginalization(G)
return ((x_id, x_ts), x_k^+, P_k^+)
```
**Algorithm 1** Time-centric factor graph optimization
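To make the alignment step of Alg. 1 concrete, a minimal Python sketch of the state-observation classification is given below. This is not the gnssFGO implementation; the helper name `query_state_info` and the \(20\,\mathrm{ms}\) synchronization threshold are illustrative assumptions.

```python
# Minimal sketch (illustrative only) of the state-observation alignment in Alg. 1.
from bisect import bisect_right

def query_state_info(t_meas, state_ts, t_sync=0.02):
    """Classify a measurement timestamp against the ordered state timestamps."""
    if t_meas < state_ts[0] - t_sync:
        return "DROPPED", None, None          # older than the oldest state variable
    if t_meas > state_ts[-1] + t_sync:
        return "CACHED", None, None           # newer than the newest state variable
    j = min(bisect_right(state_ts, t_meas), len(state_ts) - 1)
    i = max(j - 1, 0)
    for idx in (i, j):                        # close enough to an existing state?
        if abs(t_meas - state_ts[idx]) <= t_sync:
            return "SYNCHRONIZED", idx, 0.0
    return "INTERPOLATED", i, t_meas - state_ts[i]   # time offset tau to the former state

state_ts = [0.0, 0.1, 0.2, 0.3]               # deterministically chosen estimation timestamps
for t in (0.1005, 0.143, 0.5, -0.2):
    print(t, query_state_info(t, state_ts))
```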
## IV Mathematical Background
### _Frames_
In this article, we consider the following frames for state representation and illustrate them in Fig. 4:
1. _World Geodetic System (WGS84) frame:_ The WGS84 frame presents the vehicle's coordinates (latitude \(\varphi\), longitude \(\lambda\), and height \(h\)) in a unified geodetic system [50]. In this work, we transform the position into the WGS84 frame for visualization and do not consider projected coordinate systems to avoid coordinate distortions.
2. _Earth-Centered, Earth-Fixed (ECEF) frame:_ The ECEF frame, denoted as \((\cdot)^{e}\), formulates a Cartesian coordinate system at the center of Earth's mass and can be transformed into the WGS84 frame in closed form. We define the pose and velocity of our system in the ECEF frame for the sake of convenience.
3. _Navigation frame:_ As the vehicle is moving in a local tangent plane on Earth, the north-east-down (NED) frame and the east-north-up (ENU) frame are commonly used as navigation frames \((\cdot)^{n}\) to present the vehicle's velocity and orientation in planning and control tasks [17]. In this work, vehicle velocity and rotation are transformed into the frame \((\cdot)^{n}\) for error metrics in experimental studies.
4. _Local World Frame:_ As shown in [2, 32], odometry information acquired by local sensors (e.g., camera or lidars) refers to an arbitrary local tangent frame that is determined on system initialization with respect to an initial vehicle pose in the navigation frame. We denote this frame as a local world frame \((\cdot)^{w}\).
5. _Body Frame:_ We denote the body frame aligned with the IMU sensor center as \((\cdot)^{b}\). The body frame also represents the local pose \(\mathbf{\eta}(t)\) of the GP motion models.
### _Notation_
To present the state variables in different frames, we use \(\mathbf{R}_{b}^{e}\) and \(\mathbf{t}_{b}^{e}\) to denote the rotational and translational parts from frame \(b\) to frame \(e\). This notation is extended as \(\mathbf{R}_{b,t}^{e}\) to represent the states with respect to time \(t\). For motion increments in the same frame, we simplify the notation as \(\Delta\mathbf{t}_{ij}\) to represent the translational offset of two timestamps \(i\) and \(j\). We follow the pose representation \(\mathbf{T}_{b}^{e}=\left[\begin{smallmatrix}\mathbf{R}_{b}^{e}&\mathbf{t}_{b}^{e}\\ \mathbf{0}&1\end{smallmatrix}\right]\in SE(3)\) to calculate the motion increment [51]. For high-dimensional transition matrices in GP motion models (e.g., \(\mathbf{\Lambda}(\tau)\in\mathbb{R}^{18\times 18}\)), we denote the subblocks \(\mathbf{\Lambda}_{mn}\in\mathbb{R}^{6\times 6}\) associated with different state components for linear state querying in (16).
Fig. 4: Coordinate frames used in this work.
### _Continuous-Time Trajectory Representation using GP_
Barfoot et al. [44] originally proposed a continuous-time trajectory representation using Gaussian process regression, which presents an exactly sparse kernel by assuming the system dynamics follow a linear time-varying stochastic differential equation (LTV-SDE):
\[\dot{\mathbf{\gamma}}(t) =\mathbf{A}\mathbf{\gamma}(t)+\mathbf{B}\mathbf{u}(t)+\mathbf{F}\mathbf{w}(t), \tag{1}\] \[\mathbf{w}(t) \sim\mathcal{GP}(\mathbf{0},~{}\mathbf{Q}_{c}\cdot\delta(t-t^{{}^{ \prime}})),\]
where the vector \(\mathbf{\gamma}(t)\) represents a local state variable. The time-varying system matrices are denoted as \(\mathbf{A},~{}\mathbf{B}\) and \(\mathbf{F}\), respectively. The input vector \(\mathbf{u}(t)\) is set to \(\mathbf{0}\). The process noise \(\mathbf{w}(t)\) is given as a zero-mean Gaussian process (GP) with the kernel function formulated with the power spectral density matrix \(\mathbf{Q}_{c}\in\mathbb{R}^{6\times 6}\) and the Dirac delta function, \(\delta\).
In discrete time, this state-space model can furthermore be used to interpolate an arbitrary state at timestamp \(t_{\tau}\) (\(t_{i}<t_{\tau}<t_{j}\)) from the local states \(\mathbf{\gamma}_{i}(t_{i})\) and \(\mathbf{\gamma}_{i}(t_{j})\) using
\[\mathbf{\gamma}_{i}(t_{\tau}) =\mathbf{\Lambda}(t_{\tau})\mathbf{\gamma}_{i}(t_{i})+\mathbf{\Omega}(t_{ \tau})\mathbf{\gamma}_{i}(t_{j}), \tag{2}\]
where
\[\mathbf{\Lambda}(t_{\tau}) =\mathbf{\Phi}(t_{\tau},t_{i})-\mathbf{\Omega}(t_{\tau})\mathbf{\Phi}(t_{j},t_{i}), \tag{3}\] \[\mathbf{\Omega}(t_{\tau}) =\mathbf{Q}_{i,t_{\tau}}\mathbf{\Phi}(t_{j},t_{\tau})^{T}\mathbf{Q}_{i,j}^{-1}. \tag{4}\]
The system transition matrix \(\mathbf{\Phi}\) in (3) and (4) can be defined using a white-noise-on-acceleration (WNOA, a.k.a., constant-velocity) prior, as demonstrated in earlier works [44, 52]. Later, Tang et al. [12] introduced a white-noise-on-jerk (WNOJ) prior that presents third-order system dynamics with the system transition function
\[\mathbf{\Phi}(t,t_{i})=\begin{bmatrix}\mathbf{1}&(t-t_{i})\mathbf{1}&\frac{1}{2}(t-t_{i})^{2}\mathbf{1}\\ \mathbf{0}&\mathbf{1}&(t-t_{i})\mathbf{1}\\ \mathbf{0}&\mathbf{0}&\mathbf{1}\end{bmatrix}. \tag{5}\]
The time-varying covariance matrix \(\mathbf{Q}_{i}(t)\in\mathbb{R}^{18\times 18}\) and its precision matrix \(\mathbf{Q}_{i}^{-1}(t)\) are computed as
\[\mathbf{Q}_{i}(t)=\begin{bmatrix}\frac{1}{20}\Delta t_{i}^{5}\mathbf{Q}_{c}&\frac{1}{8}\Delta t_{i}^{4}\mathbf{Q}_{c}&\frac{1}{6}\Delta t_{i}^{3}\mathbf{Q}_{c}\\ \frac{1}{8}\Delta t_{i}^{4}\mathbf{Q}_{c}&\frac{1}{3}\Delta t_{i}^{3}\mathbf{Q}_{c}&\frac{1}{2}\Delta t_{i}^{2}\mathbf{Q}_{c}\\ \frac{1}{6}\Delta t_{i}^{3}\mathbf{Q}_{c}&\frac{1}{2}\Delta t_{i}^{2}\mathbf{Q}_{c}&\Delta t_{i}\mathbf{Q}_{c}\end{bmatrix}, \tag{6}\]
\[\mathbf{Q}_{i}^{-1}(t)=\begin{bmatrix}720\Delta t_{i}^{-5}\mathbf{Q}_{c}^{-1}&-360 \Delta t_{i}^{-4}\mathbf{Q}_{c}^{-1}&60\Delta t_{i}^{-3}\mathbf{Q}_{c}^{-1}\\ -360\Delta t_{i}^{-4}\mathbf{Q}_{c}^{-1}&192\Delta t_{i}^{-3}\mathbf{Q}_{c}^{-1}&-36 \Delta t_{i}^{-2}\mathbf{Q}_{c}^{-1}\\ 60\Delta t_{i}^{-3}\mathbf{Q}_{c}^{-1}&-36\Delta t_{i}^{-2}\mathbf{Q}_{c}^{-1}&9\Delta t _{i}^{-1}\mathbf{Q}_{c}^{-1}\end{bmatrix}. \tag{7}\]
Compared to other approaches, trajectory representation (interpolation) using Gaussian process regression effectively incorporates physics-driven models to retrieve realistic vehicle motion by scaling the transition function with the time-varying covariance matrix \(\mathbf{Q}\). As the hyper-parameter \(\mathbf{Q}_{c}\) can be tuned for different applications [53], this approach can be extended for nonlinear problems (see Sec. IV-D) and enables more accurate state interpolation [7, 47].
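For illustration, the following NumPy sketch evaluates the WNOJ matrices of (2)-(7) for a single degree of freedom with a scalar \(\mathbf{Q}_{c}\); this scalar simplification is an assumption made only to keep the example short, and the full \(18\times 18\) block case follows analogously.

```python
# Minimal sketch of the WNOJ prior matrices (2)-(7) for one degree of freedom
# with scalar q_c (an illustrative simplification of the block-matrix case).
import numpy as np

def Phi(dt):
    """Transition matrix over an interval dt, cf. Eq. (5)."""
    return np.array([[1.0, dt, 0.5 * dt**2],
                     [0.0, 1.0, dt],
                     [0.0, 0.0, 1.0]])

def Q(dt, qc=1.0):
    """Process-noise covariance over an interval dt, cf. Eq. (6)."""
    return qc * np.array([[dt**5 / 20.0, dt**4 / 8.0, dt**3 / 6.0],
                          [dt**4 / 8.0,  dt**3 / 3.0, dt**2 / 2.0],
                          [dt**3 / 6.0,  dt**2 / 2.0, dt]])

def interpolation_matrices(tau, dt, qc=1.0):
    """Lambda(tau) and Omega(tau) of Eqs. (3)-(4), with t_i = 0 and t_j = dt."""
    Omega = Q(tau, qc) @ Phi(dt - tau).T @ np.linalg.inv(Q(dt, qc))
    Lam = Phi(tau) - Omega @ Phi(dt)
    return Lam, Omega

# Query a local state between two boundary states gamma_i, gamma_j, cf. Eq. (2).
dt, tau = 0.1, 0.043
gamma_i = np.array([0.0, 1.0, 0.2])   # [xi, xi_dot, xi_ddot] at t_i
gamma_j = np.array([0.12, 1.3, 0.1])  # [xi, xi_dot, xi_ddot] at t_j
Lam, Omega = interpolation_matrices(tau, dt)
print(Lam @ gamma_i + Omega @ gamma_j)
```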
### _GP-WNOJ Motion Prior Model_
Following the approach in [12], a GP motion prior for \(SE(3)\) can be defined as
\[\dot{\mathbf{T}}(t) =\mathbf{\varpi}(t)^{\wedge}\mathbf{T}(t), \tag{8}\] \[\dot{\mathbf{\varpi}}(t) =\mathbf{w}(t),\]
where the vehicle pose in the global frame is denoted as \(\mathbf{T}(t)\), which can be calculated as \(\mathbf{T}(t)=\exp\left(\mathbf{\xi}(t)^{\wedge}\right)\) with local pose \(\mathbf{\xi}(t)=[\mathbf{\rho}(t)^{T}~{}\mathbf{\phi}(t)^{T}]^{T}\in\mathbb{R}^{6}\). The vectors \(\mathbf{\rho}(t)\) and \(\mathbf{\phi}(t)\) represent the position and orientation of a local pose (e.g., in the body frame) [54]. A local pose can be converted to \(\mathfrak{se}(3)\) by applying the operator \((\cdot)^{\wedge}\). The operator \((\cdot)^{\vee}\) is the inverse of \((\cdot)^{\wedge}\)[51]. The vector \(\mathbf{\varpi}(t)=[\mathbf{\nu}(t)^{T}~{}\mathbf{\omega}(t)^{T}]^{T}\in\mathbb{R}^{6}\) represents the body-centric velocity. With this motion prior, the state of the GP motion model in a global frame is given as
\[\mathbf{x}(t)=\{\mathbf{T}(t),~{}\mathbf{\varpi}(t),~{}\mathbf{\dot{\varpi}}(t)\}\in SE(3) \times\mathbb{R}^{12}. \tag{9}\]
However, the GP motion prior in (8) cannot be implemented directly using (1) due to nonlinearity of the system dynamics. To address this problem, Anderson and Barfoot [52] showed that a local linear GP prior can be defined between each state-timestamp pair, \(t_{i}\) and \(t_{i+1}\), by transforming the global pose \(\mathbf{T}(t)\) into the local tangent frame, where a local pose \(\mathbf{\xi}(t)\) can be calculated as
\[\mathbf{\xi}_{i}(t)=\ln(\mathbf{T}(t)\mathbf{T}_{t_{i}}^{-1})^{\vee},~{}~{}t_{i}\leq t\leq t _{i+1}, \tag{10}\]
where we consider the pose \(\mathbf{T}_{t_{i}}\) at the timestamp \(t_{i}\) as a fixed parameter while formulating the local pose \(\mathbf{\xi}_{i}(t)\) for an arbitrary pose \(\mathbf{T}(t)\) for \(t>t_{i}\).
Because the motion between state-timestamp pairs, which are usually associated with high-frequency measurement timestamps (e.g., lidar at \(10\,\mathrm{Hz}\)), is generally small, this local GP prior approximately represents a linear, time-invariant (LTI) SDE, which can be derived from (1) by assuming the system matrices remain constant. Thus, a local state variable of the GP-WNOJ prior for \(SE(3)\) can be defined as
\[\mathbf{\gamma}(t)=[\mathbf{\xi}(t)^{T}~{}\dot{\mathbf{\xi}}(t)^{T}~{}\ddot{\mathbf{\xi}}(t)^{T}] ^{T} \tag{11}\]
and propagated using (2) to (4). The time derivatives of the local pose can be calculated as
\[\dot{\mathbf{\xi}}(t) =\mathbf{\mathcal{J}}(\mathbf{\xi}_{i}(t))^{-1}\mathbf{\varpi}(t), \tag{12}\] \[\ddot{\mathbf{\xi}}(t) =-\frac{1}{2}(\mathbf{\mathcal{J}}(\mathbf{\xi}(t))^{-1}\mathbf{\varpi}(t))^{\curlywedge}\mathbf{\varpi}(t)+\mathbf{\mathcal{J}}(\mathbf{\xi}(t))^{-1}\dot{\mathbf{\varpi}}(t), \tag{13}\]
where the matrix \(\mathbf{\mathcal{J}}\) is the left Jacobian of \(SE(3)\) [51]. To calculate \(\frac{d(\mathbf{\mathcal{J}}^{-1})}{dt}\) in closed form for (13), we approximately formulate \(\mathbf{\mathcal{J}}^{-1}\approx\mathbf{1}-\frac{1}{2}\mathbf{\xi}^{\curlywedge}\) [12]. The operator \((\cdot)^{\curlywedge}\) represents the adjoint of \(\mathbf{\xi}^{\wedge}\in\mathfrak{se}(3)\) [51], which can be calculated as
\[\mathbf{\xi}^{\curlywedge}=\begin{bmatrix}\mathbf{\rho}\\ \mathbf{\phi}\end{bmatrix}^{\curlywedge}=\begin{bmatrix}\mathbf{\phi}^{\wedge}&\mathbf{\rho}^{\wedge}\\ \mathbf{0}&\mathbf{\phi}^{\wedge}\end{bmatrix}. \tag{14}\]
Because the left Jacobian requires several matrix calculations, it can be approximated as an identity matrix \(\mathbf{1}\) over small intervals to improve the computation efficiency4[55].
Given a local state variable that represents the origin system state for each state-timestamp pair, we can retrieve the WNOJ motion model for two successive local state variables in the local frame as
\[\begin{split}\mathbf{\gamma}_{i}(t_{i})&=[\mathbf{0}^{T}\ \mathbf{\varpi}_{i}^{T}\ \dot{\mathbf{\varpi}}_{i}^{T}]^{T},\\ \mathbf{\gamma}_{i}(t_{i+1})&=\begin{bmatrix}\ln(\mathbf{T}_{i+1,i})^{\vee}\\ \mathbf{\mathcal{J}}_{i+1}^{-1}\mathbf{\varpi}_{i+1}\\ -\frac{1}{2}(\mathbf{\mathcal{J}}_{i+1}^{-1}\mathbf{\varpi}_{i+1})^{\curlywedge}\mathbf{\varpi}_{i+1}+\mathbf{\mathcal{J}}_{i+1}^{-1}\dot{\mathbf{\varpi}}_{i+1}\end{bmatrix}.\end{split} \tag{15}\]
Using the GP-WNOJ prior, a state at an arbitrary time \(\tau\in(i,i+1)\) can be queried as
\[\begin{split}\mathbf{T}_{\tau}&=\exp\Big\{\big[\mathbf{\Lambda}_{12}(\tau)\mathbf{\varpi}_{i}+\mathbf{\Lambda}_{13}(\tau)\dot{\mathbf{\varpi}}_{i}+\mathbf{\Omega}_{11}(\tau)\ln(\mathbf{T}_{i+1,i})^{\vee}\\ &\quad+\mathbf{\Omega}_{12}(\tau)\mathbf{\mathcal{J}}_{i+1}^{-1}\mathbf{\varpi}_{i+1}\\ &\quad+\mathbf{\Omega}_{13}(\tau)\big(-\tfrac{1}{2}(\mathbf{\mathcal{J}}_{i+1}^{-1}\mathbf{\varpi}_{i+1})^{\curlywedge}\mathbf{\varpi}_{i+1}+\mathbf{\mathcal{J}}_{i+1}^{-1}\dot{\mathbf{\varpi}}_{i+1}\big)\big]^{\wedge}\Big\}\mathbf{T}_{i},\end{split} \tag{16}\]
where \(\mathbf{\Lambda}\) and \(\mathbf{\Omega}\) are the interpolation matrices obtained from (2) to (6).
**Remark 1**.: _As discussed in [12], representing a realistic system transition using the GP motion priors requires proper tuning of the power spectral density matrix \(\mathbf{Q}_{c}\). In this work, we assume that \(\mathbf{Q}_{c}\) is a constant diagonal matrix defined as \(\mathbf{Q}_{c}=\mathrm{diag}(\mathbf{q}_{c})\) with a 6D hyper-parameter \(\mathbf{q}_{c}\). For a fair evaluation of the GP-WNOA and GP-WNOJ models, we do not tune this hyper-parameter explicitly in each test sequence. We formulate \(\mathbf{Q}\) in both GP models with the same parameterization for pose weighting._
### _Measurement Models_
#### IV-E1 GNSS Observations
Generally, a single antenna GNSS receiver can provide both position, velocity, and time (PVT) solutions and raw observations. As the pose and velocity of the PVT solution can be directly associated with the state variables in the FGO, we only present the measurement models for the raw GNSS observations: pseudorange \(\rho\) and Doppler shift (a.k.a., deltarange) \(\nu\).
In localization approaches that tightly fuse the GNSS observations, pseudorange and deltarange are commonly used and well studied [17]. The pseudorange \(\rho\) represents a geometric distance between the phase center of the GNSS antenna and the associated satellite, which is calibrated in a pre-processing step to eliminate satellite orbit bias and atmospheric delays, as shown in [56, 7]. After pre-processing, the pseudorange can be calculated with respect to the antenna position as
\[\tilde{\rho}_{k}=\left\|\mathbf{t}_{a}^{e}-\mathbf{t}_{\mathrm{sat},k}^{e}\right\|_{2 }+c_{b}+\delta\rho_{M}+w_{\rho}, \tag{17}\]
where the vector \(\mathbf{t}_{a}^{e}\) and \(\mathbf{t}_{\mathrm{sat},k}^{e}\) represent the positions of GNSS antenna and \(k\)-th satellite in ECEF frame, respectively. The variable \(c_{b}\) represents the receiver clock bias for the corresponding satellite source. The measurement noise is denoted as \(w_{\rho}\). The multi-path error \(\delta\rho_{M}\) is neglected in this work. We filter out all GNSS observations from satellites with an elevation angle less than \(15^{\circ}\)5.
Footnote 5: This is an ad-hoc choice, generally used in ground vehicle navigation.
The deltarange \(\nu\), or Doppler velocity, is measured as the internal carrier frequency change of the GNSS receiver while moving relative to the corresponding satellite. With this observation, the vehicle velocity concerning the satellite velocity can be represented as
\[\tilde{\nu}_{k}=(\mathbf{e}_{\mathrm{sat},k}^{e})^{T}(\mathbf{v}_{\mathrm{sat},k}^{e }-\mathbf{v}_{a}^{e})+c_{d}+w_{\nu,k}. \tag{18}\]
In (18), the unit vector \(\mathbf{e}_{\mathrm{sat},k}^{e}\) represents the direction from the antenna to the \(k\)-th satellite. We denote the satellite and antenna velocities in the ECEF frame as \(\mathbf{v}_{\mathrm{sat},k}^{e}\) and \(\mathbf{v}_{a}^{e}\), respectively. The receiver clock drift and measurement noise are given as \(c_{d}\) and \(w_{\nu,k}\).
#### Iii-E2 Lidar Odometry
We adapt the feature extraction and matching methods from a feature-based lidar odometry [2, 13] to obtain the relative motion increments between two laser keyframes. The coordinates of raw lidar points acquired in different timestamps are re-calibrated using the IMU measurement to the original timestamp of the lidar scan. We classify the calibrated points into edge and planar features, \(\mathbf{F}_{t}=\{\mathbf{F}_{t}^{e},\ \mathbf{F}_{t}^{p}\}\), based on the smoothness metric shown in [13, 57]. In scan registration, all \(k\) features in \(\mathbf{F}_{t+1}\) of the current scan are associated with pose priors \(\mathbf{T}_{t+1,1:k}^{w}\) and used to find the best transformation \(\Delta\mathbf{T}_{t,t+1}^{w}\) from the last laser scan by solving an optimization problem that takes the distance between the corresponding features in \(\mathbf{F}_{t}\) using a Gauss-Newton algorithm.
In [2], a lidar-centric SLAM approach is presented that optionally fuses GPS positioning. This approach can only present accurate state estimates if the scan registration converges and sufficient global references (e.g., GPS position or loop closure) are available. In contrast to [2], we query the vehicle states at scan timestamps from a previously built time-centric graph and integrate the transformation \(\Delta\widetilde{\mathbf{T}}_{t,t+1}^{w}\) as between-pose constraints. After the graph optimization, we query the optimized states again using the GP motion model and update lidar keyframe poses in frame \(w\) using the following transformation
\[\mathbf{T}_{I,t}^{w}=\mathbf{T}_{I,\mathrm{anc}}^{e,-1}\mathbf{T}_{I,t}^{e}, \tag{19}\]
where the transformation matrices \(\mathbf{T}_{I,t}^{e}\) and \(\mathbf{T}_{I,t}^{w}\) denote lidar poses in frame \(e\) and frame \(w\), respectively. As lidar odometry requires a state-space representation in a local-world (a.k.a., local-tangent) frame \(w\) where the \(z\)-axis is gravity aligned, we query an anchor pose \(\mathbf{T}_{I,\mathrm{anc}}^{e}=\left[\begin{smallmatrix}\mathbf{R}_{\mathrm{anc}}^{e}&\mathbf{t}_{\mathrm{anc}}^{e}\\ \mathbf{0}&1\end{smallmatrix}\right]\) of the lidar sensor at the first scan and initialize the local-world frame of the lidar odometry by setting the anchor pose as its origin. In contrast to [32], a coarse orientation estimation is unnecessary in our work to align the local-world frame and the navigation frame because the vehicle orientation is given.
#### IV-E3 Optical Speed Sensor
We employ a high-grade vehicle optical speed sensor that provides unbiased 2D velocity observations \(\tilde{\mathbf{v}}_{t}^{b}\) in the body frame at \(100\,\mathrm{Hz}\). The 2D velocity observations can be associated with the vehicle velocity in the state vector using
\[\tilde{\mathbf{v}}_{t}^{b}=\begin{bmatrix}\tilde{v}_{t,x}^{b}\\ \tilde{v}_{t,y}^{b}\end{bmatrix}=\begin{bmatrix}1&0&0\\ 0&1&0\end{bmatrix}\cdot\mathbf{R}_{e}^{b}\mathbf{v}_{b}^{e}, \tag{20}\]
where the vector \(\tilde{\mathbf{v}}_{t}^{b}\) represents the observed 2D velocity components in frame \(b\), which can be evaluated with the vehicle velocity variable \(\mathbf{v}_{b}^{e}\) transformed with the inverse rotation matrix \(\mathbf{R}_{e}^{b}\) back to frame \(b\).
## V FGO for Vehicle Localization
This section presents our implementation of the proposed GNSS-FGO for two sensor fusion schemes. In loosely coupled fusion, we obtain a baseline trajectory for our datasets by fusing the GNSS-PVT solution with the observed 2D vehicle velocity from a high-grade speed sensor and the lidar odometry. To demonstrate the benefit of fusing raw GNSS observations for vehicle localization, we propose a tightly coupled fusion of raw GNSS observations with IMU measurements and lidar odometry, which is evaluated against the baseline trajectory. In the following, we introduce all probabilistic factor formulations and the proposed factor graph structures.
### _State Variables_
The state variable at timestamp \(t\) in this work is defined as
\[\mathbf{x}_{t}\triangleq\{\mathbf{T}_{b,t}^{e},\ \mathbf{v}_{b,t}^{e},\ \mathbf{b}_{b,t}^{\mathrm{a}},\ \mathbf{b}_{b,t}^{\mathrm{g}},\ \mathbf{c}_{t}^{\mathrm{r}}\}. \tag{21}\]
We estimate the vehicle pose \(\mathbf{T}_{b,t}^{e}\in SE(3)\) and 6D velocity \(\mathbf{v}_{b}^{e}\) in frame \(e\). The vectors \(\mathbf{b}_{b,t}^{\rm{a}}\) and \(\mathbf{b}_{b,t}^{\rm{g}}\) denote the 3D biases of the accelerometer and gyroscope, respectively. The 2D vector \(\mathbf{c}_{t}^{\rm{r}}=[c_{b,t}\ c_{d,t}]^{T}\) represents the GNSS receiver clock bias \(c_{b,t}\) and drift \(c_{d,t}\), which is only estimated by the tightly coupled fusion of raw GNSS observations.
**Remark 2**.: _Unlike [12], we do not estimate 6D accelerations in GP motion models to reduce the dimension of the state vector. Instead, we consider the vehicle accelerations measured by the IMU as inputs to the WNOJ model._
### _Factor Formulations_
#### V-B1 Pre-Integrated IMU Factor
In graph-optimization-based state estimation approaches, the IMU pre-integration, introduced in [58, 59], is generally utilized to integrate high-frequency IMU measurements as between-state factors for the optimization procedures running at a lower rate. The pre-integrated IMU measurements represent the relative motion increments on manifold. These relative motion increments can be assumed unchanged while re-linearizing the consecutive state variables in the optimization iterations, resulting in efficient computation. Following [59], we define the error function of the IMU factor between two consecutive state variables at timestamps \(t_{i},\ t_{j}\) as
\[\left\|\mathbf{e}_{ij}^{\rm{imu}}\right\|^{2}=\left\|[\mathbf{r}_{\Delta\mathbf{R}_{ij}}^{ T}\ \mathbf{r}_{\Delta\mathbf{v}_{ij}}^{T}\ \mathbf{r}_{\Delta\mathbf{t}_{ij}}^{T}]^{T}\right\|_{\mathbf{\Sigma}^{\rm{imu}}}^{2}, \tag{22}\]
where
\[\mathbf{r}_{\Delta\mathbf{R}_{ij}} =\ln\big((\Delta\tilde{\mathbf{R}}_{ij}(\mathbf{b}_{i}^{\mathrm{g}}))^{T}\mathbf{R}_{i}^{T}\mathbf{R}_{j}\big)^{\vee}, \tag{23}\] \[\mathbf{r}_{\Delta\mathbf{v}_{ij}} =\mathbf{R}_{i}^{T}(\mathbf{v}_{j}-\mathbf{v}_{i}-\mathbf{g}\Delta t_{ij})-\Delta\tilde{\mathbf{v}}_{ij}(\mathbf{b}_{i}^{\mathrm{a}},\ \mathbf{b}_{i}^{\mathrm{g}}), \tag{24}\] \[\mathbf{r}_{\Delta\mathbf{t}_{ij}} =\mathbf{R}_{i}^{T}(\mathbf{t}_{j}-\mathbf{t}_{i}-\mathbf{v}_{i}\Delta t_{ij}-\frac{1}{2}\mathbf{g}\Delta t_{ij}^{2})-\Delta\tilde{\mathbf{t}}_{ij}(\mathbf{b}_{i}^{\mathrm{a}},\ \mathbf{b}_{i}^{\mathrm{g}}). \tag{25}\]
In (23) to (25), we omit the bias derivatives that can be ignored between two state variables. The motion increments \(\{\Delta\tilde{\mathbf{R}}_{ij},\ \Delta\tilde{\mathbf{v}}_{ij},\ \Delta\tilde{\mathbf{t}}_{ij}\}\) are provided by the IMU pre-integration with
\[\Delta\tilde{\mathbf{R}}_{ij} =\prod_{k=i}^{j-1}\exp\big(((\tilde{\mathbf{\omega}}_{k}-\mathbf{b}_{i}^{\rm g})\Delta t)^{\wedge}\big), \tag{26}\] \[\Delta\tilde{\mathbf{v}}_{ij} =\sum_{k=i}^{j-1}\Delta\tilde{\mathbf{R}}_{ik}(\tilde{\mathbf{a}}_{k}-\mathbf{b}_{i}^{\rm a})\Delta t,\] (27) \[\Delta\tilde{\mathbf{t}}_{ij} =\sum_{k=i}^{j-1}\Big{[}\Delta\tilde{\mathbf{v}}_{ik}\Delta t+\frac{1}{2}\Delta\tilde{\mathbf{R}}_{ik}(\tilde{\mathbf{a}}_{k}-\mathbf{b}_{i}^{\rm a})\Delta t^{2}\Big{]}, \tag{28}\]
where the raw vehicle acceleration \(\tilde{\mathbf{a}}\) and rotation rate \(\tilde{\mathbf{\omega}}\) from the IMU are integrated. The pre-defined noise parameters \(\{\mathbf{\eta}_{\rm{a}},\ \mathbf{\eta}_{\rm{g}}\}\) are propagated to acquire the covariance matrix \(\mathbf{\Sigma}^{\rm{imu}}\)[58]. The gravity vector is updated according to the current position in the \(e\) frame for each pre-integration.
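A minimal NumPy sketch of the increments (26)-(28) is given below for illustration; it assumes a constant IMU sampling interval and omits the noise propagation and the bias Jacobians of [58, 59], so it is a conceptual sketch rather than the full pre-integration, and all function names are ours.

```python
import numpy as np


def hat(w):
    """Skew-symmetric matrix (the ^ operator) of a 3-vector."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])


def so3_exp(phi):
    """Exponential map from so(3) to SO(3) via Rodrigues' formula."""
    angle = np.linalg.norm(phi)
    if angle < 1e-12:
        return np.eye(3) + hat(phi)
    axis_hat = hat(phi / angle)
    return np.eye(3) + np.sin(angle) * axis_hat + (1.0 - np.cos(angle)) * axis_hat @ axis_hat


def preintegrate(gyro, acc, b_g, b_a, dt):
    """Relative motion increments (26)-(28) between timestamps i and j.

    gyro, acc: (N, 3) raw IMU samples collected between t_i and t_j
    b_g, b_a : gyroscope and accelerometer biases at timestamp i
    dt       : constant IMU sampling interval
    """
    dR = np.eye(3)    # Delta R_ij
    dv = np.zeros(3)  # Delta v_ij
    dp = np.zeros(3)  # Delta t_ij (translation increment)
    for w_k, a_k in zip(gyro, acc):
        a_corr = a_k - b_a
        dp = dp + dv * dt + 0.5 * (dR @ a_corr) * dt**2  # (28), uses Delta R_ik and Delta v_ik
        dv = dv + (dR @ a_corr) * dt                     # (27)
        dR = dR @ so3_exp((w_k - b_g) * dt)              # (26)
    return dR, dv, dp
```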
As in [58], we estimate the accelerometer and gyroscope biases with the Brownian motion model by formulating the bias error function as
\[\left\|\mathbf{e}_{ij}^{b}\right\|^{2}=\left\|\mathbf{b}_{j}^{\rm{a}}-\mathbf{b}_{i}^{\rm{a }}\right\|_{\mathbf{\Sigma}^{\rm{a}}}^{2}+\left\|\mathbf{b}_{j}^{\rm{g}}-\mathbf{b}_{i}^{ \rm{g}}\right\|_{\mathbf{\Sigma}^{\rm{g}}}^{2}. \tag{29}\]
#### V-B2 Between-Pose Factor
For the relative odometry observations \(\Delta\tilde{\mathbf{T}}_{i,j}^{e}=\{\Delta\tilde{\mathbf{R}}_{i,j}^{e}\ \Delta\tilde{\mathbf{p}}_{i,j}^{e}\}\), we follow the original implementation in [60] and formulate the between-pose factor as
\[\left\|\mathbf{e}_{i,j}^{\rm{bp}}\right\|^{2}=\left\|\ln\big(\Delta\tilde{\mathbf{T}}_{i,j}^{e,-1}\,\mathbf{T}_{i}^{e,-1}\mathbf{T}_{j}^{e}\big)^{\vee}\right\|_{\mathbf{\Sigma}^{\rm{bp}}}^{2}, \tag{30}\]
where the pose \(\mathbf{T}_{i}^{e}\) and \(\mathbf{T}_{j}^{e}\) are queried using timestamps associated with two successive lidar scans.
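For illustration, the residual inside (30) can be evaluated with a generic matrix logarithm as sketched below; a dedicated closed-form SE(3) logarithm is normally preferred in practice, the twist ordering [translation, rotation] is our convention, and the helper names are ours.

```python
import numpy as np
from scipy.linalg import logm


def se3_vee(Xi):
    """vee operator: 4x4 twist matrix -> 6-vector [rho, phi]."""
    return np.array([Xi[0, 3], Xi[1, 3], Xi[2, 3],
                     Xi[2, 1], Xi[0, 2], Xi[1, 0]])


def between_pose_residual(T_i, T_j, dT_meas):
    """Residual of the between-pose factor (30).

    T_i, T_j : estimated poses as 4x4 homogeneous matrices
    dT_meas  : measured relative odometry Delta T_ij as a 4x4 homogeneous matrix
    The residual is zero when T_i^{-1} T_j equals the measurement; logm is
    only well defined away from the 180-degree rotation singularity.
    """
    E = np.linalg.inv(dT_meas) @ np.linalg.inv(T_i) @ T_j
    return np.real(se3_vee(logm(E)))
```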
#### V-B3 Velocity Factor
We use the 2D observations \(\tilde{\mathbf{v}}_{t}^{b}\) to formulate the navigation velocity factor. As the measured velocity can be directly associated with the velocity in state variables, as denoted in (20), we formulate the error function for the velocity observations considering the lever arm \(\mathbf{t}^{b,\rm{vel}}\) from the body frame to the sensor center as
\[\left\|\mathbf{e}_{i}^{\rm{vel}}\right\|^{2}=\left\|\begin{bmatrix}1&0&0\\ 0&1&0\end{bmatrix}\cdot(\mathbf{R}_{e,i}^{b}\mathbf{v}_{b,i}^{e}+\mathbf{\omega}_{i}^{b \wedge}\mathbf{t}^{b,\rm{vel}})-\tilde{\mathbf{v}}_{i}^{b}\right\|_{\mathbf{\Sigma}^{\rm{vel} }}^{2}. \tag{31}\]
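A compact sketch of the error inside (31) is shown below; the variable names are illustrative and not part of our released code.

```python
import numpy as np


def velocity_residual(R_eb, v_e, omega_b, lever_arm, v_meas_2d):
    """2D velocity residual of (31).

    R_eb      : rotation from frame e to frame b (3x3), i.e. R^b_e
    v_e       : linear velocity of the body in frame e (3,)
    omega_b   : angular rate in the body frame (3,)
    lever_arm : t^{b,vel}, offset from the body origin to the speed sensor (3,)
    v_meas_2d : measured 2D velocity (2,)
    """
    v_sensor_b = R_eb @ v_e + np.cross(omega_b, lever_arm)  # velocity at the sensor, in frame b
    return v_sensor_b[:2] - v_meas_2d
```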
#### V-B4 GNSS-PVT Factor
We propose a generalized implementation of the GNSS-PVT factor for the observed antenna position \(\tilde{\mathbf{t}}_{\rm{ant}}^{e}\) and the velocity \(\tilde{\mathbf{v}}_{\rm{ant}}^{n}\). Taking into account the lever arm \(\mathbf{t}_{\rm{ant}}^{b}\) from the IMU center to the phase center of the GNSS antenna, we calculate the antenna position at timestamp \(t_{i}\) as \(\mathbf{t}_{{\rm{ant}},i}^{e}=\mathbf{t}_{b,i}^{e}+\mathbf{R}_{b,i}^{e}\mathbf{t}_{\rm{ant}}^{b}\) and velocity as \(\mathbf{v}_{{\rm{ant}},i}^{e}=\mathbf{v}_{b,i}^{e}+\mathbf{R}_{b,i}^{e}(\mathbf{\omega}_{i}^{b })^{\wedge}\mathbf{t}_{\rm{ant}}^{b}\). Thus, the error function can be derived as
\[\left\|\mathbf{e}_{i}^{\rm{pv}}\right\|^{2}=\left\|[\mathbf{r}_{t_{i}}^{T}\ \mathbf{r}_{v_{i}}^{T}]^{T}\right\|_{\mathbf{\Sigma}^{\rm{pv}}}^{2}, \tag{32}\]
with
\[\mathbf{r}_{t_{i}} =\mathbf{t}_{{\rm{ant}},i}^{e}-\tilde{\mathbf{t}}_{{\rm{ant}},i}^{e}, \tag{33}\] \[\mathbf{r}_{\mathbf{v}_{i}} =\mathbf{R}_{e,i}^{n}\mathbf{v}_{{\rm{ant}},i}^{e}-\tilde{\mathbf{v}}_{{\rm{ant}},i}^{n}, \tag{34}\]
where the rotation matrix \(\mathbf{R}_{e,i}^{n}\) can be calculated using the direction cosine matrix with the current geodetic coordinate \((\varphi_{i},\ \lambda_{i})\) as
\[\mathbf{R}_{e,i}^{n}=\begin{bmatrix}-\sin\lambda_{i}&\cos\lambda_{i}&0\\ -\sin\varphi_{i}\cos\lambda_{i}&-\sin\varphi_{i}\sin\lambda_{i}&\cos\varphi_{i}\\ \cos\varphi_{i}\cos\lambda_{i}&\cos\varphi_{i}\sin\lambda_{i}&\sin\varphi_{i}\end{bmatrix}. \tag{35}\]
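The following sketch assembles the residuals (33)-(34) together with the ENU direction cosine matrix (35); the function names are ours, the inputs are assumed to be expressed in consistent SI units, and the snippet is an illustration rather than our implementation.

```python
import numpy as np


def R_e_to_n(lat, lon):
    """Direction cosine matrix from the ECEF frame e to the local ENU frame n, cf. (35)."""
    sl, cl = np.sin(lon), np.cos(lon)
    sp, cp = np.sin(lat), np.cos(lat)
    return np.array([[-sl,      cl,       0.0],
                     [-sp * cl, -sp * sl, cp],
                     [cp * cl,  cp * sl,  sp]])


def pvt_residual(t_b_e, R_b_to_e, v_b_e, omega_b, lever_arm,
                 t_meas_e, v_meas_n, lat, lon):
    """Position and velocity residuals (33)-(34) of the GNSS-PVT factor."""
    t_ant_e = t_b_e + R_b_to_e @ lever_arm                     # antenna position in frame e
    v_ant_e = v_b_e + R_b_to_e @ np.cross(omega_b, lever_arm)  # antenna velocity in frame e
    r_t = t_ant_e - t_meas_e
    r_v = R_e_to_n(lat, lon) @ v_ant_e - v_meas_n
    return np.hstack([r_t, r_v])
```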
#### V-B5 Pseudorange and Deltarange (PrDr) Factor
We derive the error function for the pre-processed pseudorange and deltarange observations with (17) and (18) as
\[\left\|\mathbf{e}_{i}^{\rm PrDr}\right\|^{2}=\left\|[r_{i}^{\rm Pr}\ r_{i}^{\rm Dr }]^{T}\right\|_{\mathbf{\Sigma}^{\rm grDr}}^{2}, \tag{36}\]
where
\[r_{i}^{\rm Pr} =\left\|\mathbf{t}_{\rm ant,i}^{e}-\mathbf{t}_{s,i}^{e,k}\right\|+c_{b,i} -\tilde{\rho}_{i}^{k}, \tag{37}\] \[r_{i}^{\rm Dr} =\mathbf{u}_{\rm ant,i}^{s,T}\left(\mathbf{v}_{\rm ant,i}^{e}-\mathbf{v}_{s,i }^{e,k}\right)+c_{d,i}-\tilde{\nu}_{i}^{k}. \tag{38}\]
In (37) and (38), the vectors \(\mathbf{t}_{s,i}^{e,k}\) and \(\mathbf{v}_{s,i}^{e,k}\) are the position and velocity of \(k\)-th satellite in frame \(e\), respectively. The unit vector \(\mathbf{u}_{\rm ant,i}^{s,k}\) denotes the direction from the antenna to the \(k\)-th satellite. The receiver clock bias \(c_{b,i}\) and drift \(c_{d,i}\) are also evaluated in (36).
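A minimal sketch of the residuals (37)-(38) for a single satellite is given below; the satellite position and velocity are assumed to come from the pre-processing stage, and all names are illustrative.

```python
import numpy as np


def prdr_residual(t_ant_e, v_ant_e, t_sat_e, v_sat_e,
                  clock_bias, clock_drift, pr_meas, dr_meas):
    """Pseudorange and deltarange residuals (37)-(38) for one satellite."""
    los = t_sat_e - t_ant_e                                  # line of sight: antenna -> satellite
    geometric_range = np.linalg.norm(los)
    u = los / geometric_range                                # unit vector u^s_ant
    r_pr = geometric_range + clock_bias - pr_meas            # (37)
    r_dr = u @ (v_ant_e - v_sat_e) + clock_drift - dr_meas   # (38)
    return np.array([r_pr, r_dr])
```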
We consider a scaled Carrier-to-Noise ratio \((C/N_{0})\) with hyper-parameters \(\lambda_{\rho}\) and \(\lambda_{\nu}\) to represent the variances of the pseudorange and deltarange observations, which are given as
\[\eta_{\rho}^{2}=\lambda_{\rho}10^{-\frac{C/N_{0}}{10}},\quad\eta_{\nu}^{2}=\lambda_{\nu}10^{-\frac{C/N_{0}}{10}}. \tag{39}\]
Due to the strong corruption of GNSS observations in urban areas, we use an m-estimator [61] to enhance the robustness of the optimization by reformulating the error function \(\mathbf{e}_{i}^{\rm PrDr}\) as
\[\hat{\mathbf{e}}_{i}^{\rm PrDr}=\phi(\mathbf{e}_{i}^{\rm PrDr}), \tag{40}\]
where the robust error formulation \(\phi(\cdot)\) can be defined with different loss functions such as _Huber_ or _Cauchy_[62].
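The variance model (39) and one possible choice of \(\phi(\cdot)\) in (40) can be sketched as follows; the Huber tuning constant and the \(\lambda\) value in the example are purely illustrative and not the values used in our experiments.

```python
import numpy as np


def cn0_sigma(cn0_dbhz, lam):
    """Standard deviation derived from the scaled C/N0 model (39)."""
    return np.sqrt(lam * 10.0 ** (-cn0_dbhz / 10.0))


def huber_weight(residual, sigma, k=1.345):
    """One possible m-estimator phi(.) in (40): Huber down-weighting of a whitened residual."""
    z = abs(residual) / sigma
    return 1.0 if z <= k else k / z


# example: down-weight a 5 m pseudorange residual observed at 30 dB-Hz
# (the lambda value below is purely illustrative)
sigma_pr = cn0_sigma(30.0, lam=1.0e4)
weight = huber_weight(5.0, sigma_pr)
```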
#### V-B6 GNSS Receiver Clock Error Factor
In the tight coupling of the raw GNSS observations, the unknown receiver clock bias and drift (cbd) are estimated in the state variable by assuming a constant-drift clock model, which is fused through the error function
\[\left\|\mathbf{e}_{i}^{\rm cbd}\right\|^{2}=\left\|\begin{bmatrix}1 &\Delta t\\ 0&1\end{bmatrix}\begin{bmatrix}c_{b,i-1}\\ c_{d,i-1}\end{bmatrix}-\begin{bmatrix}c_{b,i}\\ c_{d,i}\end{bmatrix}\right\|_{\mathbf{\Sigma}^{\rm chd}}^{2}. \tag{41}\]
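A direct transcription of the clock-error residual in (41); the variable names are illustrative.

```python
import numpy as np


def clock_residual(c_prev, c_curr, dt):
    """Residual of the constant-drift receiver clock model (41).

    c_prev, c_curr: arrays [clock bias, clock drift] at timestamps i-1 and i
    dt            : time difference between the two states
    """
    F = np.array([[1.0, dt],
                  [0.0, 1.0]])
    return F @ c_prev - c_curr
```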
#### V-B7 GP-WNOJ Motion Prior Factor
We implement the GP-WNOJ motion model as between-state factors, similar to [46]. The error function was originally given in [12] using (15). We summarize this error function for convenience as
\[\left\|\mathbf{e}_{ij}^{\rm gp}\right\|^{2}=\left\|[\mathbf{r}_{\Delta\mathbf{\gamma}_{ij}}^{T}\ \mathbf{r}_{\Delta\mathbf{\varpi}_{ij}}^{T}]^{T}\right\|_{\mathbf{\Sigma}^{\rm gp}}^{2}, \tag{42}\]
where
\[\mathbf{r}_{\Delta\mathbf{\gamma}_{ij}} =\ln(\mathbf{T}_{i}^{-1}\mathbf{T}_{j})^{\vee}-(t_{j}-t_{i})\mathbf{\varpi}_{i}-\frac{1}{2}(t_{j}-t_{i})^{2}\dot{\mathbf{\varpi}}_{i}, \tag{43}\] \[\mathbf{r}_{\Delta\mathbf{\varpi}_{ij}} =\mathbf{\mathcal{J}}_{j,i}^{-1}\mathbf{\varpi}_{j}-\mathbf{\varpi}_{i}-(t_{j}-t_{i})\dot{\mathbf{\varpi}}_{i}. \tag{44}\]
As introduced in Sec. V-A, we used the measured accelerations of the IMU in our GP motion models. Thus, only the 6D pose and the 6D velocity are evaluated in GP-WNOJ motion factors, so that \(\mathbf{e}_{ij}^{\rm gp}\in\mathbb{R}^{12}\). The analytical Jacobians of the GP motion models can be found in [12, 63].
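The sketch below illustrates the structure of the residuals (43)-(44); it uses a generic matrix logarithm for \(\ln(\cdot)^{\vee}\) and approximates the inverse Jacobian \(\mathbf{\mathcal{J}}_{j,i}^{-1}\) by the identity, which holds only for small relative increments, so it is a conceptual illustration rather than our implementation (see [12, 63] for the exact Jacobians). The twist ordering and names are ours.

```python
import numpy as np
from scipy.linalg import logm


def se3_vee(Xi):
    """vee operator: 4x4 twist matrix -> 6-vector [rho, phi]."""
    return np.array([Xi[0, 3], Xi[1, 3], Xi[2, 3],
                     Xi[2, 1], Xi[0, 2], Xi[1, 0]])


def wnoj_residual(T_i, T_j, w_i, w_j, a_i, dt):
    """Simplified GP-WNOJ prior residuals, cf. (43)-(44).

    T_i, T_j : poses at t_i and t_j (4x4 homogeneous matrices)
    w_i, w_j : 6D body velocities (twists)
    a_i      : 6D acceleration input built from the IMU measurement at t_i
    dt       : t_j - t_i
    Approximation: the inverse Jacobian J^{-1}_{j,i} is replaced by the
    identity, which is reasonable only for small relative increments.
    """
    xi = np.real(se3_vee(logm(np.linalg.inv(T_i) @ T_j)))  # ln(T_i^{-1} T_j)^vee
    r_pose = xi - dt * w_i - 0.5 * dt**2 * a_i             # (43)
    r_vel = w_j - w_i - dt * a_i                           # (44) with J^{-1} ~ I
    return np.hstack([r_pose, r_vel])
```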
### _Loosely Coupled FGO_
Although the loosely coupled fusion with GNSS and IMU measurements is shown to be less performant compared to tight coupling [6], we implemented a loosely coupled fusion of sensor observations to i) propose a baseline trajectory by integrating a high-grade speed sensor for our dataset in urban areas, where an RTK-fixed GNSS-PVA solution is unreliable; ii) evaluate the loosely and tightly coupled fusion for GNSS-based vehicle localization in challenging areas; iii) demonstrate the flexibility and scalability of the proposed method.
The proposed factor graph is shown in Fig. 5. The states \(\mathbf{x}_{1:t}\) are created deterministically on the graph independently of any measurement. If a measurement cannot be associated with any state variable, a state \(\hat{\mathbf{x}}_{i+\tau}\) between two state variables \(\hat{\mathbf{x}}_{i}\) and \(\hat{\mathbf{x}}_{i+1}\) (where \(0<\tau<1\)) is queried for the error evaluation.
The optimization problem can then be formulated as
\[\hat{\mathbf{x}}=\arg\!\min_{\mathbf{x}}\Big{(}\left\|\mathbf{e}^{0}\right\|_{\mathbf{\Sigma}_{0}}^{2}+\sum_{i=1}^{M}\left\|\mathbf{e}_{i}^{\rm imu}\right\|_{\mathbf{\Sigma}^{\rm imu}}^{2}+\sum_{i=1}^{M}\left\|\mathbf{e}_{i}^{\rm gp}\right\|_{\mathbf{\Sigma}^{\rm gp}}^{2}+\] \[+\sum_{i=1}^{N}\left\|\mathbf{e}_{i}^{\rm vel}\right\|_{\mathbf{\Sigma}^{\rm vel}}^{2}+\sum_{i=1}^{K}\left\|\mathbf{e}_{i}^{\rm pvt}\right\|_{\mathbf{\Sigma}^{\rm pvt}}^{2}+\sum_{i=1}^{J}\left\|\mathbf{e}_{i}^{\rm bp}\right\|_{\mathbf{\Sigma}^{\rm bp}}^{2}\Big{)}, \tag{45}\]
where the error term \(\mathbf{e}^{0}\) represents the prior factor obtained at initialization or from marginalization. Because sensor observations arrive asynchronously with respect to the \(M\) estimation timestamps, we use the different index notations \(N,\ K,\ J\) to indicate the numbers of the respective sensor observations in (45).
### _Tightly Coupled FGO_
In contrast to the loosely coupled fusion approach, a tightly coupled fusion of raw GNSS observations contributes more constraints with multiple observed satellites to state variables, as illustrated in Fig. 6. Unlike Fig. 5, we include the pseudorange and deltarange factors in the graph, providing redundant constraints to each state variable. To improve the robustness while GNSS observations are degraded or lost in challenging areas, we include lidar odometry as between-state constraints to improve the consistency of the estimated trajectory. The receiver clock error factor is also added to the graph.
Figure 5: Proposed loosely coupled GNSS-FGO.
The optimization problem with sensor observations from different time domains becomes
\[\begin{split}\hat{\mathbf{x}}=\operatorname*{argmin}_{\mathbf{x}}\Big{(}&\left\|\mathbf{e}^{0}\right\|_{\mathbf{\Sigma}_{0}}^{2}+\sum_{i=1}^{M}\left\|\mathbf{e}_{i}^{\text{imu}}\right\|_{\mathbf{\Sigma}^{\text{imu}}}^{2}+\sum_{i=1}^{M}\left\|\mathbf{e}_{i}^{\text{gp}}\right\|_{\mathbf{\Sigma}^{\text{gp}}}^{2}+\\ &+\sum_{i=1}^{N}\left\|\mathbf{e}_{i}^{\text{bp}}\right\|_{\mathbf{\Sigma}^{\text{bp}}}^{2}+\sum_{i=1}^{M}\left\|\mathbf{e}_{i}^{\text{cbd}}\right\|_{\mathbf{\Sigma}^{\text{cbd}}}^{2}+\\ &+\sum_{i=1}^{J}\sum_{s=1}^{K}\left\|\hat{\mathbf{e}}_{s,i}^{\text{PrDr}}\right\|_{\phi(\mathbf{\Sigma}^{\text{PrDr}})}^{2}\Big{)}.\end{split} \tag{46}\]
### _System Overview_
The system overview with the implementation of Alg. 1 and all data interfaces is shown in Fig. 7. The sensor data are received and pre-processed in separate processes. We construct the time-centric factor graph in a two-stage process, as introduced in Alg. 1. The first stage (lines 4-10 of Alg. 1) includes between-state factors and delay-free IMU factors to build a graph that is deterministic in time. Subsequently, asynchronous sensor observations are fused into the deterministic graph by aligning the timestamps between the measurements and the state variables (lines 11-24 of Alg. 1). For measurements that cannot be aligned with any state, two successive state variables are queried to construct a GP-interpolated state for measurement evaluation in the optimization procedures. The time-centric graph can be optimized using a fixed-lag batch optimizer [64] or the fixed-lag incremental smoother iSAM2 [65] at a lower frequency. In the experimental results, the estimated trajectories used in the error metrics are optimized using iSAM2. We also evaluate both smoothers with respect to both estimator performance and computational efficiency, as presented in Sec. VII-D. After each optimization procedure, we forward the optimized state variables to a state publisher and the sensor pre-processing modules. The state publisher is associated with the IMU sensor and provides high-frequency state estimates at \(200\,\mathrm{Hz}\).
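The timestamp alignment in the second stage can be sketched as follows; the tolerance value and function names are illustrative, and the measurement timestamp is assumed to lie within the span of the state timestamps.

```python
import bisect


def associate(stamp, state_stamps, tol=0.005):
    """Associate a measurement timestamp with a state index or an interpolation point.

    Returns (i, None) if the measurement lies within `tol` seconds of state i,
    otherwise (i, tau) with 0 < tau < 1 for a GP-interpolated state between
    states i and i+1. Assumes the stamp lies within the span of state_stamps.
    """
    j = bisect.bisect_left(state_stamps, stamp)
    i = max(0, min(j - 1, len(state_stamps) - 2))
    for k in (i, i + 1):
        if abs(state_stamps[k] - stamp) < tol:
            return k, None
    tau = (stamp - state_stamps[i]) / (state_stamps[i + 1] - state_stamps[i])
    return i, tau


# example: states created every 0.1 s, a lidar odometry stamp at 0.437 s
idx, tau = associate(0.437, [0.0, 0.1, 0.2, 0.3, 0.4, 0.5])  # -> idx = 4, tau ~= 0.37
```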
**Remark 3**.: _Near-Zero-Velocity Detection: While the vehicle is stationary, the state estimation exhibits random pose drift. This is a known problem in vehicle localization using inertial measurements [66]. In this case, the state observability degrades dramatically due to insufficient IMU excitation, leading to unbounded error accumulation. Thus, we follow the idea proposed in [66] to detect near-zero-velocity motion by a vote across multiple sensors that provide velocity information. If the vehicle is voted to be stationary, we temporarily pause the graph optimization and state propagation._
### _Implementation_
We implemented our approach in C++ using the Robot Operating System (ROS 2). The open-source software library GTSAM was extended to implement the graph and factor formulations. We adopted the lidar-odometry solution from LIO-SAM, where only the front-end feature extraction and association were adapted in our work. We used the position and orientation estimate from a dual-antenna GNSS setup to initialize the state variable \(\mathbf{x}_{0}\). In this work, we used a laptop with an Intel i9-9900K, 16 cores at max. \(4.7\,\mathrm{GHz}\) and \(64\,\mathrm{GB}\) memory for sensor pre-processing and graph optimization in the experimental studies.
Footnote 6: [https://docs.ros.org/en/rolling/index.html](https://docs.ros.org/en/rolling/index.html)
Footnote 7: [https://gtsam.org](https://gtsam.org)
Footnote 8: [https://github.com/TixiaoShan/LIO-SAM](https://github.com/TixiaoShan/LIO-SAM)
### _Test Sequences_
Our dataset contains different driving scenarios: open-sky, semi-/dense-urban, and high-speed track. For a clear evaluation, we define different test sequences throughout multiple measurement campaigns and analyze the driving conditions for each sequence, as shown in Table I. The test sequences include lengthy runs with a max. \(17\,\mathrm{km}\) route, aiming to evaluate the estimation performance for long-term operations. For test sequences in urban areas, we chose data from scenarios with different urbanization rates containing tunnel and bridge crossings to evaluate the limitations of the proposed fusion approaches. In addition, we also considered open-sky areas on the high-speed track, where a maximum vehicle speed of \(170\,\mathrm{km}/\mathrm{h}\) was reached, creating significant motion distortion in the lidar point clouds.
### _Reference Trajectory and Metrics_
To evaluate the proposed fusion strategies, we employ the RTK-fixed GNSS-PVA solution associated with low uncertainties (\(\sigma_{\mathrm{pos}}<0.05\,\mathrm{m}\) and \(\sigma_{\mathrm{rot}}<1^{\circ}\)) to calculate the absolute root mean square error (RMSE).
Besides the error metrics, we employ the path-smoothness metric implemented in the Open Motion Planning Library (OMPL), which applies the generalized Pythagorean (law-of-cosines) relation to successive path segments, to quantify the trajectory smoothness for all test sequences. The metric is given as the sum of the squared, normalized angles between all path segments in the local-world frame, as denoted in (47), where the variables \(a_{i}\), \(b_{i}\) and \(c_{i}\) are the lengths of the trajectory segments formed by three successive vehicle positions in the Euclidean frame. For the same test sequence with \(k\) vehicle positions, a smaller \(s\) indicates a smoother trajectory.
Footnote 10: [https://ompl.kavrakilab.org/](https://ompl.kavrakilab.org/)
\[s=\sum_{i=2}^{k-1}\left(\frac{2(\pi-\arccos\frac{a_{i}^{2}+b_{i}^{2}-c_{i}^{2} }{2a_{i}b_{i}})}{a_{i}+b_{i}}\right)^{2}. \tag{47}\]
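A direct implementation of (47) is given below for reference; positions may be 2D or 3D, degenerate (near-zero-length) segments are skipped, and the function name is ours.

```python
import numpy as np


def smoothness(points):
    """Trajectory smoothness s of (47); smaller values indicate a smoother path.

    points: (k, 2) or (k, 3) array of successive vehicle positions.
    """
    points = np.asarray(points, dtype=float)
    s = 0.0
    for p0, p1, p2 in zip(points[:-2], points[1:-1], points[2:]):
        a = np.linalg.norm(p1 - p0)
        b = np.linalg.norm(p2 - p1)
        c = np.linalg.norm(p2 - p0)
        if a < 1e-9 or b < 1e-9:
            continue  # skip degenerate (near-zero-length) segments
        cos_angle = np.clip((a**2 + b**2 - c**2) / (2.0 * a * b), -1.0, 1.0)
        k = 2.0 * (np.pi - np.arccos(cos_angle)) / (a + b)
        s += k * k
    return s
```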
## VII Experiments and Results
### _Experiment Setting_
To evaluate the proposed GNSS-FGO, we first benchmarked the loosely coupled fusion of the GNSS solution with multiple sensor observations against the lidar-centric SLAM approach LIO-SAM [2], aiming to evaluate the robustness of the proposed method. For a fair comparison, we adapted the LIO-SAM implementation[6] to use the same robust error models and parameterizations as in our method. We also enabled loop-closure detection in LIO-SAM to maximize its state-estimation performance. Later, we conducted experiments by fusing raw GNSS observations alongside IMU and lidar measurements in a tight coupling, which is expected to provide more robust trajectory estimation in challenging areas than the loose coupling. Lastly, we discuss the smoother type and computation time for different lag sizes and compare the GP-WNOJ prior with the GP-WNOA prior.
### _General Error Metrics_
With pre-defined test sequences in Table I, we present the general error metrics for all experiments in Table II by taking the RTK-fixed GNSS-PVA solution as ground truth. Because an RTK-fixed solution is unavailable in challenging areas, we denote the solution rate used as a ground-truth reference to calculate error metrics of each test sequence in percentage in the column "Seq.".
#### VII-B1 Lidar-Centric Fusion
As shown in Table II, the lidar-centric SLAM approach LIO-SAM failed in several test sequences even when the same factors with robust error modeling were used and loop-closure detection was enabled (see video demonstration[2]). The most frequent reason is that scan registration fails due to invalid feature associations, which can be observed in all failed test sequences. Fig. 0(a) demonstrates this result, where the estimation diverged and could not be recovered after the vehicle entered a tunnel. In Seq. HS, the lidar-centric approach could not even be initialized properly while the vehicle was moving at very high speed, an issue not observed with the proposed GNSS-FGO.
We conjecture that because graph construction in [2] requires strict timestamp synchronization of GNSS measurements with lidar timestamps, asynchronous GNSS measurements are dropped, resulting in information loss, and thus, the trajectory smoothness and estimation accuracy are dramatically penalized (see Table II). This hypothesis is supported by the test sequences DUS and C01 (see Fig. 10 and Fig. 11), where the estimated height, orientation, and velocities frequently diverged.
| Seq. | Length (km) | Tunnel (m) | Duration (s) | \(\bar{v}\) (km/h) | \(\bar{n}^{\mathrm{sat.}}\) | \(R^{\mathrm{RTK}}_{\mathrm{fixed}}\) (%) | \(R^{\mathrm{NS}}\) (%) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| AC | 17.0 | 270 | 247 | 27.25 | 11 | 76.51 | 1.7 |
| DUS | 5.25 | - | 1350 | 13.48 | 8 | 52.06 | 0.9 |
| C01 | 0.81 | 276 | 160 | 17.89 | 7 | 60.8 | 31.78 |
| C02 | 1.45 | 145 | 390 | 13.36 | 7 | 37.74 | 11.56 |
| HS | 10.6 | - | 300 | 124.82 | 14 | 94.9 | 1.47 |

Table I: Test sequences definition. We denote the test sequences in Aachen, Düsseldorf, Cologne, and high-speed tracks with “AC”, “DUS”, “C”, and “HS”, respectively. The variable \(\bar{v}\) represents the average speed and the scalar \(\bar{n}^{\mathrm{sat.}}\) is the average number of satellites used for a GNSS-PVA solution. The ratios of RTK-fixed solutions and of epochs with no solution due to insufficient GNSS observations are denoted by \(R^{\mathrm{RTK}}_{\mathrm{fixed}}\) and \(R^{\mathrm{NS}}\), respectively.
Figure 8: Sensor setup and frames on the test vehicle.
Table II: General trajectory estimation metrics for LIO-SAM [2] and the loosely and tightly coupled GNSS-FGO configurations (mean, STD, and max. 2D and 3D position errors in m; mean, STD, and max. yaw errors in °; and smoothness \(s\) for each test sequence). The rate of the RTK-fixed GNSS-PVA solution used for the error metrics is denoted in the first column. A test run is classified as failed if the algorithm diverges.
Figure 9: Trajectory plot (\(700\,\mathrm{s}\) - \(1400\,\mathrm{s}\)) in urban areas in Aachen. We plot the GNSS single point position (SPP) if the RTK-fixed solution is unavailable.
#### VII-B2 Loosely Coupled (lc) Fusion
Because the position and velocity can be directly observed by fusing the GNSS solution in the loose coupling, the to-be-estimated state variables are effectively constrained, resulting in highly accurate trajectory estimation in open-sky areas. For instance, Seq. C01 and C02 present mean position errors of less than \(5\,\mathrm{cm}\), while the max. position errors remain below \(1\,\mathrm{m}\). For long test sequences (e.g., Seq. AC), the loosely coupled fusion using the proposed GNSS-FGO also presents sufficient estimation performance by integrating multiple sensor observations. However, a fast divergence and larger max. position errors can be observed in challenging scenarios using this fusion mechanism. As shown in Fig. (b)b and Fig. (b)b, the estimated height (in red) diverges significantly once the GNSS positioning is corrupted. This result also shows that the 2D velocity measurements provided by the 2D speed sensor cannot sufficiently constrain the state space. For state variables such as the vertical velocity component \(v_{d}^{y}\) that are observed with noisy measurements, frequent drifting can be expected (see Fig. (d)d). Moreover, the loose coupling generally presents larger trajectory roughness compared to other configurations. Compared to Seq. AC (see Fig. 9), a higher urbanization rate can be expected in Düsseldorf, leading to an unsmooth trajectory estimation.
#### VII-B3 Tightly Coupled (tc) Fusion
Compared to loosely coupled sensor fusion, integrating pre-processed GNSS observations in a tight coupling contributes redundant state constraints. Thus, the tightly coupled fusion can generally present more robust trajectory estimations with a smaller max. position error and larger trajectory smoothness in lengthy runs, except in the high-speed scenario (Seq. HS). However, because the vehicle pose and velocity cannot be directly observed with pseudorange and deltarange, lower accuracy can be expected. In challenging urban areas such as Seq. C01 and C02, fusing the lidar odometry as between-state constraints generally improves the estimation performance and trajectory smoothness. This conclusion can also be drawn when referring to Fig. (b)b and Fig. (b)b, where more accurate height and velocity estimations can be observed by fusing lidar odometry in the graph. In high-speed scenarios, lidar scans suffer from serious motion distortion, and fewer features can be extracted than in urban areas. Therefore, only a limited performance improvement can be observed by fusing lidar odometry in the graph.
#### VII-B4 Discussion
Based on the experimental results presented above, a robust trajectory estimation can be achieved in challenging scenarios using the proposed approach by fusing multiple sensor measurements, which supports our hypothesis proposed in Sec. II. On the contrary, it can be observed that trajectory drift cannot be effectively eliminated using the classic sensor-centric localization approach LIO-SAM. Even worse, the robustness and reliability of sensor-centric approaches cannot be guaranteed in challenging areas once the primary sensor is compromised. As online applications raise requirements on computation time and resources, sensor degradation due to, e.g., insufficient data processing, becomes nontrivial. The proposed GNSS-FGO presents an effective
Figure 10: Trajectory plot (\(450\,\mathrm{s}\) - \(1350\,\mathrm{s}\)) in challenging areas in Düsseldorf.
workaround that fuses multiple sensors to reduce the dependence on any single sensor, which enables lossless information fusion and improves the robustness of the estimation when sensor failures can be expected.
Within GNSS-FGO, loosely fusing the GNSS positioning solution enables fast estimation convergence and higher accuracy in open-sky areas compared to tightly coupled fusion in our study. The loosely coupled fusion diverges quickly once the vehicle enters challenging areas, even when more sensor modalities are integrated. In contrast, the tightly coupled multi-sensor fusion presents a more robust trajectory estimation in our experimental studies. The same conclusion has also been shown in [6, 32]. However, an acceptable accuracy cannot be achieved, especially in dense urban scenarios. For instance, although all estimated trajectories using the proposed GNSS-FGO in Fig. 1 remain consistent, a large drift is presented using the proposed sensor integration. One possible reason can be traced back to lidar degradation and insufficient outlier rejection in GNSS observations. As GNSS-FGO provides a flexible fusion mechanism, this problem can be addressed by integrating more effective state constraints into the graph.
### _Challenging Scenarios_
In this part, we present experimental studies regarding GNSS observations, lidar odometry, and solver settings. We also evaluate the GP-WNOA and GP-WNOJ priors and discuss the hyper-parameter tuning for \(\mathbf{Q}_{c}\).
#### VII-C1 Loss of GNSS Observation
Generally, losing GNSS observations in a short time interval does not lead to immediate divergence or trajectory drift if multiple state constraints such as lidar odometry or motion prior factors are still presented. This conclusion can be drawn from our experiment in Seq. C01, where the vehicle crossed a large bridge at the central train station in Cologne, as shown in Fig. 11. However, fusing GNSS observations in a tight coupling extends the state variables with receiver clock bias and drift \(\mathbf{c}_{r}=[c_{b}\ c_{d}]^{T}\), which become unobservable if less than four satellites are visible. Fig. 12 shows the estimated clock bias \(c_{b}\) with respect to the number of received satellites. In the graph where only GNSS observations are integrated, the estimated clock bias drifts dramatically, which is not observed in the graph fused with the lidar odometry. Similar results can also be observed in other experiments, in which the unobservable state variables can lead to estimation divergence and an ill-posed optimization problem. Furthermore, if the global reference (e.g., GNSS observations) is lost over a long time interval, such as crossing a long tunnel, a large trajectory drift can be expected.
#### VII-C2 Highly Corrupted GNSS Observations
Compared to the temporary loss of GNSS observations, we emphasize that including highly corrupted GNSS observations in the graph has a greater impact on estimation performance. This conclusion can be supported by Seq. C02, where the accuracy of our proposed fusion paradigms is significantly degraded in GNSS-corrupted areas, as shown in Fig. 1. In Fig. 13, we plot the estimated trajectories in this scenario by transforming the
Figure 11: Trajectory plot near the central station of Cologne. For the tightly coupled fusion, we omitted the near-zero-velocity detection in order to present the trajectory drifting while the receiver clock error is unobservable.
coordinates in the navigation frame (ENU). A large trajectory drift up to \(25\,\mathrm{m}\) can be observed in tightly coupled fusion without lidar odometry (see Fig. 13). Although fusing relative motion constraints, such as odometry, can effectively constrain divergence, trajectory drifts cannot be eliminated until valid global references are acquired.
#### VII-C3 Lidar Odometry Degradation
As discussed in [67], traditional lidar odometry algorithms suffer from dramatic degradation in unstructured environments and high-speed scenarios. This problem can also be observed in our experiments. Fig. 14 illustrates three scenarios in which the accuracy of the lidar odometry is penalized if the vehicle is driving in featureless areas or in high-speed mode with an average vehicle speed of \(125\,\mathrm{km}/\mathrm{h}\). In low-speed driving mode and open-sky areas, lidar degradation does not reduce the estimation performance while high-quality GNSS measurements are available. However, if the vehicle is moving at high speed, the lidar odometry becomes inaccurate because of motion distortion. Therefore, including lidar odometry factors in the graph can decrease localization accuracy, as presented in Table II of Seq. HS. In scenarios with long tunnels, trajectory drifting can always be expected due to loss of global reference. This presents the major limitation of classic lidar odometers that calculate only pose increments. To overcome this limitation, more observed state variables, such as vehicle velocity, can be considered [67].
### _Smoother Type and Computation Time_
To study the impact of different smoother types and lag sizes, we evaluated batch and incremental smoother iSAM2 with different lag sizes for Seq. DUS. The performance metrics are presented in Table III. Compared to an incremental smoother, solving the optimization problem with a batch optimizer does not show a considerable improvement in accuracy. This happens because the graph structure becomes more similar to a Markov chain in large-scale localization applications where fewer loop-closure constraints are available. In this scenario, re-linearizing all past state variables does not contribute more information that improves the accuracy. For loosely coupled fusion, the batch smoother presents a smoother trajectory. However, this advantage is absent with the incremental smoother when fusing GNSS observations in a tight coupling.
Furthermore, the batch optimizer requires more computational resources than the incremental smoother (see Fig. 15), especially in urban areas with more measurement outliers. In online applications, estimation accuracy and trajectory smoothness can be penalized once optimization takes longer. This conclusion is supported by referring to the tightly coupled fusion in Table III. Similarly to the optimizer type, considering a large lag size does not contribute significantly. Moreover, even the incremental smoother with a large lag size frequently violates the desired optimization frequency, resulting in inefficient optimization procedures.
### _GP-WNOA/WNOJ Motion Model_
In this section, we evaluated the continuous-time trajectory representation using the Gaussian process interpolation with both white-noise-on-jerk (GP-WNOJ) and white-noise-on-acceleration (GP-WNOA) models. As introduced in Sec. IV-D, we do not discuss hyper-parameter tuning in this work. The hyper-parameter \(\mathbf{q}_{c}\) was manually tuned by penalizing the vehicle pose in \(\mathbf{Q}_{c}\) equally for both models.
Compared to the GP-WNOJ model, a GP-WNOA model assumes that the system transition follows a constant velocity model [7, 44]. As discussed in [12], representing vehicle trajectories with an approximately constant-velocity model may be insufficient in urban driving scenarios where the vehicle accelerates and brakes frequently. To evaluate the performance of both GP models, we chose a part of Seq. AC containing \(200\,\mathrm{s}\) test run in open-sky areas where the GNSS-PVA solution presents the ground-truth trajectory. We calculate the whitened error of the vehicle pose and the linear velocity in the body frame and plot the results on the histogram in Fig. 16. Because the GP-WNOJ model represents second-order system dynamics, it shows smaller errors in all linear velocity components. Both models perform similarly in position estimation, where the GP-WNOJ is more accurate in the main motion direction \(x-\)axis. For the rotation, the GP-WNOJ does not present considerable improvements compared to the GP-WNOA. One possible reason supporting this result can be traced back to the rotational acceleration that cannot be observed directly using the IMU, as introduced in Sec. V-A.
We have validated that the GP motion model formulates a valid continuous-time trajectory representation. However, tuning the power spectral matrix \(\mathbf{Q}_{c}\) that scales the system transition in the Gaussian process kernel has a large effect on numerical stability and estimation performance [12]. Although the GP-WNOJ model presents reliable velocity estimates compared to the GP-WNOA model, it requires more careful parameter tuning when incorporating accelerations in state propagation. For trajectory estimation applications that conduct multi-sensor fusion, the state variables are generally sufficiently constrained by heterogeneous sensor factors, making the data-driven hyper-parameter tuning possible [53].
## VIII Conclusion
This article proposes an online factor graph optimization that generalizes multi-sensor fusion for robust trajectory estimation with a focus on GNSS. The vehicle trajectory is represented in continuous time using a Gaussian process motion prior that enables arbitrary state querying, presenting a sensor-independent graph optimization. We successfully fused asynchronous sensor measurements, including GNSS, IMU, vehicle speed sensor, and lidar odometry, into the proposed method for robust vehicle localization in challenging environments. The experimental studies show that the proposed method is robust, flexible, accurate, and works online with multiple datasets collected from different challenging scenarios, including urban areas and high-speed tracks. All our FGO configurations, loosely and tightly coupled with and without lidar, succeed in all test sequences, whereas the classic state-of-the-art lidar-centric method [2] failed in some situations due to scan registration failures. Observed from the experimental results, the GP-WNOJ motion prior enables accurate trajectory representations in continuous time with properly tuned hyper-parameters. In addition, fusing GNSS observations in a tight coupling has demonstrated improved trajectory smoothness and estimation robustness. Future work will include extending GNSS observations with carrier-phases, online parameter tuning, and sensor noise identification.
## Acknowledgments
The authors thank Robin Taborsky from the Institute of Automatic Control at the RWTH Aachen University and the public order office in Aachen, Dusseldorf, and Cologne for their great support in measurement campaigns. We also thank David Yoon and Keenan Burnett from the Autonomous Space Robotics Laboratory at the University of Toronto for their discussions and support in this work.
Figure 16: Histogram of whitened state errors of GP models in an open-sky area.
Figure 14: Examples of lidar odometry degradation in three scenarios: a) unstructured feature-less area, b) high-speed scenario and c) long tunnel (\(400\,\mathrm{m}\)). |
2309.15352 | Structure and mechanical properties of monolayer amorphous carbon and
boron nitride | Amorphous materials exhibit various characteristics that are not featured by
crystals and can sometimes be tuned by their degree of disorder (DOD). Here, we
report results on the mechanical properties of monolayer amorphous carbon (MAC)
and monolayer amorphous boron nitride (maBN) with different DOD. The pertinent
structures are obtained by kinetic-Monte-Carlo (kMC) simulations using
machine-learning potentials (MLP) with density-functional-theory (DFT)-level
accuracy. An intuitive order parameter, namely the areal fraction Fx occupied
by crystallites within the continuous random network, is proposed to describe
the DOD. We find that Fx captures the essence of the DOD: Samples with the same
Fx but different sizes and distributions of crystallites have virtually
identical radial distribution functions as well as bond-length and bond-angle
distributions. Furthermore, by simulating the fracture process with molecular
dynamics, we found that the mechanical responses of MAC and maBN before
fracture are solely determined by Fx and are insensitive to the sizes and
specific arrangements of the crystallites. The behavior of cracks in the two
materials is analyzed and found to mainly propagate in meandering paths in the
CRN region and to be influenced by crystallites in distinct ways that toughen
the material. The present results reveal the relation between structure and
mechanical properties in amorphous monolayers and may provide a universal
toughening strategy for 2D materials. | Xi Zhang, Yu-Tian Zhang, Yun-Peng Wang, Shiyu Li, Shixuan Du, Yu-Yang Zhang, Sokrates T. Pantelides | 2023-09-27T01:48:34Z | http://arxiv.org/abs/2309.15352v1 | # Structure and mechanical properties of monolayer amorphous carbon and boron nitride
###### Abstract
Amorphous materials exhibit various characteristics that are not featured by crystals and can sometimes be tuned by their degree of disorder (DOD). Here, we report results on the mechanical properties of monolayer amorphous carbon (MAC) and monolayer amorphous boron nitride (maBN) with different DOD. The pertinent structures are obtained by kinetic-Monte-Carlo (kMC) simulations using machine-learning potentials (MLP) with density-functional-theory (DFT)-level accuracy. An intuitive order parameter, namely the areal fraction \(F_{\mathrm{x}}\) occupied by crystallites within the continuous random network, is proposed to describe the DOD. We find that \(F_{\mathrm{x}}\) captures the essence of the DOD: Samples with the same \(F_{\mathrm{x}}\) but different sizes and distributions of crystallites have virtually identical radial distributions functions as well as bond-length and bond-angle distributions. Furthermore, by simulating the fracture process with molecular dynamics, we found that the mechanical responses of MAC and maBN before fracture are solely determined by \(F_{\mathrm{x}}\) and are insensitive to the sizes and specific arrangements of the crystallites. The behavior of cracks in the two materials is analyzed and found to mainly propagate in meandering paths in the CRN region and to be influenced by crystallites in distinct ways that toughen the material. The present results reveal the relation between structure and mechanical properties in amorphous monolayers and may provide a universal toughening strategy for 2D materials.
## Introduction
Two-dimensional (2D) materials exhibit unique properties. Their mechanical properties, in particular fracture toughness, which describes the ability of a material containing a crack to resist fracture, are essential for their reliable integration into future electronic, composite, and nano-electromechanical applications [1-4]. However, cracks in 2D materials generally induce brittle behavior at room temperature [5-8]. Given the brittle nature of 2D materials, it is important to investigate their mechanical properties and find effective ways toughen them for applications. Introducing extrinsic defects and increasing the defect density is one way to increase the fracture toughness of graphene [9]. In contrast, binary materials like monolayer h-BN are intrinsically toughened by an asymmetric deformation at crack tips (due to asymmetric edge polarization) [10]. Overall, disorder engineering is an effective toughening strategy for 2D materials.
Amorphous materials that are highly disordered feature a wealth of mechanical properties [11-16], but their atomic structures are very complicated and highly debated. As a result, the construction of structure-properties relations for amorphous materials remains a long-standing riddle. The task is simpler in 2D, as it is possible to directly determine the atomic positions by high-resolution scanning transmission electron microscopy (STEM). In 2019, monolayer amorphous carbon (MAC) was successfully synthesized for the first time and atomic-resolution STEM directly revealed that MAC is a Zachariasen continuous random network (Z-CRN) containing crystallites. It was also found that MAC exhibits high toughness [17]. More recently, in the case of MAC, the degree of disorder (DOD) was found to be tunable by the growth temperature and to affect the electrical conductivity significantly [18]. Two order parameters that can be measured experimentally were introduced to correlate properties to the DOD.
Monolayer amorphous BN (maBN) has not been synthesized so far (only amorphous thin films have been reported [19]). The structure of maBN has been studied by kinetic Monte Carlo (kMC) simulations using empirical potentials [20]. It was found that maBN features pseudocrystallites, i.e., honeycomb regions comprising noncanonical hexagons with random B-B and N-N bonds, in a Z-CRN [20]. Furthermore, the mechanical and thermal properties of MAC and maBN have by now also been investigated by simulations based on empirical potentials [20-23]. However, the accuracy of empirical potentials is never as high as high as that of DFT calculations, especially for binary materials. kMC simulations based on DFT evaluations of total energies for the construction of amorphous structures still remain out of reach, but the advent of practical methods for generating DFT-based machine-learning potentials opens up new opportunities to investigate the structure and properties of amorphous materials.
In this paper, we investigate the structure and mechanical properties of monolayer
amorphous carbon and boron nitride using machine learning potentials (MLP) with density-functional theory (DFT)-level accuracy. The kMC simulation [24], a widely-used sampling method for fast exploration of potential energy surfaces, was employed to assist the active-learning procedure to train the MLPs. Then the structure evolution of MAC and maBN is simulated by kMC with the bonding energetics described by the as-trained MLPs. It is found that crystallites are in fact more energetically favored within maBN than pseudocrystallites. Moreover, an intuitive order parameter, \(F_{\rm x}\), the fraction of the area occupied by crystallites, is proposed to quantify the DOD of these amorphous materials. We find that \(F_{\rm x}\) captures the essence of the DOD: We demonstrate that _samples with very different atomic structures but the same \(F_{\rm x}\) have essentially identical radial distribution functions and bond-angle and bond-length distributions_. As a result, the mechanical properties of MAC and maBN samples with different DOD, namely the critical stress and strain that lead to fracture, investigated by MLP-based molecular dynamics (MD) simulations, are determined solely by the \(F_{\rm x}\) value of the sample. Moreover, we found that crack propagation exhibits very similar behaviors in MAC and maBN. Crack propagation can be regarded as the formation and the coalescence of voids. The existence of crystallites affects the locations of void formation, causing behaviors such as deflection, stopping, and bridging of cracks, which lead to rich toughening mechanisms compared with the crystalline material. The results deepen our understanding of the structure-mechanical-properties relationship in 2D amorphous materials.
## Results and discussion
The MLP set is trained by using the open-source DeepMD-kit package [25; 26], and the root-mean-square-error (RMSE) of the validation set is about 6 meV per atom, which meets the widely accepted standard for accurate MLPs. Details of the generation process are described in the Supplemental Material [SM]. We first validated the reliability of the as-generated MLPs for kMC simulations. The energies of 100 accepted (red triangles) and 100 rejected (blue inverted triangles) monolayer carbon and BN structures from kMC simulations with distinct DOD are compared in Figures 1a and 1b, respectively. The energies calculated by MLPs are very close to those calculated by DFT. Furthermore, for the formation energies of some typical defects in graphene and h-BN that affect kMC simulations, the MLP results are in agreement with those of DFT calculations (Tables S3, S4). To validate the stretching simulations, we stretched several samples and collected 150 stretched and fractured structures of MAC and maBN. The energies of these structures (green dots) are also compared in Figs. 1a and 1b, respectively. It is clear that the MLPs can describe stretched and fractured amorphous structures with DFT-level accuracy.
We also provide a benchmark of MLPs in phonon dispersions, which are relevant to the mechanical properties. As shown in Fig. 1c and 1d, the calculated phonon dispersions of crystalline graphene and monolayer h-BN, using the as-generated MLPs, are in excellent agreement with the DFT results. In contrast, the calculated phonon dispersions using best-of-breed empirical potentials (AIREBO [27] for graphene, Extended Tersoff [28] for h-BN) show significant deviations from the DFT results. At the same time, the elastic constants and modulus of crystalline graphene and monolayer h-BN calculated by MLPs also outperform empirical potentials and are in good agreement with DFT results (Tables S1, S2). Overall, by comparing MLPs with DFT side by side, it can be concluded that the as-generated MLPs can describe crystalline, amorphous, stretched, and fractured systems, with DFT-level accuracies that the best-of-breed empirical potentials cannot match. MLPs are slowly becoming the standard for "DFT-level" simulations and calculations for systems that cannot be handled by straight DFT calculations.
The first step is to construct a reliable atomic structure using MLPs. We performed kMC simulations [24] of the structural evolution of monolayer amorphous materials (MAC and maBN). Starting from an initial configuration with randomly distributed
Figure 1: **Validation of MLP.** (a, b) Energy deviations between DFT and MLP calculations for rejected and accepted structures in kMC simulations, amorphous fractured structures, and crystalline structures of (a) carbon and (b) BN systems. (c, d) Phonon dispersion comparisons for (c) graphene and (d) h-BN, calculated using DFT (black solid), MLP (red dashed), and a state-of-the-art empirical potential (blue dashed).
atoms in a plane, five typical atomic structures of monolayer carbon and BN from different kMC steps are shown in Figs. 2a and 2b, respectively. The canonical hexagons in crystallite islands are colored green, while noncanonical hexagons in monolayer BN are colored blue. It is worth noting that, like monolayer carbon, monolayer BN also exhibits continuously growing crystallite regions during the kMC simulation with MLP. This result contrasts with earlier findings, based on kMC simulations using an empirical potential, that monolayer amorphous BN develops exclusively pseudocrystallites, namely honeycomb regions made up of noncanonical bonds [20]. We have now discovered that this difference arises because the extended Tersoff empirical potential substantially underestimates the energy of some noncanonical hexagons like those occurring in the recently predicted orthorhombic polymorph of BN (o-B\({}_{2}\)N\({}_{2}\)) [29], which directly affects the results of kMC simulations. In Table S4, we show that, unlike the empirical potential, the DFT-based MLPs reproduce the DFT-calculated formation energy of o-B\({}_{2}\)N\({}_{2}\) very accurately. Thus, even though maBN contains two different elements and has a high possibility of forming noncanonical hexagons from a random distribution of atoms, the more stable, lower-energy crystallite structures prevail.
Figure 2: **Atomic structures of monolayer carbon and BN in kMC simulation.** (a-b) Atomic structures of monolayer carbon and BN from different kMC steps, respectively. The hexagons in crystallite regions are colored in green (canonical hexagons) and blue (noncanonical hexagons). The percentage of canonical hexagons and corresponding DOD order parameter \(F_{\rm x}\) are listed. (c-d) RDF of monolayer carbon and BN with five different \(F_{\rm x}\), respectively.
In order to distinguish amorphous structures with different DOD, an order parameter is necessary. Previously, Tian _et al._ defined a DOD order parameter, \(\eta_{MRO}\), namely the ratio of the medium-range order (MRO) of amorphous and crystalline samples, obtained from the fluctuations of the experimental radial distribution functions (RDFs) in the medium-distance range. However, in order to correlate the conductivity with the DOD, it was found necessary to introduce a second order parameter, the density of conducting sites \(\rho_{\text{sites}}\), which was derived directly from atomic-scale images of MAC samples.
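For reference, a minimal sketch of how an RDF can be computed from the atomic coordinates of a periodic 2D sample is given below; the normalization follows the standard areal-density convention, and the function name and interface are ours rather than part of any released code.

```python
import numpy as np


def rdf_2d(positions, box, r_max, n_bins=200):
    """Radial distribution function g(r) of a periodic 2D sample (minimal sketch).

    positions: (N, 2) array of atomic coordinates
    box      : (2,) orthorhombic cell lengths, periodic in both directions
    """
    positions = np.asarray(positions, dtype=float)
    box = np.asarray(box, dtype=float)
    n = len(positions)
    rho = n / (box[0] * box[1])                    # areal number density
    edges = np.linspace(0.0, r_max, n_bins + 1)
    hist = np.zeros(n_bins)
    for i in range(n - 1):
        d = positions[i + 1:] - positions[i]
        d -= box * np.round(d / box)               # minimum-image convention
        r = np.hypot(d[:, 0], d[:, 1])
        hist += np.histogram(r[r < r_max], bins=edges)[0]
    r_mid = 0.5 * (edges[1:] + edges[:-1])
    shell_area = 2.0 * np.pi * r_mid * np.diff(edges)
    return r_mid, 2.0 * hist / (n * rho * shell_area)  # factor 2: each pair counted once
```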
We explored the applicability of \(\eta_{MRO}\) to characterize the wide range of kMC-generated samples, i.e., individual kMC snapshots, of which Fig. 2a shows only five. We used MD simulations of several samples at room temperature and calculated their RDFs and \(\eta_{MRO}\). We found that samples with very similar MRO and hence similar \(\eta_{MRO}\) may differ significantly in their short-range RDFs, their bond-angle and bond-length distributions, and even more conspicuously in the fractions of the areas occupied by crystallites (see Fig. S3 for a detailed discussion). These observations motivated us to propose an alternative and more intuitive order parameter for monolayer amorphous materials that is directly based on the atomic structure, namely the fraction of the crystallite part of the structure, defined by
\[F_{\text{x}}=\frac{N_{\text{x}}}{N_{\text{CRN}}+N_{\text{x}}}.\]
Here \(N_{\text{x}}\) and \(N_{\text{CRN}}\) are the numbers of rings in the crystallites and the CRN regions, respectively (see more details in Supplemental Material). As a result, \(F_{\text{x}}\) ranges from 0 for the most disordered structure (fully CRN) to 1 for the most ordered structures (crystalline graphene or maBN). The calculated \(F_{\text{x}}\) values for samples with the same \(\eta_{MRO}\) are quite distinct (see Table S5). The \(F_{\text{x}}\) values of the five monolayer carbon samples in Fig. 2a are used to label their structures in Fig. 2a while their RDFs are shown in Fig. 2c. All structures show clear short-range order, but their RDF peaks are broadened differently. Structures with smaller \(F_{\text{x}}\) exhibit broader RDF peaks and broader bond-angle and bond-length distributions, i.e., larger DOD (see Figs. S4 and S5). The five \(F_{\text{x}}\) values are compared with the respective \(\eta_{\text{MRO}}\) values in Table S6. The net conclusion is that \(\eta_{\text{MRO}}\) appears to be relatively insensitive to increasing DOD in samples with CRN areas that occupy more than \(\sim\)50% of the sample, i.e., for \(F_{\text{x}}<0.5\).
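The order parameter itself is trivial to evaluate once every ring has been classified; the sketch below assumes such a per-ring classification is available (the ring detection and classification procedure is described in the Supplemental Material and is not shown here).

```python
def crystallite_fraction(ring_labels):
    """Order parameter F_x from a per-ring classification (minimal sketch).

    ring_labels: iterable with one entry per ring, True if the ring belongs
    to a crystallite and False if it belongs to the CRN region.
    """
    ring_labels = list(ring_labels)
    n_x = sum(1 for is_crystalline in ring_labels if is_crystalline)
    n_crn = len(ring_labels) - n_x
    return n_x / (n_crn + n_x)


# example: 120 crystallite rings and 360 CRN rings give F_x = 0.25
f_x = crystallite_fraction([True] * 120 + [False] * 360)
```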
To validate the effectiveness of \(F_{\text{x}}\), we generated samples with completely different sizes and distributions of crystallites, but with roughly equal values of \(F_{\text{x}}\). The atomic structures, RDFs, and distributions of bond lengths and bond angles of three MAC samples with identical \(F_{\text{x}}=0.5\) are compared in Fig. 3. Similar comparisons are made for two more groups, each with three MAC samples, and \(F_{\text{x}}=0.25\) and \(F_{\text{x}}=0.75\)
respectively, in Fig. S4. In all cases, we found that _samples with very different atomic structures but the same value of \(F_{\mathbf{x}}\), i.e., samples with very different sizes and arrangements of crystallites but similar total crystallite areal fraction, exhibit nearly identical RDFs and distributions of bond lengths and angles_. In other words, \(F_{\mathbf{x}}\) captures the essential indicators of DOD.
For the binary maBN systems, the values of \(F_{\mathbf{x}}\) are also calculated and shown in Fig. 2b while their RDFs are shown in Fig. 2d. We also generated samples with completely different local atomic structures, but with roughly equal values of \(F_{\mathbf{x}}\), and compared their RDFs and their distributions of bond lengths and bond angles, shown in Fig. 3 and Fig. S5. Once more, we find that _samples with very different atomic structures but the same value of \(F_{\mathbf{x}}\) exhibit nearly identical RDFs and distributions of bond lengths and bond angles_. As all the noncanonical B-B and N-N bonds are distributed in CRN regions, the areal fraction of crystallites, \(F_{\mathbf{x}}\), is able to capture both the structural and chemical DOD in binary monolayers. We have, therefore, established \(F_{\mathbf{x}}\) as an effective indicator (order parameter) of the DOD for 2D amorphous materials.
Figure 3: **Relation of \(F_{\mathbf{x}}\) to different manifestations of the DOD.** (a, b) Atomic structures of three different MAC and maBN samples with identical \(F_{\mathbf{x}}\) = 0.50, distinguished by labels: a, b, and c. (c) RDFs of three MAC (upper panel) and maBN (lower panel) samples in (a, b). (d) Bond-length and (e) bond-angle distributions of the three MAC (upper panel) and three maBN (lower panel) samples in (a, b).
Admittedly, the definition of \(F_{\mathbf{x}}\) as a measure of the DOD is accessible to experiments only through atomic-resolution images. However, this definition enables a detailed theoretical investigation of structure-properties relations. We have investigated the mechanical properties of MAC and maBN by performing MD simulations as follows. Structures with areas \(\sim\)40\(\times\)20 nm\({}^{2}\) were generated using the modified-building-blocks method [30]. Details are described in Fig. S6 and Fig. S7. As shown schematically in Fig. 4c, a 2-nm-long precrack was introduced at the center of the model. Then a far-field tensile load is applied in the \(y\)-direction until the sample breaks (a fracture develops throughout the sample). Figure 4a compares the nominal 2D stress-strain relations for the mechanical response of graphene along the zigzag direction and that of MAC with different values of \(F_{\mathbf{x}}\). In the small-strain region, the stress increases smoothly as the strain is enhanced. There are two differences between crystalline and amorphous mechanical responses: 1) the stress in MAC is much lower than that of graphene at the same strain; 2) unlike the linear stress-strain curve of graphene, the stress-strain curves of all MAC samples are nonlinear. This nonlinear stress-strain region suggests a plastic deformation in MAC samples, which is attributed to the rough
Figure 4: **Mechanical response of MAC and maBN.** (a) Stress-strain curves of MAC with three different \(F_{\mathbf{x}}\) compared with precracked graphene along the zigzag direction. (b) Stress-strain curves of maBN with different \(F_{\mathbf{x}}\) compared with precracked monolayer h-BN along the zigzag direction. (c) Schematic of the stretching simulations. (d, e) Stress-strain curves for three groups of MAC and maBN samples, respectively. Samples in each group have similar \(F_{\mathbf{x}}\) values but completely different atomic structures, with the starred sample in the legend corresponding to the sample in (a) for MAC and (b) for maBN.
surface and the relaxations of defects in MAC [22].
In a fracture process, when the strain increases to a critical value, the stress reaches its maximum and then drops, accompanied by the fracture propagation and the release of strain energy. As a typical brittle 2D material, graphene exhibits an abrupt drop in its stress-strain curve, while the stress of the MAC samples drops in a staircase fashion. Moreover, the curves in Fig. 4a show a sequential pattern: as the DOD increases, i.e., as \(F_{\mathrm{x}}\) decreases, the critical strain increases and the maximum stress decreases. We are not ready, however, to correlate \(F_{\mathrm{x}}\) with mechanical properties, because a given value of \(F_{\mathrm{x}}\) can correspond to different samples with distinct atomic structures. We, therefore, generated additional samples with similar values of \(F_{\mathrm{x}}\) but different atomic structures. Their stress-strain curves are shown in Fig. 4d. The stress-strain curves of different samples with similar \(F_{\mathrm{x}}\) are very similar to each other when the strain is smaller than the critical strain. Moreover, different structures with similar values of \(F_{\mathrm{x}}\) exhibit similar critical strain and maximum stress. Therefore, our results demonstrate that _the mechanical properties, i.e., the critical strain and the maximum stress of MAC, are determined solely by \(F_{\mathrm{x}}\)_.
As mentioned above, the stress-strain curves of MAC exhibit kinks with abrupt drops in stress (Fig. 4a). Each kink observed on the stress-strain curve indicates an initiation, propagation, or arrest of a crack. To analyze the crack behaviors near the kinks, the stress distribution of a snapshot of MAC (\(F_{\mathrm{x}}=0.36\)) corresponding to the star mark on the stress-strain curve after the appearance of several kinks in Fig. 4a is shown in Fig. S8. It is found that there is no stress concentration near the crack tips of cracks A and B, which suggests that cracks A and B have already stopped [31]. These simulation results are consistent with experimental data on plasticity, large toughness, and arrested crack propagation in MAC [17].
We next turn to the mechanical properties of maBN. The stress-strain curves of h-BN along the zigzag direction and maBN samples with different and similar values of \(F_{\mathrm{x}}\) are shown in Fig. 4b and Fig. 4e. We see that maBN shares features with MAC, including the nonlinear stress-strain relation in the small-strain region and a smaller critical strain at higher values of \(F_{\mathrm{x}}\). Note that, although graphene and h-BN exhibit distinct fracture properties (graphene has an atomically smooth cracked edge [5], whereas h-BN lacks such smoothness [10]), they still demonstrate similar features after amorphization. Based on the present results shown in Fig. 4(b, e), we propose that the introduction of amorphousness may serve as a universal route to toughness enhancement of 2D materials.
Another obvious difference observed in the stress-strain curves of crystalline and amorphous materials is how stress decreases during the fracture process, which is associated with the propagation of cracks. In contrast to crystalline materials, stress
does not immediately drop to zero in amorphous materials. Instead, the stress in MAC and maBN decreases slowly and even fluctuates. To understand how the crack propagates in MAC and maBN, a detailed analysis of crack propagation was performed.
Figure 5a shows the pathway of the main crack in one maBN sample. It is found that the crack propagates mainly through the CRN regions between crystallites. The same is also true for MAC. The randomly distributed crystallites embedded in the CRN regions result in a meandering crack path, which costs much more energy than a straight crack path like that of crystalline graphene [5]. The crack propagation in the CRN region can be regarded as the formation and coalescence of voids as shown in Fig. S9, which is similar to bulk amorphous carbon [32]. Stress concentrations are more likely to occur in CRN regions than in crystallite regions due to the existence of holes and inhomogeneities in CRN regions. The stress concentration near the crack tip leads to the formation of voids near the crack tip, and the crack tip extends towards the voids and connects with these voids to form the new tip.
We next analyze the influence of crystallites on the crack propagation path. Some typical snapshots during crack propagation and corresponding schematics are shown in Fig. 5b-d. Figure 5b shows one blunted crack tip (blue arrows mark the direction of crack propagation) pointing to a crystallite (labeled by the red circle). In addition, there is neither
Figure 5: **Crack propagation.** (a) Crack path in crystallite-maBN. The main crack interacts with crystallites in three main ways: (b, e) crystallites stop the propagation of the main crack; (c, f) the main crack is deflected by crystallites; (d, g) while the main crack is stopped by crystallite, another crack initiates near the crystallite and a bridge is formed between the two cracks.
stress concentration (Fig. S10) nor emergent voids near the crack tip, which indicates that the propagation of the crack stops at the crystallite. In contrast, voids continuously form near the crack tip shown in Fig. 5c. As a result, this crack changes its direction and propagates along the edge of a crystallite instead of being blunted by the crystallite. In other cases, we found two cracks near a crystallite, such as in the snapshot shown in Fig. 5d. One of the cracks stops in front of the crystallite, just as the crack shown in Fig. 5b. The other crack initiates from a void far from the first crack tip. The two cracks are separated by the crystallite in between, which acts as a bridge between them.
Through a detailed analysis of these snapshots, the propagation of cracks can be understood as follows. Voids are the precursors of crack tips. However, voids are very difficult to form thermally in crystallites because the formation energies of vacancies are large (7.5 eV for the vacancy in graphene) [33]. When the crack tip reaches a crystallite, there are several possible outcomes. If there is no 'unstable structure' that is prone to cause stress concentration on either side of the crack, the degree of stress concentration at the crack tip is not enough to induce the formation of voids, whereby the propagation of the crack is forced to stop (Fig. 5b, 5e). On the contrary, if there is an 'unstable structure' nearby, from which voids can form, the crack is deflected by the crystallite (Fig. 5c, 5f). The deflection of the crack path increases the energy cost of crack propagation and hence toughens the materials.
Another possibility is that, after a propagating crack stops at a crystallite, voids form on the other side of the crystallite far from the crack tip, whereby they are not able to directly merge into the crack tip. In this case, a new crack tip initiates at such a void and the crystallite forms a bridge between the two cracks (Fig. 5d, 5g). The formation of the bridge helps reduce the local stress in the wake of the crack and toughens the material [34]. Overall, embedded crystallites terminate or deflect a crack that is propagating towards them or make the crack discontinuous. A meandering crack path increases the energy cost of crack propagation and leads to toughening of amorphous materials.
## Conclusions
In summary, accurate MLPs for monolayer amorphous carbon and BN are trained with comprehensive sampling of the phase space by kMC-assisted active-learning. Crystallites are much more energetically favorable than pseudocrystallites and can easily form in maBN, suggesting that pseudocrystallites are not likely to form in non-elemental materials. An intuitive order parameter, \(F_{\mathrm{x}}\), based on the atomic structures, is proposed to quantify the DOD in amorphous materials that comprise a Z-CRN and crystallites. Its effectiveness is demonstrated in MAC and maBN. For mechanical properties, large-scale uniform MAC and maBN samples were generated using the modified-building-block method. We find that the mechanical response before fracture, e.g., critical strain and stress, is
solely determined by \(F_{\mathrm{x}}\). As \(F_{\mathrm{x}}\) increases, there is a noticeable downward trend in the critical strain and an upward trend in the maximum stress (exploration of how the electrical conductivity of amorphous monolayers [18] correlates with \(F_{\mathrm{x}}\), however, is beyond the scope of this paper). A high crack resistance is observed in amorphous samples during the fracture process. Our analysis of crack propagation reveals that the crack resistance is attributed to complicated crack behaviors resulting from the presence of the crystallites in a Z-CRN amorphous structure. In disordered CRN regions, there is high stress concentration, leading to the formation of voids and crack propagation. Conversely, the crystallite regions, which possess resistance to void formation, can stop or deflect crack propagation or even induce the initiation of another crack, acting as a bridge between cracks. In both MAC and maBN, these behaviors are common and contribute to the propagation of cracks in a meandering and more energetically costly manner, which indicates that amorphization can toughen the two different materials in the same way. This finding suggests that amorphization may be a universal toughening mechanism, capable of improving the mechanical properties of various 2D materials.
## Acknowledgements
This work was supported by the National Key R&D program of China (No. 2019YFA0308500), the National Natural Science Foundation of China (No. 52250402 and 61888102), CAS Project for Young Scientists in Basic Research (YSBR-003), and the Fundamental Research Funds for the Central Universities. A portion of the research was performed in CAS Key Laboratory of Vacuum Physics. Work at Vanderbilt was supported by the Department of Energy, Office of Science, Basic Energy Sciences, Materials Science and Engineering Division grant No. DE-FG02-09ER46554 and by the McMinn Endowment at Vanderbilt University.
## References:
* [1] Y. Kim, J. Lee, M. S. Yeom, J. W. Shin, H. Kim, Y. Cui, J. W. Kysar, J. Hone, Y. Jung, S. Jeon _et al._, Nat. Commun. **4**, 2114 (2013).
* [2] C. Lee, X. Wei, J. W. Kysar, J. Hone, Science **321**, 385 (2008).
* [3] S. Kim, J. Yu, A. M. van der Zande, Nano Lett. **18**, 6686 (2018).
* [4] B. Ni, D. Steinbach, Z. Yang, A. Lew, B. Zhang, Q. Fang, M. J. Buehler, J. Lou, MRS Bull. **47**, 848 (2022).
* [5] P. Zhang, L. Ma, F. Fan, Z. Zeng, C. Peng, P. E. Loya, Z. Liu, Y. Gong, J. Zhang, X. Zhang _et al._, Nat. Commun. **5**, 3782 (2014).
* [6] T. Zhang, H. Gao, J. Appl. Mech. **82**, 051001 (2015).
* [7] A. Shekhawat, R. O. Ritchie, Nat. Commun. **7**, 10546 (2016).
* [8] Y. Yang, X. Li, M. Wen, E. Hacopian, W. Chen, Y. Gong, J. Zhang, B. Li, W. Zhou, P. M. Ajayan _et al._, Adv. Mater. **29**, 1604201 (2017).
* [9] G. Lopez-Polin, J. Gomez-Herrero, C. Gomez-Navarro, Nano Lett. **15**, 2050 (2015).
* [10] Y. Yang, Z. Song, G. Lu, Q. Zhang, B. Zhang, B. Ni, C. Wang, X. Li, L. Gu, X. Xie _et al._, Nature **594**, 57 (2021).
* [11] J. Robertson, Phys. Rev. Lett. **68**, 220 (1992).
* [12] C. Fan, C. Li, A. Inoue, V. Haas, Phys. Rev. B **61**, R3761 (2000).
* [13] Z. P. Lu, C. T. Liu, J. R. Thompson, W. D. Porter, Phys. Rev. Lett. **92**, 245503 (2004).
* [14] V. I. Ivashchenko, P. E. A. Turchi, V. I. Shevchenko, Phys. Rev. B **75**, 085209 (2007).
* [15] C. A. Schuh, T. C. Hufnagel, U. Ramamurty, Acta Mater. **55**, 4067 (2007).
* [16] M. Zhu, J. Zhou, Z. He, Y. Zhang, H. Wu, J. Chen, Y. Zhu, Y. Hou, H. Wu, Y. Lu, Mater. Horiz. (2023).
* [17] C. T. Toh, H. Zhang, J. Lin, A. S. Mayorov, Y. P. Wang, C. M. Orofeo, D. B. Ferry, H. Andersen, N. Kakenov, Z. Guo _et al._, Nature **577**, 199 (2020).
* [18] H. Tian, Y. Ma, Z. Li, M. Cheng, S. Ning, E. Han, M. Xu, P. F. Zhang, K. Zhao, R. Li _et al._, Nature **615**, 56 (2023).
* [19] S. Hong, C. S. Lee, M. H. Lee, Y. Lee, K. Y. Ma, G. Kim, S. I. Yoon, K. Ihm, K. J. Kim, T. J. Shin _et al._, Nature **582**, 511 (2020).
* [20] Y. T. Zhang, Y. P. Wang, X. Zhang, Y. Y. Zhang, S. Du, S. T. Pantelides, Nano Lett. **22**, 8018 (2022).
* [21] L. C. Felix, R. M. Tromer, P. A. S. Autreto, L. A. Ribeiro Junior, D. S. Galvao, J. Phys. Chem. C **124**, 14855 (2020).
* [22] W. Xie, Y. Wei, Nano Lett. **21**, 4823 (2021).
* [23] Y.-T. Zhang, Y.-P. Wang, Y.-Y. Zhang, S. Du, S. T. Pantelides, Appl. Phys. Lett. **120**, 222201 (2022).
* [24] F. Ding, B. I. Yakobson, J. Phys. Chem. Lett. **5**, 2922 (2014).
* [25] H. Wang, L. Zhang, J. Han, W. E, Comput. Phys. Commun. **228**, 178 (2018).
* [26] L. Zhang, J. Han, H. Wang, R. Car, W. E, Phys. Rev. Lett. **120**, 143001 (2018).
* [27] S. J. Stuart, A. B. Tutein, J. A. Harrison, J. Chem. Phys. **112**, 6472 (2000).
* [28] J. H. Los, J. M. H. Kroes, K. Albe, R. M. Gordillo, M. I. Katsnelson, A. Fasolino, Phys. Rev. B **96**, 184108 (2017).
* [29] S. Demirci, S. E. Rad, S. Kazak, S. Nezir, S. Jahangirov, Phys. Rev. B **101**, 125408 (2020).
* [30] B. Cai, X. Zhang, D. A. Drabold, Phys. Rev. B **83**, 092202 (2011).
* [31] A. A. Griffith, Philos. Trans. R. Soc. London, A **221**, 163 (1921).
* [32] S. M. Khosrownejad, J. R. Kermode, L. Pastewka, Phys. Rev. Mater. **5**, 023602 (2021).
* [33] M. D. Bhatt, H. Kim, G. Kim, RSC Adv. **12**, 21520 (2022).
* [34] R. O. Ritchie, Nat. Mater. **10**, 817 (2011).
**Supplemental Material for**
**"Structure and mechanical properties of monolayer amorphous carbon and boron nitride"**
Xi Zhang\({}^{1\sharp}\), Yu-Tian Zhang\({}^{1,2\sharp}\), Yun-Peng Wang\({}^{3}\), Shiyu Li\({}^{1}\), Shixuan Du\({}^{1,4}\), Yu-Yang Zhang\({}^{1\ast}\), Sokrates T. Pantelides\({}^{5,1}\)
1 University of Chinese Academy of Sciences and Institute of Physics, Chinese Academy of Sciences, Beijing 100049, China
2 CAS Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, Beijing 100190, China
3 Hunan Key Laboratory for Super Microstructure and Ultrafast Process, School of Physics and Electronics, Central South University, Changsha 410083, China
4 Songshan Lake Materials Laboratory, Dongguan, Guangdong 523808, China
5 Department of Physics and Astronomy and Department of Electrical and Computer Engineering, Vanderbilt University, Nashville, Tennessee 37235, US
*Email: [email protected]
## Kinetic-Monte-Carlo (kMC) simulations
Kinetic-Monte-Carlo (kMC) [1, 2] simulations are employed to obtain atomic configurations of MAC and maBN with different DOD. The kMC simulation has been widely used in simulating the dynamical annealing process [1-4].
For both MAC and maBN, the atomic number density of the structures is the same as in crystalline graphene and monolayer h-BN, respectively, and the B:N ratio is 1:1. During the kMC simulations, two neighboring atoms are randomly selected for a Stone-Wales (SW) transformation at each step. Considering the binary nature of BN, both SW and anti-site transformations (exchanging two atoms) are considered. The structures are then relaxed and accepted with a probability defined as min{1, exp[-(\(E_{new}\)-\(E_{old}\))/\(k_{B}\)T]}, where \(E_{old}\) is the energy of the current configuration, \(E_{new}\) is the energy of the new configuration, and \(k_{B}\)T is set to 0.5 eV.
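A minimal Python sketch of this acceptance step is given below. Only the Metropolis-type acceptance rule with \(k_{B}T=0.5\) eV follows directly from the text; the structure representation, the proposal of SW or anti-site moves, and the relaxation/energy evaluation (e.g., with the MLP) are placeholders.

```python
import math
import random

KBT = 0.5  # eV, the effective temperature used in the kMC simulations

def accept_move(e_old, e_new, kbt=KBT):
    """Accept with probability min{1, exp[-(E_new - E_old)/kBT]}."""
    if e_new <= e_old:
        return True                      # downhill moves are always accepted
    return random.random() < math.exp(-(e_new - e_old) / kbt)

def kmc_step(structure, propose_transformation, relax_and_energy):
    """One kMC step: propose an SW (or, for BN, anti-site) move on two
    neighboring atoms, relax the candidate, and accept or reject it.
    `propose_transformation` and `relax_and_energy` are placeholder callables."""
    e_old = relax_and_energy(structure)
    candidate = propose_transformation(structure)
    e_new = relax_and_energy(candidate)
    return (candidate, e_new) if accept_move(e_old, e_new) else (structure, e_old)
```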
## Molecular dynamics (MD) simulations
MD simulations are performed using the large-scale atomic/molecular massively parallel simulator (LAMMPS) [5] with the MLPs. Mechanical responses were evaluated by conducting uniaxial tensile simulations with a time increment of 1 fs.
Before applying the loading conditions, all structures were equilibrated using the Nose-Hoover barostat and thermostat method (NPT) at 300 K. Then, a constant engineering strain rate (10\({}^{-4}\) ps\({}^{-1}\)) was applied, and the NVT ensemble was employed to control the temperature fluctuations. In order to compare the stress of different monolayers, we use the nominal 2D stress, \(\sigma_{2D}=F/L\), where \(L\) is the length of the side on which the force is applied (i.e., force per unit length).
## Machine Learning Potentials (MLPs)
Recently developed machine learning potentials (MLPs) [6, 7] bring us a powerful tool to perform accurate molecular dynamics simulations in complex systems. By fitting the energy, force, and stress from density-functional-theory (DFT) calculations, the MLPs can reach DFT-level accuracy. However, the quality of the MLPs mainly depends on the quality of the training data, which consists of a series of structures and corresponding total energy, atomic force, and stress calculated by a higher-level calculation such as DFT.
### Training (kMC-assisted active-learning):
To ensure the performance of MLP in complex amorphous systems, the training data have to include structures as diverse as possible. In our work, kinetic-Monte-Carlo (kMC) [2] simulations are employed to explore the potential energy surface (PES) and collect structures with different degrees of disorder (DOD). The open-source active-learning package DPGEN [8] is used to refine the dataset.
Figure S1 shows the workflow. We first perform some kMC simulations with empirical potentials and collect a series of 2D structures with different DOD, including continuous random network (CRN), crystallite, pseudocrystallite, nanocrystalline, polycrystalline, and crystalline structures, as the initial dataset for training the MLP, which in turn helps the kMC simulation with data collection (Fig. S1). For fracture situations, some stretching simulations are performed to collect the fractured structures. DPGEN is employed in the entire procedure to expand and refine the dataset.
In the dataset, the energies, forces, and virial stresses of all the configurations are calculated using the Vienna Ab initio Simulation Package (VASP v6.3.2) [9]. A plane-wave cutoff of 500 eV and the Perdew-Burke-Ernzerhof (PBE) exchange-correlation functional [10] are employed. The PBE functional is considered suitable for the present investigations because they are based entirely on total energies of ground-state electronic configurations (hybrid functionals are needed when excited states, e.g., electronic band gaps, are calculated). The k-point grid is auto-generated with KSPACING = 0.333.
The individual active-learning datasets are combined to train the final potential. The smooth edition of DeePMD, the DeepPot-SE model (se_e2_a) [11], as implemented in the DeePMD-kit package [12, 13], was used to train the MLP. The cutoff radius of the model was set to 6.5 Å for neighbor searching, while the smoothing function decays from 4.0 Å. The sizes of the hidden layers of the embedding net from the input end to the output end are 25, 50, and 100, respectively. The fitting net consists of three hidden layers with 120 neurons in each layer. The hyperbolic tangent was employed as the activation function. The learning rate decayed exponentially from 10\({}^{-3}\) to 3.51\(\times\)10\({}^{-8}\). 90% of the dataset was randomly selected for training and the rest for validation.
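For concreteness, these training hyperparameters can be collected as follows. This is only a condensed, illustrative view arranged in the spirit of a DeePMD-kit `input.json`; exact field names, the omitted settings (e.g., neighbor selection, loss prefactors, number of training steps), and the handling of the 90%/10% training/validation split should be taken from the DeePMD-kit documentation rather than from this sketch.

```python
# Hyperparameters of the MLP training as reported above (abridged, illustrative).
mlp_training_config = {
    "model": {
        "type_map": ["B", "N"],          # ["C"] for the MAC potential
        "descriptor": {
            "type": "se_e2_a",           # smooth edition, DeepPot-SE
            "rcut": 6.5,                 # cutoff radius for neighbor searching (Angstrom)
            "rcut_smth": 4.0,            # smoothing starts here (Angstrom)
            "neuron": [25, 50, 100],     # embedding-net hidden layers
        },
        "fitting_net": {
            "neuron": [120, 120, 120],   # fitting-net hidden layers
            "activation_function": "tanh",
        },
    },
    "learning_rate": {
        "type": "exp",                   # exponential decay
        "start_lr": 1.0e-3,
        "stop_lr": 3.51e-8,
    },
}
```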
The deviations of energy and force between the MLPs and DFT calculations are shown in Figure S2. The root-mean-square error (RMSE) on the validation dataset is 6.8 (6.5) meV and 0.09 (0.2) eV/Å for energy and force in the carbon (boron nitride) system, respectively.
In Tables S1 and S2, we compare the elastic constants of graphene and h-BN calculated by DFT, the MLPs, and the best-of-breed empirical potentials. In each material, the MLP values are in good accord with the corresponding DFT results. The formation energies of some typical defects, including Stone-Wales (SW), single vacancy (SV), double vacancies (DV), triple vacancies (TV), and tetra vacancies (TeV), in graphene (Table S3) and h-BN (Table S4) are compared. It is found that both MLP and empirical-potential values are consistent with DFT calculations. However, when we consider orthorhombic boron nitride (o-B\({}_{2}\)N\({}_{2}\)) [16], a stable single-layer crystal structure of boron nitride that comprises a kind of non-canonical hexagon, the MLPs and DFT give very similar formation energies, whereas the Extended Tersoff potential gives very different results. The same kind of non-canonical hexagons exists in large numbers in pseudocrystallite-maBN generated using the Extended Tersoff potential. Thus, the inability of the Extended Tersoff potential to describe o-B\({}_{2}\)N\({}_{2}\) accurately results in the preference for pseudocrystallites in maBN [4].
## Order parameter for degree of disorder (DOD)
In a previous study [17], Tian _et al._ defined \(\eta_{\text{MRO}}\), namely the level of medium-range order (MRO), as one of two order parameters that are needed to describe the DOD-conductivity relationship in MAC. It is defined by
\[\eta_{\text{MRO}}=\frac{A_{\text{MRO}}(\text{a})}{A_{\text{MRO}}(\text{c})},\]
where \(A_{MRO}(\text{a})\) and \(A_{MRO}(\text{c})\) are the areas of the "medium-range order" regions in the RDF (6 to 10 Å) of the amorphous and crystalline materials, respectively. This definition of a DOD order parameter is suitable for experimental investigations since it can be directly calculated from measured RDFs, as was done in Ref. [17]. However, in order to correlate the conductivity with the DOD, it was found necessary to introduce a
second order parameter, the density of conducting sites \(\rho_{\rm sites}\), which was derived directly from atomic-scale images of MAC samples.
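A minimal sketch of this definition is given below, assuming that the "area" is simply the integral of the measured RDF between 6 and 10 Å; the original definition in Ref. [17] may involve additional baseline handling, so the snippet is illustrative rather than a reimplementation of that work.

```python
import numpy as np

def eta_mro(r, g_amorphous, g_crystalline, r_min=6.0, r_max=10.0):
    """eta_MRO = A_MRO(a) / A_MRO(c), with A_MRO taken here as the integral of
    the RDF over the medium-range window (r in Angstrom)."""
    window = (r >= r_min) & (r <= r_max)
    a_amorphous = np.trapz(g_amorphous[window], r[window])
    a_crystalline = np.trapz(g_crystalline[window], r[window])
    return a_amorphous / a_crystalline
```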
We investigated the applicability of \(\eta_{\rm MRO}\) to characterize the multitude of kMC amorphous structures (snapshots) and found that samples with similar \(\eta_{\rm MRO}\) may have very different DOD. Three such samples with the same MRO and hence the same \(\eta_{\rm MRO}\) (0.03) are shown in Fig. S3a. Their short-range RDFs as well as the bond-angle and bond-length distributions (Figs. S3b-d, respectively), all of which comprise part of the DOD, are different. An even more conspicuous difference in the DOD of the three samples is the fraction of the area occupied by crystallites.
The above observations led us to define the alternative order parameter, \(F_{\rm x}\), as the fraction of crystallite regions in the whole material. The fraction is defined by the ratio of the number of rings in the crystallites to the number of all the rings in the entire sample. In practice, one determines the number of hexagons \(N_{\rm x}\) in all the crystallites in a sample and the number of rings in the CRN, \(N_{\rm CRN}\). Then,
\[F_{\rm x}=\frac{N_{\rm x}}{N_{\rm CRN}+N_{\rm x}}.\]
The criterion for a hexagon to be part of a crystallite is that it must be attached to two other adjacent hexagons. In maBN, the crystallite hexagons must be canonical and be attached to two adjacent canonical hexagons. Other noncanonical hexagons are treated as parts of the Z-CRN. This definition of a DOD order parameter is suitable for theoretical investigations as it depends on the atomic structure of amorphous
monolayers.
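Given a ring decomposition of a sample, this order parameter can be evaluated with a few lines of code. The sketch below assumes that the ring statistics (which rings are canonical hexagons and which rings share an edge) have already been extracted, and it reads the crystallite criterion as "a (canonical) hexagon with at least two (canonical) hexagon neighbors"; other readings of "attached to two other adjacent hexagons" would only change the `in_crystallite` test.

```python
def crystallite_fraction(rings, is_hexagon, neighbors):
    """F_x = N_x / (N_CRN + N_x).

    rings      : list of ring ids covering the whole sample
    is_hexagon : dict, ring id -> True if the ring is a (canonical) hexagon
    neighbors  : dict, ring id -> ids of rings sharing an edge with it
    """
    def in_crystallite(ring):
        if not is_hexagon[ring]:
            return False
        hex_neighbors = sum(1 for n in neighbors[ring] if is_hexagon[n])
        return hex_neighbors >= 2          # attached to at least two adjacent hexagons

    n_x = sum(1 for ring in rings if in_crystallite(ring))
    return n_x / len(rings)                # every ring is either in a crystallite or in the CRN
```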
The calculated \(F_{\mathrm{x}}\) values of the three MAC samples with identical \(\eta_{\mathrm{MRO}}\) in Fig. S3 are listed in Table S5 along with their \(\eta_{\mathrm{MRO}}\) and formation energy, defined by
\[E_{f}=E_{ave}(a)-E_{ave}(c),\]
where \(E_{\mathrm{ave}}\)(a) and \(E_{\mathrm{ave}}\)(c) are the energies per atom of the amorphous and crystalline samples, respectively. Their different DOD, exhibited in Fig. S3, are also reflected in their formation energies, which decrease from 0.53 to 0.28 eV/atom while their \(F_{\mathrm{x}}\) increase from 0.11 to 0.46. Thus, \(F_{\mathrm{x}}\) clearly distinguishes the DOD differences among samples that have similar \(\eta_{\mathrm{MRO}}\).
The order parameters \(F_{\mathrm{x}}\) and \(\eta_{\mathrm{MRO}}\) of graphene and the five MAC structures with completely different DOD depicted in Fig. 2a of the main text have been calculated and are listed in Table S6. The RDFs of these five samples are shown in Fig. 2c of the main text. The limited sensitivity of \(\eta_{\mathrm{MRO}}\) in characterizing materials of high degree of amorphousness can also be seen in the case of MAC-1 and MAC-2: the formation energy decreases from 0.62 to 0.31 eV/atom and \(F_{\mathrm{x}}\) increases from 0.10 to 0.39, while \(\eta_{\mathrm{MRO}}\) only changes from 0.04 to 0.03. Furthermore, \(F_{\mathrm{x}}\) is also effective and sensitive for samples with lower DOD as it smoothly drops from 1.00 for graphene to 0.74 for MAC-4 while \(\eta_{\mathrm{MRO}}\) drops precipitously from 1 to 0.14 and after that it decreases very slowly.
Figure S4. Relation of \(F_{\mathbf{x}}\) to different manifestations of the MAC DOD. (a, b) Atomic structures of two groups of MAC samples with identical \(F_{\mathbf{x}}\) values of 0.25 and 0.75, respectively. Each group contains three different MAC samples, distinguished by labels: a, b, and c. (c) RDFs of the two groups of MAC samples in (a, b) (in each group, the purple line covers the other two colors almost completely). (d) Bond-length and (e) bond-angle distributions of the two groups of MAC samples in (a, b).
Figure S5. Relation of \(F_{\mathbf{x}}\) to different manifestations of the maBN DOD. (a, b) Atomic structures of two groups of maBN samples with identical \(F_{\mathbf{x}}\) values of 0.25 and 0.75, respectively. (c) RDFs of the two groups of maBN samples in (a, b) (in each group, the purple line covers the other two colors almost completely). (d) Bond-length and (e) bond-angle distributions of the two groups of maBN samples in (a, b).
Since \(F_{\mathrm{x}}\) is defined as the fraction of crystallite regions, we need to examine the relationship between \(F_{\mathrm{x}}\) and other structural information for validation. We collect three groups of MAC and three groups of maBN structures with roughly equal \(F_{\mathrm{x}}\) within each group, but with recognizably different atomic structures. The group of MAC and maBN with an \(F_{\mathrm{x}}\) value of 0.5 is shown in Fig. 3 of the main text. The atomic structures, as well as the RDFs and the distributions of bond angles and bond lengths, of the other two groups are shown in Fig. S4 and Fig. S5 for MAC and maBN, respectively. It is found that samples with the same \(F_{\mathrm{x}}\) exhibit nearly identical RDFs and distributions of bond lengths and bond angles, which indicates that they have the same DOD. Since \(\eta_{\mathrm{MRO}}\) is determined by the RDF and samples with the same \(F_{\mathrm{x}}\) exhibit very similar RDFs, they also have similar \(\eta_{\mathrm{MRO}}\). As Table S5 demonstrates, however, MAC samples with very similar \(\eta_{\mathrm{MRO}}\) can have very different \(F_{\mathrm{x}}\). Overall, we conclude that \(F_{\mathrm{x}}\) is both intuitive and effective in capturing the DOD of MAC with sufficient sensitivity. Additionally, \(F_{\mathrm{x}}\) also serves well as an order parameter in maBN, the binary amorphous system.
## Modified building blocks method:
The building blocks method [18], which is commonly used for amorphous systems, was employed to build large-scale monolayer amorphous structures for the stretching simulations. In this work, we used kMC simulations instead of the melt-quench procedure to generate the amorphous structures, and we added some modifications to reduce the simulation time.
Figure S6 shows the workflow of the modified building blocks method. First, we generated 32 uncorrelated 5\(\times\)5 nm\({}^{2}\) monolayer amorphous structures with a similar \(F_{\mathrm{x}}\). Then the 40\(\times\)20 nm\({}^{2}\) sample is built from the building blocks. To smooth the boundaries that connect the building blocks (red region in Figure S6), we performed a constrained kMC, in which SW transformations and bond exchanges were applied only near the boundaries until the DOD of the boundaries and that of the building blocks were similar. Finally, we performed a global kMC on the entire structure to ensure uniformity.
### Validation of the modified building blocks method:
Figure S7a shows the atomic structure of a building-block sample. The boundary regions and the other regions are colored in red and blue, respectively. To examine the validity of the modified building blocks method, the bond-angle and bond-length distributions of the different regions are calculated and shown in Figure S7. It is found that the distributions of the two regions are very similar, which indicates that the entire sample is uniform and that the modified building blocks method is valid.
## Additional simulation results:
In order to demonstrate the arrested crack propagation, we selected a snapshot during the stretching simulation that had experienced two stress drops (two kinks on the stress-strain curve). The stress distribution in this particular snapshot was calculated and is illustrated in Fig. S8. This snapshot contains three cracks, among which cracks A and B exhibit no obvious stress concentration at their crack tips, while only the right-side crack tip of crack C shows stress concentration. This finding indicates that two of the cracks in the snapshot have undergone an arrested propagation, and their arrests correspond to the two kinks on the stress-strain curve.
Figure S9 shows two snapshots of maBN during crack propagation. The voids (labeled by the black circle) constantly form near the crack tip (labeled by the blue arrow) and coalesce with the crack. The voids form near some randomly distributed holes, which results in a meandering crack path.
Figure S10 shows the stress distribution of a MAC sample in which a crack reaches a crystallite and stops propagating. The blunted crack tip is magnified to illustrate the atomic structure, with the crystallite (CRN) colored green (gray) and labeled by a black circle. Compared to a propagating crack tip, there is no stress concentration near the blunted crack tip, which indicates that the crack tip has already stopped.
2309.03530 | Efficient Single Object Detection on Image Patches with Early Exit
Enhanced High-Precision CNNs | This paper proposes a novel approach for detecting objects using mobile
robots in the context of the RoboCup Standard Platform League, with a primary
focus on detecting the ball. The challenge lies in detecting a dynamic object
in varying lighting conditions and blurred images caused by fast movements. To
address this challenge, the paper presents a convolutional neural network
architecture designed specifically for computationally constrained robotic
platforms. The proposed CNN is trained to achieve high precision classification
of single objects in image patches and to determine their precise spatial
positions. The paper further integrates Early Exits into the existing
high-precision CNN architecture to reduce the computational cost of easily
rejectable cases in the background class. The training process involves a
composite loss function based on confidence and positional losses with dynamic
weighting and data augmentation. The proposed approach achieves a precision of
100% on the validation dataset and a recall of almost 87%, while maintaining an
execution time of around 170 $\mu$s per hypotheses. By combining the proposed
approach with an Early Exit, a runtime optimization of more than 28%, on
average, can be achieved compared to the original CNN. Overall, this paper
provides an efficient solution for an enhanced detection of objects, especially
the ball, in computationally constrained robotic platforms. | Arne Moos | 2023-09-07T07:23:55Z | http://arxiv.org/abs/2309.03530v1 | # Efficient Single Object Detection on Image Patches with Early Exit Enhanced High-Precision CNNs
###### Abstract
This paper proposes a novel approach for detecting objects using mobile robots in the context of the RoboCup Standard Platform League, with a primary focus on detecting the ball. The challenge lies in detecting a dynamic object in varying lighting conditions and blurred images caused by fast movements. To address this challenge, the paper presents a convolutional neural network architecture designed specifically for computationally constrained robotic platforms. The proposed CNN is trained to achieve high precision classification of single objects in image patches and to determine their precise spatial positions. The paper further integrates Early Exits into the existing high-precision CNN architecture to reduce the computational cost of easily rejectable cases in the background class. The training process involves a composite loss function based on confidence and positional losses with dynamic weighting and data augmentation. The proposed approach achieves a precision of 100% on the validation dataset and a recall of almost 87%, while maintaining an execution time of around 170 µs per hypothesis. By combining the proposed approach with an Early Exit, a runtime optimization of more than 28%, on average, can be achieved compared to the original CNN. Overall, this paper provides an efficient solution for an enhanced detection of objects, especially the ball, in computationally constrained robotic platforms.
Keywords: RoboCup Standard Platform League · Convolutional Neural Network · Object Detection · Humanoid Robots · Early Exits · Real-time Processing
## 1 Introduction
Mobile robots require robust, reliable, and precise object detection capabilities to effectively perform their tasks. This paper focuses on the RoboCup Standard Platform League, which involves playing soccer using the NAO V6 humanoid robot platform1. The robots use their cameras to detect objects in their environment, including static and dynamic ones like a rolling ball or other robots. Detecting
dynamic objects can be challenging due to varying lighting conditions and fast movements. Deep neural networks with many layers are typically used, which increases the demand for computing power, a resource that is scarce on a mobile robot platform like the NAO V6. Precise object detection is therefore more important than recall, as it is better to miss an object for a few frames than to have false detections and focus on the wrong areas. High precision also allows for faster re-detection of an object after it is lost, because a single detection can be relied upon.
In robot soccer, the ball is the most critical object to detect because a match cannot be won without accurate detection of the ball. Detecting a rolling ball is crucial for the robot to react quickly. Therefore, its detection was studied in this paper. Conventional preprocessing techniques, such as scan lines, are used to identify a larger number of candidate regions where a ball may be present. However, these regions must be classified with a high precision. At the same time, the exact position of the ball, i.e. its center, within this patch must be determined, since it cannot be assumed that the candidate regions are always exactly centered on the object. Typically, the candidate regions' input data passes through a fixed neural network architecture. However, this fixed feed forward execution does not take into account that many of the patches that belong to the background class are more easily detectable and can therefore be rejected at earlier stages in the neural network.
This paper's main contribution consists of two parts. First, it presents a convolutional neural network architecture designed for computationally constrained robotic platforms, which is trained to achieve high precision classification of single objects in image patches and to determine their precise spatial positions. Second, the paper integrates Early Exits into an existing high-precision CNN architecture to reduce the computational cost of easily rejectable cases in the background class.
The remainder of this paper is organized as follows: Section 2 presents object detection techniques for resource-constrained robots, particularly in the context of the RoboCup Standard Platform League. Furthermore, related approaches concerning the use of Early Exits are discussed. Section 3 explains the approach presented in this paper, including model design decisions, specialized training, and the addition of an Early Exit. The performance of the proposed approach is then evaluated in Section 4. Finally, Section 5 concludes with a summary and an outlook.
## 2 Related Work
This section covers two different topics of related work. In the first subsection, we present the different ball detection algorithms used by several teams in the RoboCup Standard Platform League, which include the use of neural networks and specialized algorithms. In the second subsection, we will highlight the concept of Early Exit neural networks and the various techniques proposed by researchers to incorporate them into deep neural networks.
### Ball Detection in the RoboCup Standard Platform League
In recent years, the RoboCup Standard Platform League has seen significant advances in ball detection algorithms for the NAO robot, especially since the transition to a black and white ball in 2016. These improvements have enabled robots to better detect, track, and respond to the ball during gameplay, resulting in more accurate and efficient play combined with passes.
Using a multistep process for ball detection, the B-Human team [18] scans for ball candidates using scan lines, followed by a neural network-based classification process to identify the real ball and estimate its center and radius. Their system includes three neural networks, one CNN for feature extraction and two DNNs for ball classification and position estimation. Similarly, the HTWK Robots team [8] uses a two-phase ball detection algorithm that involves an integral image and a deep convolutional neural network for hypothesis generation and classification, respectively. The rUNSWift team [1] uses a new convolutional neural network to improve their ball detection recall. Their framework's ball candidate finder undergoes pre-processing, heuristic checks, and quality modifiers for consistent region of interest scaling. Using a candidate generator based on filtered segments and multiple neural networks, the HULKs team's [5] approach involves a pre-classification network for higher recall and a second classification network for higher precision. The position and radius of the ball are determined by a third neural network, which is optimized for maximum candidate throughput using a genetic algorithm. The Dutch Nao team [4] developed an improved ball detection system, which uses a convolutional neural network for candidate generation and a field border detection system to reduce false positives. The Berlin United team [13] proposes a two-step approach to detecting the ball used in competitions. Their approach involves finding candidates through perspective key points detection and classifying them using a measure function based on integral images. They also employ heuristics and neural networks to make the process tractable.
Menashe et al. [14], affiliated with the UT Austin Villa team, present an approach that combines color and texture features to distinguish the ball from the field and other objects in the image. The authors use a sliding window technique to localize the ball and apply a machine learning classifier to verify the detection. In [21], Yan et al. propose a real-time lightweight CNN for ball detection in robots with limited computational resources, utilizing a combination of convolutional and pooling layers to achieve high accuracy while keeping the model small. Additionally, the paper by O'Keeffe and Villing [15] proposes a benchmark data set and evaluation of deep learning architectures for ball detection in the RoboCup SPL, which can be used to compare the effectiveness of various ball detection approaches.
### Early Exit Neural Networks
Deep neural networks (DNNs) have shown remarkable performance in various fields, such as computer vision and natural language processing. However, they are computationally expensive and require significant resources, hindering their
deployment on resource-constrained devices. One approach to address this challenge is the use of Early Exits in DNNs. Early Exits allow a neural network to terminate its inference process early, bypassing unnecessary computations for some inputs, thereby reducing the overall computational cost.
Researchers have proposed methods for incorporating Early Exits in DNNs for efficient inference. Teerapittayanon et al. [20] introduced BranchyNet, a framework for fast inference via early exiting from DNNs by training auxiliary classifiers for intermediate layers. Huang et al. [9] presented Multi-Scale Dense Networks, which utilize a dense connectivity pattern and multiple paths with different resolutions to enable Early Exits. Figurnov et al. [6] introduced Spatially Adaptive Computation Time for Residual Networks, which dynamically adjusts the computation time for different regions of an input image. Bolukbasi et al. [3] proposed Adaptive Neural Networks for Efficient Inference, which use a reinforcement learning-based approach to decide when to exit early. Panda et al. [16] presented Conditional Deep Learning, which employs an energy-based gating mechanism to selectively execute layers. Jayakodi et al. [10] proposed a co-design approach for trading-off accuracy and energy of deep inference on embedded systems. Berestizhevsky and Even [2] introduced cascaded inference based on softmax confidence, which dynamically sacrifices accuracy for reduced computation. Passalis et al. [17] proposed a hierarchical Early Exit approach that adapts the number and position of Early Exits for different input instances. Matsubara et al. [12] identified the benefits of early exiting in split computing architectures, including reduced memory consumption, faster inference, and better load balancing across different processing units.
The approaches mentioned above have their unique strategies for Early Exits, and they can be categorized based on their input-adaptive, spatially adaptive, or hierarchical architectures. Input-adaptive methods dynamically adjust the network depth and width based on the input data [3, 16]. Spatially adaptive methods adjust the computation time or number of operations needed for different input regions [6]. Hierarchical Early Exit methods split the computation into multiple parts and terminate the computation based on an Early Exit criterion [20, 9, 17]. Other approaches aim to trade off accuracy and energy by co-designing hardware and software for deep inference on embedded systems [10] or utilize cascaded inference based on softmax confidence, which dynamically sacrifices accuracy for reduced computation [2].
The input-adaptive and spatially adaptive methods are particularly useful for handling large variations in input data, such as in image classification tasks with varying sizes or aspect ratios. On the other hand, hierarchical Early Exit methods are more suitable for tasks with a clear hierarchy of features, such as in object detection or segmentation tasks. This paper falls into the latter category. However, the Early Exits are utilized quite differently from those mentioned before. In our case, there are only very few examples of the positive class to be recognized, but quite a few cases of the background class. Therefore, this paper presents an approach for accelerating the classification of the background class through an Early Exit enhancement.
## 3 Approach
As described in Section 1, this work's approach is to classify image regions (patches) obtained through preprocessing. Thereby, this work focuses on detecting the ball, which is one of the most crucial objects in robot soccer. In a worst-case scenario, where no ball is present in the image, the preprocessing stage generates up to 80 hypotheses per frame, with a mean and standard deviation of \(30\pm 10\) hypotheses.
This leads to the constraint that the detection must be executed with a high precision of at least 99.99%. Consequently, since in robot soccer there is only one ball on the field at a time, it also means that once a ball has been detected with high precision, the processing of all subsequent patches can be skipped for this frame. Nevertheless, it is imperative that the total execution time does not exceed the robot's real-time data processing capability, which is typically 30 FPS for the cameras. Thus, it is crucial to ensure that the robot can execute other important modules after the ball detection.
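The per-frame control flow implied by these constraints can be summarized by the following sketch. The threshold value and the interface of the `classify` callable (returning a confidence and a ball-center estimate) are illustrative assumptions; only the idea of stopping after the first high-confidence detection is taken from the text.

```python
def process_frame(patches, classify, conf_threshold=0.99):
    """Run the patch classifier on the ball hypotheses of one frame and stop as
    soon as one patch is accepted with high confidence (at most one ball exists)."""
    for patch in patches:                    # typically 30 +/- 10, at most ~80 patches
        confidence, center_x, center_y = classify(patch)
        if confidence >= conf_threshold:
            return center_x, center_y        # remaining hypotheses are skipped
    return None                              # no ball found in this frame
```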
Section 3.1 presents the neural network architecture we propose together with the chosen design decisions. Then, in Section 3.2 the dataset is explained, followed by a discussion on the training process in Section 3.3. Finally, Section 3.4 presents the enhancement of the neural network using Early Exits.
### Model Architecture
When designing a convolutional neural network (CNN) for detecting a ball in an image patch, the first and most important constraint considered is the execution time. For the framework running on the NAO V6, the ball detection should not exceed 8 ms in the normal case to provide enough buffer for the subsequent modules. When considering an average of \(30\pm 10\) hypotheses per frame, this results in a maximum inference time of 0.2 ms per hypothesis.
Considering that convolution layers consume most of the execution time, it is apparent that there is room for optimization. Hence, we follow the MobileNet[7, 19] approach, where computationally intensive convolutions are substituted with depthwise separable convolutions. A depth multiplier greater than 1 is utilized, temporarily increasing the number of filters for the depthwise convolution and subsequently reducing them for the pointwise convolution. This allows the extraction of more complex features while adhering to the execution time limitations.
Since most processors support SIMD instructions of some kind, including the NAO V6 with up to SSE 4.2, this possibility of parallel processing is also taken into account. With the NAO V6, four 4-byte data types (e.g., floats) can be processed simultaneously by SIMD instructions using a 128-bit register. To take advantage of this, care was taken in the design to ensure that the number of filters is divisible by four.
The final model architecture for the CNN can be seen in Table 1.
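A minimal Keras sketch of an architecture in the spirit of Table 1 is shown below. Padding, the placement of Batch Normalization and Leaky ReLU after each convolution (as stated in the caption of Table 1), and the interpretation of the three outputs as confidence plus ball-center coordinates are assumptions of this sketch rather than details fixed by the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def conv_block(x, conv_layer_cls, **conv_kwargs):
    """Convolution followed by Batch Normalization and Leaky ReLU."""
    x = conv_layer_cls(**conv_kwargs)(x)
    x = layers.BatchNormalization()(x)
    return layers.LeakyReLU()(x)

def build_patch_cnn():
    inputs = layers.Input(shape=(32, 32, 3))
    x = inputs
    # (separable filters, depth multiplier, pointwise filters) per stage, following Table 1
    for sep_filters, depth_mult, pw_filters in [(8, 1, 4), (16, 2, 8), (20, 4, 12), (32, 8, 16)]:
        x = conv_block(x, layers.SeparableConv2D, filters=sep_filters, kernel_size=3,
                       strides=2, padding="same", depth_multiplier=depth_mult)
        x = conv_block(x, layers.Conv2D, filters=pw_filters, kernel_size=1)
    x = layers.Flatten()(x)
    outputs = layers.Dense(3)(x)   # assumed head: [confidence, center_x, center_y]
    return models.Model(inputs, outputs, name="ball_patch_cnn")

model = build_patch_cnn()
model.summary()
```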
### Dataset
The dataset we use for this work consists of small image patches in the RGB format with a size of 32x32 pixels. These were obtained from the preprocessing of the Nao Devils framework during games in recent years. A total of 225350 patches were labeled by hand. The dataset was then split in a ratio of around 70/30 between training and validation data. A detailed distribution can be found in Table 2. In addition to the initial classification, the following properties were also labeled:
* **Bounding Box:** The upper left and lower right corners of the surrounding bounding box. Here, the coordinates can also be outside the patch because in the end, only the center of the object is used, so truncated objects can also be detected properly.
* **Concealed:** Indication of whether another object (i.e., in the foreground) partially conceals the object to be classified.
* **Visibility:** The visibility of the object in discrete increments with 25% steps (i.e., 0-25%, 25-50%,...) based on the size of the bounding box that is inside the image, as well as the degree of concealment.
These additional properties account for image patch complexity in detection. A clear, fully visible ball being undetected is worse than a blurry or partially obscured ball.
| Layer (type) | Filter | Kernel | Stride | Depth M. | #MAC | Output |
| --- | --- | --- | --- | --- | --- | --- |
| Input | - | - | - | - | - | 32x32x3 |
| SeparableConv2D* | 8 | 3x3 | 2x2 | 1 | 13055 | 16x16x8 |
| Conv2D | 4 | 1x1 | 1x1 | - | 8190 | 16x16x4 |
| SeparableConv2D | 16 | 3x3 | 2x2 | 2 | 12800 | 8x8x16 |
| Conv2D | 8 | 1x1 | 1x1 | - | 8190 | 8x8x8 |
| SeparableConv2D | 20 | 3x3 | 2x2 | 4 | 14850 | 4x4x20 |
| Conv2D | 12 | 1x1 | 1x1 | - | 3840 | 4x4x12 |
| SeparableConv2D | 32 | 3x3 | 2x2 | 8 | 15745 | 2x2x32 |
| Conv2D | 16 | 1x1 | 1x1 | - | 2050 | 2x2x16 |
| Flatten | - | - | - | - | - | 64 |
| Dense | - | - | - | - | 192 | 3 |

Total #MAC: 78912; Total #Params: 6686.

Table 1: Architecture of the CNN for the ball detection on an image patch. Each Separable-/Convolutional layer is followed by a Batch Normalization and a Leaky ReLU layer. The * marks the layer after which the Early Exit is attached.
### Training
The training is conducted in TensorFlow2, a software framework for machine learning. For the inference of the trained neural network on the NAO V6, TensorFlow Lite is used, which is specially designed for inference on mobile edge devices. To enable the optimizer to perform effectively, a loss function that meets the requirements of the problem is needed. Since there are different objectives in the detection process, we present a composite loss function based on the two main objectives, combined with a dynamic weighting:
Footnote 2: [https://www.tensorflow.org/](https://www.tensorflow.org/)
* **Confidence Loss:** Since the prediction of the confidence corresponds to probability distributions in the value range between 0 and 1, the use of a binary cross entropy seems to be appropriate. However, this loss function does not include any weighting to focus more on difficult examples. Therefore, the use of the Focal Loss [11] is proposed, which is based on cross entropy but adds a weighting factor to down-weight the nearly correctly classified examples and thus focus more on difficult examples.
* **Positional Loss:** To evaluate the deviation of the position between ground truth and prediction, the Manhattan Distance is suggested. By using it, it is possible to determine the pixel difference between the true position and the prediction.
* **Dynamic Weighting:** The Dynamic Weighting has two parts. The first part uses dataset properties, as described in Section 3.2, to penalize misclassification of simple examples more severely. This prioritizes objects that need to be recognized and increases recall of simple examples. The second part optimizes the training process for high precision by assigning each patch to one of four sections of the confusion matrix and multiplying them by weighting factors. To teach the neural network to avoid false positives, a large factor of \(w_{fp}=1000\) is proposed.
At the end, the two loss functions are combined with specific weighting factors. The factors \(w_{c}=1.0\) for the Confidence Loss and \(w_{p}=0.5\) for the Positional Loss have proven to be effective in creating a total loss.
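A simplified TensorFlow sketch of such a composite loss is given below. The output layout (a sigmoid-activated confidence followed by the two center coordinates), the focal-loss focusing parameter, and the reduction of the dynamic weighting to a single false-positive factor are assumptions; the full scheme in the paper additionally weights samples by the labeled visibility and concealment properties.

```python
import tensorflow as tf

W_CONF, W_POS, W_FP = 1.0, 0.5, 1000.0      # weighting factors proposed in the text

def focal_loss(conf_true, conf_pred, gamma=2.0, eps=1e-7):
    """Binary focal loss [11] on the confidence output."""
    conf_pred = tf.clip_by_value(conf_pred, eps, 1.0 - eps)
    pt = tf.where(conf_true > 0.5, conf_pred, 1.0 - conf_pred)
    return -tf.pow(1.0 - pt, gamma) * tf.math.log(pt)

def composite_loss(y_true, y_pred):
    """Assumed layout: y[..., 0] = confidence, y[..., 1:3] = ball-center position."""
    conf_t, pos_t = y_true[..., 0], y_true[..., 1:3]
    conf_p, pos_p = y_pred[..., 0], y_pred[..., 1:3]

    # confidence term, with false positives penalized heavily (precision focus)
    l_conf = focal_loss(conf_t, conf_p)
    fp_mask = tf.logical_and(conf_t < 0.5, conf_p >= 0.5)
    l_conf = tf.where(fp_mask, W_FP * l_conf, l_conf)

    # positional term: Manhattan distance, only for patches that contain a ball
    l_pos = tf.reduce_sum(tf.abs(pos_t - pos_p), axis=-1) * conf_t

    return tf.reduce_mean(W_CONF * l_conf + W_POS * l_pos)
```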
To improve the generalization of the neural network, data augmentation is employed. The amount of augmentation is dynamically controlled and gradually increased. Initially, affine transformations like scaling, translation, rotation, and shear are applied, as well as left-right flipping. Later, more augmentations such
| | Ball | No ball | Total |
| --- | --- | --- | --- |
| Training | 69544 _(44.07%)_ | 88274 _(55.93%)_ | 157818 |
| Validation | 28985 _(42.92%)_ | 38547 _(57.08%)_ | 67532 |

Table 2: Number of patches in the dataset belonging to each class and its distribution to training and validation sets.
as brightness, contrast, and color changes, as well as motion blur and JPEG compression artifacts, are added.
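One possible implementation of such a staged augmentation pipeline is sketched below using the albumentations library; the library choice, the parameter ranges, and the two-stage split are illustrative assumptions, and the handling of the ball-center label (which must be transformed together with the image) is omitted.

```python
import numpy as np
import albumentations as A

def build_augmenter(stage):
    """stage 0: affine transforms and flipping only; stage >= 1: additionally
    photometric changes, motion blur, and JPEG compression artifacts."""
    geometric = [
        A.HorizontalFlip(p=0.5),
        A.Affine(scale=(0.9, 1.1), translate_percent=(-0.1, 0.1),
                 rotate=(-15, 15), shear=(-8, 8), p=0.7),
    ]
    photometric = [
        A.RandomBrightnessContrast(p=0.5),
        A.HueSaturationValue(p=0.3),
        A.MotionBlur(p=0.2),
        A.ImageCompression(p=0.3),
    ]
    return A.Compose(geometric if stage == 0 else geometric + photometric)

augment = build_augmenter(stage=1)
patch = np.zeros((32, 32, 3), dtype=np.uint8)          # stand-in for a real 32x32 patch
augmented_patch = augment(image=patch)["image"]
```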
### Adding an Early Exit
As stated in Section 3, numerous ball hypotheses require classification. Not all images are of equal difficulty for classification. Therefore, it would be advantageous if the neural network is executed only up to the layer where precise prediction is possible to conserve computational time. Some methods for achieving this have already been introduced in Section 2.
However, the distribution of object-to-background classes presented in this paper exhibits a substantial class imbalance, given that at most one ball should be present on the field/image. In our case, the primary goal is the rapid rejection of the background class, which only results in a change in recall, but it does not affect precision, which remains at a very high level.
Therefore, this paper proposes a method to enhance a deep neural network by adding an Early Exit to stop further inference when the background class has already been detected. This approach is generic and not specifically limited to the model presented in Section 3.1. The procedure for inserting the Early Exit is as follows:
1. Design a neural network model that satisfies the precision-targeted requirements. The execution time can be at the upper bound of the runtime limit.
2. Train the model normally until there is no further improvement on the validation dataset. After training, lock all layer weights. In TensorFlow, this can be done using the trainable flag of the layers.
3. Examine the neural network model; we suggest inserting an Early Exit after the first convolutional layer. The combination of a Max Pooling and a Dense layer has been found to be the most promising. Table 1 shows, marked with an *, after which layer the Early Exit is inserted for the CNN presented in this paper, while Table 3 shows its structure. The Early Exit enhancement can be applied directly to the trained and locked model, or a new model with transferred weights can be created.
4. Train the layers of the Early Exit using the Confidence Loss, as described in Section 3.3, with a high weighting factor for the false negatives \(w_{fn}=100\). Achieving high recall is crucial for the Early Exit to avoid discarding potential positive objects too early.
5. Separate the neural network at the Early Exit, resulting in two models. The first model uses the image patch as input and outputs the convolution output and the Early Exit classification. If the confidence at the Early Exit is high enough, indicating that the patch probably contains the expected object, execute the second model with the convolution output of the first model as its input.
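The following Keras sketch illustrates steps 2, 3, and 5 for a purely sequential backbone such as the one in Table 1. The split layer name, the sigmoid Early Exit output, and the confidence threshold are illustrative assumptions; training the Early Exit head (step 4) with the weighted confidence loss is omitted for brevity.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def split_with_early_exit(trained_cnn, split_layer_name):
    """Attach a MaxPooling+Dense Early Exit after `split_layer_name` of a
    trained, frozen backbone and return the two staged sub-models."""
    trained_cnn.trainable = False                          # step 2: lock all weights

    split_out = trained_cnn.get_layer(split_layer_name).output

    # step 3: Early Exit head (cf. Table 3)
    ee = layers.MaxPooling2D(pool_size=2)(split_out)
    ee = layers.Flatten()(ee)
    ee = layers.Dense(1, activation="sigmoid", name="early_exit")(ee)

    # stage 1: patch -> (intermediate features, Early Exit confidence)
    stage1 = models.Model(trained_cnn.input, [split_out, ee])

    # stage 2: intermediate features -> original network output (step 5)
    tail_input = layers.Input(shape=split_out.shape[1:])
    x, after_split = tail_input, False
    for layer in trained_cnn.layers:                       # assumes a sequential topology
        if after_split:
            x = layer(x)
        if layer.name == split_layer_name:
            after_split = True
    stage2 = models.Model(tail_input, x)
    return stage1, stage2

def staged_predict(stage1, stage2, patch, ee_threshold=0.5):
    features, p_ball = stage1(patch[None, ...])
    if float(p_ball[0, 0]) < ee_threshold:
        return None                                        # rejected at the Early Exit
    return stage2(features)                                # full confidence + position
```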
## 4 Evaluation
For the evaluation of the developed CNN presented in Section 3.1, the dataset described in Section 3.2 is utilized. As outlined in Section 3.3, the ball detection CNN is initially trained without modification, after which it is enhanced with an Early Exit following the first convolutional layer and called EE-CNN. The evaluation criteria comprise both the runtime, discussed in Section 4.1, and the detection performance, which is assessed in Section 4.2 using a confusion matrix from which precision and recall are determined.
### Runtime Evaluation
In order to measure the runtime on the NAO robot, we use the TensorFlow Lite runtime environment. In this process, 3600 measurements were performed, and the results are shown in Table 4. It is directly evident that the fully executed EE-CNN, with 180 µs, is 7.14% slower than the original CNN. This is because, as can be seen in Table 3, several new layers have been added that require additional computations. However, it can also be seen that the execution time up to the Early Exit, 64 µs, is about 62% lower, which enables the approach presented in this paper to gain a performance advantage and to reduce the overall execution time.
### Dataset Evaluation
To compare the performance of the new EE ball detection CNN to the original CNN, we executed both on the same training and validation dataset. Based on
| | Mean [ms] | Std [ms] | Min [ms] | Max [ms] |
| --- | --- | --- | --- | --- |
| Full CNN | 0.168 | 0.077 | 0.129 | 1.411 |
| Full EE-CNN | 0.180 _(+7.14%)_ | 0.083 _(+7.79%)_ | 0.136 _(+5.43%)_ | 1.406 _(-0.36%)_ |
| Early Exit | 0.064 | 0.049 | 0.043 | 1.299 |

Table 4: Execution times measured on the NAO V6 over 3600 measurements.
| Layer (type) | Pool Size | Stride | #MAC | Output |
| --- | --- | --- | --- | --- |
| Input | - | - | - | 16x16x8 |
| MaxPooling2D | 2x2 | 2x2 | 510 | 8x8x8 |
| Flatten | - | - | - | 512 |
| Dense | - | - | 1025 | 1 |

Total #MAC: 1535 _(+1.95%)_; Total #Params: 513 _(+7.67%)_.

Table 3: Architecture of the Early Exit for the proposed ball detection CNN.
the predicted classifications, we calculated the confusion matrix and determined the precision and recall. Results on training data are provided as additional information only; the evaluation is performed solely on validation data. As can be seen in Table 5, the original CNN achieves a precision of 100% on the validation dataset, with a recall of almost 87%. This shows that the CNN presented here is able to detect most balls with very high precision. Moreover, an average deviation of the predicted ball center of around 0.471\(\pm\)0.795 pixels demonstrates the effectiveness of the presented CNN model in determining the correct spatial position.
When considering the Early Exit extended CNN in Table 6, there is only a small change in recall and no change in precision. The latter is expected, since the Early Exit presented in this work never contributes a preliminary classification of the positive class, i.e., the ball. Thus, only the recall decreases slightly, by 0.35%. The much more relevant result is shown in the last column, which reports how often the Early Exit decided to stop early. On the validation dataset, a decision can be made at the Early Exit for roughly 43% of the hypotheses without significantly influencing the recall.
Based on the class distribution of the dataset shown in Table 2, the runtimes shown in Table 4, and the number of early exits, this leads to the mean execution time of 131 μs reported in Table 6, which corresponds to a runtime optimization of more than 28% compared to the original CNN.
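The following short calculation, which is not part of the original evaluation code, reproduces (up to rounding) the validation-set figures from Tables 4 and 6: precision and recall from the confusion matrix, the fraction of early exits, and the resulting expected mean runtime.

```python
# EE-CNN confusion matrix on the validation set (Table 6).
tp, fp, tn, fn = 25033, 0, 38581, 3970
precision = tp / (tp + fp)        # = 1.00
recall = tp / (tp + fn)           # ~= 0.8631

# Fraction of hypotheses resolved at the Early Exit (#EE column in Table 6).
n_patches = tp + fp + tn + fn     # 67584 validation patches
p_exit = 28750 / n_patches        # ~= 0.425

# Expected runtime as a mixture of the two execution paths (Table 4, in ms).
t_exit, t_full = 0.064, 0.180
t_mean = p_exit * t_exit + (1 - p_exit) * t_full   # ~= 0.131 ms, i.e. about 131 microseconds

print(f"precision={precision:.2%}  recall={recall:.2%}  "
      f"early exits={p_exit:.2%}  expected runtime={t_mean:.3f} ms")
```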
## 5 Conclusion and Future Work
The paper proposes a novel approach for object detection in mobile robots on computationally constrained platforms. The main focus is on detecting the ball
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline & TP & FP & TN & FN & P & R & **\#EE** \\ \hline \multirow{2}{*}{Training} & 59769 & 0 & 88359 & 9824 & 100\% & 85.88\% & **66046** \\ & _-0.35\%_ & _\(\pm\)0\%_ & _\(\pm\)0\%_ & _+2.17\%_ & _\(\pm\)0\%_ & _-0.3\%_ & _41.85\%_ \\ \hline \multirow{2}{*}{Validation} & 25033 & 0 & 38581 & 3970 & 100\% & 86.31\% & **28750** \\ & _-0.40\%_ & _\(\pm\)0\%_ & _\(\pm\)0\%_ & _+2.61\%_ & _\(\pm\)0\%_ & _-0.35\%_ & _42.57\%_ \\ \hline \multirow{2}{*}{} & \multicolumn{6}{c}{Mean Execution Time [ms]} & **0.131** \\ & \multicolumn{6}{c}{_-28.24\%_} \\ \end{tabular}
\end{table}
Table 6: Results for the combined EE-CNN. The #EE column specifies how many times the Early Exit triggered in order to save computation time.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline & TP & FP & TN & FN & P & R \\ \hline Training & 59978 & 0 & 88359 & 9615 & 100\% & 86.18\% \\ \hline Validation & 25134 & 0 & 38581 & 3869 & 100\% & 86.66\% \\ \hline \hline \end{tabular}
\end{table}
Table 5: Results for the original CNN.
in robot soccer games, where a high level of precision and real-time processing is required. The paper highlights the challenges of detecting dynamic objects under varying lighting conditions and fast movements, which require a high level of computational power. The proposed approach can detect single objects in image patches and determine their precise spatial position while providing a high-precision classification. The proposed method utilizes a convolutional neural network with depthwise separable convolutions, which is optimized to achieve the highest possible accuracy while adhering to the time constraints. The paper also explores the concept of Early Exit neural networks and their potential for reducing computational costs while maintaining performance. Early Exits are integrated in order to terminate the network's inference process early, thereby reducing computational costs. This approach is evaluated and compared to the original CNN, which shows a decrease in the average execution time of 28% for the Early Exit version, with equal precision and almost equal recall.
Future work could focus on optimizing the network architecture further to reduce the computational cost and increase the speed of execution. Additionally, exploring other methods for Early Exits and combining them with other techniques, such as pruning or quantization, could result in more efficient and accurate object detection in mobile robots. Finally, investigating the robustness of the proposed approach to changing lighting conditions and fast movements in various game scenarios as well as a different class distribution could further improve its applicability in practical use cases.
|
2309.05627 | The expected Euler characteristic approximation to excursion
probabilities of smooth Gaussian random fields with general variance
functions | Consider a centered smooth Gaussian random field $\{X(t), t\in T \}$ with a
general (nonconstant) variance function. In this work, we demonstrate that as
$u \to \infty$, the excursion probability $\mathbb{P}\{\sup_{t\in T} X(t) \geq
u\}$ can be accurately approximated by $\mathbb{E}\{\chi(A_u)\}$ such that the
error decays at a super-exponential rate. Here, $A_u = \{t\in T: X(t)\geq u\}$
represents the excursion set above $u$, and $\mathbb{E}\{\chi(A_u)\}$ is the
expectation of its Euler characteristic $\chi(A_u)$. This result substantiates
the expected Euler characteristic heuristic for a broad class of smooth
Gaussian random fields with diverse covariance structures. In addition, we
employ the Laplace method to derive explicit approximations to the excursion
probabilities. | Dan Cheng | 2023-09-11T17:14:06Z | http://arxiv.org/abs/2309.05627v1 | The expected Euler characteristic approximation to excursion probabilities of smooth Gaussian random fields with general variance functions
###### Abstract
Consider a centered smooth Gaussian random field \(\{X(t),t\in T\}\) with a general (non-constant) variance function. In this work, we demonstrate that as \(u\to\infty\), the excursion probability \(\mathbb{P}\{\sup_{t\in T}X(t)\geq u\}\) can be accurately approximated by \(\mathbb{E}\{\chi(A_{u})\}\) such that the error decays at a super-exponential rate. Here, \(A_{u}=\{t\in T:X(t)\geq u\}\) represents the excursion set above \(u\), and \(\mathbb{E}\{\chi(A_{u})\}\) is the expectation of its Euler characteristic \(\chi(A_{u})\). This result substantiates the expected Euler characteristic heuristic for a broad class of smooth Gaussian random fields with diverse covariance structures. In addition, we employ the Laplace method to derive explicit approximations to the excursion probabilities.
**Keywords**: Gaussian random fields, excursion probability, excursion set, Euler characteristic, nonconstant variance, asymptotics, super-exponentially small.
**Mathematics Subject Classification**: 60G15, 60G60, 60G70.
## 1 Introduction
Let \(X=\{X(t),\,t\in T\}\) represent a real-valued Gaussian random field defined on the probability space \((\Omega,\mathcal{F},\mathbb{P})\), where \(T\) denotes the parameter space. The study of excursion probabilities, denoted as \(\mathbb{P}\{\sup_{t\in T}X(t)\geq u\}\), is a classical and fundamental problem in both probability and statistics. It finds extensive applications across numerous domains, including \(p\)-value computations, risk control and extreme event analysis, etc.
In the field of statistics, excursion probabilities play a critical role in tasks such as controlling family-wise error rates [13, 14], constructing confidence bands [10], and detecting signals in noisy data [8, 13]. However, except for only a few examples, computing the exact values of these probabilities is almost impossible. To address this challenge, many researchers have developed various methods for precise approximations of \(\mathbb{P}\{\sup_{t\in T}X(t)\geq u\}\). These methods encompass techniques like the double sum method [6], the tube method [9] and the Rice method [3, 4]. For
comprehensive theoretical insights and related applications, we refer readers to the survey by Adler [1] and the monographs by Piterbarg [6], Adler and Taylor [2], and Azais and Wschebor [4], as well as the references therein.
In recent years, the expected Euler characteristic (EEC) method has emerged as a powerful tool for approximating excursion probabilities. This method, originating from the works of Taylor et al. [12] and Adler and Taylor [2], provides the following approximation:
\[\mathbb{P}\bigg{\{}\sup_{t\in T}X(t)\geq u\bigg{\}}=\mathbb{E}\{\chi(A_{u})\} +\text{error},\quad\text{as $u\to\infty$}, \tag{1.1}\]
where \(\chi(A_{u})\) represents the Euler characteristic of the excursion set \(A_{u}=\{t\in T:X(t)\geq u\}\). This approximation (1.1) is highly elegant and accurate, primarily due to the fact that the principle term \(\mathbb{E}\{\chi(A_{u})\}\) is computable and the error term decays exponentially faster than the major component. However, it is essential to note that this method assumes a Gaussian field with constant variance, limiting its applicability in various scenarios.
In this paper, we extend the EEC method to accommodate smooth Gaussian random fields with general (nonconstant) variance functions. Our main objective is to demonstrate that the EEC approximation (1.1) remains valid under these conditions, with the error term exhibiting super-exponential decay. For a precise description of our findings, please refer to Theorem 3.1 below. Our derived approximation result shows that the maximum variance of \(X(t)\), denoted by \(\sigma_{T}^{2}\) (see (2.1) below), plays a pivotal role in both \(\mathbb{E}\{\chi(A_{u})\}\) and the super-exponentially small error. In our analysis, we observe that the points where \(\sigma_{T}^{2}\) is attained make the most substantial contributions to \(\mathbb{E}\{\chi(A_{u})\}\). Building on this observation, we establish two simpler approximations: one in Theorem 3.2, which incorporates boundary conditions on nonzero derivatives of the variance function over points where \(\sigma_{T}^{2}\) is attained, and another in Theorem 3.3, assuming only a single point attains \(\sigma_{T}^{2}\).
In general, the EEC approximation can be expressed as an integral using the Kac-Rice formula, as outlined in (3.2) in Theorem 3.1. While [12, 2] provided an elegant expression for \(\mathbb{E}\{\chi(A_{u})\}\) termed the Gaussian kinematic formula, this expression heavily relies on the assumption of unit variance, which simplifies the calculation. In our case, where the variance function of \(X(t)\) varies across \(T\), deriving an explicit expression for \(\mathbb{E}\{\chi(A_{u})\}\) becomes challenging. Instead, we apply the Laplace method to extract the term with the leading order of \(u\) from the integral, leaving a remaining error that is \(\mathbb{E}\{\chi(A_{u})\}o(1/u)\). For a more detailed explanation, we offer specific calculations in Sections 8 and 9. To intuitively grasp the EEC approximation, one can roughly consider the major term as \(g(u)e^{-u^{2}/(2\sigma_{T}^{2})}\), while the error term diminishes as \(o(e^{-u^{2}/(2\sigma_{T}^{2})-\alpha u^{2}})\), where \(g(u)\) is a polynomial in \(u\), and \(\alpha>0\) is a constant.
The structure of this paper is as follows: We begin by introducing the notations and assumptions in Section 2. In Section 3, we present our main results, including Theorems 3.1, 3.2, and 3.3. To understand our approach, we outline the main ideas in Section 4 and delve into the analysis of super-exponentially small errors in Sections 5 and 6. Finally, we provide the proofs of our main results in Section 7. In Section 8, we apply the Laplace method to
derive explicit approximations (Theorems 8.3 and 8.4) for cases where a unique maximum point of the variance is present. In Section 9, we demonstrate several examples that illustrate the evaluation of EEC and the subsequent approximation of excursion probabilities.
## 2 Notations and assumptions
Let \(\{X(t),\,t\in T\}\) be a real-valued and centered Gaussian random field, where \(T\) is a compact rectangle in \(\mathbb{R}^{N}\). We define
\[\nu(t)=\sigma_{t}^{2}=\operatorname{Var}(X(t))\quad\text{and}\quad\sup_{t\in T }\nu(t)=\sigma_{T}^{2}. \tag{2.1}\]
Here, \(\nu(\cdot)\) represents the variance function of the field and \(\sigma_{T}^{2}\) is the maximum variance over \(T\). For a function \(f(\cdot)\in C^{2}(\mathbb{R}^{N})\) and \(t\in\mathbb{R}^{N}\), we introduce the following notations on derivatives:
\[\begin{split} f_{i}(t)&=\frac{\partial f(t)}{ \partial t_{i}},\quad f_{ij}(t)=\frac{\partial^{2}f(t)}{\partial t_{i}\partial t _{j}},\quad\forall i,j=1,\dots,N;\\ \nabla f(t)&=(f_{1}(t),\dots,f_{N}(t))^{T},\quad \nabla^{2}f(t)=\left(f_{ij}(t)\right)_{i,j=1,\dots,N}.\end{split} \tag{2.2}\]
Let \(B\prec 0\) (negative definite) and \(B\preceq 0\) (negative semi-definite) denote that a symmetric matrix \(B\) has all negative or nonpositive eigenvalues, respectively. Additionally, we use \(\operatorname{Cov}(\xi_{1},\xi_{2})\) and \(\operatorname{Corr}(\xi_{1},\xi_{2})\) to represent the covariance and correlation between two random variables \(\xi_{1}\) and \(\xi_{2}\). The density of the standard Normal distribution is denoted as \(\phi(x)\), and its tail probability is \(\Psi(x)=\int_{x}^{\infty}\phi(y)dy\). Let \(\mathbb{S}^{j}\) be the \(j\)-dimensional unit sphere.
Consider the domain \(T=\prod_{i=1}^{N}[a_{i},b_{i}]\), where \(-\infty<a_{i}<b_{i}<\infty\). We draw from the notation established by Adler and Taylor in [2] to demonstrate that \(T\) can be decomposed into the union of its interior and lower-dimensional faces. This decomposition forms the basis for calculating the Euler characteristic of the excursion set \(A_{u}\), as elaborated in Section 3.
Each face \(K\) of dimension \(k\) is defined by fixing a subset \(\tau(K)\subset\{1,\dots,N\}\) of size \(k\) and a subset \(\varepsilon(K)=\{\varepsilon_{j},j\notin\tau(K)\}\subset\{0,1\}^{N-k}\) of size \(N-k\) so that
\[K=\{t=(t_{1},\dots,t_{N})\in T: a_{j}<t_{j}<b_{j}\text{ if }j\in\tau(K),\] \[t_{j}=(1-\varepsilon_{j})a_{j}+\varepsilon_{j}b_{j}\text{ if }j \notin\tau(K)\}.\]
Denote by \(\partial_{k}T\) the collection of all \(k\)-dimensional faces in \(T\). The interior of \(T\) is designated as \(\overset{\circ}{T}=\partial_{N}T\), while the boundary of \(T\) is formulated as \(\partial T=\cup_{k=0}^{N-1}\cup_{K\in\partial_{k}T}K\). This allows us to partition \(T\) in the following manner:
\[T=\bigcup_{k=0}^{N}\partial_{k}T=\bigcup_{k=0}^{N}\bigcup_{K\in\partial_{k}T}K.\]
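For instance, when \(N=2\) and \(T=[a_{1},b_{1}]\times[a_{2},b_{2}]\), this decomposition consists of \(1+4+4=9\) faces: the open rectangle \(\overset{\circ}{T}=\partial_{2}T\), the four open edges forming \(\partial_{1}T\) and the four vertices forming \(\partial_{0}T\). For example, the edge determined by \(\tau(K)=\{1\}\) and \(\varepsilon_{2}=0\) is

\[K=\{(t_{1},a_{2}):a_{1}<t_{1}<b_{1}\}.\]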
For each \(t\in T\), let
\[\begin{split}\nabla X_{|K}(t)&=(X_{i_{1}}(t),\ldots,X_{ i_{k}}(t))_{i_{1},\ldots,i_{k}\in\tau(K)}^{T},\quad\nabla^{2}X_{|K}(t)=(X_{mn}(t))_{m,n\in \tau(K)},\\ \Sigma(t)&=\mathbb{E}\{X(t)\nabla^{2}X(t)\}=( \mathbb{E}\{X(t)X_{ij}(t)\})_{1\leq i,j\leq N},\\ \Sigma_{K}(t)&=\mathbb{E}\{X(t)\nabla^{2}X_{|K}(t) \}=(\mathbb{E}\{X(t)X_{ij}(t)\})_{i,j\in\tau(K)},\\ \Lambda(t)&=\mathrm{Cov}(\nabla X(t))=(\mathbb{E}\{ X_{i}(t)X_{j}(t)\})_{1\leq i,j\leq N},\\ \Lambda_{K}(t)&=\mathrm{Cov}(\nabla X_{|K}(t))=( \mathbb{E}\{X_{i}(t)X_{j}(t)\})_{i,j\in\tau(K)}.\end{split} \tag{2.3}\]
For each \(K\in\partial_{k}T\), we define the _number of extended outward maxima above \(u\) on face \(K\)_ as
\[M_{u}^{E}(K):=\#\{t\in K:X(t)\geq u,\nabla X_{|K}(t)=0,\nabla^{2}X_{|K}(t) \prec 0,\varepsilon_{j}^{*}X_{j}(t)\geq 0,\forall j\notin\tau(K)\},\]
where \(\varepsilon_{j}^{*}=2\varepsilon_{j}-1\), and define the _number of local maxima above \(u\) on face \(K\)_ as
\[M_{u}(K):=\#\{t\in K:X(t)\geq u,\nabla X_{|K}(t)=0,\nabla^{2}X_{|K}(t)\prec 0\}.\]
Clearly, \(M_{u}^{E}(K)\leq M_{u}(K)\).
For each \(t\in T\) with \(\nu(t)=\sigma_{T}^{2}\), we define the index set \(\mathcal{I}(t)=\{\ell:\nu_{\ell}(t)=0\}\) representing the directions along which the partial derivatives of \(\nu(t)\) vanish. If \(t\in K\in\partial_{k}T\) with \(\nu(t)=\sigma_{T}^{2}\), then we have \(\tau(K)\subset\mathcal{I}(t)\) since \(\nu_{\ell}(t)=0\) for all \(\ell\in\tau(K)\). It is worth noting that since \(\nu_{i}(t)=2\mathbb{E}\{X_{i}(t)X(t)\}\), we can also express this index set as \(\mathcal{I}(t)=\{\ell:\mathbb{E}\{X(t)X_{\ell}(t)\}=0\}\).
Our analytical framework relies on the following conditions for smoothness (**H1**) and regularity (**H2**), in addition to curvature conditions (**H3**) or (**H3\({}^{\prime}\)**).
* \(X\in C^{2}(\mathbb{R}^{N})\) almost surely and the second derivatives satisfy the _uniform mean-square Holder condition_: there exist constants \(C,\delta>0\) such that \[\mathbb{E}(X_{ij}(t)-X_{ij}(t^{\prime}))^{2}\leq C\|t-t^{\prime}\|^{2\delta}, \quad\forall t,t^{\prime}\in T,\ i,j=1,\ldots,N.\]
* For every pair \((t,t^{\prime})\in T^{2}\) with \(t\neq t^{\prime}\), the Gaussian vector \[\big{(}X(t),\nabla X(t),X_{ij}(t),X(t^{\prime}),\nabla X(t^{\prime}),X_{ij}(t^ {\prime}),1\leq i\leq j\leq N\big{)}\] is non-degenerate.
* For every \(t\in K\in\partial_{k}T\), \(0\leq k\leq N-2\), such that \(\nu(t)=\sigma_{T}^{2}\) and \(\mathcal{I}(t)\) contains at least two indices, we have \[\left(\mathbb{E}\{X(t)X_{ij}(t)\}\right)_{i,j\in\mathcal{I}(t)}\prec 0.\] (2.4)
* For every \(t\in K\in\partial_{k}T\), \(0\leq k\leq N-2\), such that \(\nu(t)=\sigma_{T}^{2}\) and \(\mathcal{I}(t)\) contains at least two indices, we have \[\left(\nu_{ij}(t)\right)_{i,j\in\mathcal{I}(t)}\preceq 0.\] (2.5)
Conditions \(({\bf H}3)\) and \(({\bf H}3^{\prime})\) involve the behavior of the variance function \(\nu(t)\) at critical points, and they are closely related, as shown in Proposition 2.1 below. Here we provide some additional insights into \(({\bf H}3^{\prime})\). Despite its initially technical appearance, \(({\bf H}3^{\prime})\) is in fact a mild condition that specifically applies to lower-dimensional boundary points \(t\) where \(\nu(t)=\sigma_{T}^{2}\). In essence, it indicates that the variance function should possess a negative semi-definite Hessian matrix at these boundary critical points where \(\nu(t)=\sigma_{T}^{2}\) while concurrently exhibiting at least two zero partial derivatives.
For example, in the 1D case, since \({\cal I}(t)\) contains at most one index, there is no need to check \(({\bf H}3^{\prime})\). Similarly, in the 2D case, we only need to check \(({\bf H}3^{\prime})\) or (2.5) when \(\sigma_{T}^{2}\) is achieved at corner points \(t\in\partial_{0}T\) with \({\cal I}(t)=\{1,2\}\). Moreover, if the variance function \(\nu(t)\) demonstrates strict monotonicity in all directions across \(\mathbb{R}^{N}\), then \({\cal I}(t)=\emptyset\) and there is no need to verify \(({\bf H}3^{\prime})\).
**Proposition 2.1**.: _The condition \(({\bf H}3^{\prime})\) implies \(({\bf H}3)\). In addition, \(({\bf H}3)\) implies that_
\[(\mathbb{E}\{X(t)X_{ij}(t)\})_{i,j\in{\cal I}(t)}\prec 0,\quad\forall t\in T \text{ with }\nu(t)=\sigma_{T}^{2}. \tag{2.6}\]
Proof.: Taking the second derivative on both sides of \(\nu(t)=\mathbb{E}\{X(t)^{2}\}\), we obtain \(\nu_{ij}(t)/2=\mathbb{E}\{X(t)X_{ij}(t)\}+\mathbb{E}\{X_{i}(t)X_{j}(t)\}\), implying
\[(\mathbb{E}\{X(t)X_{ij}(t)\})_{i,j\in{\cal I}(t)}=\frac{1}{2}(\nu_{ij}(t))_{i, j\in{\cal I}(t)}-(\mathbb{E}\{X_{i}(t)X_{j}(t)\})_{i,j\in{\cal I}(t)}. \tag{2.7}\]
Note that, as a covariance matrix, \((\mathbb{E}\{X_{i}(t)X_{j}(t)\})_{i,j\in{\cal I}(t)}\) is positive definite by \(({\bf H}2)\). Therefore, (2.5) implies (2.4), or equivalently \(({\bf H}3^{\prime})\) implies \(({\bf H}3)\).
Next we demonstrate that \(({\bf H}3)\) implies (2.6). It suffices to show (2.4) for \(k=N-1\) and \(k=N\), and for the case that \({\cal I}(t)\) contains at most one index, which complement those cases in \(({\bf H}3)\).
(i) If \(k=N\), then \(t\) becomes a maximum point of \(\nu\) within the interior of \(T\) and \({\cal I}(t)=\tau(K)=\{1,\cdots,N\}\), implying (2.5), and hence (2.4) holds by (2.7).
(ii) For \(k=N-1\), we consider two scenarios. If \({\cal I}(t)=\tau(K)\), then \(t\) becomes a maximum point of \(\nu\) restricted on \(K\), hence (2.4) is satisfied as discussed above. If \({\cal I}(t)=\{1,\cdots,N\}\), then it follows from Taylor's formula that
\[\nu(t^{\prime})=\nu(t)+(t^{\prime}-t)^{T}\nabla^{2}\nu(t)(t^{\prime}-t)+o(\|t ^{\prime}-t\|^{2}),\quad t^{\prime}\in T.\]
Notice that \(\{(t^{\prime}-t)/\|t^{\prime}-t\|:t^{\prime}\in T\}\) contains all directions in \(\mathbb{R}^{N}\) since \(t\in K\in\partial_{N-1}T\), together with the fact \(\nu(t)=\sigma_{T}^{2}\), we see that \(\nabla^{2}\nu(t)\) cannot have any positive eigenvalue, thus (2.5) and hence (2.4) hold.
(iii) Finally, it's evident from the 1D Taylor's formula that (2.5) is valid when \({\cal I}(t)\) contains only one index.
The condition (2.6) established in Proposition 2.1 serves as the fundamental requirement for our main results, as demonstrated in Theorems 3.1, 3.2 and 3.3 below. As seen from Proposition
2.1, we can simplify (2.6) to condition (**H**3). Thus our main results will be presented under the assumption of condition (**H**3).
Furthermore, it is worth highlighting that, in practical applications, verifying (**H**3\({}^{\prime}\)) can often be a more straightforward process. This condition directly pertains to the variance function \(\nu(t)\), making it easier to assess. Thus, Proposition 2.1 provides the flexibility to check (**H**3\({}^{\prime}\)) instead of (**H**3). This insight simplifies the verification procedure, enhancing the practical applicability of our results.
## 3 Main results
Here, we will present our main results, Theorems 3.1, 3.2 and 3.3, whose proofs are given in Section 7. Define the _number of extended outward critical points of index \(i\) above level \(u\) on face \(K\)_ to be
\[\mu_{i}(K):=\#\{t\in K:X(t)\geq u,\nabla X_{|K}(t)=0,\text{index}( \nabla^{2}X_{|K}(t))=i,\] \[\varepsilon_{j}^{*}X_{j}(t)\geq 0\text{ for all }j\notin\tau(K)\}.\]
Recall that \(\varepsilon_{j}^{*}=2\varepsilon_{j}-1\) and the index of a matrix is defined as the number of its negative eigenvalues. It is evident that \(\mu_{k}(K)=M_{u}^{E}(K)\) for \(K\in\partial_{k}T\). It follows from (**H**1), (**H**2) and the Morse theorem (see Corollary 9.3.5 or pages 211-212 in Adler and Taylor [2]) that the Euler characteristic of the excursion set \(A_{u}\) can be represented as
\[\chi(A_{u})=\sum_{k=0}^{N}\sum_{K\in\partial_{k}T}(-1)^{k}\sum_{i=0}^{k}(-1)^{ i}\mu_{i}(K). \tag{3.1}\]
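For instance, when \(N=1\) and \(T=[a,b]\), the faces are the two endpoints and the open interval, and (3.1) reduces (generically, since by (**H**1) and (**H**2) there are almost surely no degenerate critical points) to

\[\chi(A_{u})=\#\{t\in(a,b):X(t)\geq u,X^{\prime}(t)=0,X^{\prime\prime}(t)<0\}-\#\{t\in(a,b):X(t)\geq u,X^{\prime}(t)=0,X^{\prime\prime}(t)>0\}\] \[+\mathbbm{1}_{\{X(a)\geq u,\,X^{\prime}(a)\leq 0\}}+\mathbbm{1}_{\{X(b)\geq u,\,X^{\prime}(b)\geq 0\}},\]

which for almost every realization equals the number of connected components (intervals) of the excursion set \(A_{u}\).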
Now we state the following general result on the EEC approximation for the excursion probability.
**Theorem 3.1**.: _Let \(\{X(t),\,t\in T\}\) be a centered Gaussian random field satisfying (**H**1), (**H**2) and (**H**3). Then there exists a constant \(\alpha>0\) such that as \(u\to\infty\),_
\[\begin{split}&\mathbb{P}\left\{\sup_{t\in T}X(t)\geq u\right\}\\ =&\sum_{k=0}^{N}\sum_{K\in\partial_{k}T}(-1)^{k}\int_ {K}\mathbb{E}\big{\{}\text{det}\nabla^{2}X_{|K}(t)\mathds{1}_{\{X(t)\geq u,\ \varepsilon_{\ell}^{*}X_{\ell}(t)\geq 0\ \text{for all }\ell\notin\tau(K)\}}\big{|}\nabla X_{|K}(t)=0\big{\}}\\ &\times p_{\nabla X_{|K}(t)}(0)dt+o\left(\exp\left\{-\frac{u^{2}} {2\sigma_{T}^{2}}-\alpha u^{2}\right\}\right)\\ =&\mathbb{E}\{\chi(A_{u})\}+o\left(\exp\left\{-\frac{ u^{2}}{2\sigma_{T}^{2}}-\alpha u^{2}\right\}\right).\end{split} \tag{3.2}\]
In general, computing the EEC approximation \(\mathbb{E}\{\chi(A_{u})\}\) is a challenging task because it involves conditional expectations over the joint covariance of the Gaussian field and its Hessian, given zero gradient, which vary across \(T\). However, one can apply the Laplace method to
extract the term with the largest order of \(u\) from \(\mathbb{E}\{\chi(A_{u})\}\) such that the remaining error is \(o(1/u)\mathbb{E}\{\chi(A_{u})\}\). Examples demonstrating the Laplace method are presented in Section 9.
It is important to note that in the expression (3.2), when \(k=0\), all terms involving \(\nabla X_{|K}(t)\) and \(\nabla^{2}X_{|K}(t)\) vanish. Consequently, if \(k=0\), we treat the integral in (3.2) as the usual Gaussian tail probabilities. This notation is also adopted in the results presented in Theorems 3.2 and 3.3 below.
The proof of Theorem 3.1 reveals that points where the maximum variance \(\sigma_{T}^{2}\) is attained make the most significant contribution to \(\mathbb{E}\{\chi(A_{u})\}\). Therefore, in many cases, the general EEC approximation \(\mathbb{E}\{\chi(A_{u})\}\) can be simplified. The following result is based on the boundary condition (3.3) and is applicable at boundary points where nonzero partial derivatives of the variance function occur when \(\sigma_{T}^{2}\) is reached.
**Theorem 3.2**.: _Let \(\{X(t),\,t\in T\}\) be a centered Gaussian random field satisfying_ (**H**1)_, (**H**2) and the following boundary condition_
\[\Big{\{}t\in J:\,\nu(t)=\sigma_{T}^{2},\prod_{i\notin\tau(J)}\nu_{i}(t)=0 \Big{\}}=\emptyset,\quad\forall\text{ face }J\subset T. \tag{3.3}\]
_Then there exists a constant \(\alpha>0\) such that as \(u\to\infty\),_
\[\mathbb{P}\left\{\sup_{t\in T}X(t)\geq u\right\} =\sum_{k=0}^{N}\sum_{K\in\partial_{k}T}(-1)^{k}\int_{K}\mathbb{E} \big{\{}\mathrm{det}\nabla^{2}X_{|K}(t)\mathbbm{1}_{\{X(t)\geq u\}}\big{|} \nabla X_{|K}(t)=0\big{\}}\] \[\quad\times p_{\nabla X_{|K}(t)}(0)dt+o\left(\exp\left\{-\frac{u^ {2}}{2\sigma_{T}^{2}}-\alpha u^{2}\right\}\right).\]
In other words, the boundary condition (3.3) indicates that, for any point \(t\in J\) attaining the maximum variance \(\sigma_{T}^{2}\), there must be \(\nu_{i}(t)\neq 0\) for all \(i\notin\tau(J)\). In particular, as an important property, we observe that (3.3) implies the condition (**H**3\({}^{\prime}\)) and hence (**H**3). The following result provides an asymptotic approximation for the special case where the variance function attains its maximum \(\sigma_{T}^{2}\) only at a unique point.
**Theorem 3.3**.: _Let \(\{X(t),\,t\in T\}\) be a centered Gaussian random field satisfying_ (**H**1)_, (**H**2) _and_ (**H**3)_. Suppose \(\nu(t)\) attains its maximum \(\sigma_{T}^{2}\) only at a single point \(t^{*}\in K\), where \(K\in\partial_{k}T\) with \(k\geq 0\). Then there exists a constant \(\alpha>0\) such that as \(u\to\infty\),_
\[\mathbb{P}\left\{\sup_{t\in T}X(t)\geq u\right\}\] \[=\sum_{J}(-1)^{\dim(J)}\int_{J}\mathbb{E}\big{\{}\mathrm{det} \nabla^{2}X_{|J}(t)\mathbbm{1}_{\{X(t)\geq u,\ \varepsilon_{\ell}^{*}X_{\ell}(t)\geq 0\ \text{for all }\ell\in\mathcal{I}(t^{*})\setminus\tau(J)\}}\big{|} \nabla X_{|J}(t)=0\big{\}}\] \[\quad\times p_{\nabla X_{|J}(t)}(0)dt+o\left(\exp\left\{-\frac{u^ {2}}{2\sigma_{T}^{2}}-\alpha u^{2}\right\}\right),\]
_where the sum is taken over all faces \(J\) of \(T\) such that \(t^{*}\in\bar{J}\) and \(\tau(J)\subset\mathcal{I}(t^{*})\)._
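As an illustration of the range of this sum, suppose \(N=2\), \(T=[a_{1},b_{1}]\times[a_{2},b_{2}]\), and the unique maximum-variance point is the vertex \(t^{*}=(a_{1},a_{2})\) with \(\mathcal{I}(t^{*})=\{1\}\), i.e. \(\nu_{1}(t^{*})=0\) and \(\nu_{2}(t^{*})\neq 0\). Then the sum runs over exactly two faces,

\[J\in\Big\{\{(a_{1},a_{2})\},\ \{(t_{1},a_{2}):a_{1}<t_{1}<b_{1}\}\Big\},\]

namely the vertex itself (with \(\tau(J)=\emptyset\)) and the adjacent open edge along the first coordinate direction (with \(\tau(J)=\{1\}\)).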
Employing the Laplace method, we will provide refined explicit approximation results in Section 8 under the assumptions in Theorem 3.3. Furthermore, we demonstrate several examples that illustrate the evaluation of approximating excursion probabilities in Section 9.
Outline of the proofs
Here we show the main idea for proving the main results above. Let \(f\) be a smooth real-valued function, then \(\sup_{t\in T}f(t)\geq u\) if and only if there exists at least one extended outward local maximum above \(u\) on some face of \(T\). Thus, under conditions (**H**1) and (**H**2), the following relation holds for each \(u\in\mathbb{R}\):
\[\left\{\sup_{t\in T}X(t)\geq u\right\}=\bigcup_{k=0}^{N}\bigcup_{K \in\partial_{k}T}\{M_{u}^{E}(K)\geq 1\}\quad\text{a.s.} \tag{4.1}\]
This implies that the probability of the supremum of the Gaussian random field exceeding \(u\) is equal to the probability that there exists at least one extended outward local maximum above \(u\) on some face \(K\) of \(T\). Therefore, we obtain the following upper bound for the excursion probability:
\[\mathbb{P}\left\{\sup_{t\in T}X(t)\geq u\right\}\leq\sum_{k=0}^{N }\sum_{K\in\partial_{k}T}\mathbb{P}\{M_{u}^{E}(K)\geq 1\}\leq\sum_{k=0}^{N} \sum_{K\in\partial_{k}T}\mathbb{E}\{M_{u}^{E}(K)\}. \tag{4.2}\]
On the other hand, notice that
\[\mathbb{E}\{M_{u}^{E}(K)\}-\mathbb{P}\{M_{u}^{E}(K)\geq 1\} =\sum_{i=1}^{\infty}(i-1)\mathbb{P}\{M_{u}^{E}(K)=i\}\] \[\leq\sum_{i=1}^{\infty}i(i-1)\mathbb{P}\{M_{u}^{E}(K)=i\}= \mathbb{E}\{M_{u}^{E}(K)[M_{u}^{E}(K)-1]\}\]
and
\[\mathbb{P}\{M_{u}^{E}(K)\geq 1,M_{u}^{E}(K^{\prime})\geq 1\}\leq\mathbb{E}\{M_ {u}^{E}(K)M_{u}^{E}(K^{\prime})\}.\]
Applying the Bonferroni inequality to (4.1) and combining these two inequalities, we obtain the following lower bound for the excursion probability:
\[\mathbb{P}\left\{\sup_{t\in T}X(t)\geq u\right\}\] \[\geq\sum_{k=0}^{N}\sum_{K\in\partial_{k}T}\mathbb{P}\{M_{u}^{E}(K )\geq 1\}-\sum_{K\neq K^{\prime}}\mathbb{P}\{M_{u}^{E}(K)\geq 1,M_{u}^{E}(K^{ \prime})\geq 1\} \tag{4.3}\] \[\geq\sum_{k=0}^{N}\sum_{K\in\partial_{k}T}\left(\mathbb{E}\{M_{u }^{E}(K)\}-\mathbb{E}\{M_{u}^{E}(K)[M_{u}^{E}(K)-1]\}\right)-\sum_{K\neq K^{ \prime}}\mathbb{E}\{M_{u}^{E}(K)M_{u}^{E}(K^{\prime})\},\]
where the last sum is taken over all possible pairs of different faces \((K,K^{\prime})\).
**Remark 4.1**: Note that, following the same arguments as above, the expectations of the number of extended outward maxima \(M_{u}^{E}(\cdot)\) in both (4.2) and (4.3) can be replaced by the expectations of the number of local maxima \(M_{u}(\cdot)\).
We call a function \(h(u)\)_super-exponentially small_ [when compared with the excursion probability \(\mathbb{P}\{\sup_{t\in T}X(t)\geq u\}\) or \(\mathbb{E}\{\chi(A_{u})\}\)], if there exists a constant \(\alpha>0\) such that \(h(u)=o(e^{-u^{2}/(2\sigma_{T}^{2})-\alpha u^{2}})\) as \(u\to\infty\). The main idea for proving the EEC approximation Theorem 3.1 consists of the following two steps: (i) show that, except for the upper bound in (4.2), all terms in the lower bound in (4.3) are super-exponentially small; and (ii) demonstrate that the difference between the upper bound in (4.2) and \(\mathbb{E}\{\chi(A_{u})\}\) is also super-exponentially small. The proofs for Theorems 3.2 and 3.3 follow the same ideas, aiming to establish super-exponential smallness for the terms involved in the lower bounds, as well as for the difference between the upper bound and EEC.
## 5 Estimation of super-exponential smallness for terms in the lower bound
### Factorial moments
We first state the following result, which is a modified version (restricted to a face \(K\)) of Lemma 4 in Piterbarg [7], characterizing the decay rate of the factorial moments of the number of critical points exceeding a high level for Gaussian fields.
**Lemma 5.1**.: _Assume \((\mathbf{H}1)\) and \((\mathbf{H}2)\). Then there exists a positive constant \(C\) such that for any \(\varepsilon>0\) one can find a number \(\varepsilon_{1}>0\) such that for any \(K\in\partial_{k}T\),_
\[\mathbb{E}\{M_{u}(K)(M_{u}(K)-1)\}\leq Cu^{2k+1}\exp\bigg{\{}-\frac{u^{2}}{2 \beta_{K}^{2}+\varepsilon}\bigg{\}}+Cu^{4k+2}\exp\bigg{\{}-\frac{u^{2}}{2 \sigma_{K}^{2}-\varepsilon_{1}}\bigg{\}}, \tag{5.1}\]
_where_
\[\beta_{K}^{2}=\sup_{t\in K}\sup_{e\in\mathbb{S}^{k-1}}\mathrm{Var}(X(t)|\nabla X _{|K}(t),\nabla^{2}X_{|K}(t)e),\quad\sigma_{K}^{2}=\sup_{t\in K}\mathrm{Var}(X (t)).\]
The following result shows that the factorial moments in (4.3) are super-exponentially small under our assumptions.
**Proposition 5.2**.: _Let \(\{X(t),\,t\in T\}\) be a centered Gaussian random field satisfying \((\mathbf{H}1)\), \((\mathbf{H}2)\) and \((\mathbf{H}3)\). Then there exists \(\alpha>0\) such that as \(u\to\infty\),_
\[\sum_{k=0}^{N}\sum_{K\in\partial_{k}T}\mathbb{E}\{M_{u}(K)(M_{u}(K)-1)\}=o \left(e^{-u^{2}/(2\sigma_{T}^{2})-\alpha u^{2}}\right). \tag{5.2}\]
Proof.: Due to Lemma 5.1, it suffices to show that for each \(K\in\partial_{k}T\), \(\beta_{K}^{2}<\sigma_{T}^{2}\), which is equivalent to \(\mathrm{Var}(X(t)|\nabla X_{|K}(t),\nabla^{2}X_{|K}(t)e)<\sigma_{T}^{2}\) for all \(t\in\bar{K}=K\cup\partial K\) and \(e\in\mathbb{S}^{k-1}\). Suppose \(\mathrm{Var}(X(t)|\nabla X_{|K}(t),\nabla^{2}X_{|K}(t)e)=\sigma_{T}^{2}\) for some \(t\in K\), then
\[\sigma_{T}^{2}=\mathrm{Var}(X(t)|\nabla X_{|K}(t),\nabla^{2}X_{|K}(t)e)\leq \mathrm{Var}(X(t)|\nabla^{2}X_{|K}(t)e)\leq\mathrm{Var}(X(t))\leq\sigma_{T}^{ 2}.\]
Note that
\[\text{Var}(X(t)|\nabla^{2}X_{|K}(t)e)=\text{Var}(X(t))\Leftrightarrow\mathbb{E}\{X (t)(\nabla^{2}X_{|K}(t)e)\}=0\Leftrightarrow\Sigma_{K}(t)e=0.\]
But \(t\) is a point with \(\nu(t)=\sigma_{T}^{2}\), thus \(\Sigma_{K}(t)\prec 0\) by Proposition 2.1, implying \(\Sigma_{K}(t)e\neq 0\) for all \(e\in\mathbb{S}^{k-1}\) and causing a contradiction.
On the other hand, suppose \(\text{Var}(X(t)|\nabla X_{|K}(t),\nabla^{2}X_{|K}(t)e)=\sigma_{T}^{2}\) for some \(t\in\partial K\), then \(\text{Var}(X(t)|\nabla X_{|K}(t))=\sigma_{T}^{2}\) and hence \(\nu_{i}(t)=0\) for all \(i\in\tau(K)\), implying \(\Sigma_{K}(t)\prec 0\) by Proposition 2.1. Similarly to the previous arguments, this will lead to a contradiction. The proof is completed.
### Non-adjacent faces
For two sets \(D,D^{\prime}\subset\mathbb{R}^{N}\), let \(d(D,D^{\prime})=\inf\{\|t-t^{\prime}\|:t\in D,t^{\prime}\in D^{\prime}\}\) denote their distance. The following result demonstrates that the last two sums involving the joint moment of two non-adjacent faces in (4.3) are super-exponentially small.
**Proposition 5.3**.: _Let \(\{X(t),\,t\in T\}\) be a centered Gaussian random field satisfying \((\mathbf{H}1)\) and \((\mathbf{H}2)\). Then there exists \(\alpha>0\) such that as \(u\to\infty\),_
\[\mathbb{E}\{M_{u}(K)M_{u}(K^{\prime})\}=o\left(\exp\left\{-\frac{u^{2}}{2 \sigma_{T}^{2}}-\alpha u^{2}\right\}\right), \tag{5.3}\]
_where \(K\) and \(K^{\prime}\) are different faces of \(T\) with \(d(K,K^{\prime})>0\)._
Proof.: Consider first the case where \(\dim(K)=k\geq 1\) and \(\dim(K^{\prime})=k^{\prime}\geq 1\). By the Kac-Rice formula for high moments [2], we have
\[\mathbb{E}\{M_{u}(K)M_{u}(K^{\prime})\}\] \[=\int_{K}dt\int_{K^{\prime}}dt^{\prime}\,\mathbb{E}\big{\{}| \text{det}\nabla^{2}X_{|K}(t)||\text{det}\nabla^{2}X_{|K^{\prime}}(t^{\prime}) |\mathbb{1}_{\{X(t)\geq u,X(t^{\prime})\geq u\}}\] \[\quad\times\mathbb{1}_{\{\nabla^{2}X_{|K}(t)\prec 0,\,\nabla^{2}X_{|K ^{\prime}}(t^{\prime})\prec 0\}}\big{|}\nabla X_{|K}(t)=0,\nabla X_{|K^{\prime}}(t^{ \prime})=0\big{\}}p_{\nabla X_{|K}(t),\nabla X_{|K^{\prime}}(t^{\prime})}(0,0)\] \[\leq\int_{K}dt\int_{K^{\prime}}dt^{\prime}\int_{u}^{\infty}dx \int_{u}^{\infty}dx^{\prime}\,p_{X(t),X(t^{\prime})}(x,x^{\prime})p_{\nabla X _{|K}(t),\nabla X_{|K^{\prime}}(t^{\prime})}(0,0|X(t)=x,X(t^{\prime})=x^{ \prime})\] \[\quad\times\mathbb{E}\big{\{}|\text{det}\nabla^{2}X_{|K}(t)|| \text{det}\nabla^{2}X_{|K^{\prime}}(t^{\prime})|\big{|}X(t)=x,X(t^{\prime})=x ^{\prime},\nabla X_{|K}(t)=0,\nabla X_{|K^{\prime}}(t^{\prime})=0\}. \tag{5.4}\]
Notice that the following two inequalities hold: for constants \(a_{i_{1}}\) and \(b_{i_{2}}\),
\[\prod_{i_{1}=1}^{k}|a_{i_{1}}|\prod_{i_{2}=1}^{k^{\prime}}|b_{i_{2}}|\leq\frac {\sum_{i_{1}=1}^{k}|a_{i_{1}}|^{k+k^{\prime}}+\sum_{i_{2}=1}^{k^{\prime}}|b_{i_ {2}}|^{k+k^{\prime}}}{k+k^{\prime}};\]
and for any Gaussian variable \(\xi\) and positive integer \(m\), by Jensen's inequality,
\[\mathbb{E}|\xi|^{m}\leq\mathbb{E}(|\mathbb{E}\xi|+|\xi-\mathbb{E}\xi|)^{m} \leq 2^{m-1}(|\mathbb{E}\xi|^{m}+\mathbb{E}|\xi-\mathbb{E}\xi|^{m})=2^{m-1}(| \mathbb{E}\xi|^{m}+B_{m}(\text{Var}(\xi))^{m/2}),\]
where \(B_{m}\) is some constant depending only on \(m\). Combining these two inequalities with the well-known conditional formula for Gaussian variables, we obtain that there exist positive constants \(C_{1}\) and \(N_{1}\) such that for sufficiently large \(x\) and \(x^{\prime}\),
\[\sup_{t\in K,t^{\prime}\in K^{\prime}} \mathbb{E}\big{\{}|\mathrm{det}\nabla^{2}X_{|K}(t)||\mathrm{det} \nabla^{2}X_{|K^{\prime}}(t^{\prime})||X(t)=x,X(t^{\prime})=x^{\prime},\nabla X _{|K}(t)=0,\nabla X_{|K^{\prime}}(t^{\prime})=0\big{\}} \tag{5.5}\] \[\leq C_{1}+(xx^{\prime})^{N_{1}}.\]
Further, there exists \(C_{2}>0\) such that
\[\sup_{t\in K,t^{\prime}\in K^{\prime}}p_{\nabla X_{|K}(t),\nabla X _{|K^{\prime}}(t^{\prime})}(0,0|X(t)=x,X(t^{\prime})=x^{\prime}) \tag{5.6}\] \[\leq\sup_{t\in K,t^{\prime}\in K^{\prime}}(2\pi)^{-(k+k^{\prime}) /2}[\mathrm{detCov}(\nabla X_{|K}(t),\nabla X_{|K^{\prime}}(t^{\prime})|X(t)=x,X(t^{\prime})=x^{\prime})]^{-1/2}\] \[\leq C_{2}.\]
Plugging (5.5) and (5.6) into (5.4), we obtain that there exists \(C_{3}\) such that, for \(u\) large enough,
\[\mathbb{E}\{M_{u}(K)M_{u}(K^{\prime})\} \leq C_{3}\sup_{t\in K,t^{\prime}\in K^{\prime}}\mathbb{E}\{(C_{1 }+|X(t)X(t^{\prime})|^{N_{1}})\mathbbm{1}_{\{X(t)\geq u,X(t^{\prime})\geq u\}}\} \tag{5.7}\] \[\leq C_{3}\sup_{t\in K,t^{\prime}\in K^{\prime}}\mathbb{E}\{(C_{1 }+(X(t)+X(t^{\prime}))^{2N_{1}})\mathbbm{1}_{\{[X(t)+X(t^{\prime})]/2\geq u\}}\}\] \[\leq C_{3}\exp\left(-\frac{u^{2}}{(1+\rho)\sigma_{T}^{2}}+ \varepsilon u^{2}\right),\]
where \(\varepsilon\) is any positive number and \(\rho=\sup_{t\in K,t^{\prime}\in K^{\prime}}\mathrm{Corr}(X(t),X(t^{\prime}))<1\) due to (**H**2). The case when one of the dimensions of \(K\) and \(K^{\prime}\) is zero can be proved similarly.
### Adjacent faces
The following result shows that the last two sums involving the joint moment of two adjacent faces in (4.3) are super-exponentially small.
**Proposition 5.4**.: _Let \(\{X(t),\,t\in T\}\) be a centered Gaussian random field satisfying_ (**H**1)_,_ (**H**2) _and_ (**H**3)_. Then there exists \(\alpha>0\) such that as \(u\to\infty\),_
\[\mathbb{E}\{M_{u}^{E}(K)M_{u}^{E}(K^{\prime})\}=o\left(\exp\left\{-\frac{u^{2 }}{2\sigma_{T}^{2}}-\alpha u^{2}\right\}\right), \tag{5.8}\]
_where \(K\) and \(K^{\prime}\) are different faces of \(T\) with \(d(K,K^{\prime})=0\)._
Proof.: Let \(I:=\bar{K}\cap\bar{K^{\prime}}\), which is nonempty since \(d(K,K^{\prime})=0\). To simplify notation, let us assume without loss of generality:
\[\tau(K) =\{1,\ldots,m,m+1,\ldots,k\},\] \[\tau(K^{\prime}) =\{1,\ldots,m,k+1,\ldots,k+k^{\prime}-m\},\]
where \(0\leq m\leq k\leq k^{\prime}\leq N\) and \(k^{\prime}\geq 1\). If \(k=0\), we conventionally consider \(\tau(K)=\emptyset\). Under this assumption, \(K\in\partial_{k}T\), \(K^{\prime}\in\partial_{k^{\prime}}T\), \(\dim(I)=m\), and all elements in \(\varepsilon(K)\) and \(\varepsilon(K^{\prime})\) are \(1\).
We first consider the case when \(k\geq 1\) and \(k^{\prime}\geq 1\). By the Kac-Rice formula,
\[\begin{split}&\mathbb{E}\{M_{u}^{E}(K)M_{u}^{E}(K^{\prime})\}\\ &\leq\int_{K}dt\int_{K^{\prime}}dt^{\prime}\int_{u}^{\infty}dx \int_{u}^{\infty}dx^{\prime}\int_{0}^{\infty}dz_{k+1}\cdots\int_{0}^{\infty} dz_{k+k^{\prime}-m}\int_{0}^{\infty}dw_{m+1}\cdots\int_{0}^{\infty}dw_{k}\\ &\quad\mathbb{E}\big{\{}|{\rm det}\nabla^{2}X_{|K}(t)||{\rm det} \nabla^{2}X_{|K^{\prime}}(t^{\prime})||X(t)=x,X(t^{\prime})=x^{\prime},\nabla X _{|K}(t)=0,X_{k+1}(t)=z_{k+1},\\ &\quad\ldots,X_{k+k^{\prime}-m}(t)=z_{k+k^{\prime}-m},\nabla X_{ |K^{\prime}}(t^{\prime})=0,X_{m+1}(t^{\prime})=w_{m+1},\ldots,X_{k}(t^{\prime })=w_{k}\big{\}}\\ &\quad\times p_{t,t^{\prime}}(x,x^{\prime},0,z_{k+1},\ldots,z_{k +k^{\prime}-m},0,w_{m+1},\ldots,w_{k})\\ &:=\int\int\int_{K\times K^{\prime}}A(t,t^{\prime},u)\,dtdt^{ \prime},\end{split} \tag{5.9}\]
where \(p_{t,t^{\prime}}(x,x^{\prime},0,z_{k+1},\ldots,z_{k+k^{\prime}-m},0,w_{m+1}, \ldots,w_{k})\) is the density of the joint distribution of the variables involved in the given condition. We define
\[\mathcal{M}_{0}:=\{t\in I:\,\nu(t)=\sigma_{T}^{2},\,\nu_{i}(t)=0,\ \forall i=1,\ldots,k+k^{\prime}-m\}, \tag{5.10}\]
and consider two cases for \(\mathcal{M}_{0}\).
**Case (i): \(\mathcal{M}_{0}=\emptyset\).** Under this case, since \(I\) is a compact set, by the uniform continuity of conditional variance, there exist constants \(\varepsilon_{1},\delta_{1}>0\) such that
\[\sup_{t\in B(I,\delta_{1}),\,t^{\prime}\in B^{\prime}(I,\delta_{1})}\text{ Var}(X(t)|\nabla X_{|K}(t),\nabla X_{|K^{\prime}}(t^{\prime}))\leq\sigma_{T}^{2}- \varepsilon_{1}, \tag{5.11}\]
where \(B(I,\delta_{1})=\{t\in K:d(t,I)\leq\delta_{1}\}\) and \(B^{\prime}(I,\delta_{1})=\{t^{\prime}\in K^{\prime}:d(t^{\prime},I)\leq\delta _{1}\}\). By partitioning \(K\times K^{\prime}\) into \(B(I,\delta_{1})\times B^{\prime}(I,\delta_{1})\) and \((K\times K^{\prime})\backslash(B(I,\delta_{1})\times B^{\prime}(I,\delta_{1}))\) and applying the Kac-Rice formula, we obtain
\[\begin{split}&\quad\mathbb{E}\{M_{u}(K)M_{u}(K^{\prime})\}\\ \leq&\int_{(K\times K^{\prime})\backslash(B(I,\delta_{1 })\times B^{\prime}(I,\delta_{1}))}dtdt^{\prime}\,p_{\nabla X_{|K}(t),\nabla X _{|K^{\prime}}(t^{\prime})}(0,0)\\ &\quad\times\mathbb{E}\big{\{}|{\rm det}\nabla^{2}X_{|K}(t)||{\rm det }\nabla^{2}X_{|K^{\prime}}(t^{\prime})|\mathbb{1}_{\{X(t)\geq u,X(t^{\prime}) \geq u\}}\big{|}\nabla X_{|K}(t)=0,\nabla X_{|K^{\prime}}(t^{\prime})=0\big{\}} \\ +&\int_{B(I,\delta_{1})\times B^{\prime}(I,\delta_{1}) }dtdt^{\prime}\,p_{\nabla X_{|K}(t),\nabla X_{|K^{\prime}}(t^{\prime})}(0,0) \\ &\quad\times\mathbb{E}\big{\{}|{\rm det}\nabla^{2}X_{|K}(t)||{\rm det }\nabla^{2}X_{|K^{\prime}}(t^{\prime})|\mathbb{1}_{\{X(t)\geq u,X(t^{\prime}) \geq u\}}\big{|}\nabla X_{|K}(t)=0,\nabla X_{|K^{\prime}}(t^{\prime})=0\big{\}} \\ &:=I_{1}(u)+I_{2}(u).\end{split} \tag{5.12}\]
Note that
\[(K\times K^{\prime})\backslash(B(I,\delta_{1})\times B^{\prime}(I,\delta_{1}))=\Big{(}(K\backslash B(I,\delta_{1}))\times B^{\prime}(I,\delta_{1})\Big{)}\bigcup\Big{(}B(I,\delta_{1})\times(K^{\prime}\backslash B^{\prime}(I,\delta_{1}))\Big{)}\] \[\bigcup\Big{(}(K\backslash B(I,\delta_{1}))\times(K^{\prime}\backslash B^{\prime}(I,\delta_{1}))\Big{)},\]
where each product on the right hand side consists of two sets with a positive distance. It then follows from Proposition 5.3 that \(I_{1}(u)\) is super-exponentially small. On the other hand, since \(\mathbbm{1}_{\{X(t)\geq u,X(t^{\prime})\geq u\}}\leq\mathbbm{1}_{\{[X(t)+X(t^{ \prime})]/2\geq u\}}\), one has
\[I_{2}(u) \leq\int_{B(I,\delta_{1})\times B^{\prime}(I,\delta_{1})}dtdt^{ \prime}\int_{u}^{\infty}dx\,p_{\frac{X(t)+X(t^{\prime})}{2}}(x|\nabla X_{|K}(t )=0,\nabla X_{|K^{\prime}}(t^{\prime})=0) \tag{5.13}\] \[\quad\times\mathbb{E}\big{\{}|{\rm det}\nabla^{2}X_{|K}(t)||{\rm det }\nabla^{2}X_{|K^{\prime}}(t^{\prime})||[X(t)+X(t^{\prime})]/2=x,\] \[\qquad\nabla X_{|K}(t)=0,\nabla X_{|K^{\prime}}(t^{\prime})=0 \big{\}}p_{\nabla X_{|K}(t),\nabla X_{|K^{\prime}}(t^{\prime})}(0,0).\]
Combining this with (5.11), we conclude that \(I_{2}(u)\) and hence \(\mathbb{E}\{M_{u}^{E}(K)M_{u}^{E}(K^{\prime})\}\) are super-exponentially small.
**Case (ii): \(\mathcal{M}_{0}\neq\emptyset\).** Let
\[B(\mathcal{M}_{0},\delta_{2}):=\{(t,t^{\prime})\in K\times K^{\prime}:d(t, \mathcal{M}_{0})\lor d(t^{\prime},\mathcal{M}_{0})\leq\delta_{2}\},\]
where \(\delta_{2}\) is a small positive number to be specified. Note that, by the definitions of \(\mathcal{M}_{0}\) and \(B(\mathcal{M}_{0},\delta_{2})\), there exists \(\varepsilon_{2}>0\) such that
\[\sup_{(t,t^{\prime})\in(K\times K^{\prime})\setminus B(\mathcal{M}_{0},\delta _{2})}{\rm Var}([X(t)+X(t^{\prime})]/2|\nabla X_{|K}(t),\nabla X_{|K^{\prime}} (t^{\prime}))\leq\sigma_{T}^{2}-\varepsilon_{2}. \tag{5.14}\]
Similarly to (5.13), we obtain that \(\int_{(K\times K^{\prime})\setminus B(\mathcal{M}_{0},\delta_{2})}A(t,t^{ \prime},u)dtdt^{\prime}\) is super-exponentially small. It suffices to show below that \(\int_{B(\mathcal{M}_{0},\delta_{2})}A(t,t^{\prime},u)\,dtdt^{\prime}\) is super-exponentially small.
Due to (**H**3) and Proposition 2.1, we can choose \(\delta_{2}\) small enough such that for all \((t,t^{\prime})\in B(\mathcal{M}_{0},\delta_{2})\),
\[\Lambda_{K\cup K^{\prime}}(t):=-\mathbb{E}\{X(t)\nabla^{2}X_{|K\cup K^{\prime }}(t)\}=-(\mathbb{E}\{X(t)X_{ij}(t)\})_{i,j=1,\dots,k+k^{\prime}-m}\]
are positive definite. Let \(\{e_{1},e_{2},\dots,e_{N}\}\) be the standard orthonormal basis of \(\mathbb{R}^{N}\). For \(t\in K\) and \(t^{\prime}\in K^{\prime}\), let \(e_{t,t^{\prime}}=(t^{\prime}-t)/\|t^{\prime}-t\|\) and \(\alpha_{i}(t,t^{\prime})=\langle e_{i},\Lambda_{K\cup K^{\prime}}(t)e_{t,t^{ \prime}}\rangle\). Then
\[\Lambda_{K\cup K^{\prime}}(t)e_{t,t^{\prime}}=\sum_{i=1}^{N}\langle e_{i}, \Lambda_{K\cup K^{\prime}}(t)e_{t,t^{\prime}}\rangle e_{i}=\sum_{i=1}^{N} \alpha_{i}(t,t^{\prime})e_{i} \tag{5.15}\]
and there exists \(\alpha_{0}>0\) such that for all \((t,t^{\prime})\in B(\mathcal{M}_{0},\delta_{2})\),
\[\langle e_{t,t^{\prime}},\Lambda_{K\cup K^{\prime}}(t)e_{t,t^{\prime}}\rangle \geq\alpha_{0}. \tag{5.16}\]
Since all elements in \(\varepsilon(K)\) and \(\varepsilon(K^{\prime})\) are \(1\), we may write
\[t =(t_{1},\dots,t_{m},t_{m+1},\dots,t_{k},b_{k+1},\dots,b_{k+k^{ \prime}-m},0,\dots,0),\] \[t^{\prime} =(t^{\prime}_{1},\dots,t^{\prime}_{m},b_{m+1},\dots,b_{k},t^{ \prime}_{k+1},\dots,t^{\prime}_{k+k^{\prime}-m},0,\dots,0),\]
where \(t_{i}\in(a_{i},b_{i})\) for \(i\in\tau(K)\) and \(t^{\prime}_{j}\in(a_{j},b_{j})\) for \(j\in\tau(K^{\prime})\). Therefore,
\[\langle e_{i},e_{t,t^{\prime}}\rangle \geq 0,\quad\forall\ m+1\leq i\leq k,\] \[\langle e_{i},e_{t,t^{\prime}}\rangle \leq 0,\quad\forall\ k+1\leq i\leq k+k^{\prime}-m, \tag{5.17}\] \[\langle e_{i},e_{t,t^{\prime}}\rangle =0,\quad\forall\ k+k^{\prime}-m<i\leq N.\]
Let
\[\begin{split} D_{i}&=\{(t,t^{\prime})\in B(\mathcal{M}_{0},\delta_{2}):\alpha_{i}(t,t^{\prime})\geq\beta_{i}\},\quad\text{if }m+1\leq i\leq k,\\ D_{i}&=\{(t,t^{\prime})\in B(\mathcal{M}_{0},\delta_ {2}):\alpha_{i}(t,t^{\prime})\leq-\beta_{i}\},\quad\text{if }k+1\leq i\leq k+k^{\prime}-m,\\ D_{0}&=\bigg{\{}(t,t^{\prime})\in B(\mathcal{M}_{0},\delta_{2}):\sum_{i=1}^{m}\alpha_{i}(t,t^{\prime})\langle e_{i},e_{t,t^{ \prime}}\rangle\geq\beta_{0}\bigg{\}},\end{split} \tag{5.18}\]
where \(\beta_{0},\beta_{1},\ldots,\beta_{k+k^{\prime}-m}\) are positive constants such that \(\beta_{0}+\sum_{i=m+1}^{k+k^{\prime}-m}\beta_{i}<\alpha_{0}\). It follows from (5.17) and (5.18) that, if \((t,t^{\prime})\) does not belong to any of \(D_{0},D_{m+1},\ldots,D_{k+k^{\prime}-m}\), then by (5.15),
\[\langle\Lambda_{K\cup K^{\prime}}(t)e_{t,t^{\prime}},e_{t,t^{\prime}}\rangle= \sum_{i=1}^{N}\alpha_{i}(t,t^{\prime})\langle e_{i},e_{t,t^{\prime}}\rangle \leq\beta_{0}+\sum_{i=m+1}^{k+k^{\prime}-m}\beta_{i}<\alpha_{0},\]
which contradicts (5.16). Thus \(D_{0}\cup\cup_{i=m+1}^{k+k^{\prime}-m}D_{i}\) is a covering of \(B(\mathcal{M}_{0},\delta_{2})\). By (5.9),
\[\mathbb{E}\{M_{u}^{E}(K)M_{u}^{E}(K^{\prime})\}\leq\int_{D_{0}}A(t,t^{\prime}, u)\,dtdt^{\prime}+\sum_{i=m+1}^{k+k^{\prime}-m}\int_{D_{i}}A(t,t^{\prime},u)\, dtdt^{\prime}.\]
By the Kac-Rice metatheorem and the fact \(\mathbb{1}_{\{X(t)\geq u,X(t^{\prime})\geq u\}}\leq\mathbb{1}_{\{X(t)\geq u\}}\), we obtain
\[\begin{split}&\int_{D_{0}}A(t,t^{\prime},u)\,dtdt^{\prime}\\ \leq&\int_{D_{0}}dtdt^{\prime}\int_{u}^{\infty}dx\,p_ {\nabla X_{|K}(t),\nabla X_{|K^{\prime}}(t^{\prime})}(0,0)p_{X(t)}(x|\nabla X_ {|K}(t)=0,\nabla X_{|K^{\prime}}(t^{\prime})=0)\\ &\times\mathbb{E}\big{\{}|\mathrm{det}\nabla^{2}X_{|K}(t)|| \mathrm{det}\nabla^{2}X_{|K^{\prime}}(t^{\prime})|\big{|}X(t)=x,\nabla X_{|K}( t)=0,\nabla X_{|K^{\prime}}(t^{\prime})=0\big{\}},\end{split} \tag{5.19}\]
and that for \(i=m+1,\ldots,k\),
\[\begin{split}&\int_{D_{i}}A(t,t^{\prime},u)\,dtdt^{\prime}\\ \leq&\int_{D_{i}}dtdt^{\prime}\int_{u}^{\infty}dx\int_ {0}^{\infty}dw_{i}\,p_{X(t),\nabla X_{|K}(t),X_{i}(t^{\prime}),\nabla X_{|K^{ \prime}}(t^{\prime})}(x,0,w_{i},0)\\ &\times\mathbb{E}\big{\{}|\mathrm{det}\nabla^{2}X_{|K}(t)|| \mathrm{det}\nabla^{2}X_{|K^{\prime}}(t^{\prime})|\big{|}X(t)=x,\nabla X_{|K}( t)=0,X_{i}(t^{\prime})=w_{i},\nabla X_{|K^{\prime}}(t^{\prime})=0\big{\}}.\end{split} \tag{5.20}\]
Comparing (5.19) and (5.20) with Eqs. (4.33) and (4.36) respectively in the proof of Theorem 4.8 in Cheng and Xiao [5], one can employ the same reasoning therein to show that \(\mathrm{Var}(X(t)|\nabla X_{|K}(t),\nabla X_{|K^{\prime}}(t^{\prime}))<\sigma_{T}^{2}\) uniformly on \(D_{0}\) and \(\mathbb{P}(X(t)>u,X_{i}(t^{\prime})>0|\nabla X_{|K}(t)=0,\nabla X_{|K^{\prime}}(t^{\prime})=0)=o(e^{-u^{2}/(2\sigma_{T}^{2})-\alpha u^{2}})\) uniformly on \(D_{i}\), and deduce that \(\int_{D_{0}}A(t,t^{\prime},u)\,dtdt^{\prime}\) and \(\int_{D_{i}}A(t,t^{\prime},u)\,dtdt^{\prime}\)\((i=m+1,\ldots,k)\) are super-exponentially small.
It is similar to show that \(\int_{D_{i}}A(t,t^{\prime},u)\,dtdt^{\prime}\) are super-exponentially small for \(i=k+1,\ldots,k+k^{\prime}-m\). For the case \(k=0\), the argument is even simpler when applying the Kac-Rice formula; the details are omitted here. The proof is finished.
In the proof of Proposition 5.4, we have shown in (5.12) that, if \(\mathcal{M}_{0}=\emptyset\), then the moment \(\mathbb{E}\{M_{u}(K)M_{u}(K^{\prime})\}\) is super-exponentially small. It is important to note that the boundary condition (3.3) implies (and generalizes) the condition \(\mathcal{M}_{0}=\emptyset\), yielding the following result.
**Proposition 5.5**.: _Let \(\{X(t),\,t\in T\}\) be a centered Gaussian random field satisfying_ (**H**1)_,_ (**H**2) _and the boundary condition (3.3). Then there exists \(\alpha>0\) such that as \(u\to\infty\),_
\[\mathbb{E}\{M_{u}(K)M_{u}(K^{\prime})\}=o\Big{(}\exp\Big{\{}-\frac{u^{2}}{2 \sigma_{T}^{2}}-\alpha u^{2}\Big{\}}\Big{)},\]
_where \(K\) and \(K^{\prime}\) are adjacent faces of \(T\)._
## 6 Estimation of the difference between EEC and the upper bound
In this section, we demonstrate that the difference between \(\mathbb{E}\{\chi(A_{u})\}\) and the expected number of extended outward local maxima, i.e. the upper bound in (4.2), is super-exponentially small.
**Proposition 6.1**.: _Let \(\{X(t),\,t\in T\}\) be a centered Gaussian random field satisfying_ (**H**1)_,_ (**H**2) _and_ (**H**3)_. Then there exists \(\alpha>0\) such that for any \(K\in\partial_{k}T\) with \(k\geq 0\), as \(u\to\infty\),_
\[\begin{split}\mathbb{E}\{M_{u}^{E}(K)\}&=(-1)^{k} \int_{K}\mathbb{E}\big{\{}\mathrm{det}\nabla^{2}X_{|K}(t)\mathds{1}_{\{X(t)\geq u,\ \varepsilon_{\ell}^{*}X_{\ell}(t)\geq 0\ \mathrm{for\ all}\ \ell\notin\tau(K)\}}\big{|}\nabla X_{|K}(t)=0\big{\}}\\ &\quad\times p_{\nabla X_{|K}(t)}(0)dt+o\left(\exp\left\{-\frac{u^ {2}}{2\sigma_{T}^{2}}-\alpha u^{2}\right\}\right)\\ &=(-1)^{k}\mathbb{E}\bigg{\{}\bigg{(}\sum_{i=0}^{k}(-1)^{i}\mu_{ i}(K)\bigg{)}\bigg{\}}+o\left(\exp\left\{-\frac{u^{2}}{2\sigma_{T}^{2}}- \alpha u^{2}\right\}\right).\end{split} \tag{6.1}\]
Proof.: The second equality in (6.1) arises from the application of the Kac-Rice formula:
\[\begin{split}&\mathbb{E}\bigg{\{}\bigg{(}\sum_{i=0}^{k}(-1)^{i} \mu_{i}(K)\bigg{)}\bigg{\}}\\ &=\sum_{i=0}^{k}(-1)^{i}\int_{K}\mathbb{E}\big{\{}|\mathrm{det} \nabla^{2}X_{|K}(t)|\mathds{1}_{\{\mathrm{index}(\nabla^{2}X_{|K}(t))=i\}}\\ &\quad\times\mathds{1}_{\{X(t)\geq u,\ \varepsilon_{\ell}^{*}X_{ \ell}(t)\geq 0\ \mathrm{for\ all}\ \ell\notin\tau(K)\}}\big{|}\nabla X_{|K}(t)=0\big{\}}p_{\nabla X_{|K}(t)}(0)\,dt \\ &=\int_{K}\mathbb{E}\big{\{}\mathrm{det}\nabla^{2}X_{|K}(t) \mathds{1}_{\{X(t)\geq u,\ \varepsilon_{\ell}^{*}X_{\ell}(t)\geq 0\ \mathrm{for\ all}\ \ell\notin\tau(K)\}}\big{|}\nabla X_{|K}(t)=0\big{\}}p_{\nabla X_{|K}(t)}(0)\,dt.\end{split}\]
To prove the first approximation in (6.1) and convey the main idea, we start with the case when the face \(K\) represents the interior of \(T\).
**Case (i): \(\boldsymbol{k=N}\).** By the Kac-Rice formula, we have
\[\mathbb{E}\{M_{u}^{E}(K)\}\] \[=\int_{K}p_{\nabla X(t)}(0)dt\int_{u}^{\infty}dx\,p_{X(t)}(x|\nabla X (t)=0)\mathbb{E}\big{\{}\mathrm{det}\nabla^{2}X(t)\mathbbm{1}_{\{\nabla^{2}X(t) \prec 0\}}\big{|}X(t)=x,\nabla X(t)=0\big{\}}\] \[:=\int_{K}p_{\nabla X(t)}(0)dt\int_{u}^{\infty}A(t,x)dx.\]
Let
\[\mathcal{M}_{1} =\{t\in\bar{K}=T:\nu(t)=\sigma_{T}^{2},\ \nabla\nu(t)=2\mathbb{E}\{X(t) \nabla X(t)\}=0\}, \tag{6.2}\] \[B(\mathcal{M}_{1},\delta_{1}) =\left\{t\in K:d\left(t,\mathcal{M}_{1}\right)\leq\delta_{1}\right\},\]
where \(\delta_{1}\) is a small positive number to be specified. Then, we only need to estimate
\[\int_{B(\mathcal{M}_{1},\delta_{1})}p_{\nabla X(t)}(0)dt\int_{u}^{\infty}A(t, x)dx, \tag{6.3}\]
since the integral above with \(B(\mathcal{M}_{1},\delta_{1})\) replaced by \(K\backslash B(\mathcal{M}_{1},\delta_{1})\) becomes super-exponentially small due to the fact
\[\sup_{t\in K\backslash B(\mathcal{M}_{1},\delta_{1})}\mathrm{Var}(X(t)|\nabla X (t)=0)<\sigma_{T}^{2}.\]
Notice that, by Proposition 2.1, \(\mathbb{E}\{X(t)\nabla^{2}X(t)\}\prec 0\) for all \(t\in\mathcal{M}_{1}\). Thus there exists \(\delta_{1}\) small enough such that \(\mathbb{E}\{X(t)\nabla^{2}X(t)\}\prec 0\) for all \(t\in B(\mathcal{M}_{1},\delta_{1})\). In particular, let \(\lambda_{0}\) be the largest eigenvalue of \(\mathbb{E}\{X(t)\nabla^{2}X(t)\}\) over \(B(\mathcal{M}_{1},\delta_{1})\), then \(\lambda_{0}<0\) by the uniform continuity. Also note that \(\mathbb{E}\{X(t)\nabla X(t)\}\) tends to \(0\) as \(\delta_{1}\to 0\). Therefore, as \(\delta_{1}\to 0\),
\[\mathbb{E}\{X_{ij}(t)|X(t)=x,\nabla X(t)=0\}\] \[=\big{(}\mathbb{E}\{X_{ij}(t)X(t)\},\mathbb{E}\{X_{ij}(t)X_{1}(t) \},\ldots,\mathbb{E}\{X_{ij}(t)X_{N}(t)\}\big{)}\left[\mathrm{Cov}(X(t), \nabla X(t))\right]^{-1}(x,0,\ldots,0)^{T}\] \[=\frac{\mathbb{E}\{X_{ij}(t)X(t)\}x}{\sigma_{T}^{2}}(1+o(1)).\]
Thus, for all \(x\geq u\) and \(t\in B(\mathcal{M}_{1},\delta_{1})\) with \(\delta_{1}\) small enough,
\[\Sigma_{1}(t,x):=\mathbb{E}\{\nabla^{2}X(t)|X(t)=x,\nabla X(t)=0\}\prec 0.\]
Let \(\Delta_{1}(t,x)=\nabla^{2}X(t)-\Sigma_{1}(t,x)\). We have
\[\int_{u}^{\infty}A(t,x)dx =\int_{u}^{\infty}p_{X(t)}(x|\nabla X(t)=0)\mathbb{E}\big{\{} \mathrm{det}(\Delta_{1}(t,x)+\Sigma_{1}(t,x)) \tag{6.4}\] \[\qquad\times\mathbbm{1}_{\{\Delta_{1}(t,x)+\Sigma_{1}(t,x)\prec 0 \}}\big{|}X(t)=x,\nabla X(t)=0\big{\}}\,dx\] \[:=\int_{u}^{\infty}p_{X(t)}(x|\nabla X(t)=0)E(t,x)\,dx.\]
Note that the following is a centered Gaussian random matrix not depending on \(x\):
\[\Omega(t)=(\Omega_{ij}(t))_{1\leq i,j\leq N}=(\Delta_{1}(t,x)|X(t)=x,\nabla X (t)=0).\]
Let \(h_{t}(v)\) denote the density of the Gaussian vector \((\Omega_{ij}(t))_{1\leq i\leq j\leq N}\), where \(v=(v_{ij})_{1\leq i\leq j\leq N}\in\mathbb{R}^{N(N+1)/2}\). Then
\[\begin{split} E(t,x)&=\mathbb{E}\big{\{}\text{det}( \Omega(t)+\Sigma_{1}(t,x))\mathbbm{1}_{\{\Omega(t)+\Sigma_{1}(t,x)\prec 0 \}}\big{\}}\\ &=\int_{v:\,(v_{ij})+\Sigma_{1}(t,x)\prec 0}\text{det}((v_{ij})+ \Sigma_{1}(t,x))h_{t}(v)dv,\end{split} \tag{6.5}\]
where \((v_{ij})\) is an abbreviation for the symmetric matrix \((v_{ij})_{1\leq i,j\leq N}\). There exists a constant \(c>0\) such that for \(\delta_{1}\) small enough, all \(t\in B(\mathcal{M}_{1},\delta_{1})\) and \(x\geq u\), we have
\[(v_{ij})+\Sigma_{1}(t,x)\prec 0,\quad\forall\|(v_{ij})\|:=\Big{(}\sum_{i,j=1}^ {N}v_{ij}^{2}\Big{)}^{1/2}<cu.\]
This implies \(\{v:\,(v_{ij})+\Sigma_{1}(t,x)\not\prec 0\}\subset\{v:\,\|(v_{ij})\|\geq cu\}\). Consequently, the integral in (6.5) with the domain of integration replaced by \(\{v:\,(v_{ij})+\Sigma_{1}(t,x)\not\prec 0\}\) is \(o(e^{-\alpha^{\prime}u^{2}})\) uniformly for all \(t\in B(\mathcal{M}_{1},\delta_{1})\), where \(\alpha^{\prime}\) is a positive constant. As a result, we conclude that, uniformly for all \(t\in B(\mathcal{M}_{1},\delta_{1})\) and \(x\geq u\),
\[E(t,x)=\int_{\mathbb{R}^{N(N+1)/2}}\text{det}((v_{ij})+\Sigma_{1}(t,x))h_{t}( v)dv+o(e^{-\alpha^{\prime}u^{2}}).\]
By substituting this result into (6.4), we observe that the indicator function \(\mathbbm{1}_{\{\nabla^{2}X(t)\prec 0\}}\) in (6.3) can be eliminated, causing only a super-exponentially small error. Thus, for sufficiently large \(u\), there exists \(\alpha>0\) such that
\[\begin{split}\mathbb{E}\{M^{E}_{u}(K)\}&=\int_{K}p_ {\nabla X(t)}(0)dt\int_{u}^{\infty}\mathbb{E}\{\text{det}\nabla^{2}X(t)|X(t)=x,\nabla X(t)=0\}\\ &\quad\times p_{X(t)}(x|\nabla X(t)=0)dx+o\Big{(}\exp\Big{\{}- \frac{u^{2}}{2\sigma_{T}^{2}}-\alpha u^{2}\Big{\}}\Big{)}.\end{split}\]
**Case (ii): \(0\leq k\leq N-1\).** It is worth noting that when \(k=0\), the terms in (6.1) related to the Hessian will vanish, simplifying the proof. Therefore, without loss of generality, let \(k\geq 1\), \(\tau(K)=\{1,\cdots,k\}\) and assume all the elements in \(\varepsilon(K)\) are \(1\). By the Kac-Rice formula,
\[\begin{split}\mathbb{E}\{M^{E}_{u}(K)\}&=(-1)^{k} \int_{K}p_{\nabla X_{|K}(t)}(0)dt\int_{u}^{\infty}p_{X(t)}\big{(}x\big{|}\nabla X _{|K}(t)=0\big{)}\mathbb{E}\big{\{}\text{det}\nabla^{2}X_{|K}(t)\\ &\quad\times\mathbbm{1}_{\{\nabla^{2}X_{|K}(t)\prec 0\}} \mathbbm{1}_{\{X_{k+1}(t)>0,\dots,X_{N}(t)>0\}}\big{|}X(t)=x,\nabla X_{|K}(t)=0 \big{\}}dx\\ &:=(-1)^{k}\int_{K}p_{\nabla X_{|K}(t)}(0)dt\int_{u}^{\infty}A^{ \prime}(t,x)dx.\end{split}\]
Let
\[\begin{split}\mathcal{M}_{2}&=\{t\in\bar{K}:\nu(t)= \sigma_{T}^{2},\,\,\nabla\nu_{|K}(t)=2\mathbb{E}\{X(t)\nabla X_{|K}(t)\}=0\}, \\ B(\mathcal{M}_{2},\delta_{2})&=\{t\in K:d\left(t, \mathcal{M}_{2}\right)\leq\delta_{2}\}\,,\end{split} \tag{6.6}\]
where \(\delta_{2}\) is another small positive number to be specified. Here, we only need to estimate
\[\int_{B(\mathcal{M}_{2},\delta_{2})}p_{\nabla X_{|K}(t)}(0)dt\int_{u}^{\infty} A^{\prime}(t,x)dx, \tag{6.7}\]
since the integral above with \(B(\mathcal{M}_{2},\delta_{2})\) replaced by \(K\backslash B(\mathcal{M}_{2},\delta_{2})\) is super-exponentially small due to the fact
\[\sup_{t\in K\backslash B(\mathcal{M}_{2},\delta_{2})}\operatorname{Var}(X(t)| \nabla X(t)=0)<\sigma_{T}^{2}.\]
On the other hand, following arguments similar to those in the proof of Case (i), we have that removing the indicator function \(\mathbb{1}_{\{\nabla^{2}X_{|K}(t)\prec 0\}}\) in (6.7) will only cause a super-exponentially small error. Combining these results, we conclude that the first approximation in (6.1) holds, thus completing the proof.
From the proof of Proposition 6.1, it is evident that the same line of reasoning and arguments can be readily extended to \(\mathbb{E}\{M_{u}(X,K)\}\), leading to the following result.
**Proposition 6.2**.: _Let \(\{X(t),\,t\in T\}\) be a centered Gaussian random field satisfying \((\mathbf{H}1)\), \((\mathbf{H}2)\) and \((\mathbf{H}3)\). Then there exists a constant \(\alpha>0\) such that for any \(K\in\partial_{k}T\), as \(u\to\infty\),_
\[\mathbb{E}\{M_{u}(K)\} =(-1)^{k}\int_{K}\mathbb{E}\big{\{}\mathrm{det}\nabla^{2}X_{|K}( t)\mathbb{1}_{\{X(t)\geq u\}}\big{|}\nabla X_{|K}(t)=0\big{\}}p_{\nabla X_{|K} (t)}(0)dt\] \[\quad+o\left(\exp\left\{-\frac{u^{2}}{2\sigma_{T}^{2}}-\alpha u^ {2}\right\}\right).\]
## 7 Proofs of the main results
Proof of Theorem 3.1.: By Propositions 5.2, 5.3 and 5.4, together with the fact \(M_{u}^{E}(K)\leq M_{u}(K)\), we obtain that the factorial moments and the last two sums in (4.3) are super-exponentially small. Therefore, from (4.2) and (4.3), it follows that there exists a constant \(\alpha>0\) such that as \(u\to\infty\),
\[\mathbb{P}\left\{\sup_{t\in T}X(t)\geq u\right\}=\sum_{k=0}^{N}\sum_{K\in \partial_{k}T}\mathbb{E}\{M_{u}^{E}(K)\}+o\left(\exp\left\{-\frac{u^{2}}{2 \sigma_{T}^{2}}-\alpha u^{2}\right\}\right).\]
The desired result follows as an immediate consequence of Proposition 6.1.
Proof of Theorem 3.2.: Remark 4.1 indicates that both inequalities (4.2) and (4.3) hold with \(M_{u}^{E}(\cdot)\) replaced by \(M_{u}(\cdot)\). Therefore, the corresponding factorial moments and the last two sums in (4.3) with \(M_{u}^{E}(\cdot)\) replaced by \(M_{u}(\cdot)\) are super-exponentially small by Propositions 5.2, 5.3 and 5.5. Consequently, there exists a constant \(\alpha>0\) such that as \(u\to\infty\),
\[\mathbb{P}\left\{\sup_{t\in T}X(t)\geq u\right\}=\sum_{k=0}^{N}\sum_{K\in \partial_{k}T}\mathbb{E}\{M_{u}(K)\}+o\left(\exp\left\{-\frac{u^{2}}{2\sigma_{ T}^{2}}-\alpha u^{2}\right\}\right).\]
The desired result follows directly from Proposition 6.2.
Proof of Theorem 3.3.: Note that, in the proof of Theorem 3.1, we have seen that the points in \(\mathcal{M}_{2}\) defined in (6.6) make the major contribution to the excursion probability. That is, up to a super-exponentially small error, we can focus only on those faces, say \(J\), whose closure \(\bar{J}\) contains the unique point \(t^{*}\) with \(\nu(t^{*})=\sigma_{T}^{2}\) and satisfying \(\tau(J)\subset\mathcal{I}(t^{*})\) (i.e., the partial derivatives of \(\nu\) are \(0\) at \(t^{*}\) restricted to \(J\)). To formalize this concept, we define a set of faces \(T^{*}\) as follows:
\[T^{*}=\{J\in\partial_{k}T:t^{*}\in\bar{J},\,\tau(J)\subset\mathcal{I}(t^{*}),\, k=0,\dots,N\}.\]
For each \(J\in T^{*}\), let
\[M_{u}^{E^{*}}(J):=\#\{t\in J:X(t)\geq u,\nabla X_{|J}(t)=0, \nabla^{2}X_{|J}(t)\prec 0,\] \[\varepsilon_{j}^{*}X_{j}(t)\geq 0\text{ for all }j\in\mathcal{I}(t^{*}) \setminus\tau(J)\}.\]
Note that, both inequalities (4.2) and (4.3) remain valid when we replace \(M_{u}^{E}(J)\) with \(M_{u}^{E^{*}}(J)\) for faces \(J\) belonging to \(T^{*}\), and replace \(M_{u}^{E}(J)\) with \(M_{u}(J)\) otherwise. Employing analogous reasoning as used in the derivation of Theorems 3.1 and 3.2, we obtain that, there exists \(\alpha>0\) such that as \(u\to\infty\),
\[\mathbb{P}\left\{\sup_{t\in T}X(t)\geq u\right\}=\sum_{J\in T^{*}}\mathbb{E} \{M_{u}^{E^{*}}(J)\}+o\left(\exp\left\{-\frac{u^{2}}{2\sigma_{T}^{2}}-\alpha u ^{2}\right\}\right).\]
The desired result is then deduced from Proposition 6.1.
## 8 Gaussian fields with a unique maximum point of the variance
In this section, we delve deeper into EEC approximations when the variance function \(\nu(t)\) reaches its maximum value \(\sigma_{T}^{2}\) at a solitary point \(t^{*}\). While Theorem 3.3 provides an implicit formula for such scenarios, our objective here is to obtain explicit formulae by employing integral approximation techniques based on the Kac-Rice formula. To facilitate this process, we begin by presenting some auxiliary results related to the Laplace method for integral approximations.
### Auxiliary lemmas on Laplace approximation
The following two lemmas state results on the Laplace approximation method. Lemma 8.1 can be found in many books on the approximation of integrals; here we refer to Wong [15]. Lemma 8.2 can be derived by following similar arguments in the proof of the Laplace method for the case of boundary points in [15].
**Lemma 8.1**.: [Laplace method for interior points] _Let \(t_{0}\) be an interior point of \(T\). Suppose the following conditions hold: (i) \(g(t)\in C(T)\) and \(g(t_{0})\neq 0\); (ii) \(h(t)\in C^{2}(T)\) and attains its minimum only at \(t_{0}\); and (iii) \(\nabla^{2}h(t_{0})\) is positive definite. Then as \(u\to\infty\),_
\[\int_{T}g(t)e^{-uh(t)}dt=\frac{(2\pi)^{N/2}}{u^{N/2}(\det\nabla^{2}h(t_{0}))^ {1/2}}g(t_{0})e^{-uh(t_{0})}(1+o(1)).\]
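As a quick numerical illustration of Lemma 8.1 (a sketch that is not part of the original argument; the functions \(g\) and \(h\) below are hypothetical choices), the exact one-dimensional integral can be compared with the leading-order Laplace approximation; the ratio of the two tends to \(1\) as \(u\to\infty\).

```python
import numpy as np
from scipy import integrate

# Hypothetical 1-D example for Lemma 8.1: h attains its unique interior minimum
# at t0 = 0 with h''(t0) = 1 > 0, and g is continuous with g(t0) = 3 != 0.
g = lambda t: 2.0 + np.cos(t)
h = lambda t: 0.5 * t**2 + t**4
t0, d2h = 0.0, 1.0

for u in (10.0, 50.0, 200.0, 1000.0):
    exact, _ = integrate.quad(lambda t: g(t) * np.exp(-u * h(t)), -1.0, 1.0)
    laplace = np.sqrt(2.0 * np.pi / (u * d2h)) * g(t0) * np.exp(-u * h(t0))
    print(f"u = {u:7.1f}   exact = {exact:.6e}   Laplace = {laplace:.6e}   "
          f"ratio = {exact / laplace:.4f}")
```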
**Lemma 8.2**.: [Laplace method for boundary points] _Let \(t_{0}\in K\in\partial_{k}T\) with \(0\leq k\leq N-1\). Suppose that conditions (i), (ii) and (iii) in Lemma 8.1 hold, and additionally \(\nabla h(t_{0})=0\). Then as \(u\to\infty\),_
\[\int_{T}g(t)e^{-uh(t)}dt=\frac{(2\pi)^{N/2}\mathbb{P}\{Z_{i_{\ell}}\varepsilon_{ i_{\ell}}^{*}>0,\forall i_{\ell}\notin\tau(K)\}}{u^{N/2}(\det\nabla^{2}h(t_{0}))^{1/ 2}}g(t_{0})e^{-uh(t_{0})}(1+o(1)),\]
_where \((Z_{i_{1}},\ldots,Z_{i_{N-k}})\) is a centered \((N-k)\)-dimensional Gaussian vector with covariance matrix \((h_{i_{\ell}i_{\ell^{\prime}}}(t_{0}))_{i_{\ell},i_{\ell^{\prime}}\notin\tau(K)}\) and \(\tau(K)\) and \(\varepsilon_{i_{\ell}}^{*}\) are defined in Section 2._
### Gaussian fields satisfying the boundary condition (3.3)
For \(t\in T\), we define the following notation for conditional variances:
\[\tilde{\nu}_{|K}(t)=\text{Var}(X(t)|\nabla X_{|K}(t)=0),\quad\tilde{\nu}(t)=\text{Var}(X(t)|\nabla X(t)=0). \tag{8.1}\]
The following result provides explicit approximations to the excursion probabilities when the maximum of the variance is reached only at a single point and the boundary condition (3.3) is satisfied.
**Theorem 8.3**.: _Let \(\{X(t),\,t\in T\}\) be a centered Gaussian random field satisfying \((\mathbf{H}1)\) and \((\mathbf{H}2)\). Suppose \(\nu\) attains its maximum \(\sigma_{T}^{2}\) only at \(t^{*}\in K\in\partial_{k}T\), \(\nu_{i}(t^{*})\neq 0\) for all \(i\notin\tau(K)\), and \(\nabla^{2}\nu_{|K}(t^{*})\prec 0\). Then, as \(u\to\infty\),_
\[\begin{split}&\mathbb{P}\left\{\sup_{t\in T}X(t)\geq u\right\}= \Psi\left(\frac{u}{\sigma_{T}}\right)+o\left(\exp\left\{-\frac{u^{2}}{2\sigma_ {T}^{2}}-\alpha u^{2}\right\}\right)\text{for some }\alpha>0,\quad\text{ if }k=0,\\ &\mathbb{P}\left\{\sup_{t\in T}X(t)\geq u\right\}=\sqrt{\frac{ \det(\Sigma_{K}(t^{*}))}{\det(\Lambda_{K}(t^{*})+\Sigma_{K}(t^{*}))}}\Psi \left(\frac{u}{\sigma_{T}}\right)(1+o(1)),\quad\text{ if }k\geq 1,\end{split} \tag{8.2}\]
_where \(\Lambda_{K}(t^{*})\) and \(\Sigma_{K}(t^{*})\) are defined in (2.3)._
Proof.: If \(k=0\), then \(\nu_{i}(t^{*})\neq 0\) for all \(i\geq 1\), and hence \(\mathcal{I}(t^{*})=\emptyset\). The first line of (8.2) then follows, since Theorem 3.3 gives
\[\mathbb{P}\left\{\sup_{t\in T}X(t)\geq u\right\}=\mathbb{P}\{X(t^{*})\geq u\} +o\left(\exp\left\{-\frac{u^{2}}{2\sigma_{T}^{2}}-\alpha u^{2}\right\}\right).\]
Now, let us consider the case when \(k\geq 1\). Note that the assumption on partial derivatives of \(\nu(t)\) implies \(\mathcal{I}(t^{*})=\tau(K)\). By Theorem 3.3, we have
\[\mathbb{P}\left\{\sup_{t\in T}X(t)\geq u\right\}=(-1)^{k}I(u,K)+o\left(\exp \left\{-\frac{u^{2}}{2\sigma_{T}^{2}}-\alpha u^{2}\right\}\right), \tag{8.3}\]
where
\[I(u,K) =\int_{K}\mathbb{E}\big{\{}\det\nabla^{2}X_{|K}(t)\mathbbm{1}_{\{X(t)\geq u\}}\big{|}\nabla X_{|K}(t)=0\big{\}}p_{\nabla X_{|K}(t)}(0)dt\] \[=\int_{u}^{\infty}\int_{K}\frac{(2\pi)^{-(k+1)/2}}{\sqrt{\tilde{\nu}_{|K}(t)\text{det}\left(\Lambda_{K}(t)\right)}}\mathbb{E}\big{\{}\text{det}\nabla^{2}X_{|K}(t)\big{|}X(t)=x,\nabla X_{|K}(t)=0\big{\}}e^{-\frac{x^{2}}{2\tilde{\nu}_{|K}(t)}}dtdx.\]
Applying the Laplace method in Lemma 8.1 with
\[g(t) =\frac{1}{\sqrt{\tilde{\nu}_{|K}(t)\text{det}\left(\Lambda_{K}(t) \right)}}\mathbb{E}\big{\{}\text{det}\nabla^{2}X_{|K}(t)\big{|}X(t)=x,\nabla X_{ |K}(t)=0\big{\}},\] \[h(t) =\frac{1}{2\tilde{\nu}_{|K}(t)},\quad u=x^{2},\]
and noting that the Hessian matrix of \(1/\big{(}2\tilde{\nu}_{|K}(t)\big{)}\) evaluated at \(t^{*}\) is
\[-\frac{1}{2\tilde{\nu}_{|K}^{2}(t^{*})}\left(\tilde{\nu}_{ij}(t^{*})\right)_{i, j\in\tau(K)}=-\frac{1}{2\sigma_{T}^{4}}\nabla^{2}\tilde{\nu}_{|K}(t^{*})\succ 0, \tag{8.4}\]
we obtain
\[I(u,K)=\frac{(2\sigma_{T}^{4})^{k/2}}{\sqrt{2\pi\sigma_{T}^{2}\text{det}\left( \Lambda_{K}(t^{*})\right)}\sqrt{|\text{det}\nabla^{2}\tilde{\nu}_{|K}(t^{*}) |}}I(u)(1+o(1)), \tag{8.5}\]
where
\[I(u) =\int_{u}^{\infty}\mathbb{E}\big{\{}\text{det}\nabla^{2}X_{|K}(t ^{*})\big{|}X(t^{*})=x,\nabla X_{|K}(t^{*})=0\big{\}}x^{-k}e^{-\frac{x^{2}}{2 \sigma_{T}^{2}}}\,dx \tag{8.6}\] \[=\text{det}(\Sigma_{K}(t^{*}))\int_{u}^{\infty}\mathbb{E}\big{\{} \text{det}(Q\nabla^{2}X_{|K}(t^{*})Q)\big{|}X(t^{*})=x,\nabla X_{|K}(t^{*})=0 \big{\}}x^{-k}e^{-\frac{x^{2}}{2\sigma_{T}^{2}}}\,dx.\]
Here, noting that \(\Sigma_{K}(t^{*})=\mathbb{E}\{X(t^{*})\nabla^{2}X_{|K}(t^{*})\}\prec 0\) by Proposition 2.1, we let \(Q\) in (8.6) be a \(k\times k\) positive definite matrix such that \(Q(-\Sigma_{K}(t^{*}))Q=I_{k}\), where \(I_{k}\) is the size-\(k\) identity matrix. Then
\[\mathbb{E}\{X(t^{*})(Q\nabla^{2}X_{|K}(t^{*})Q)\}=Q\Sigma_{K}(t^{*})Q=-I_{k}.\]
Noting that \(\mathbb{E}\{X(t^{*})\nabla X_{|K}(t^{*})\}=0\) due to \(\nabla\nu_{|K}(t^{*})=0\), we have
\[\mathbb{E}\big{\{}Q\nabla^{2}X_{|K}(t^{*})Q\big{|}X(t^{*})=x,\nabla X_{|K}(t^ {*})=0\big{\}}=-\frac{x}{\sigma_{T}^{2}}I_{k}.\]
One can write
\[\mathbb{E}\big{\{}\text{det}(Q\nabla^{2}X_{|K}(t^{*})Q)\big{|}X(t^{*})=x, \nabla X_{|K}(t^{*})=0\big{\}}=\mathbb{E}\{\text{det}(\Delta(t^{*})-(x/\sigma _{T}^{2})I_{k})\},\]
where \(\Delta(t^{*})\) is a centered Gaussian random matrix with covariance independent of \(x\). According to the Laplace expansion of the determinant, \(\mathbb{E}\{\text{det}(\Delta(t^{*})-(x/\sigma_{T}^{2})I_{k})\}\) is a polynomial in \(x\) with the highest-order term being \((-1)^{k}\sigma_{T}^{-2k}x^{k}\). Plugging this into (8.6) and (8.5), we obtain
\[I(u,K)=\frac{(-1)^{k}2^{k/2}|\text{det}(\Sigma_{K}(t^{*}))|}{\sqrt{\text{det}( \Lambda_{K}(t^{*}))}\sqrt{|\text{det}\left(\nabla^{2}\tilde{\nu}_{|K}(t^{*}) \right)|}}\Psi\left(\frac{u}{\sigma_{T}}\right)(1+o(1)).\]
Finally, note that
\[\tilde{\nu}_{|K}(t)=\mathbb{E}\{X(t)^{2}\}-\mathbb{E}\{X(t)\nabla X_{|K}(t)\} ^{T}\Lambda_{K}^{-1}(t)\mathbb{E}\{X(t)\nabla X_{|K}(t)\},\]
we have
\[\begin{split}\nabla^{2}\tilde{\nu}_{|K}(t^{*})&=2[ \Lambda_{K}(t^{*})+\Sigma_{K}(t^{*})]-2[\Lambda_{K}(t^{*})+\Sigma_{K}(t^{*})] \Lambda_{K}^{-1}(t^{*})[\Lambda_{K}(t^{*})+\Sigma_{K}(t^{*})]\\ &=-2\Sigma_{K}(t^{*})[I_{k}+\Lambda_{K}^{-1}(t^{*})\Sigma_{K}(t^{* })].\end{split} \tag{8.7}\]
Therefore,
\[I(u,K)=(-1)^{k}\sqrt{\frac{\det(\Sigma_{K}(t^{*}))}{\det(\Lambda_{K}(t^{*})+ \Sigma_{K}(t^{*}))}}\Psi\left(\frac{u}{\sigma_{T}}\right)(1+o(1)),\]
where \(\Sigma_{K}(t^{*})\prec 0\) by Proposition 2.1 and \(\Lambda_{K}(t^{*})+\Sigma_{K}(t^{*})=\nabla^{2}\nu_{|K}(t^{*})/2\prec 0\) by assumption. Plugging this into (8.3) yields the desired result.
Now we apply Theorem 8.3 to the 1D case when \(T=[a,b]\). If \(t^{*}=a\) or \(t^{*}=b\), then it is a direct application of the first line in (8.2). If \(t^{*}\in(a,b)\), then it follows from (8.2) that
\[\mathbb{P}\left\{\sup_{t\in[a,b]}X(t)\geq u\right\}=\sqrt{\frac{\mathbb{E}\{X (t^{*})X^{\prime\prime}(t^{*})\}}{\operatorname{Var}(X^{\prime}(t^{*}))+ \mathbb{E}\{X(t^{*})X^{\prime\prime}(t^{*})\}}}\Psi\left(\frac{u}{\sigma_{T}} \right)(1+o(1)).\]
### Gaussian fields not satisfying the boundary condition (3.3)
We consider here the other case when \(\nu_{i}(t^{*})\neq 0\) for some \(i\notin\tau(K)\). For a symmetric matrix \(B=(B_{ij})_{1\leq i,j\leq N}\), we call \((B_{ij})_{i,j\in\mathcal{I}}\) the matrix \(B\) with indices restricted on \(\mathcal{I}\).
**Theorem 8.4**.: _Let \(\{X(t),\,t\in T\}\) be a centered Gaussian random field satisfying_ (**H**1) _and_ (**H**2)_. Suppose \(\nu\) attains its maximum \(\sigma_{T}^{2}\) only at \(t^{*}\in K\in\partial_{k}T\) such that \(\mathcal{I}(t^{*})\setminus\tau(K)\) contains \(m\geq 1\) indices and \((\nu_{ii^{\prime}}(t^{*}))_{i,i^{\prime}\in\mathcal{I}(t^{*})}\prec 0\). Then, as \(u\to\infty\),_
\[\begin{split}&\mathbb{P}\left\{\sup_{t\in T}X(t)\geq u\right\}\\ &=\sum_{J}\sqrt{\frac{\det(\Sigma_{J}(t^{*}))}{\det(\Lambda_{J}(t^ {*})+\Sigma_{J}(t^{*}))}}\mathbb{P}\{(Z_{J_{1}^{\prime}},\ldots,Z_{J_{j-k}^{ \prime}})\in E^{\prime}(J)\}\\ &\quad\times\mathbb{P}\big{\{}(X_{J_{1}}(t^{*}),\ldots,X_{J_{k+m- j}}(t^{*}))\in E(J)\big{|}\nabla X_{|J}(t^{*})=0\big{\}}\Psi\left(\frac{u}{ \sigma_{T}}\right)(1+o(1)),\end{split} \tag{8.8}\]
_where the sum is taken over all faces \(J\) such that \(t^{*}\in\bar{J}\) and \(\tau(J)\subset\mathcal{I}(t^{*})\), \(j=\dim(J)\),_
\[\begin{split}&(J_{1},\ldots,J_{k+m-j})=\mathcal{I}(t^{*}) \setminus\tau(J),\quad(J_{1}^{\prime},\ldots,J_{j-k}^{\prime})=\tau(J) \setminus\tau(K),\\ & E(J)=\{(y_{J_{1}},\ldots,y_{J_{k+m-j}})\in\mathbb{R}^{k+m-j}: \varepsilon^{*}_{J_{\ell}}(J)y_{J_{\ell}}\geq 0,\,\forall\ell=1,\ldots,k+m-j\},\\ & E^{\prime}(J)=\{(y_{J_{1}^{\prime}},\ldots,y_{J_{j-k}^{\prime}}) \in\mathbb{R}^{j-k}:\varepsilon^{*}_{J_{\ell}^{\prime}}(K)y_{J_{\ell}^{\prime }}\geq 0,\,\forall\ell=1,\ldots,j-k\},\end{split}\]
\(\varepsilon^{*}_{J_{\ell}}(J)\) _and \(\varepsilon^{*}_{J_{\ell}^{\prime}}(K)\) are the \(\varepsilon^{*}\) numbers for faces \(J\) and \(K\) respectively, \((Z_{J_{1}^{\prime}},\ldots,Z_{J_{j-k}^{\prime}})\) is a centered Gaussian vector having covariance matrix \(\Sigma(t^{*})+\Sigma(t^{*})\Lambda^{-1}(t^{*})\Sigma(t^{*})\) with indices restricted on \(\tau(J)\setminus\tau(K)\), and \(\Lambda_{J}(t^{*})\) and \(\Sigma_{J}(t^{*})\) are defined in (2.3). In particular, for \(k=0\), the term inside the sum in (8.8) with \(J=K=\{t^{*}\}\) is given by_
\[\mathbb{P}\{(X_{J_{1}}(t^{*}),\ldots,X_{J_{m}}(t^{*}))\in E(J)\}\Psi\left( \frac{u}{\sigma_{T}}\right).\]
Proof.: We first prove the case when \(k\geq 1\). By Theorem 3.3, we have
\[\mathbb{P}\left\{\sup_{t\in T}X(t)\geq u\right\}=\sum_{J}(-1)^{j}I(u,J)+o\left( \exp\left\{-\frac{u^{2}}{2\sigma_{T}^{2}}-\alpha u^{2}\right\}\right), \tag{8.9}\]
where \(j=\dim(J)\), the sum is taken over all faces \(J\) such that \(t^{*}\in\bar{J}\) and \(\tau(J)\subset\mathcal{I}(t^{*})\), and
\[I(u,J) =\int_{J}\mathbb{E}\big{\{}\mathrm{det}\nabla^{2}X_{|J}(t)\mathbbm{1}_{\{X(t)\geq u\}}\mathbbm{1}_{\{\varepsilon_{\ell}^{*}X_{\ell}(t)\geq 0,\,\forall\ell\in\mathcal{I}(t^{*})\setminus\tau(J)\}}\big{|}\nabla X_{|J}(t)=0\big{\}}p_{\nabla X_{|J}(t)}(0)dt\] \[=\int_{u}^{\infty}\int_{J}\frac{(2\pi)^{-(j+1)/2}}{\sqrt{\tilde{\nu}_{|J}(t)\mathrm{det}\left(\Lambda_{J}(t)\right)}}\mathbb{E}\big{\{}\mathrm{det}\nabla^{2}X_{|J}(t)\mathbbm{1}_{\{\varepsilon_{\ell}^{*}X_{\ell}(t)\geq 0,\,\forall\ell\in\mathcal{I}(t^{*})\setminus\tau(J)\}}\big{|}X(t)=x,\] \[\nabla X_{|J}(t)=0\big{\}}e^{-\frac{x^{2}}{2\tilde{\nu}_{|J}(t)}}dtdx.\]
Applying the Laplace method in Lemma 8.2 with
\[g(t) =\frac{1}{\sqrt{\tilde{\nu}_{|J}(t)\mathrm{det}\left(\Lambda_{J}( t)\right)}}\mathbb{E}\big{\{}\mathrm{det}\nabla^{2}X_{|J}(t)\mathbbm{1}_{\{ \varepsilon_{\ell}^{*}X_{\ell}(t)\geq 0,\,\forall\ell\in\mathcal{I}(t^{*}) \setminus\tau(J)\}}\big{|}X(t)=x,\nabla X_{|J}(t)=0\big{\}},\] \[h(t) =\frac{1}{2\tilde{\nu}_{|J}(t)},\quad u=x^{2},\]
we obtain
\[I(u,J)=\frac{(2\sigma_{T}^{4})^{j/2}\mathbb{P}\{(Z_{J_{1}^{\prime}},\ldots,Z_{ J_{j-k}^{\prime}})\in E^{\prime}(J)\}}{\sqrt{2\pi\sigma_{T}^{2}\mathrm{det} \left(\Lambda_{J}(t^{*})\right)}\sqrt{|\mathrm{det}\nabla^{2}\tilde{\nu}_{|J} (t^{*})|}}I(u)(1+o(1)),\]
where \((Z_{J_{1}^{\prime}},\ldots,Z_{J_{j-k}^{\prime}})\) is a centered \((j-k)\)-dimensional Gaussian vector having covariance matrix \(\nabla^{2}h(t^{*})\) with indices restricted on \(\tau(J)\setminus\tau(K)\), and
\[I(u) =\int_{u}^{\infty}\mathbb{E}\big{\{}\mathrm{det}\nabla^{2}X_{|J}( t^{*})\mathbbm{1}_{\{\varepsilon_{\ell}^{*}X_{\ell}(t^{*})\geq 0,\,\forall\ell\in \mathcal{I}(t^{*})\setminus\tau(J)\}}\big{|}X(t^{*})=x,\nabla X_{|J}(t^{*})=0 \big{\}}x^{-j}e^{-\frac{x^{2}}{2\sigma_{T}^{2}}}\,dx\] \[=\mathrm{det}(\Sigma_{J}(t^{*}))\int_{u}^{\infty}\int_{E(J)} \mathbb{E}\big{\{}\mathrm{det}(Q\nabla^{2}X_{|J}(t^{*})Q)\big{|}X(t^{*})=x, \nabla X_{|J}(t^{*})=0,X_{J_{1}}(t^{*})=y_{J_{1}},\] \[\ldots,X_{J_{k+m-j}}(t^{*})=y_{J_{k+m-j}}\big{\}}p(y_{J_{1}}, \ldots,y_{J_{k+m-j}}|x,0)x^{-j}e^{-\frac{x^{2}}{2\sigma_{T}^{2}}}dy_{J_{1}} \cdots dy_{J_{k+m-j}}dx. \tag{8.10}\]
Here \(p(y_{J_{1}},\ldots,y_{J_{k+m-j}}|x,0)\) is the pdf of \((X_{J_{1}}(t^{*}),\ldots,X_{J_{k+m-j}}(t^{*})|X(t^{*})=x,\nabla X_{|J}(t^{*})=0)\), and \(Q\) is a \(j\times j\) positive definite matrix such that \(Q(-\Sigma_{J}(t^{*}))Q=I_{j}\). Then, similarly to the arguments in the proof of Theorem 8.3, one can write the last expectation in (8.10) as
\[\mathbb{E}\{\mathrm{det}(\Delta(t^{*},y_{J_{1}},\ldots,y_{J_{k+m-j}})-(x/\sigma _{T}^{2})I_{k})\},\]
where \(\Delta(t^{*},y_{J_{1}},\ldots,y_{J_{k+m-j}})\) is a centered Gaussian random matrix with covariance independent of \(x\), and hence the highest-order term in \(x\) is \((-1)^{j}x^{j}/\sigma_{T}^{2j}\). Noting that \(\nu_{i}(t^{*})=0\) for all \(i\in\mathcal{I}(t^{*})\) and following similar arguments to those in the proof of Theorem 8.3, we obtain
\[I(u,J) =(-1)^{j}\sqrt{\frac{\det(\Sigma_{J}(t^{*}))}{\det(\Lambda_{J}(t^{* })+\Sigma_{J}(t^{*}))}}\mathbb{P}\{(Z_{J^{\prime}_{1}},\ldots,Z_{J^{\prime}_{j- k}})\in E^{\prime}(J)\}\] \[\quad\times\mathbb{P}\{(X_{J_{1}}(t^{*}),\ldots,X_{J_{k+m-j}}(t^{ *}))\in E(J)|\nabla X_{|J}(t^{*})=0\}\Psi\left(\frac{u}{\sigma_{T}}\right)(1+o(1 )),\]
which yields the desired result together with (8.9). In particular, by (8.7), one can treat \((Z_{J^{\prime}_{1}},\ldots,Z_{J^{\prime}_{j-k}})\) as having covariance \(\Sigma(t^{*})+\Sigma(t^{*})\Lambda^{-1}(t^{*})\Sigma(t^{*})\) with indices restricted on \(\tau(J)\setminus\tau(K)\) without changing the probability that it falls in \(E^{\prime}(J)\). Lastly, the case when \(k=0\) can be shown similarly.
Now we apply Theorem 8.4 to the 1D case when \(T=[a,b]\). Without loss of generality, assume \(t^{*}=b\) and \(\nu^{\prime}(t^{*})=0\). Then it follows from Theorem 8.4 that
\[\mathbb{P}\left\{\sup_{t\in[a,b]}X(t)\geq u\right\}\] \[=\left(\mathbb{P}\{X^{\prime}(t^{*})>0\}+\sqrt{\frac{\mathbb{E}\{ X(t^{*})X^{\prime\prime}(t^{*})\}}{\text{Var}(X^{\prime}(t^{*}))+\mathbb{E}\{X(t^{*})X^ {\prime\prime}(t^{*})\}}}\mathbb{P}\{Z>0\}\right)\Psi\left(\frac{u}{\sigma_{T }}\right)(1+o(1))\] \[=\frac{1}{2}\left(1+\sqrt{\frac{\mathbb{E}\{X(t^{*})X^{\prime \prime}(t^{*})\}}{\text{Var}(X^{\prime}(t^{*}))+\mathbb{E}\{X(t^{*})X^{\prime \prime}(t^{*})\}}}\right)\Psi\left(\frac{u}{\sigma_{T}}\right)(1+o(1)),\]
where \(Z\) is a centered Gaussian variable.
Denote \(\mathbb{R}^{n}_{+}=(0,\infty)^{n}\). To simplify the statement of Theorem 8.4, we present below another version with less notation on faces.
**Corollary 8.5**.: _Let \(\{X(t),\,t\in T\}\) be a centered Gaussian random field satisfying \((\mathbf{H}1)\), \((\mathbf{H}2)\) and \((\mathbf{H}3)\). Suppose \(\nu\) attains its maximum \(\sigma_{T}^{2}\) only at \(t^{*}\in K\in\partial_{k}T\) with \(\tau(K)=\{1,\ldots,k\}\) such that \(\mathcal{I}(t^{*})=\{1,\ldots,k,k+1,\ldots,k+m\}\) with \(m\geq 1\) and \((\nu_{ii^{\prime}}(t^{*}))_{1\leq i,i^{\prime}\leq k+m}\prec 0\). Then, as \(u\to\infty\),_
\[\mathbb{P}\left\{\sup_{t\in T}X(t)\geq u\right\}\] \[=\sum_{j=k}^{k+m}\sum_{J\in\partial_{j}T:\,t^{*}\in J}\sqrt{\frac {\det(\Sigma_{J}(t^{*}))}{\det(\Lambda_{J}(t^{*})+\Sigma_{J}(t^{*}))}} \mathbb{P}\{(Z_{1},\ldots,Z_{j-k})\in\mathbb{R}^{j-k}_{+}\} \tag{8.11}\] \[\quad\times\mathbb{P}\{(X_{j+1}(t^{*}),\ldots,X_{k+m}(t^{*}))\in \mathbb{R}^{k+m-j}_{+}\big{|}\nabla X_{|J}(t^{*})=0\}\Psi\left(\frac{u}{\sigma _{T}}\right)(1+o(1)),\]
_where \((Z_{1},\ldots,Z_{j-k})\) is a centered Gaussian vector having covariance \(\Sigma(t^{*})+\Sigma(t^{*})\Lambda^{-1}(t^{*})\Sigma(t^{*})\) with indices restricted on \(\{k+1,\ldots,j\}\), and \(\Lambda_{J}(t^{*})\) and \(\Sigma_{J}(t^{*})\) are defined in (2.3). In particular, for \(k=0\), the term inside the sum in (8.11) with \(J=K=\{t^{*}\}\) is_
\[\mathbb{P}\{(X_{1}(t^{*}),\ldots,X_{m}(t^{*}))\in\mathbb{R}^{m}_{+}\}\Psi\left( \frac{u}{\sigma_{T}}\right).\]
## 9 Examples
Throughout this section, we consider a centered Gaussian random field \(\{X(t),\,t\in T\}\) satisfying (**H**1), (**H**2) and (**H**3), where \(T=[a_{1},b_{1}]\times[a_{2},b_{2}]\subset\mathbb{R}^{2}\).
### Examples with a unique maximum point of the variance
Suppose \(\nu(t_{1},t_{2})\) attains the maximum \(\sigma_{T}^{2}\) only at a single point \(t^{*}=(t_{1}^{*},t_{2}^{*})\); and the assumptions in Theorems 8.3 or 8.4 are satisfied.
**Case 1: \(t^{*}=(b_{1},b_{2})\) and \(\nu_{1}(t^{*})\nu_{2}(t^{*})\neq 0\).** It follows directly from Theorem 8.3 that
\[\mathbb{P}\left\{\sup_{t\in T}X(t)\geq u\right\}=\Psi\left(\frac{u}{\sigma_{T} }\right)+o\left(\exp\left\{-\frac{u^{2}}{2\sigma_{T}^{2}}-\alpha u^{2}\right\} \right).\]
**Case 2: \(t^{*}=(b_{1},b_{2})\), \(\nu_{1}(t^{*})=0\) and \(\nu_{2}(t^{*})\neq 0\).** It follows from Corollary 8.5 that
\[\mathbb{P}\left\{\sup_{t\in T}X(t)\geq u\right\}\] \[=\left(\mathbb{P}\{X_{1}(t^{*})>0\}+\sqrt{\frac{\mathbb{E}\{X(t^ {*})X_{11}(t^{*})\}}{\text{Var}(X_{1}(t^{*}))+\mathbb{E}\{X(t^{*})X_{11}(t^{*} )\}}}\mathbb{P}\{Z>0\}\right)\Psi\left(\frac{u}{\sigma_{T}}\right)(1+o(1))\] \[=\frac{1}{2}\left(1+\sqrt{\frac{\mathbb{E}\{X(t^{*})X_{11}(t^{*} )\}}{\text{Var}(X_{1}(t^{*}))+\mathbb{E}\{X(t^{*})X_{11}(t^{*})\}}}\right) \Psi\left(\frac{u}{\sigma_{T}}\right)(1+o(1)),\]
where \(Z\) is a centered Gaussian variable.
**Case 3: \(t^{*}=(b_{1},b_{2})\) and \(\nu_{1}(t^{*})=\nu_{2}(t^{*})=0\).** Applying Corollary 8.5 and noting the calculations in Case 2 above, we obtain
\[\mathbb{P}\left\{\sup_{t\in T}X(t)\geq u\right\}\] \[=\left(\mathbb{P}\{X_{1}(t^{*})>0,X_{2}(t^{*})>0\}+\frac{1}{2}\sqrt{\frac{\mathbb{E}\{X(t^{*})X_{11}(t^{*})\}}{\text{Var}(X_{1}(t^{*}))+\mathbb{E}\{X(t^{*})X_{11}(t^{*})\}}}\right.\] \[\qquad+\frac{1}{2}\sqrt{\frac{\mathbb{E}\{X(t^{*})X_{22}(t^{*})\}}{\text{Var}(X_{2}(t^{*}))+\mathbb{E}\{X(t^{*})X_{22}(t^{*})\}}}\] \[\qquad\left.+\mathbb{P}\{Z_{1}>0,Z_{2}>0\}\sqrt{\frac{\det(\Sigma(t^{*}))}{\det(\Lambda(t^{*})+\Sigma(t^{*}))}}\right)\Psi\left(\frac{u}{\sigma_{T}}\right)(1+o(1)),\]
where \((Z_{1},Z_{2})\) is a centered Gaussian vector with covariance \(\Sigma(t^{*})+\Sigma(t^{*})\Lambda^{-1}(t^{*})\Sigma(t^{*})\).
**Case 4: \(t^{*}=(t_{1}^{*},b_{2})\), where \(t_{1}^{*}\in(a_{1},b_{1})\) and \(\nu_{2}(t^{*})\neq 0\).** It follows directly from Theorem 8.3 that
\[\mathbb{P}\left\{\sup_{t\in T}X(t)\geq u\right\}=\sqrt{\frac{\mathbb{E}\{X(t^{*})X_{11}(t^{*})\}}{\text{Var}(X_{1}(t^{*}))+\mathbb{E}\{X(t^{*})X_{11}(t^{*})\}}}\Psi\left(\frac{u}{\sigma_{T}}\right)(1+o(1)).\]
**Case 5: \(t^{*}=(t_{1}^{*},b_{2})\), where \(t_{1}^{*}\in(a_{1},b_{1})\) and \(\nu_{2}(t^{*})=0\).** Applying Corollary 8.5 and noting the calculations in Case 2 above, we obtain
\[\mathbb{P}\left\{\sup_{t\in T}X(t)\geq u\right\}\] \[=\frac{1}{2}\Bigg{(}\sqrt{\frac{\mathbb{E}\{X(t^{*})X_{11}(t^{*}) \}}{\text{Var}(X_{1}(t^{*}))+\mathbb{E}\{X(t^{*})X_{11}(t^{*})\}}}+\sqrt{\frac {\det(\Sigma(t^{*}))}{\det(\Lambda(t^{*})+\Sigma(t^{*}))}}\Bigg{)}\Psi\left( \frac{u}{\sigma_{T}}\right)(1+o(1)).\]
**Case 6: \(a_{1}<t_{1}^{*}<b_{1}\) and \(a_{2}<t_{2}^{*}<b_{2}\).** It follows directly from Theorem 8.3 that
\[\mathbb{P}\left\{\sup_{t\in T}X(t)\geq u\right\}=\sqrt{\frac{\det(\Sigma(t^{*}))}{\det(\Lambda(t^{*})+\Sigma(t^{*}))}}\Psi\left(\frac{u}{\sigma_{T}}\right)(1+o(1)).\]
### Examples with the maximum of the variance achieved on a line
Consider the Gaussian random field \(X(t)\) defined as:
\[X(t)=\xi_{1}\cos t_{1}+\xi_{1}^{\prime}\sin t_{1}+t_{2}(\xi_{2}\cos t_{2}+\xi _{2}^{\prime}\sin t_{2}),\]
where \(t=(t_{1},t_{2})\in T=[a_{1},b_{1}]\times[a_{2},b_{2}]\subset(0,2\pi)^{2}\), and \(\xi_{1},\xi_{1}^{\prime},\xi_{2},\xi_{2}^{\prime}\) are independent standard Gaussian random variables. This is a Gaussian random field on \(\mathbb{R}^{2}\) generated from the cosine field, with an additional factor of \(t_{2}\) along the vertical direction. The constraint on the parameter space within \((0,2\pi)^{2}\) is imposed to prevent degeneracy in derivatives. For this field, we have \(\nu(t)=1+t_{2}^{2}\), which reaches the maximum \(\sigma_{T}^{2}=1+b_{2}^{2}\) on the entire segment \(L:=\{(t_{1},b_{2}):a_{1}\leq t_{1}\leq b_{1}\}\). Furthermore,
\[\nu_{1}(t)|_{t\in L}=0,\quad\nu_{2}(t)|_{t\in L}=2b_{2}>0,\quad\forall t\in L.\]
By employing similar reasoning in the proofs of Theorems 3.1 and 3.2, we see that, in the EEC approximation \(\mathbb{E}\{\chi(A_{u})\}\), all integrals (derived from the Kac-Rice formula) over faces not contained within \(\bar{L}\) are super-exponentially small. Thus, there exists \(\alpha>0\) such that as \(u\to\infty\),
\[\begin{split}\mathbb{P}\left\{\sup_{t\in T}X(t)\geq u\right\}&=\mathbb{P}\{X(a_{1},b_{2})\geq u,X_{1}(a_{1},b_{2})<0\}+\mathbb{P}\{X(b_{1},b_{2})\geq u,X_{1}(b_{1},b_{2})>0\}\\ &\quad+I(u)+o\left(\exp\left\{-\frac{u^{2}}{2\sigma_{T}^{2}}-\alpha u^{2}\right\}\right)\\ &=\Psi\left(\frac{u}{\sqrt{1+b_{2}^{2}}}\right)+I(u)+o\left(\exp\left\{-\frac{u^{2}}{2\sigma_{T}^{2}}-\alpha u^{2}\right\}\right),\end{split} \tag{9.1}\]
where
\[I(u)=-\int_{a_{1}}^{b_{1}}\mathbb{E}\big{\{}X_{11}(t_{1},b_{2})\mathbb{1}_{\{ X(t_{1},b_{2})\geq u\}}\big{|}X_{1}(t_{1},b_{2})=0\big{\}}p_{X_{1}(t_{1},b_{2})}(0)dt_{1}.\]
Since \(X_{1}(t_{1},b_{2})=-\xi_{1}\sin t_{1}+\xi_{1}^{\prime}\cos t_{1}\) and \(X_{11}(t_{1},b_{2})=-\xi_{1}\cos t_{1}-\xi_{1}^{\prime}\sin t_{1}\), one has
\[\operatorname{Cov}(X(t_{1},b_{2}),X_{1}(t_{1},b_{2}),X_{11}(t_{1},b_{2}))= \begin{pmatrix}1+b_{2}^{2}&0&-1\\ 0&1&0\\ -1&0&1\end{pmatrix},\]
which does not depend on \(t_{1}\). Particularly, \(X_{1}(t_{1},b_{2})\) is independent of both \(X(t_{1},b_{2})\) and \(X_{11}(t_{1},b_{2})\). Thus
\[I(u) =-\frac{b_{1}-a_{1}}{\sqrt{2\pi}}\mathbb{E}\big{\{}X_{11}(t_{1},b _{2})\mathbbm{1}_{\{X(t_{1},b_{2})\geq u\}}\big{\}}\] \[=-\frac{b_{1}-a_{1}}{\sqrt{2\pi}}\int_{u}^{\infty}\mathbb{E}\{X_{ 11}(t_{1},b_{2})|X(t_{1},b_{2})=x\}\phi\left(\frac{x}{\sqrt{1+b_{2}^{2}}} \right)dx\] \[=\frac{b_{1}-a_{1}}{\sqrt{2\pi}}\int_{u}^{\infty}\frac{x}{1+b_{2} ^{2}}\phi\left(\frac{x}{\sqrt{1+b_{2}^{2}}}\right)dx\] \[=\frac{b_{1}-a_{1}}{\sqrt{2\pi}}\phi\left(\frac{u}{\sqrt{1+b_{2}^ {2}}}\right).\]
Substituting this expression into (9.1), we arrive at the following refined approximation:
\[\mathbb{P}\left\{\sup_{t\in T}X(t)\geq u\right\}=\Psi\left(\frac{u}{\sqrt{1+b _{2}^{2}}}\right)+\frac{b_{1}-a_{1}}{\sqrt{2\pi}}\phi\left(\frac{u}{\sqrt{1+b _{2}^{2}}}\right)+o\left(\exp\left\{-\frac{u^{2}}{2\sigma_{T}^{2}}-\alpha u^ {2}\right\}\right),\]
which has a super-exponentially small error.
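As a numerical sanity check of the refined approximation above (an illustrative sketch that is not part of the original text; the rectangle \([1,2]\times[1,2]\) and the level \(u=6\) are hypothetical choices), the field can be simulated directly from the four standard Gaussian variables. Since \(X\) is additively separable in \(t_{1}\) and \(t_{2}\), the supremum over \(T\) splits into two one-dimensional maxima, which keeps the Monte Carlo cheap. For moderate \(u\) the empirical exceedance probability and the approximation should already be of the same order, and the agreement improves as \(u\) grows.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Hypothetical rectangle T = [a1, b1] x [a2, b2] inside (0, 2*pi)^2 and level u.
a1, b1, a2, b2 = 1.0, 2.0, 1.0, 2.0
u = 6.0
sigma_T = np.sqrt(1.0 + b2**2)   # sqrt of the maximal variance, attained on t2 = b2

# Refined approximation derived above: Psi(u/sigma_T) + (b1 - a1)/sqrt(2*pi) * phi(u/sigma_T).
approx = norm.sf(u / sigma_T) + (b1 - a1) / np.sqrt(2.0 * np.pi) * norm.pdf(u / sigma_T)

# Monte Carlo for X(t) = xi1*cos(t1) + xi1'*sin(t1) + t2*(xi2*cos(t2) + xi2'*sin(t2)).
m = 300
t1 = np.linspace(a1, b1, m)
t2 = np.linspace(a2, b2, m)
B1 = np.vstack([np.cos(t1), np.sin(t1)])            # basis of the t1 part, shape (2, m)
B2 = np.vstack([t2 * np.cos(t2), t2 * np.sin(t2)])  # basis of the t2 part, shape (2, m)

n_total, batch, hits = 400_000, 20_000, 0
for _ in range(n_total // batch):
    xi1 = rng.standard_normal((batch, 2))
    xi2 = rng.standard_normal((batch, 2))
    # grid maxima slightly underestimate the true suprema (discretisation bias)
    sup = (xi1 @ B1).max(axis=1) + (xi2 @ B2).max(axis=1)
    hits += int(np.sum(sup >= u))

print(f"Monte Carlo    P(sup X >= {u}) ~ {hits / n_total:.5f}")
print(f"Approximation                ~ {approx:.5f}")
```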
## Acknowledgments
The author acknowledges support from NSF Grants DMS-1902432 and DMS-2220523, as well as the Simons Foundation Collaboration Grant 854127.
|
2309.11652 | Selection of powerful radio galaxies with machine learning | We developed and trained a pipeline of three machine learning (ML) models
that can predict which sources are more likely to be an AGN and to be detected
in specific radio surveys. Also, it can estimate redshift values for predicted
radio-detectable AGNs. These models, which combine predictions from tree-based
and gradient-boosting algorithms, have been trained with multi-wavelength data
from near-infrared-selected sources in the Hobby-Eberly Telescope Dark Energy
Experiment (HETDEX) Spring field. Training, testing, calibration, and
validation were carried out in the HETDEX field. Further validation was
performed on near-infrared-selected sources in the Stripe 82 field. In the
HETDEX validation subset, our pipeline recovers 96% of the initially labelled
AGNs and, from AGN candidates, we recover 50% of previously detected radio
sources. For Stripe 82, these numbers are 94% and 55%. Compared to random
selection, these rates are two and four times better for HETDEX, and 1.2 and 12
times better for Stripe 82. The pipeline can also recover the redshift
distribution of these sources with $\sigma_{\mathrm{NMAD}}$ = 0.07 for HETDEX
($\sigma_{\mathrm{NMAD}}$ = 0.09 for Stripe 82) and an outlier fraction of 19%
(25% for Stripe 82), compatible with previous results based on broad-band
photometry. Feature importance analysis stresses the relevance of near- and
mid-infrared colours to select AGNs and identify their radio and redshift
nature. Combining different algorithms in ML models shows an improvement in the
prediction power of our pipeline over a random selection of sources. Tree-based
ML models (in contrast to deep learning techniques) facilitate the analysis of
the impact that features have on the predictions. This prediction can give
insight into the potential physical interplay between the properties of radio
AGNs (e.g. mass of black hole and accretion rate). | R. Carvajal, I. Matute, J. Afonso, R. P. Norris, K. J. Luken, P. Sánchez-Sáez, P. A. C. Cunha, A. Humphrey, H. Messias, S. Amarantidis, D. Barbosa, H. A. Cruz, H. Miranda, A. Paulino-Afonso, C. Pappalardo | 2023-09-20T21:33:17Z | http://arxiv.org/abs/2309.11652v2 | # Selection of powerful radio galaxies with machine learning
###### Abstract
Context:The study of active galactic nuclei (AGNs) is fundamental to discern the formation and growth of supermassive black holes (SMBHs) and their connection with star formation and galaxy evolution. Due to the significant kinetic and radiative energy emitted by powerful AGNs, they are prime candidates to observe the interplay between SMBH and stellar growth in galaxies.
Aims:We aim to develop a method to predict the AGN nature of a source, its radio detectability, and redshift purely based on photometry. The use of such a method will increase the number of radio AGNs, allowing us to improve our knowledge of accretion power into an SMBH, the origin and triggers of radio emission, and its impact on galaxy evolution.
Methods:We developed and trained a pipeline of three machine learning (ML) models that can predict which sources are more likely to be an AGN and to be detected in specific radio surveys. Also, it can estimate redshift values for predicted radio-detectable AGNs. These models, which combine predictions from tree-based and gradient-boosting algorithms, have been trained with multi-wavelength data from near-infrared-selected sources in the Hobby-Eberly Telescope Dark Energy Experiment (HETDEX) Spring field. Training, testing, calibration, and validation were carried out in the HETDEX field. Further validation was performed on near-infrared-selected sources in the Stripe 82 field.
Results:In the HETDEX validation subset, our pipeline recovers 96% of the initially labelled AGNs and, from AGN candidates, we recover 50% of previously detected radio sources. For Stripe 82, these numbers are 94% and 55%. Compared to random selection, these rates are two and four times better for HETDEX, and 1.2 and 12 times better for Stripe 82. The pipeline can also recover the redshift distribution of these sources with \(\sigma_{\rm{NMAD}}\)=0.07 for HETDEX (\(\sigma_{\rm{NMAD}}\)=0.09 for Stripe 82) and an outlier fraction of 19% (25% for Stripe 82), compatible with previous results based on broad-band photometry. Feature importance analysis stresses the relevance of near- and mid-infrared colours to select AGNs and identify their radio and redshift nature.
Conclusions:Combining different algorithms in ML models shows an improvement in the prediction power of our pipeline over a random selection of sources. Tree-based ML models (in contrast to deep learning techniques) facilitate the analysis of the impact that features have on the predictions. This prediction can give insight into the potential physical interplay between the properties of radio AGNs (e.g. mass of black hole and accretion rate).
## 1 Introduction
Active galactic nuclei (AGNs) are instrumental in determining the nature, growth, and evolution of supermassive black holes (SMBHs). Their strong emission allows us to study the close environment within the hosting galaxies and, at a larger scale, the intergalactic medium (e.g. Padovani et al., 2017; Bianchi et al., 2022). Feedback due to AGN energetics, which most prominently manifest in the form of jetted radio emission, might play a fundamental role in regulating stellar growth and the overall evolution of hosts and their environments (Alatalo et al., 2015; Villar-Martin et al., 2017; Hardcastle and Croston, 2020).
Although radio emission can trace high star formation (SF) in galaxies, above certain luminosities (e.g. \(\log\)\(L_{\rm{1.4GHz}}\)\(>\)25 W Hz\({}^{-1}\), Jarvis et al., 2021), it is a prime tracer of the powerful jet emission triggered by the SMBH in AGNs (radio galaxies, Heckman and Best, 2014). Traditionally, these powerful radio galaxies (RGs) were used to pinpoint AGN activity, but they have been superseded in the last decades by optical,
near-infrared (NIR), and X-ray surveys. In fact, RGs in the high redshift Universe (\(z>2\)) have been identified and studied mostly through the follow-up of AGNs selected at shorter wavelengths (optical, NIR, millimetre, and X-rays, e.g. McGreer et al., 2006; Pensabene et al., 2020; Delhaize et al., 2021). The landscape is quickly changing and the advent of new radio instruments and surveys has allowed the detection of larger numbers of RGs (e.g. Williams et al., 2018; Capetti et al., 2020). Some of these surveys are: the National Radio Astronomy Observatory (NRAO) Very Large Array (VLA) Sky Survey (NVSS; Condon et al., 1998), the Faint Images of the Radio Sky at Twenty-Centimetres (FIRST; Helfand et al., 2015), the Evolutionary Map of the Universe (EMU; Norris et al., 2011), the Very Large Array Sky Survey (VLASS; Gordon et al., 2020), and the Low Frequency Array (LOFAR) Two-metre Sky Survey (LoTSS; Shimwell et al., 2019).
One of the ultimate goals is to detect powerful RGs in the Epoch of Reionisation (EoR), which could be used to trace the neutral gas distribution during this critical phase of the Universe (e.g. Carilli et al., 2004; Jensen et al., 2013). Simulations have shown that as many as a few hundred RGs per deg\({}^{2}\) could be present in the EoR (Amarantidis et al., 2019; Bonaldi et al., 2019; Thomas et al., 2021) and detectable with present and future deep observations, for example the Square Kilometre Array (SKA), which is projected to have \(\mu\)Jy point-source sensitivity levels (SKA1-Mid is expected to reach close to \(2\,\mu\)Jy in 1-hour continuum observations at \(\nu\gtrsim 1\) GHz; Prandoni & Seymour, 2015; Braun et al., 2019). Most recent observational compilations (e.g. Inayoshi et al., 2020; Ross & Cross, 2020; Bosman, 2022; Fan et al., 2023) show that around 300 AGNs have been confirmed to exist at redshifts higher than \(z\)\(\sim\)6 over thousands of square degrees. This disagreement highlights the uncertainties present in simulations, mainly due to our lack of knowledge of the triggering mechanisms and duty cycle for jetted emission in AGNs (Afonso et al., 2015; Pierce et al., 2022).
The selection of AGN candidates has had success in the X-rays and radio wavebands as they dominate the emission above certain luminosities. Unfortunately, deep X-ray surveys are limited in area, and only of the order of 10% of AGNs have strong radio emission linked to jets (i.e. radio-loud sources) at any given time, with variations going from \(\sim 6\%\) up to \(\sim 30\%\) that correlate with optical and X-ray luminosities, as well as with redshift (e.g. Padovani, 1993; della Ceca et al., 1994; Jiang et al., 2007; Storchi-Bergmann & Schnorr-Muller, 2019; Gurkan et al., 2019; Macfarlane et al., 2021; Gloudemans et al., 2021, 2022; Best et al., 2023).1
Footnote 1: Depending on the dataset, a random selection of AGNs would lead to a rate of radio-detectable AGNs in the range \(6-30\%\). We call this random choice a ’no-skill’ selection.
The largest number of AGN candidates has been selected through the compilation of multi-wavelength spectral energy distributions (SED) for millions of sources (Hickox & Alexander, 2018; Pouliasis, 2020). Of particular relevance for AGNs are the mid-infrared (mid-IR) colours where _Spitzer_(Werner et al., 2004) and especially the Wide-field Infrared Survey Explorer (WISE; Wright et al., 2010) have opened a window for the detection of AGNs over the whole sky, including the elusive fraction of heavily obscured ones (e.g. Stern et al., 2012; Mateos et al., 2012; Jarrett et al., 2017; Assef et al., 2018; Barrows et al., 2021).
Currently, extensive spectroscopic follow-up measurements have allowed the confirmation of the estimated redshifts for more than 800 000 AGNs over large areas of the sky (Flesch, 2021). Spectroscopic surveys have also contributed to the detection of AGN activity through the analysis of line ratios, as is the case of the Baldwin-Phillips-Terlevich (BPT) diagram (Baldwin, Phillips, & Terlevich, 1981). However, their determination can take long integration times and require high-quality observations, rendering them ill-suited for most sources in large-sky catalogues. Photometric classification and redshifts (photo-\(z\)) are a viable option to understand the source nature and distribution across cosmic time (Baum, 1957; Salvato et al., 2019). Photometric redshift estimations have been obtained for galaxies (e.g. Hernan-Caballero et al., 2021) and AGNs (e.g. Ananna et al., 2017). Template-fitting photo-\(z\) estimations are computationally expensive and require high-performance computing facilities for large catalogues (\(\gtrsim 10^{7}\) elements, Gilda et al., 2021). At the expense of redshift precision, the use of drop-out techniques offers a more computationally efficient solution to generate and study high-redshift sources or candidates that, otherwise, would not have enough information to produce a precise redshift value (e.g. Bouwens et al., 2020; Carvajal et al., 2020; Shobhana et al., 2023).
Alternative statistical and computational methods can analyse a large number of elements and find relevant trends among their properties. One branch of these techniques is machine learning (ML; Samuel, 1959), which can, using previously modelled data, predict the behaviour that new data will have, that is, the values of their properties. In astronomy, ML has been used with much success in a wide range of subjects, such as redshift determination, morphological classification, emission prediction, anomaly detection, observations planning, and more (e.g. Ball & Brunner, 2010; Baron, 2019). Traditional ML models are, in general, only fed with measurements and not with physical assumptions (Desai & Strachan, 2021), and they do not need to check the consistency of the predictions or the results they provide. As a consequence, prediction times of traditional ML methods are typically less than those from physically based methods.
Despite the large number of applications it might have, one important criticism that ML has received is related to the lack of interpretability -or 'explainability', as it is called in ML jargon- of the derived models, trends, and correlations. Most ML models, after taking a series of measurements and properties as input, deliver a prediction of a different property. But they cannot provide coefficients or an analytical expression that might allow one to find an equation for future predictions (Goebel et al., 2018). An important counterexample of this fact is the use of symbolic regression (e.g. Cranmer et al., 2020; Villaescusa-Navarro et al., 2021; Cranmer, 2023). This implies that, for most ML models, it is not a simple task to understand which properties, and to what extent they help predict and interpret another attribute. This fact hinders our capability of understanding the results in physical terms.
Recent work has been done to overcome the lack of explainability in ML models. The most widely used assessment is done with feature importance (Casalicchio et al., 2019; Roscher et al., 2020), both global and local (Saarela & Jauhiainen, 2021). Game-theory-based analyses, such as the Shapley analysis (Shapley, 1953), have also been used to understand the importance of features in astrophysics (e.g. Machado Poletti Valle et al., 2021; Carvajal et al., 2021; Dey et al., 2022; Anabajagane et al., 2022; Alegre et al., 2022).
A further complication is that astronomical data can be very heterogeneous. Surveys and instruments gather data from many different areas in the sky with very different sensitivities and observational properties. This heterogeneity severely complicates most astronomical analyses, but in particular ML methods, as they are completely driven by data most of the time. This issue
can be alleviated using observations in large and homogeneous surveys. Currently, among others, VLA, LOFAR, and the Giant Metrewave Radio Telescope (GMRT) allow such measurements to be obtained. Next-generation observatories and surveys -such as SKA and the Vera C. Rubin Observatory- will also help in this regard, where observations will be carried out homogeneously over very large areas.
From a pure ML-based standpoint, several techniques used to lessen the effect of data heterogeneity have been developed (i.e. data cleansing and homogenisation). Some of them include discarding sources that add noise to the overall data distribution (Ilyas & Rekatsinas 2022). This can be extended to vetoing sources from specific areas in the sky (due to, for example, bad data reduction). Opposite to that, and when possible, previously mentioned techniques can be combined by increasing the survey area as a way to reduce possible biases. After selecting the data sample to be used for modelling, it is also possible to homogenise the measured ranges of observed properties. This procedure implies, for instance, that normalising or standardising measured values can help ML models extract trends and connections among features more easily (Singh & Singh 2020).
Future observatories and surveys will deliver immense datasets. One option to analyse such observations and confirm their radio-AGN nature is through visual inspection (e.g. Banfield et al. 2015). The use of such a technique over large areas can have a very high cost. An alternative is using already-available multi-wavelength data and template-fitting tools to determine the likelihood of an AGN of being detected in radio wavelengths (see, for instance, Pacifici et al. 2023). With the use of existing data, ML can help to speed this process up via the training of models that can detect counterparts in large radio surveys (see, for example, the efforts made to achieve this goal, Hopkins et al. 2015; Bonaldi et al. 2021).
Building upon the work presented by Carvajal et al. (2021), we aim to identify candidates of high-redshift radio-detectable AGNs that can be extracted from heterogeneous large-area surveys. We developed a series of ML models to predict, separately, the detection of AGNs, the detection of the radio signal from AGNs, and the redshift values of radio-detectable AGNs using non-radio photometric data. In this way, it might be possible to avoid the direct analysis of large numbers of radio detections. Furthermore, we tested the performance of these models without applying a large number of previous cleaning steps, which might reduce the size of the training sets considerably. The compiled catalogue of candidates can help to use data from future large-sky surveys more efficiently, as observational and analytical efforts can be focussed on the areas in which AGNs have been predicted to exist. We seek, therefore, to test the generalisation power of such models by applying them in a different area from the training field with data that are not necessarily of the same quality.
The structure of this article is as follows. In Sect. 2, we present the data and its preparation for ML training. The selection of models and the metrics used to assess their results are shown in Sect. 3. In Sect. 4, the results of model training and validation are provided as well as the predictions using the ML pipeline for radio AGN detections. We present the discussion of our results in Sect. 5. Finally, in Sect. 6, we summarise our work.
## 2 Data
A large area with deep and homogeneous quality radio observations is needed to train and validate our models and predictions for RGs with already existent observations. As training field we selected the area of the Hobby-Eberly Telescope Dark Energy Experiment Spring field (HETDEX; Hill et al. 2008) covered by the first data release of the LOFAR Two-metre Sky Survey (LoTSS-DR1; Shimwell et al. 2019). The LoTSS-DR1 survey covers \(424\deg^{2}\) in the HETDEX Spring field (hereafter, HETDEX field) with LOFAR (van Haarlem et al. 2013) \(150\,\mathrm{MHz}\) observations that have a median sensitivity of \(71\,\mu\mathrm{Jy}\)/beam and an angular resolution of \(6\,\arcsec\). HETDEX provides as well multi-wavelength homogeneous coverage as described below.
In order to test the performance of the models when applied to different areas of the sky, and with different coverages from radio surveys, we selected the Sloan Digital Sky Survey (SDSS, York et al. 2000) Stripe 82 field (S82, Annis et al. 2014; Jiang et al. 2014). For S82, we collected data from the same surveys as for the HETDEX field (see the following section) but with one important caveat: no LoTSS-DR1 data is available in the field and, thus, we gathered the radio information from the VLA SDSS Stripe 82 Survey (VLAS82; Hodge et al. 2011). VLAS82 covers an area of \(92\deg^{2}\) with a median rms noise of \(52\,\mu\mathrm{Jy}\)/beam at \(1.4\,\mathrm{GHz}\) and an angular resolution of \(1.8\,\arcsec\). We selected the S82 field (and, in particular, the area covered by VLAS82) given that it presents deep radio observations but taken with a different instrument than LOFAR. This difference allows us to test the suitability of our models and procedures in conditions that are different from the training circumstances.
### Data collection
The base survey from which all the studied sources have been drawn is the CatWISE2020 catalogue (CW; Marocco et al. 2021). It lists NIR-detected elements selected from WISE (Wright et al. 2010) and the Near-Earth Object Wide-field Infrared Survey Explorer Reactivation Mission (NEOWISE; Mainzer et al. 2011, 2014) over the entire sky at 3.4 and 4.6 \(\mu\)m (W1 and W2 bands, respectively). This catalogue includes sources detected at \(5\sigma\) in either of the used bands (i.e. W1\(\sim\)17.43 and W2\(\sim\)16.47 mag\({}_{\mathrm{Vega}}\) respectively). The HETDEX field contains \(15\,136\,878\) sources listed in CW. Conversely, in the S82 field, there are \(3\,590\,306\) of them.
Multi-wavelength counterparts for CW sources were found in other catalogues by applying a \(5\,\arcsec\) search criterion. These catalogues include the Panoramic Survey Telescope and Rapid Response System (Pan-STARRS DR1; Chambers et al. 2016; Flewelling et al. 2020, hereafter, PS1), the Two Micron All-Sky Survey (2MASS All-Sky; Skrutskie et al. 2006; Cutri et al. 2003a,b, hereafter, 2M), and AllWISE (AW; Cutri et al. 2013)2. The adopted search radius corresponds to the distance that has been used by Wright et al. (2010) to match radio sources to Pan-STARRS and WISE observations. Nevertheless, while the source density of the radio (LOFAR and VLA) and 2MASS catalogues implies a low statistical (\(<1\%\)) spurious counterpart association, this is not the case for PS1, where the source density is higher. For this reason, and to maintain a statistically low spurious association between CW and PS1, we limited our search radius to \(1.1\,\arcsec\).
Footnote 2: For the purposes of the analyses, and except when clearly stated otherwise, photometric measurements are converted to AB magnitudes.
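A minimal sketch of this positional cross-match, written with astropy, is given below; the file names and column names are placeholders rather than the actual ones of the catalogues used here.

```python
from astropy import units as u
from astropy.coordinates import SkyCoord
from astropy.table import Table

# Placeholder tables with coordinates in degrees (file and column names are assumptions).
catwise = Table.read("catwise_hetdex.fits")
ps1 = Table.read("ps1_hetdex.fits")

cw_coords = SkyCoord(ra=catwise["ra"] * u.deg, dec=catwise["dec"] * u.deg)
ps1_coords = SkyCoord(ra=ps1["raMean"] * u.deg, dec=ps1["decMean"] * u.deg)

# Nearest PS1 neighbour of every CatWISE source, kept only if closer than 1.1 arcsec
# (a 5 arcsec radius would be used instead for 2MASS, AllWISE, and the radio catalogues).
idx, d2d, _ = cw_coords.match_to_catalog_sky(ps1_coords)
good = d2d < 1.1 * u.arcsec

matched = catwise[good]
for col in ("gmag", "rmag", "imag", "zmag", "ymag"):   # placeholder photometric columns
    matched[col] = ps1[col][idx[good]]                  # copy PS1 photometry of the matches

print(f"{good.sum()} of {len(catwise)} CatWISE sources have a PS1 counterpart")
```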
For the purposes of this work, observations in LoTSS and VLAS82 are only used to determine whether a source is radio detected, or not. In particular, no check has been performed on whether a selected source is extended or not in any of the radio surveys. A single Boolean feature is created from the radio measurements (see Sect. 2.2) and no further analyses were performed regarding the detection levels that might be found.
Additionally, we discarded the measurement errors of all bands. Traditionally, ML algorithms cannot incorporate uncertainties in a straightforward way and, thus, we opted to avoid attempting to use them for training. One significant counterexample corresponds to Gaussian processes (GPs; Rasmussen & Williams 2006), where measurement uncertainties are needed by the algorithm to generate predictions. Additionally, the astronomical community has attempted to modify existing techniques to include uncertainties in their ML studies. Some examples include the works by Ball et al. (2008); Reis et al. (2019); Shy et al. (2022). Furthermore, Euclid Collaboration et al. (2023b) have shown that, in specific cases, the inclusion of measurement errors does not add new information to the training of the models and can be even detrimental to the prediction metrics. The degradation of the model by including uncertainties can likely be related to the fact that, by virtue of the large number of sources included in the training stages, the uncertainties are already encoded in the dataset in the form of scatter.
Following the same argument as for measurement errors, upper-limit values have been removed and a missing value is assumed instead. In general, ML methods (and their underlying statistical methods) cannot work with catalogues that have empty entries (Allison 2001). For that reason, we used single imputation (a review on the use of this method, which is part of data cleansing, in astronomy can be seen in Chattopadhyay 2017) to replace these missing values, and those fainter than the \(5\sigma\) limits, with meaningful quantities that represent the lack of a measurement. We opted for the inclusion of the same \(5\sigma\) limiting magnitudes as the value to impute with. This method of imputation, with some variations, has been successfully applied and tested recently by Arsioli & Dedin (2020); Carvajal et al. (2021); Curran (2022), and Curran et al. (2022). In particular, Curran (2022) tested several data imputation methods. Among those which replaced all missing values in a wavelength band with a single, constant value, using the \(5\sigma\) limiting magnitudes showed the best performance.
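The imputation step can be sketched as follows with pandas; the limiting magnitudes below are illustrative placeholders rather than the exact survey values.

```python
import numpy as np
import pandas as pd

# Illustrative 5-sigma limiting AB magnitudes per band (placeholder values).
limiting_mag = {"W1": 19.0, "W2": 18.9, "g": 23.3, "r": 23.2, "i": 23.1,
                "z": 22.3, "y": 21.4, "J": 17.5, "H": 17.0, "K": 16.6}

def impute_with_limits(df: pd.DataFrame, limits: dict) -> pd.DataFrame:
    """Replace missing photometry and measurements fainter than the 5-sigma
    limit of each band by the corresponding limiting magnitude."""
    out = df.copy()
    for band, lim in limits.items():
        out.loc[out[band] > lim, band] = lim    # fainter than the survey limit
        out[band] = out[band].fillna(lim)       # missing value -> limiting magnitude
    return out

# toy usage
catalogue = pd.DataFrame({"W1": [16.2, np.nan, 19.8], "W2": [15.9, 18.1, np.nan],
                          "g": [21.0, np.nan, 24.0], "r": [20.5, 22.9, np.nan],
                          "i": [20.1, 22.5, 23.5], "z": [19.8, 22.0, 23.0],
                          "y": [19.5, 21.5, 22.0], "J": [16.9, np.nan, np.nan],
                          "H": [16.4, np.nan, np.nan], "K": [16.0, np.nan, np.nan]})
print(impute_with_limits(catalogue, limiting_mag))
```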
In this way, observations from 12 non-radio bands were gathered (as listed in Table 1). The magnitude density distribution for the sample from the HETDEX and S82 fields, without any imputation, is shown in Fig. 1. After imputation, the distribution of magnitudes changes, as shown in Fig. 2. Each panel of the figure shows the number of sources that have a measurement above the \(5\sigma\) limit in such a band. Additionally, a representation of the observational \(5\sigma\) limits of the bands and surveys used in this work is presented in Fig. 3. It is worth noting that the depth difference between VLAS82 and LoTSS-DR1 is \(\sim\)1.5 mag for a typical synchrotron-emitting source (\(F_{\nu}\propto\nu^{\alpha}\) with \(\alpha=-0.8\)), allowing the latter survey to reach fainter sources.
AGN labels and redshift information were obtained by cross-matching (with a \(1.1\,\arcsec\) search radius) the catalogue with the Million Quasar Catalog3 (MQC, v7.4d; Flesch 2021), which lists information from more than 1 500 000 objects that have been classified as optical quasi-stellar objects (QSOs), AGNs, or Blazars. Sources listed in the MQC may have additional counterpart information, including radio or X-ray associations. For the purposes of this work, only sources with secure spectroscopic redshifts were used. The matching yielded 50 538 spectroscopically confirmed AGNs in HETDEX and 17 743 confirmed AGNs in S82.
Footnote 3: [http://quasars.org/milliquas.htm](http://quasars.org/milliquas.htm)
Similarly, the sources in our parent catalogue were cross-matched with the Sloan Digital Sky Survey Data Release 16 (SDSS-DR16; Ahumada et al. 2020). This cross-match was done solely to determine which sources have been spectroscopically classified as galaxies (spClass == GALAXY). For most of these galaxies, SDSS-DR16 lists a spectroscopic redshift value, which will be used in some stages of this work. In the HETDEX field, SDSS-DR16 provides 68 196 spectroscopically confirmed galaxies. In the S82 field, SDSS-DR16 identifies 4 085 galaxies spectroscopically. Given that MQC has access to more AGN detection methods than SDSS, when sources were identified as both galaxies (in SDSS-DR16) and AGNs (in the MQC), a final label of AGN was given. A description of the number of elements in each field and the multi-wavelength counterparts found for them is presented in Table 2. From Table 2, it is possible to see that the numbers and ratios of AGNs and galaxies in both fields are dissimilar. S82 has been subject to a larger number of observations, which have allowed the detection of a larger fraction of AGNs than in the HETDEX field (see, for instance, Lyke et al. 2020), which does not have such number of dedicated studies.
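The labelling logic can be summarised with the following sketch (pandas, with hypothetical Boolean columns standing in for the MQC and SDSS-DR16 match flags):

```python
import numpy as np
import pandas as pd

def assign_class(df: pd.DataFrame) -> pd.Series:
    """Binary 'class' label: 1 for AGN, 0 for galaxy, NaN for unlabelled sources.
    A source flagged both as an SDSS galaxy and as an MQC AGN is kept as AGN."""
    label = pd.Series(np.nan, index=df.index, dtype="float")
    label[df["is_sdss_galaxy"]] = 0.0   # spectroscopic galaxies from SDSS-DR16
    label[df["is_mqc_agn"]] = 1.0       # MQC AGNs take precedence over the galaxy flag
    return label

# toy usage with the assumed Boolean match flags
cat = pd.DataFrame({"is_mqc_agn":     [True, False, False, True],
                    "is_sdss_galaxy": [False, True, False, True]})
cat["class"] = assign_class(cat)
print(cat)
```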
In general, ML models cannot be efficiently trained with features spanning a wide range of values (i.e. several orders of magnitude). For this reason, it is customary to re-scale the available values to either be contained within the range \([0,1]\) or to have similar distributions. We applied a version of the latter transformation to our features (not the targets) so as to have a mean value of \(\mu=0\) and a standard deviation of \(\sigma=1\) for each feature. Additionally, these new values were power-transformed to resemble a Gaussian distribution. This transformation helps the models avoid using the distribution of values as additional information for the training. For this work, a Yeo-Johnson transformation (Yeo and Johnson 2000) was applied.
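A minimal sketch of this transformation step with scikit-learn is shown below (the toy features are placeholders); PowerTransformer applies the Yeo-Johnson map and, with standardize=True, returns features with zero mean and unit variance. The transformer is fitted on the training subset only and then applied unchanged to the remaining subsets and to the unlabelled sources.

```python
import numpy as np
from sklearn.preprocessing import PowerTransformer

rng = np.random.default_rng(1)
# toy feature matrix: two magnitude-like columns spanning different ranges
X_train = np.column_stack([rng.normal(20.0, 2.0, 1000),
                           rng.lognormal(1.0, 0.8, 1000)])

pt = PowerTransformer(method="yeo-johnson", standardize=True)
X_train_t = pt.fit_transform(X_train)        # fit only on the training subset

print("means:", X_train_t.mean(axis=0).round(3))
print("stds :", X_train_t.std(axis=0).round(3))

# The fitted transformer is then reused for validation, calibration, test,
# and unlabelled data.
X_new = np.column_stack([rng.normal(20.0, 2.0, 5), rng.lognormal(1.0, 0.8, 5)])
print(pt.transform(X_new).round(3))
```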
### Feature pool
The initial pool of features that have been selected or engineered to use in our analysis is briefly described here. A full list of the features created for this work and their representation in the code and in some of the figures is presented in Table 1.
Most of the features used in this work come from photometry, both measured and imputed, in the form of AB magnitudes for a total of 12 bands. Also, all available colours from measured and imputed magnitudes were considered. In total, there are 66 colours, resulting from all available combinations of two magnitudes between the 12 selected bands. These colours are labelled in the form X_Y where X and Y are the respective magnitudes.
Additionally, the number of non-radio bands in which a source has valid measurements (band_num) has been used. This feature could be, very loosely, attributed to the total flux a source can display. A higher band_num will imply that such source can be detected in more bands, hinting a higher flux (regardless of redshift). The use of features with counting or aggregation of elements in the studied dataset is well established in ML (see, for example, Zheng and Casari 2018; Duboue 2020; Sanchez-Saez et al. 2021; Euclid Collaboration et al. 2023b).
Finally, as categorical features, we included an AGN-galaxy classification Boolean flag named class and a radio Boolean flag LOFAR_detect. This feature flags whether sources have counterparts in the radio catalogues (LoTSS or VLAS82).
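The feature-engineering step described in this section can be sketched as follows; the band list and the radio-match column name are placeholders for the actual catalogue columns.

```python
from itertools import combinations

import numpy as np
import pandas as pd

bands = ["W1", "W2", "W3", "W4", "g", "r", "i", "z", "y", "J", "H", "K"]  # 12 bands

def build_features(cat: pd.DataFrame, limits: dict) -> pd.DataFrame:
    out = cat.copy()
    # all 66 colours from the 12 magnitudes, named "X_Y"
    for x, y in combinations(bands, 2):
        out[f"{x}_{y}"] = out[x] - out[y]
    # number of non-radio bands with a measurement brighter than the 5-sigma limit
    detected = np.column_stack([cat[b] < limits[b] for b in bands])
    out["band_num"] = detected.sum(axis=1)
    # Boolean radio flag from the LoTSS / VLAS82 match (placeholder column name)
    out["LOFAR_detect"] = cat["has_radio_match"].astype(int)
    return out

# toy usage with random magnitudes and a single common limit
rng = np.random.default_rng(2)
toy_limits = {b: 22.0 for b in bands}
toy = pd.DataFrame(rng.uniform(18.0, 23.0, size=(5, len(bands))), columns=bands)
toy["has_radio_match"] = [True, False, False, True, False]
feats = build_features(toy, toy_limits)
print(feats[["band_num", "LOFAR_detect", "W1_W2", "g_r"]])
```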
## 3 Machine learning training
In an attempt to extract the largest available amount of information from the data, and let ML algorithms improve their predictions, we decided to perform our training and predictions through a series of sequential steps, which we refer to as 'models' henceforth. We started with the training and prediction of the class of sources (AGNs or galaxies). The next model predicts whether an AGN could be detected in radio at the depth used during training (LoTSS). A final model will predict the redshift values of radio-predicted AGNs. A visual representation of this process can be seen in Fig. 4. Creating separate models gives us the opportunity to select the best subset of features for training as well as the best combination of ML algorithms for training in each step.
In broad terms, our goal with the classification models is to recover the largest number of elements from the positive classes (i.e. class = 1 and LOFAR_detect = 1). For the regression model, we aim to retrieve predictions as close as possible to the originally provided redshift values.
In general, classification models provide a final score in the range \([0,1]\), which can only be associated with a true probability after a careful calibration (Kull et al. 2017a,b). Calibration of these scores can be done by applying a transformation to their values. For our work, we decided to apply a Beta transformation4. This type of transformation allows us to re-distribute the scores of an uncalibrated classifier allowing them to get closer
\begin{table}
\begin{tabular}{c c c} \hline \hline Survey & HETDEX & S82 \\ \hline CatWISE2020 & 15 136 878 & 3 590 306 \\ AllWISE & 5 955 123 & 1 424 576 \\ Pan-STARRS & 4 837 580 & 1 346 915 \\ 2MASS & 566 273 & 214 445 \\ LoTSS & 187 573 & ... \\ VLAS82 & ... & 8 747 \\ MQC (AGNs) & 50 538 & 17 743 \\ SDSS (galaxies) & 68 196 & 4 085 \\ \hline \end{tabular}
\end{table}
Table 2: Composition of the initial catalogue and number of cross-matches with additional surveys and catalogues.
Figure 3: Flux and magnitude depths (5–\(\sigma\)) from the surveys and bands used in this work. Limiting magnitudes and fluxes were obtained from the description of the surveys, as referenced in Sect. 2.1. In purple, rest-frame SED from Mrk231 (\(z=0.0422\), Brown et al. 2019) is displayed as an example AGN. Redshifted (from \(z\)=0.001 to \(z\)=7) versions of this SED are shown in dashed grey lines.
Figure 2: Histograms of base collected non-radio bands for HETDEX (clean, background histograms) and S82 (empty, brown histograms) fields. Description as in Fig. 1. The number in the upper right corner of each panel shows the number of sources with magnitudes originally measured above the 5–\(\sigma\) limit included in their corresponding histogram for each field (i.e. sources that have not been imputed or replaced).
to the definition of probability. Further details of the calibration process are given in the Appendix C.
Given that we need to be able to compare the results from the training and application of the ML models with values obtained independently (i.e. ground truth), we divided our dataset into labelled and unlabelled sources. Labelled sources are all elements of our catalogue that have been classified as either AGNs or galaxies. Unlabelled sources are those which lack such classification and that will only be subject to the prediction of our models, not taking part in any training step.
Before any calculation or transformation is applied to the data from the HETDEX field, we split the labelled dataset into training, validation, calibration, and testing subsets. The early creation of these subsets helps avoid information leakage from the test subset into the models. Initially, 20% of the dataset was reserved as testing data. Of the remaining elements, 80% were used for training, and the rest were divided equally between the calibration and validation subsets (i.e. 10% each). The splitting process and the number of elements for each subset are shown in Fig. 5. Depending on the model, the needed sources are selected from each of the subsets that have already been created. The training set will be used to select algorithms for each step and to optimise their hyperparameters. The inclusion of the validation subset helps in the parameter optimisation of the models. The probability calibration of the trained model is performed over the calibration subset and, finally, the completed models are tested on the test subset. The use of these subsets will be expanded in Sects. 3.3 and 3.4.
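The sketch below reproduces this splitting scheme with scikit-learn. The stratification by class and the random seed are assumptions made for the example; only the fractions (20% test, then 80%/10%/10% of the remainder) are taken from the text.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Illustrative labelled dataset (X: features, y: class labels).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = rng.integers(0, 2, size=1000)

# 20% of the labelled data is held out for testing.
X_rest, X_test, y_rest, y_test = train_test_split(
    X, y, test_size=0.20, random_state=42, stratify=y)

# Of the remainder, 80% goes to training and the last 20% is split equally
# between the calibration and validation subsets (10% each of the remainder).
X_train, X_tmp, y_train, y_tmp = train_test_split(
    X_rest, y_rest, test_size=0.20, random_state=42, stratify=y_rest)
X_cal, X_val, y_cal, y_val = train_test_split(
    X_tmp, y_tmp, test_size=0.50, random_state=42, stratify=y_tmp)

print(len(X_train), len(X_val), len(X_cal), len(X_test))
```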
All the following transformations (feature selection, standardisation, and power transform of features) were applied to the training and validation subsets before the training of the algorithms and models. The calibration and testing subsets were subject to the same transformations after the modelling stage.
### Feature selection
Machine learning algorithms, as with most data analysis tools, require execution times which increase at least linearly with the size of the datasets. In order to reduce training times without losing relevant information for the model, the most important features were selected at each step through a process called feature selection.
To avoid redundancy, the process starts by discarding features that have a high correlation with another property of the dataset. For discarding features, we calculated Pearson's correlation matrix for the full train+validation dataset only and selected the pairs of features that showed a correlation factor higher than \(\rho=0.75\), in absolute values5. From each pair, we discarded the feature with the lowest relative standard deviation (RSD; Johnson & Leone 1964). The RSD is defined as the ratio between the standard deviation of a set and its mean value. A feature which covers a small portion of its probable values (i.e. low coverage of parameter space, and lower RSD) will give less information to a model than one with widely spread values.
Footnote 5: A value of \(\rho=0.75\) is a compromise between stringent thresholds (e.g. \(\rho=0.5\)) and more relaxed values (e.g. \(\rho\approx 0.9\)). For an explanation on the selection of correlation values, see, for instance Ratner (2009).
For each model, the process of feature selection begins with 79 base features and three targets (class, LOFAR_detect, and Z). Feature selection is run, independently, for each trained model (i.e. AGN-galaxy classification, radio detection, and redshift predictions), delivering three different sets of features.
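A minimal sketch of this correlation-plus-RSD pruning is given below, assuming a pandas DataFrame of numeric features. The pair-ordering and tie-breaking details are assumptions; the feature sets actually used in this work are those listed in the following sections.

```python
import numpy as np
import pandas as pd

def select_features(df: pd.DataFrame, rho_max: float = 0.75) -> list:
    """Drop one feature from every highly correlated pair (|rho| > rho_max),
    keeping the one with the larger relative standard deviation (std/|mean|)."""
    corr = df.corr(method="pearson").abs()
    rsd = (df.std() / df.mean().abs()).fillna(0.0)
    to_drop = set()
    cols = list(corr.columns)
    for i, a in enumerate(cols):
        for b in cols[i + 1:]:
            if a in to_drop or b in to_drop:
                continue
            if corr.loc[a, b] > rho_max:
                # Discard the member of the pair with the lower RSD.
                to_drop.add(a if rsd[a] < rsd[b] else b)
    return [c for c in cols if c not in to_drop]

# Illustrative usage on partially correlated random features.
rng = np.random.default_rng(1)
x = rng.normal(size=500)
demo = pd.DataFrame({"f1": x, "f2": x + 0.01 * rng.normal(size=500),
                     "f3": rng.normal(size=500)})
print(select_features(demo))  # one of f1/f2 is dropped, f3 is kept
```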
### Metrics
A set of metrics will be used to understand the reliability of the results and put them in context with results in the literature. Since our work includes the use of classification and regression models, we briefly discuss the appropriate metrics in the following sections.
#### 3.2.1 Classification metrics
The main tool to assess the performance of classification methods is the confusion (or error) matrix. It is a two-dimension (pre
Figure 4: Flowchart representing the prediction pipeline used to predict the presence of radio-detectable AGNs and their redshift values. At the beginning of each model step, the most relevant features are selected as described in Sect. 3.1.
Figure 5: Composition of datasets used for the different steps of this work. (a) HETDEX field. (b) S82.
dicted vs. true) matrix where the true and predicted class(es) are compared and results stored in cells with the rate of true positives (TP), true negative (TN), false positives (FP), and false negatives (FN). As mentioned earlier in Sect. 3, we seek to maximise the number of positive-class sources that are recovered as such. Using the elements of the confusion matrix, this aim can be translated into the maximisation of TP and, consequently, the minimisation of FN.
From the elements of the confusion matrix, we can obtain additional metrics, such as the F1 and F\({}_{\beta}\) scores (Dice 1945; Sorenson 1948; van Rijsbergen 1979), and the Matthews correlation coefficient (MCC; Yule 1912; Cramer 1946; Matthews 1975) which are better suited for unbalanced data as they take into account the behaviour and correlations among all elements of the confusion matrix. As such, the F1 coefficient is defined as the following:
\[\mathrm{F1}=\frac{2\mathrm{TP}}{2\mathrm{TP}+\mathrm{FN}+\mathrm{FP}}\,. \tag{1}\]
The values of F1 can go from 0 (no prediction of positive instances) to 1 (perfect prediction of elements with positive labels). This definition assigns equal weight (importance) to both the number of FN and FP. An extension to the F1 score, which adds a non-negative parameter, \(\beta\), to tune the relative importance given to FN and FP, is the F\({}_{\beta}\) score, defined as follows:
\[\mathrm{F}_{\beta}=\frac{(1+\beta^{2})\times\mathrm{TP}}{(1+\beta^{2})\times \mathrm{TP}+\beta^{2}\times\mathrm{FN}+\mathrm{FP}}\,. \tag{2}\]
Using \(\beta>1\), more relevance is given to the optimisation of FN. When \(0\leq\beta<1\), the optimisation of FP is more relevant. If \(\beta=1\), the initial definition of F1 is recovered. As with F1, F\({}_{\beta}\) values can be in the range \([0,1]\). As we seek to minimise the number of FN detections, we adopt a conservative value of \(\beta=1.1\), giving more significance to their reduction without neglecting the minimisation of FP. Also, this value is close enough to \(\beta=1\), which will allow us to compare our scores to those produced in previous works.
The MCC metric is defined as
\[\mathrm{MCC}=\frac{\mathrm{TP}\times\mathrm{TN}-\mathrm{FP}\times\mathrm{FN}}{ \sqrt{(\mathrm{TP}+\mathrm{FP})(\mathrm{TP}+\mathrm{FN})(\mathrm{TN}+ \mathrm{FP})(\mathrm{TN}+\mathrm{FN})}}\,, \tag{3}\]
which includes also the information about the TN elements. The values of MCC can range from \(-1\) (total disagreement between true and predicted values) to \(+1\) (perfect prediction) with 0 representing a prediction analogous to a random guess.
The recall, or true positive rate (TPR, also called completeness, or sensitivity; Yerushalmy 1947) corresponds to the rate of relevant, or correct, elements that have been recovered by a process. Using the elements from the confusion matrix, it can be defined as the following:
\[\mathrm{recall}=\mathrm{TPR}=\frac{\mathrm{TP}}{\mathrm{TP}+\mathrm{FN}}\,, \tag{4}\]
and it can go from 0 to 1, with a value of 1 meaning that the model can recover all the true instances.
The last metric used is precision (also known as purity), which can be defined as the ratio between the number of correctly classified positive elements and the total number of sources predicted to belong to the positive class (AGN or radio detectable):
\[\mathrm{precision}=\frac{\mathrm{TP}}{\mathrm{TP}+\mathrm{FP}}\,, \tag{5}\]
and its values can range from 0 to 1, where higher values indicate that a larger fraction of the sources predicted to be positive are indeed true positive instances.
In order to establish a baseline from which the aforementioned metrics can be assessed, it is possible to obtain them in the case of a random, or no-skill prediction. Following, for instance, the derivations and notation from Poisot (2023), no-skill versions of classification metrics (Eqs. 2-5) are the following:
\[\mathrm{F}_{\beta}^{\mathrm{no-skill}} =p\,, \tag{6}\] \[\mathrm{MCC}^{\mathrm{no-skill}} =0\,,\] (7) \[\mathrm{recall}^{\mathrm{no-skill}} =p\,,\] (8) \[\mathrm{precision}^{\mathrm{no-skill}} =p\,, \tag{9}\]
where \(p\) corresponds to the ratio between the elements of the positive class and the total number of elements involved in the prediction.
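These classification metrics (and the no-skill baselines of Eqs. 6-9) can be computed with scikit-learn as in the sketch below; the labels and predictions are mock values used only for illustration.

```python
import numpy as np
from sklearn.metrics import (fbeta_score, matthews_corrcoef,
                             precision_score, recall_score)

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 1])  # mock true classes
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0, 1, 1])  # mock predicted classes

beta = 1.1  # value adopted in the text to weight FN slightly more than FP
print("F_beta   :", fbeta_score(y_true, y_pred, beta=beta))
print("MCC      :", matthews_corrcoef(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))

# No-skill baselines (Eqs. 6-9): p is the fraction of positive-class elements.
p = y_true.mean()
print("no-skill F_beta = precision = recall =", p, "; no-skill MCC = 0")
```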
#### 3.2.2 Regression metrics
For the case of individual redshift value determination, two commonly used metrics are the difference between predicted and true redshift,
\[\Delta z=z_{\mathrm{True}}-z_{\mathrm{Predicted}}\,, \tag{10}\]
and its normalised difference,
\[\Delta z^{\mathrm{N}}=\frac{z_{\mathrm{True}}-z_{\mathrm{Predicted}}}{1+z_{ \mathrm{True}}}\,. \tag{11}\]
If the comparison is made over a larger sample of elements, the bias of the redshift is used (Dahlen et al. 2013), with the median of the quantities instead of its mean to avoid the strong influence of extreme values. This bias can be written as
\[\Delta z_{\mathrm{Total}}=\mathrm{median}\left(z_{\mathrm{True}}-z_{ \mathrm{Predicted}}\right)=\mathrm{median}(\Delta z)\,, \tag{12}\] \[\Delta z_{\mathrm{Total}}^{\mathrm{N}}=\mathrm{median}\left(\frac{ z_{\mathrm{True}}-z_{\mathrm{Predicted}}}{1+z_{\mathrm{True}}}\right)=\mathrm{ median}(\Delta z^{\mathrm{N}})\,. \tag{13}\]
Using the previous definitions, four additional metrics can be calculated. These are the median absolute deviation (MAD, \(\sigma_{\mathrm{MAD}}\)) and the normalised median absolute deviation (NMAD, \(\sigma_{\mathrm{NMAD}}\); Hoaglin et al. 1983; Ilbert et al. 2009), which are less sensitive to outliers. Also, the standard deviation of the predictions, \(\sigma_{z}\), and its normalised version, \(\sigma_{z}^{\mathrm{N}}\), are typically used. They are defined as
\[\sigma_{\mathrm{MAD}} =1.48\times\mathrm{median}\left(\left|\Delta z\right|\right)\,, \tag{14}\]
\[\sigma_{\mathrm{NMAD}} =1.48\times\mathrm{median}\left(\left|\Delta z^{\mathrm{N}}\right|\right)\,, \tag{15}\]
\[\sigma_{z} =\sqrt{\frac{1}{d}\sum_{i}^{d}\left(\Delta z\right)^{2}}\,, \tag{16}\]
\[\sigma_{z}^{\mathrm{N}} =\sqrt{\frac{1}{d}\sum_{i}^{d}\left(\Delta z^{\mathrm{N}}\right)^{2}}\,, \tag{17}\]
with d being the number of elements in the studied sample (i.e. its size).
Also, the outlier fraction (\(\eta\), as used in Dahlen et al. 2013; Lima et al. 2022) is considered, which is defined as the fraction of sources with a normalised predicted redshift difference (\(\left|\Delta z^{\mathrm{N}}\right|\), Eq. 11) larger than a previously set value. Taking the results from Ilbert et al. (2009) and Hildebrandt et al. (2010), we selected this threshold to be 0.15, leaving the definition of the outlier fraction as follows:
\[\eta=\frac{\#\left(\left|\Delta z^{\mathrm{N}}\right|>0.15\right)}{d}\,, \tag{18}\]
where \(\#\) symbolises the number of sources fulfilling the described relation, and \(d\) corresponds to the size of the selected sample.
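A compact implementation of the regression metrics of Eqs. (10)-(18) is sketched below with NumPy; the mock redshifts are only there to make the example runnable.

```python
import numpy as np

def redshift_metrics(z_true, z_pred, outlier_thresh=0.15):
    """Photometric-redshift quality metrics following Eqs. (10)-(18)."""
    z_true, z_pred = np.asarray(z_true), np.asarray(z_pred)
    dz = z_true - z_pred
    dzn = dz / (1.0 + z_true)
    return {
        "bias": np.median(dz),                        # Eq. (12)
        "bias_norm": np.median(dzn),                  # Eq. (13)
        "sigma_mad": 1.48 * np.median(np.abs(dz)),    # Eq. (14)
        "sigma_nmad": 1.48 * np.median(np.abs(dzn)),  # Eq. (15)
        "sigma_z": np.sqrt(np.mean(dz ** 2)),         # Eq. (16)
        "sigma_z_norm": np.sqrt(np.mean(dzn ** 2)),   # Eq. (17)
        "eta": np.mean(np.abs(dzn) > outlier_thresh), # Eq. (18)
    }

# Illustrative usage with mock redshifts.
rng = np.random.default_rng(2)
z_true = rng.uniform(0.1, 4.0, 1000)
z_pred = z_true + rng.normal(0.0, 0.1 * (1 + z_true))
print(redshift_metrics(z_true, z_pred))
```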
#### 3.2.3 Calibration metrics
One of the most used analytical metrics to assess calibration of a model is the Brier score (BS; Brier 1950). It measures the mean square difference between the predicted probability of an element and its true class. If the total number of elements in the studied sample is \(d\), the BS can be written (for binary classification problems, as the ones studied in this work) as
\[\mathrm{BS}=\frac{1}{d}\sum_{i}^{d}(\mathbb{C}-\mathtt{class})^{2}\,, \tag{19}\]
where \(\mathbb{C}\) is the predicted score (probability) and class corresponds to the true class of each of the elements in the sample (0 or 1). The BS can range between 0 and 1 with 0 representing a model that is completely reliable in its predictions. Additionally, the BS can be used to compare the reliability (or calibration) between a model and a reference using the Brier skill score (BSS; e.g. Glahn & Jorgensen 1970), which can be defined as the following:
\[\mathrm{BSS}=1-\frac{\mathrm{BS}}{\mathrm{BS}_{\mathrm{ref}}}\,. \tag{20}\]
In our case, \(\mathrm{BS}_{\mathrm{ref}}\) corresponds to the value calculated from the uncalibrated model. The BSS can take values between \(-1\) and \(+1\). The closer the BSS gets to 1, the more reliable the analysed model is. These values include the case where BSS\(\approx\)0, in which both models perform similarly in terms of calibration.
For our pipeline, after a model has been fully trained, a calibrated version of its scores is obtained. With both sets of scores, the BSS is calculated and, if it is not much lower than 0, the calibrated transformation is used to produce the final scores of the prediction.
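The check described above can be written down as in the following sketch, which uses scikit-learn's Brier score implementation; the mock uncalibrated and calibrated scores are placeholders for the outputs of the real classifier and of its Beta-calibrated version, and the acceptance threshold is an assumption.

```python
import numpy as np
from sklearn.metrics import brier_score_loss

# Mock true classes and predicted scores before/after calibration.
rng = np.random.default_rng(3)
y_true = rng.integers(0, 2, 500)
scores_uncal = np.clip(0.5 * y_true + rng.normal(0.25, 0.20, 500), 0, 1)
scores_cal = np.clip(0.8 * y_true + rng.normal(0.10, 0.10, 500), 0, 1)

bs_ref = brier_score_loss(y_true, scores_uncal)  # reference BS (Eq. 19)
bs_cal = brier_score_loss(y_true, scores_cal)
bss = 1.0 - bs_cal / bs_ref                      # Brier skill score (Eq. 20)

# Keep the calibrated scores only if they do not degrade the reliability
# ("not much lower than 0"; the -0.05 cut is an assumed example value).
use_calibrated = bss > -0.05
print(f"BS_ref={bs_ref:.3f}  BS_cal={bs_cal:.3f}  BSS={bss:.3f}  use_calibrated={use_calibrated}")
```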
### Model selection
By design, each ML algorithm has been developed and tuned to work better with certain data conditions, for instance, the balance of target categories and the ranges of base features. The predicting power of different algorithms can be combined with the use of meta learners (Vanschoren 2019). Meta learners use the properties or predictions from other algorithms (base learners) as additional information during their training stages. A simple implementation of this procedure is called generalised stacking (Wolpert 1992), which can be interpreted as the addition of priors to the model training stage. Generalised stacking has been applied in several astrophysical problems. That is the case of Zitlau et al. (2016), Cunha & Humphrey (2022), and Euclid Collaboration et al. (2023a,b).
Base and meta learners have been selected based upon the metrics described in Sect. 3.2. We trained five algorithms with the training subset and calculated the metrics for all of them using a 10-fold cross-validation approach (e.g. Stone 1974; Allen 1974) over the same training subset. For each metric, the learners have been given a rank (from 1 to 5) and a mean value has been obtained from them. Out of the analysed algorithms, the one with the best overall performance (i.e. best mean rank) is selected to be the meta learner while the remaining four are used as base learners.
For the AGN-galaxy classification and radio detection problems, we tested five classification algorithms: Random Forest (RF; Breiman 2001), Gradient Boosting Classifier (GBC; Friedman 2001), Extra Trees (ET; Geurts et al. 2006), Extreme Gradient Boosting (XGBoost, v1.5.1; Chen & Guestrin 2016), and Category Boosting (CatBoost, v1.0.5; Prokhorenkova et al. 2018; Dorogush et al. 2018). For the redshift prediction problem, we tested five regressors as well: RF, ET, XGBoost, CatBoost, and Gradient Boosting Regressor (GBR; Friedman 2001). We used the Python implementations of these algorithms and, in particular for RF, ET, GBC, and GBR, the versions offered by the package scikit-learn6(v0.23.2; Pedregosa et al. 2011). These algorithms were selected given that they offer tools to interpret the global and local influence of the input features in the training and predictions (cf. Sects. 1 and 5.3).
Footnote 6: [https://scikit-learn.org](https://scikit-learn.org)
All the algorithms selected for this work fall into the broad family of tree-based models. Forest models (RF and ET) rely on a collection of decision trees to, after applying a majority vote, predict either a class or a continuous value. Each of these decision trees uses a different, randomly selected subset of features to make a decision on the training set (Breiman 2001). In contrast to forests, gradient boosting models (GBC, GBR, XGBoost and CatBoost) apply decision trees sequentially to improve the quality of the previous predictions (Friedman 2001, 2002).
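A minimal sketch of generalised stacking with scikit-learn is shown below. It uses only scikit-learn estimators and synthetic data for brevity; the actual pipeline stacks RF, ET, GBC/GBR, XGBoost, and CatBoost (through their scikit-learn wrappers), selects the meta learner by ranking the cross-validated metrics, and optimises the hyperparameters separately.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (ExtraTreesClassifier, GradientBoostingClassifier,
                              RandomForestClassifier, StackingClassifier)
from sklearn.model_selection import train_test_split

# Synthetic, mildly imbalanced classification problem.
X, y = make_classification(n_samples=2000, n_features=18, weights=[0.6, 0.4],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Base learners: their out-of-fold probabilities are added as extra features
# for the meta learner (generalised stacking).
base_learners = [
    ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
    ("et", ExtraTreesClassifier(n_estimators=200, random_state=0)),
]
meta_learner = GradientBoostingClassifier(random_state=0)

stack = StackingClassifier(estimators=base_learners,
                           final_estimator=meta_learner,
                           cv=10,                  # 10-fold CV, as in the text
                           stack_method="predict_proba",
                           passthrough=True)       # keep the original features
stack.fit(X_tr, y_tr)
print("held-out accuracy:", stack.score(X_te, y_te))
```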
### Training of models
The procedure described in Sect. 3.3 includes an initial fit of the selected algorithms to the training data (including the selected features) to optimise their parameters. The stacking step includes a new optimisation of the parameters of the meta learner using 10-fold cross-validation on the training data with the addition of the output from the base learners, which are treated as regular features. Then, the hyperparameters of the stacked models are optimised over the training subset (a brief description of this step is presented in Appendix D).
The final step involves a last parameter fitting instance but using, this time, the combined train+validation subset, which includes the output of the base algorithms, to ensure wider coverage of the parameter space and better-performing models. Consequently, only the testing set is available for assessing the quality of the predictions made by the models.
### Probability calibration
The calibration procedure was performed on the calibration subset. In this way, we avoid influencing the process with information from the training and validation steps. A broader description of the calibration process and the results obtained for our models are presented in Appendix C. Thus, from this point onwards, and with the sole exception of some of the outcomes shown in Sect. 5.3, all results from classifications will be based on the calibrated probabilities.
### Optimisation of classification thresholds
As mentioned in the first paragraphs of Sect. 3, classification models deliver a range of probabilities for which a threshold is needed to separate their predictions between negative and positive classes. By default, these models set a threshold at 0.5 in score7 but, in principle, and given the characteristics of the problem, a different optimal threshold might be needed.
Footnote 7: Throughout this work, we call this a naive threshold.
In our case, we want to optimise (increase) the number of recovered elements in each model (i.e. AGNs or radio-detectable sources). This maximisation corresponds to obtaining thresholds that optimise the recall given a specific precision limit. We did this using the precision-recall (PR) curve. A deeper description of this method and the results obtained from our work are presented in Appendix E8.
Footnote 8: Thresholds derived from the PR curves are labelled as PR.
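A sketch of this threshold optimisation is given below using scikit-learn's precision-recall curve; the minimum-precision limit and the mock scores are assumptions made for the example, not the values adopted in this work.

```python
import numpy as np
from sklearn.metrics import precision_recall_curve

def pr_threshold(y_true, scores, min_precision=0.30):
    """Return the score threshold that maximises recall subject to a
    minimum precision, read off the precision-recall curve."""
    precision, recall, thresholds = precision_recall_curve(y_true, scores)
    # precision/recall have one element more than thresholds; drop the last.
    precision, recall = precision[:-1], recall[:-1]
    allowed = precision >= min_precision
    if not allowed.any():
        return 0.5  # fall back to the naive threshold
    best = np.argmax(recall[allowed])
    return thresholds[allowed][best]

# Illustrative usage with mock calibrated scores.
rng = np.random.default_rng(4)
y = rng.integers(0, 2, 2000)
scores = np.clip(0.6 * y + rng.normal(0.2, 0.25, 2000), 0, 1)
print("optimised threshold:", pr_threshold(y, scores))
```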
## 4 Results
In the present section, we report the results from the training of the different models in the HETDEX field. All metrics are evaluated using the testing subset. The metrics are also computed on labelled AGNs in the S82 field. As no training is done on S82 data, it offers a way to test the validity of the pipeline on data that, despite having similar optical-to-NIR photometric properties, presents distinct radio information and location in the sky.
The three models are chained afterwards in sequential mode to create a pipeline, and related metrics, for the prediction of radio-AGN activity. Novel predictions were obtained from the application of such pipeline to unlabelled sources from both the HETDEX and S82 fields.
### AGN-galaxy classification
Feature selection was applied to the train+validation subset with 85 488 confirmed elements (galaxies from SDSS DR16 and AGNs from MQC, i.e. class == 0 or class == 1). After the selection procedure described in Sect. 3.1, 18 features were selected for training: band_num, W4mag, g_r, r_i, r_j, i_z, i_y, z_w2, y_y, y_W1, y_W2, J_H, H_K, H_W3, W1_W2, W1_W3, and W3_W4. The target feature is class.
The results of model testing for the AGN-galaxy classification are reported in Table 3. The CatBoost algorithm provides the best metric values (highest mean rank) and is therefore selected as the meta model. XGBoost, RF, ET, and GBC were used as base learners.
The optimisation of the PR curve for the calibrated predictor provides an optimal threshold for this algorithm of 0.34895. This value was used for the AGN-galaxy model throughout this work.
The results of the application of the stacked and calibrated model for the testing subset and the labelled sources in S82 are presented in Table 4. The metrics are shown for the use of two different thresholds, the naive value of 0.5 and the PR-derived value of 0.34895. The confusion matrix (calculated on the testing dataset) is shown in the upper left panel of Fig. 6.
Overall, the model is able to separate AGNs from galaxies with a very high (recall \(\geq\)94%) success rate. A comparison with traditional colour-colour criteria for AGNs selection is presented in Sect. 5.1.1. In particular, Table 15 displays metrics for such criteria. Our classification model can recover, in the HETDEX field, 15% and 59% more AGNs than said formulae. In the S82 field, these differences range between 17% and 61%. Such differences highlight the fact that most of the information that separates AGNs from galaxies is traced by the selected features (mostly colours). Also, the increase in the recovery rate underlines the importance of using photometric information from several bands for such task, as opposed to traditional colour-colour criteria.
### Radio detection
The radio detection model was trained only on sources confirmed to be AGNs (class == 1). Feature selection was applied to the train+validation subset, with 36 387 confirmed AGNs. The target feature is LOFAR_detect and the selected features are: band_num, W4mag, g_r, g_i, r_i, r_z, z_y, z_W1, y_y, y_W1, J_H, L_K, K_W3, K_W4, W1_W2, and W2_W3.
The performance of the tested algorithms is shown in Table 5. In this case, GBC shows the highest mean rank. For this reason, we used it as the meta learner and XGBoost, CatBoost, RF, and ET were selected as base learners. The optimal threshold for this model is found to be \(\sim\)0.20460. Finally, the stacked model metrics and confusion matrix are shown in Table 6, for PR-optimised and naive thresholds, and in Fig. 6 respectively.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline Subset & Threshold & F\({}_{\beta}\) & MCC & Precision & Recall \\ & & (\(\times\)100) & (\(\times\)100) & (\(\times\)100) & (\(\times\)100) \\ \hline \multirow{2}{*}{HETDEX-test} & Naive & 95.37\(\pm\)0.36 & 91.81\(\pm\)0.67 & 97.47\(\pm\)0.69 & 95.89\(\pm\)2.27 \\ & PR & 95.42\(\pm\)0.38 & 91.85\(\pm\)0.70 & 94.90\(\pm\)0.65 & 96.21\(\pm\)0.43 \\ \multirow{2}{*}{S82-label} & Naive & 94.15\(\pm\)0.44 & 70.54\(\pm\)2.02 & 95.16\(\pm\)0.41 & 93.33\(\pm\)0.66 \\ & PR & 94.37\(\pm\)0.36 & 70.67\(\pm\)1.72 & 94.81\(\pm\)0.40 & 94.01\(\pm\)0.59 \\ \multirow{2}{*}{HETDEX-pipe} & Naive & 95.37\(\pm\)0.36 & 91.81\(\pm\)0.67 & 97.47\(\pm\)0.69 & 95.89\(\pm\)2.27 \\ & PR & 95.42\(\pm\)0.38 & 91.85\(\pm\)0.70 & 94.49\(\pm\)0.65 & 96.21\(\pm\)0.43 \\ \multirow{2}{*}{S82-pipe} & Naive & 94.15\(\pm\)0.44 & 70.54\(\pm\)2.02 & 95.16\(\pm\)0.41 & 93.33\(\pm\)0.66 \\ & PR & 94.37\(\pm\)0.36 & 70.67\(\pm\)1.72 & 94.81\(\pm\)0.40 & 94.01\(\pm\)0.59 \\ \hline \end{tabular}
\end{table}
Table 4: Resulting metrics of AGN-galaxy classification model for the test subset and the labelled sources in S82 using two different threshold values, as described in Sect. 4.1. HETDEX and S82 pipeline results are described in Sect. 4.4.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline Model & F\({}_{\beta}\) & MCC & Precision & Recall & Rank \\ & (\(\times\)100) & (\(\times\)100) & (\(\times\)100) & (\(\times\)100) & \\ \hline CatBoost & 95.70\(\pm\)0.28 & 92.46\(\pm\)0.48 & 95.45\(\pm\)0.32 & 95.91\(\pm\)0.37 & 1.00 \\ XGBoost & 95.67\(\pm\)0.27 & 92.40\(\pm\)0.48 & 95.41\(\pm\)0.39 & 95.88\(\pm\)0.34 & 2.00 \\ RF & 95.52\(\pm\)0.36 & 92.14\(\pm\)0.59 & 95.28\(\pm\)0.46 & 95.71\(\pm\)0.40 & 3.00 \\ ET & 95.40\(\pm\)0.49 & 91.94\(\pm\)0.69 & 95.13\(\pm\)0.43 & 95.63\(\pm\)0.47 & 4.00 \\ GBC & 95.26\(\pm\)0.31 & 91.66\(\pm\)0.54 & 94.82\(\pm\)0.41 & 95.63\(\pm\)0.35 & 5.00 \\ \hline \end{tabular} 1
\end{table}
Table 3: Best performing models for the AGN-galaxy classification
### Redshift predictions
The redshift value prediction model was applied to sources confirmed to be radio-detected AGN (i.e. class == 1 and radio_detect == 1). Feature selection (cf. Sect. 3.1) was applied to the train+validation subset, with 4 612 sources, leading to the selection of 17 features. The target feature is Z and the selected base features are band_num, W4mag, g_r_g_W3, r_i, r_z, i_z, i_y, z_y, y_j, y_W1, J_H, H_K, K_W3, K_W4, W1_W2, and W2_W3.
For the redshift prediction, the tested algorithms performed as shown in Table 7. Based on their mean rank values, RF, CatBoost, XGBoost, and GBR were selected as base learners and ET (which shows the best \(\sigma_{\text{MAD}}\) value of the two models with the best rank) was used as meta learner. The redshift regression metrics of the stacked model are presented in Table 8. Likewise, the comparison between the original and predicted redshifts is shown in the lower panel of Fig. 6.
The application of the prediction pipeline to the unlabelled sources from the HETDEX field led to 9 974 990 predicted AGNs, from which 68 252 were predicted to be radio detectable. The pipeline predicts, as well, 2 073 997 AGNs in the unlabelled data from S82, 22 445 of which are candidates to be detected in the radio (to the detection level of LoTSS). The distribution of the predicted redshifts for radio AGNs in HETDEX and S82 is presented in Fig. 7. The pipeline outputs for a small sample of the predicted radio AGNs are presented in Tables 12 and 13 for HETDEX and S82 respectively. Section 5 explores the comparison of these results with previous works in the literature and discusses the main drivers (i.e. features) for the detection of these radio AGNs.
### No-skill classification
As presented in Sect. 3.2.1, Eqs. 6-9 show the base results for a classification with no skill. Table 14 presents the scores generated by using this technique. These values are the base from which any improvement can be assessed.
Subsets and prediction modes displayed in Table 14 coincide with those exhibited in Tables 4, 6, and 9. For instance, in the test HETDEX sub-sample, \(\sim\)43% of sources are labelled as AGNs. From all AGNs, \(\sim\)13% of them have radio detections. This can be summarised by stating that \(\sim\)6% of all sources in the test sub-sample are radio-detected AGNs.
## 5 Discussion
### Comparison with previous prediction or detection works
In this subsection, we provide a few examples of related published works as well as plausible explanations for observed discrepancies when these are present. This comparison attempts to be representative of the literature on the subject but does not intend to be complete in any way.
#### 5.1.1 AGN detection prediction
In order to understand the significance of our results and ways for future improvement, we separate the comparison with previous works in two parts. First, we present previously published results from traditional methodologies. In second place, we offer a comparison with ML methods.
Traditional AGN selection methods are based on the comparison of the measured SED photometry to a template library (Walcher et al. 2011). A recent example of its application is presented by Thorne et al. (2022) where best fit classifications were calculated for more than 700 000 galaxies in the D10 field of the Deep Extragalactic Visible Legacy Survey (DEVILS; Davies et al. 2018) and the Galaxy and Mass Assembly survey (GAMA; Driver et al. 2011; Liske et al. 2015). The 91% recovery rate of AGNs, selected through various means (X-ray measurements, narrow and broad emission lines, and mid-IR colours), is very much in line with our findings in S82, where our rate (recall) reaches 89%.
Traditional methods also encompass the colour-based selection of AGNs. While less precise, they provide access to a much larger base of candidates with a very low computational cost. We implemented some of the most common colour criteria on the data from S82. Of particular interest is the predicting power of the mid-IR colour selection due to its potential to detect hidden or heavily obscured AGN activity.
Based on WISE (Wright et al. 2010) data, Stern et al. (2012, S12) proposed a threshold at W1 - W2 \(\geq\) 0.8 to separate AGNs from non-AGNs using data from AGNs in the Cosmic Evolution Survey (COSMOS) field (Scoville et al. 2007). A more stringent criterion was developed by Mateos et al. (2012, M12), the AGN wedge, which is defined by the sources located inside the region enclosed by the relations W1 - W2 \(<\) 0.315 \(\times\) (W2 - W3) + 0.791, W1 - W2 \(>\) 0.315 \(\times\) (W2 - W3) - 0.222, and W1 - W2 \(>-\)3.172 \(\times\) (W2 - W3) + 7.624. In order to define this wedge, they used data from X-ray selected AGNs over an area of 44.43 deg\({}^{2}\) in the northern sky. Mingo et al. (2016, M16) cross-correlated data from WISE observations with X-ray and radio surveys creating a sample of star-forming galaxies and AGNs in the northern sky. They developed individual relations to separate classes of galaxies and AGNs in the W1 - W2, W2 - W3 space and, for AGNs, the criterion is W1 - W2 \(\geq\) 0.5 and W2 - W3 \(<\) 4.4. More recently, Blecha et al. (2018, B18) analysed the quality of mid-IR colour selection methods for the identification of obscured AGNs involved in mergers. Using hydrodynamic simulations for the evolution of AGNs in galaxy mergers, they developed a selection criterion from WISE colours which is shown to be able to separate, with high reliability, starburst galaxies from AGNs. The expressions have the form W1 - W2 \(>\) 0.5, W2 - W3 \(>\) 2.2, and W1 - W2 \(>\) 2 \(\times\) (W2 - W3) \(-\) 8.9. The results from the application of these criteria to our samples in the testing subset and in the labelled sources of the S82 field are summarised in Table 15 and a graphical representation of the boundaries they create in their respective parameter spaces is presented in Fig. 9.
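For reference, the sketch below encodes these literature criteria as Boolean selections (Vega magnitudes); the input values are mock numbers used only to make the example runnable, and the labels follow the abbreviations S12, M12, M16, and B18 used in the text.

```python
import numpy as np

def wise_agn_flags(W1, W2, W3):
    """Mid-IR colour-colour AGN flags from the criteria quoted in the text."""
    W1, W2, W3 = map(np.asarray, (W1, W2, W3))
    c12, c23 = W1 - W2, W2 - W3
    s12 = c12 >= 0.8                                   # Stern et al. (2012)
    m12 = ((c12 < 0.315 * c23 + 0.791) &               # Mateos et al. (2012) wedge
           (c12 > 0.315 * c23 - 0.222) &
           (c12 > -3.172 * c23 + 7.624))
    m16 = (c12 >= 0.5) & (c23 < 4.4)                   # Mingo et al. (2016)
    b18 = (c12 > 0.5) & (c23 > 2.2) & (c12 > 2.0 * c23 - 8.9)  # Blecha et al. (2018)
    return {"S12": s12, "M12": m12, "M16": m16, "B18": b18}

# Illustrative usage on two mock sources.
print(wise_agn_flags(W1=[15.8, 16.5], W2=[14.7, 16.3], W3=[11.9, 12.4]))
```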
Table 15 shows that previous colour-colour criteria have been designed and calibrated to have very high precision values. Most of the sources deemed to be AGN by them are, indeed, of such class. Despite being tuned to maximise their recall (and F\({}_{\beta}\) to a lesser extent), our classifier, and the criterion derived from it, still show precision values compatible with those of such criteria. This result underlines the power of ML methods. They can be on a par with traditional colour-colour criteria and excel in additional metrics.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline \multicolumn{1}{c}{Subset} & Threshold & F\({}_{\beta}\) & MCC & Precision & Recall \\ & & (\(\times\)100) & (\(\times\)100) & (\(\times\)100) & (\(\times\)100) \\ \hline \multirow{2}{*}{HETDEX-test} & Naive & 20.68\(\pm\)3.17 & 24.93\(\pm\)3.72 & 25.34\(\pm\)36.56 & 13.79\(\pm\)22.27 \\ & PR & 37.99\(\pm\)2.59 & 33.66\(\pm\)2.79 & 32.20\(\pm\)2.72 & 44.61\(\pm\)2.46 \\ \multirow{2}{*}{S82-label} & Naive & 24.08\(\pm\)3.44 & 21.43\(\pm\)3.53 & 25.44\(\pm\)3.64 & 23.07\(\pm\)3.72 \\ & PR & 19.42\(\pm\)2.31 & 17.23\(\pm\)3.08 & 11.33\(\pm\)1.32 & 47.36\(\pm\)6.22 \\ \hline \end{tabular} 1
\end{table}
Table 9: Results of application of radio AGN prediction pipeline to the labelled sources in the HETDEX and S82 fields.
Figure 7: Redshift density distribution of the predicted radio AGNs within the unlabelled sources (clean histograms) in HETDEX (ochre histograms) and S82 (blue histograms) and true redshifts from labelled radio AGNs (dashed histograms).
Figure 9 is constructed as a confusion matrix, plotting in each quadrant the whole WISE population in the background and in colour contours the corresponding fraction of the testing set (TP, TN, FP, and FN, see Fig. 5(a) and Sect. 3.2.1). As expected, our pipeline is able to separate with high confidence sources which are closer to the AGN or the galaxy loci (TP and TN) while sources in the FN and FP quadrant show a different situation. Active galactic nuclei predicted to be galaxies (FN, 1.6% of sources for HETDEX, and 4.9% for S82) are located in the galaxy region of the colour-colour diagram. On the opposite corner of the plot, galaxies predicted to be AGNs (FP, 2.4% of sources for HETDEX, and 4.2% for S82) cover the areas of AGNs and galaxies uniformly. False negative sources might be sources that are identified as AGNs by means not included in our feature set (e.g. X-ray, radio emission). Sources in the FP quadrant, alternatively, might be galaxies with extreme properties, similar to AGNs.
For the case of ML-based models for AGN-galaxy classification, several analyses have been published in recent years. An example of their application is provided in Clarke et al. (2020) where a random forest model for the classification of stars, galaxies and AGNs using photometric data was trained from more than 3 000 000 sources in the SDSS (DR15; Aguado et al. 2019) and WISE with associated spectroscopic observations. Close to 400 000 sources have a quasar spectroscopic label and from the application of their model to a validation subset, they obtain a recall of 0.929 and F1 score of 0.943 for the quasar classification. These scores are of the same order as the ones obtained when applying our AGN-galaxy model to the testing set (see Table 4). Thus, and despite using an order of magnitude fewer sources for the full training and validation process, our model can achieve equivalently good scores.
Expanding on Clarke et al. (2020), Cunha and Humphrey (2022) built a ML pipeline, SHEEP, for the classification of sources into stars, galaxies and QSOs. In contrast to Clarke et al. (2020) or the pipeline described here, the first step in their analysis is the redshift prediction, which is used as part of the training features by the subsequent classifiers. They extracted WISE and SDSS (DR15; Aguado et al. 2019) photometric data for almost 3 500 000 sources classified as stars, galaxies or QSOs. The application of their pipeline to sources predicted to be QSOs leads to a recall of 0.960 and an F1 score of 0.967. The improved scores in their pipeline might be a consequence not only of the slightly larger pool of sources, but also the inclusion of the coordinates of the sources (right ascension, declination) and the predicted redshift values as features in the training.
A test with a larger number of ML methods was performed by Poliszczuk et al. (2021). For training, they used optical and infrared data from close to 1 500 sources (galaxies and AGNs) located at the AKARI North Ecliptic Pole (NEP) Wide-field (Lee et al. 2009; Kim et al. 2012) covering a 5.4 deg\({}^{2}\) area. They tested LR, SVM, RF, ET, and XGBoost including the possibility of generalised stacking. In general, they obtain results with F1 scores between 0.60 - 0.70 and recall values in the range of 50% - 80%. These values, lower than the works described here, can be fully understood given the small size of their training sample. A larger photometric sample covers a wider range of the parameter space which significantly helps the metrics of any given model.
\begin{table}
\begin{tabular}{l c c c c c c c c c c c c} \hline ID & RA\_ICRS & DE\_ICRS & band\_num & class & Score\_AGN Prob\_AGN & Prod\_AGetect & Score\_radio & Prob\_radio & Score\_\_JAGN & Prob\_\_JAGN & \(z\) pred\_z \\ & (deg) & (deg) & & & & & & & & & & & \\ \hline
9898717 & 203.611953 & 55.518079 & 9 & 1.0 & 0.500082 & 0.954114 & 0 & 0.390861 & 0.375122 & 0.195462 & 0.357949 & 4.738 & 4.3679 \\
168686 & 164.769135 & 54.805202 & 8 & 1.0 & 0.500008 & 0.885187 & 0 & 0.450279 & 0.418719 & 0.225161 & 0.359296 & 4.893 & 4.1733 \\
14437074 & 2132.25517 & 54.236343 & 9 & 1.0 & 0.500009 & 0.956187 & 0 & 0.251632 & 0.263746 & 0.125539 & 0.245464 & 4.326 & 4.0475 \\
10408176 & 188.163651 & 52.880898 & 9 & 1.0 & 0.500012 & 0.622448 & 0 & 0.604838 & 0.526003 & 0.302426 & 0.327410 & 4.340 & 3.9553 \\
12612753 & 2272.16370 & 51.941029 & 9 & 1.0 & 0.500055 & 0.887909 & 0 & 0.364423 & 0.355080 & 0.182231 & 0.315278 & 3.795 & 3.8797 \\ \hline \end{tabular}
\end{table}
Table 10: Predicted and original properties for the 5 sources in testing subset with the highest redshift predicted radio AGNs. Sources are sorted by decreasing predicted redshift. A description of the columns is presented in Appendix G.
\begin{table}
\begin{tabular}{l c c c c c c c c c c c} \hline ID & RA\_ICRS & DE\_ICRS & band\_num & class & Score\_AGN Prob\_AGN & Ind\_detect & Score\_radio & Prob\_radio & Score\_JAGN & Prob\_JAGN & \(z\) pred\_z \\ & (deg) & (deg) & & & & & & & & & & & \\ \hline
1406323 & 32.679744 & -0.30505 & 1.6 & 1.0 & 0.500050 & 0.886373 & 1 & 0.185842 & 0.204867 & 0.092900 & 0.174791 & 4.650 & 4.4986 \\
326139 & 33.580879 & -1.121398 & 8 & 1.0 & 0.500040 & 0.822622 & 0 & 0.208769 & 0.225946 & 0.104393 & 0.185868 & 4.600 & 4.3785 \\
633752 & 12.526446 & -0.888606 & 9 & 1.0 & 0.50003 & 0.793882 & 0 & 0.206162 & 0.223600 & 0.1030898 & 0.177512 & 4.310 & 4.2946 \\
283444 & 34.10440 & 0.789007 & 7 & 1.0 & 0.500062 & 0.909395 & 0 & 0.375735 & 0.365709 & 0.18791 & 0.303765 & 4.099 & 4.0635 \\
3191865 & 31.881712 & 1.063655 & 9 & 1.0 & 0.500087 & 0.962260 & 0 & 0.264210 & 0.274477 & 0.132128 & 0.264118 & 3.841 & 4.0509 \\ \hline \end{tabular}
\end{table}
Table 11: Predicted and original properties for the 5 sources in S82 with the highest predicted redshift on the labelled sources predicted to be radio AGNs. Sources are sorted by decreasing predicted redshift. A description of the columns is presented in Appendix G.
Figure 8: Combined confusion matrices and True/predicted redshift density plot for the full radio AGN detection prediction computed using the testing subset from HETDEX (panels (a) and (b)) and the known labelled sources from S82 (panels (c) and (d)).
#### 5.1.2 Radio detection prediction
We have not found in the literature any work attempting the prediction of AGN radio detection at any level and therefore this is the first attempt at doing so. In the literature we do find several correlations between the AGN radio emission (flux) and that at other wavelengths (e.g. with infrared emission, Helou et al. 1985; Condon 1992) and substantial effort has been made towards classifying RGs based upon their morphology (e.g. Aniyan & Thorat 2017; Wu et al. 2019) and its connection to environment (Miley & De Breuck 2008; Magliocchetti 2022). None of these extensive works has directly focussed on the a priori presence or absence of radio emission above a certain threshold. Therefore, the results presented here are the first attempt at such an effort.
The \(\sim\)2x success rate of the pipeline in identifying radio emission in AGNs (\(\sim\)44.61% recall and \(\sim\)32.20% precision; see Table 9) with respect to a no-skill selection (\(\la\)30%) provides the opportunity to understand what the model has learned from
\begin{table}
\begin{tabular}{c c c c c c c c c c c c} \hline ID & RA\_CRS & DE\_CRS & band\_num & Score\_AGN & Prob\_AGN & radio\_detect & Score\_radio & Prob\_radio & Score\_rAGN & Prob\_AGN & pred\_c \\ & (deg) & (deg) & & & & & & & & & & \\ \hline
3244450 & 26.276423 & 1.10405 & 7 & 0.500007 & 0.57804 & 0 & 0.351672 & 0.345250 & 0.175838 & 0.199832 & 4.7114 \\
1062270 & 11.744675 & 5.056242 & 7 & 0.500015 & 0.560248 & 0 & 0.231846 & 0.230529 & 0.106926 & 0.149901 & 4.5622 \\
3261269 & 28.882526 & 1.117103 & 7 & 0.500011 & 0.608660 & 0 & 0.354936 & 0.347777 & 0.177472 & 0.211678 & 4.3153 \\
1466227 & 18.157259 & -0.258997 & 5 & 0.500013 & 0.630968 & 0 & 0.456207 & 0.422973 & 0.228110 & 0.266882 & 4.3146 \\
1134866 & 11.304936 & -0.507943 & 7 & 0.500011 & 0.616439 & 0 & 0.226178 & 0.241539 & 0.113091 & 0.148894 & 4.3140 \\ \hline \end{tabular}
\end{table}
Table 13: Predicted and original properties for the 5 sources in S82 with the highest predicted redshift on the unlabelled sources predicted to be radio AGNs. A description of the columns is presented in Appendix G.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline Subset & Prediction & F\({}_{\beta}\) & MCC & Precision & Recall \\ & & (\(\times\)100) & (\(\times\)100) & (\(\times\)100) & (\(\times\)100) \\ \hline \multirow{3}{*}{HETDEX} & AGN-galaxy & 42.57 & 0.00 & 42.57 & 42.57 \\ & Radio-detection (label) & 12.84 & 0.00 & 12.84 & 12.84 \\ & Radio AGN & 05.47 & 0.00 & 05.47 & 05.47 \\ \multirow{3}{*}{S82} & AGN-galaxy & 81.29 & 0.00 & 81.29 & 81.29 \\ & Radio-detection (label) & 04.59 & 0.00 & 04.59 & 04.59 \\ & Radio AGN & 03.73 & 0.00 & 03.73 & 03.73 \\ \hline \end{tabular} 1
\end{table}
Table 14: Results of no-skill selection of sources in different stages of pipeline to the labelled sources in the HETDEX test subset and S82 fields.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline \multicolumn{5}{c}{HETDEX test set} \\ Method\({}^{a}\) & F\({}_{\beta}\) & MCC & Precision & Recall \\ & (\(\times\)100) & (\(\times\)100) & (\(\times\)100) & (\(\times\)100) \\ \hline S12 & 86.10 & 78.78 & 93.98 & 80.51 \\ M12 & 51.80 & 49.71 & 98.87 & 37.18 \\ M16 & 67.21 & 61.30 & 97.48 & 53.48 \\ B18 & 82.14 & 75.76 & 97.54 & 72.66 \\ This work & 92.71 & 87.64 & 94.00 & 91.67 \\ \hline \hline \multicolumn{5}{c}{S82 (labelled)} \\ Method & F\({}_{\beta}\) & MCC & Precision & Recall \\ & (\(\times\)100) & (\(\times\)100) & (\(\times\)100) & (\(\times\)100) \\ \hline S12 & 83.59 & 45.47 & 93.95 & 76.62 \\ M12 & 46.80 & 28.22 & 99.90 & 52.54 \\ M16 & 64.69 & 37.76 & 98.80 & 50.32 \\ B18 & 79.71 & 51.07 & 98.72 & 68.77 \\ This work & 90.63 & 58.53 & 94.15 & 87.91 \\ \hline \end{tabular} 1
\end{table}
Table 15: Metrics from the application of the colour-colour AGN selection criteria described in Sect. 5.1.1, and of the criterion derived in this work, to the HETDEX test subset (top) and to the labelled sources in the S82 field (bottom).
Figure 9: W1 - W2, W2 - W3 colour-colour diagrams for sources in the testing subset, from HETDEX, and labelled sources from S82 given their position in the AGN-galaxy confusion matrix (see, for HETDEX, rightmost panel of Fig. 8). In the background, a density plot of all CW-detected sources in the full HETDEX field sample is displayed. The colour of each square represents the number of sources in that position of parameter space, with darker squares having more sources (as defined in the colourbar of the upper-right panel). Contours represent the distribution of sources for each of the aforementioned subsets at 1, 2, 3, and 4 \(\sigma\) levels (shades of blue for the testing set and shades of red for labelled S82 sources). Coloured, solid lines display limits from the criteria for the detection of AGNs described in Sect. 5.1.1.
\begin{table}
\begin{tabular}{c c c c c c c c c c c c} \hline ID & RA\_CRS & DE\_CRS & band\_num & Score\_AGN & Prob\_AGN & radio\_detect & Score\_radio & Prob\_radio & Score\_rAGN & Prob\_AGN & pred\_c \\ & (deg) & (deg) & & & & & & & & & \\ \hline
3244450 & 20.309235 & 53.746429 & 6 & 0.500007 & 0.57804 & 0 & 0.351672 & 0.345250 & 0.175838 & 0.199832 & 4.7114 \\
1235548 & 220.383210 & 50.3919160 & 5 & 0.500007 & 0.578044 & 0 & 0.397123 & 0.794128 & 0.468568 & 0.459644 & 4.4064 \\
1384216 & 219.8939124 & 526.260328 & 7 & 0.500015 & 0.650248 & 0.0213846 & 0.230529 & 0.106926 & 0.149901 & 4.5622 \\
6698239 & 184.694901 & 49.063766 & 5 & 0.499995 & 0.467527 & 0 & 0.799085 & 0.662753 & 0.399538 & 0.309855 & 4.5483 \\
2951011 & 175.852446 & 55.497799 & 5 & 0.500008 & 0.589419 & 0 & 0.823295 & 0.681768 & 0.411654 & 0.401847 & 4.5320 \\ \hline \end{tabular} 1
\end{table}
Table 12: Predicted and original properties for the 5 sources in the HETDEX field with the highest predicted redshift on the unlabelled sources predicted to be radio AGNs. A description of the columns is presented in Appendix G.
the data and, therefore, gain some insight into the nature or triggering mechanisms of the radio emission. We, therefore, reserve the discussion of the most important features, and the linked physical processes, driving the pipeline improved predictions to Sect. 5.3.1.
#### 5.1.3 Redshift value prediction
We compare our results to those of Ananna et al. (2017, Stripe 82X) where the authors analysed multi-wavelength data from more than 6 100 X-ray detected AGNs from the 31.3 deg\({}^{2}\) of the Stripe 82X survey. They obtained photometric redshifts for almost 6 000 of these sources using the template-based fitting code LePhare (Arnouts et al., 1999; Ilbert et al., 2006). Their results present a normalised median absolute deviation of \(\sigma_{\rm NMAD}\)=0.062 and an outlier fraction of \(\eta\)=13.69%, values which are similar to our results in HETDEX and S82 except for a better outlier fraction (as shown in Table 8, we obtain \(\eta^{\rm S82}=25.18\)%, \(\sigma_{\rm NMAD}^{\rm HETDEX}\)=0.071, and \(\eta^{\rm HETDEX}\)=18.9%).
On the ML side, we compare our results to those produced by Carvajal et al. (2021) in S82, with \(\sigma_{\rm NMAD}=0.1197\) and \(\eta=29.72\)%, and find that our redshift prediction model improves by at least 25% for any given metric. The sources of improvement are probably manifold. First, it might be related to the different sets of features used (colours vs ratios) and, second, to the more specific population of radio AGN used to train our models. Carvajal et al. (2021) used a limited set of colours to train their model, while we allowed the use of all available combinations of magnitudes (Sect. 2.2). Additionally, their redshift model was trained on all available AGNs in HETDEX, while we trained (and tested) it only with radio-detected AGNs. Using a more constrained sample reduces the likelihood of handling sources that are too different in the parameter space.
Another example of the use of ML for AGN redshift prediction has been presented by Luken et al. (2019). They studied the use of the k-nearest neighbours algorithm (kNN; Cover & Hart, 1967), a non-parametric supervised learning approach, to derive redshift values for radio-detectable sources. They combined 1.4 GHz radio measurements, infrared, and optical photometry in the European Large Area Infrared Space Observatory (ISO) Survey-South 1 (ELAIS-S1; Oliver et al., 2000) and extended Chandra Deep Field South (eCDFS; Lehmer et al., 2005) fields, matching their sensitivities and depths to the expected values in the Evolutionary Map of the Universe (EMU; Norris et al., 2011). From the different experiments they ran, their resulting NMAD values are in the range \(\sigma_{\rm NMAD}=0.05-0.06\), and their outlier fraction can be found between \(\eta=7.35\)% and \(\eta=13.88\)%. As an extension to the previous results, Luken et al. (2022) analysed multi-wavelength data from radio-detected sources in the eCDFS and the ELAIS-S1 fields. Using kNN and RF methods to predict the redshifts of more than 1 300 RGs, they developed regression methods that show NMAD values between \(\sigma_{\rm NMAD}=0.03\) and \(\sigma_{\rm NMAD}=0.06\), \(\sigma_{z}=0.10-0.19\), and outlier fractions of \(\eta=6.36\)% and \(\eta=12.75\)%.
In addition to the previous work, Norris et al. (2019) compared a number of methodologies, mostly related with ML but also LePhare, for predicting redshift values for radio sources. They used more than 45 photometric measurements (including 1.4 GHz fluxes) from different surveys in the COSMOS field. From several settings of features, sensitivities, and parameters, they retrieve redshift predictions with NMAD values between \(\sigma_{\rm NMAD}=0.054\) and \(\sigma_{\rm NMAD}=0.48\) and outlier fractions that range between \(\eta=7\)% and \(\eta=80\)%. The broad span of obtained values might be due to the combinations of properties for each individual training set (including the use of radio or X-ray measurements, the selection depth, and others) and to the size of these sets, which was small for ML purposes (less than 400 sources). The slightly better results can be understood given the heavily populated photometric data available in COSMOS.
Specifically related to HETDEX, it is possible to compare our results to those from Duncan et al. (2019). They use a hybrid photometric redshift approach combining traditional template fitting redshift determination and ML-based methods. In particular, they implemented a Gaussian process (GP) algorithm, which is able to model both the intrinsic noise and the uncertainties of the training features. Their redshift prediction analysis of AGN sources with a spectroscopic redshift detected in the LoTSS DR1 (6 811 sources) recovers an NMAD value of \(\sigma_{\rm NMAD}=0.102\) and an outlier fraction of \(\eta=26.6\)%. The differences between these results and those obtained from the application of our models (individually as part of the prediction pipeline) might be due to the differences in the creation of the training sets. Duncan et al. (2019) used information from all available sources in the HETDEX field for training the redshift GP whilst our redshift model has been only trained on radio-detected AGNs, giving it the opportunity to focus its parameter exploration only on these sources.
Finally, Cunha & Humphrey (2022) also produced photometric redshift predictions for almost 3 500 000 sources (stars, galaxies, and QSOs) as part of their pipeline (see Sect. 5.1.1). They combined three algorithms for their predictions: XGBoost, CatBoost, and LightGBM(Ke et al., 2017). This procedure leads to \(\sigma_{\rm NMAD}=0.018\) and \(\eta=2\)%. As with previous examples, the differences with our results can be a consequence of the number of training samples. Also, in the case of Cunha & Humphrey (2022), they applied an additional post-processing step to the redshift predictions attempting to predict and understand the appearance of catastrophic outliers.
### Influence of data imputation
One effect which might influence the training of the models and, consequently, the prediction for new sources is related to the imputation of missing values (cf. Sect. 2.1). In Fig. 10, we plotted the distributions of predicted scores (for classification models) and predicted redshift values as a function of the number of measured bands (band_num) for each step of the pipeline as applied to sources predicted to be of each class in the test subset.
The top panel of Fig. 10 shows the influence of the degree of imputation in the classification between AGNs and galaxies. For most of the bins, probabilities for predicted galaxies are distributed close to 0.0, without any noticeable trend. In the case of predicted AGNs, the combination of a low number of sources and a high degree of imputation (band_num \(<5\)) leads to low mean probabilities.
The case of radio detection classification is somewhat different. Given the number and distribution of sources per bin, it is not possible to extract any strong trend for the probabilities of radio-predicted sources. The absence of evolution with the number of observed bands is stronger for sources predicted to be devoid of radio detection.
Finally, a stronger effect can be seen with the evolution of predicted redshift values for radio-detectable AGNs. Despite the lower number of available sources, it is possible to recognise that sources with higher number of available measurements are predicted to have lower redshift values. Sources that are closer to us have higher probabilities to be detected in a large number of
bands. Thus, it is expected that our model predicts lower redshift values for the most measured sources in the field.
In consequence, Fig. 10 allows us to understand the influence of imputation over the predictions. The most highly affected quantity is the redshift, where large fractions of measured magnitudes are needed to obtain scores that are in line with previous results (cf. Sect. 5.1.3). The AGN-galaxy and radio detection classifications show a mild influence of imputation in their results.
### Model explanations
Given the success of the models and pipeline in classifying AGNs, their radio detectability and redshift with the provided set of observables, knowing the relative weights that they have in the decision-making process is of utmost relevance. In this way, physical insight might be gained about the triggers of AGN and radio activity and its connection to their host. Therefore, we estimated both local and global feature importances for the individual models and the combined pipeline. Global importances were retrieved using the so-called 'decrease in impurity' approach (see, for example, Breiman 2001). Local importances have been determined via Shapley values. A more detailed description of what these importances are and how they are calculated is given in the following sections.
#### 5.3.1 Global feature importances
Overall, mean or global feature importances can be retrieved from models that are based on decision trees (e.g. random forests and boosting models, Breiman 2001, 2003). All algorithms selected in this work (RF, CatBoost, XGBoost, ET, GBR, and GBC) belong to these two classes. For each feature, the decrease in impurity (a term frequently used in the literature related to machine learning) of the dataset is calculated for all the nodes of the tree in which that feature is used. Features with the highest impurity decrease will be more important for the model (Louppe et al. 2013)9.
Footnote 9: For some models not based on decision trees, feature importances can be obtained from the coefficients delivered by the training process. These coefficients are related to the level to which each quantity is scaled to obtain a final prediction (as in the coefficients from a polynomial regression).
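As a brief illustration of how such impurity-based importances are read off a trained tree ensemble, the sketch below fits a random forest on synthetic data and rescales the importances to add up to 100, in the style of Table 16. The feature names are taken from the text only as labels; the data behind them here are synthetic.

```python
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic data with named (placeholder) features.
X, y = make_classification(n_samples=3000, n_features=6, n_informative=3,
                           random_state=0)
names = ["W1_W2", "W1_W3", "g_r", "r_i", "band_num", "W4mag"]
X = pd.DataFrame(X, columns=names)

clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)

# Mean decrease in impurity, rescaled to add up to 100 as in Table 16.
importances = pd.Series(100 * clf.feature_importances_, index=names)
print(importances.sort_values(ascending=False).round(2))
```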
Insight into the decision-making of the pipeline can only rely on the specific weight of the original set of features (see Sect. 3.1). Table 16 presents the ranked combined importances from the observables selected in each of the three sequential models that compose the pipeline. They have been combined using the importances from the meta learner (as shown in Table 17) and that of base learners. The derived importances will be dependent on the dataset used, including any imputation for the missing data, and the details of the models (i.e. algorithms used and stacking procedure). We first notice in Table 16 that the order of the features is different for all three models. This difference reinforces the need, as stated in Sect. 3, of developing separate models for each of the prediction stages of this work that would evaluate the best feature weights for the related classification or regression task.
For the AGN-galaxy classification model, it is very interesting to note that the most important feature for the predicted probability of a source to be an AGN is the WISE colour W1 - W2 (as well as W1 - W3). This colour is indeed one of the axes of the widely used WISE colour-colour selection, with the second axis being the W2 - W3 colour (cf. Sect 5.1.1). The WISE W3
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline & \multicolumn{4}{c}{AGN-galaxy (meta model: CatBoost)} \\ Feature & Importance & Feature & Importance & Feature & Importance \\ \hline W1\_W2 & 68.945 & R\_K & 1.715 & z\_W2 & 1.026 \\ W1\_W3 & 4.753 & y\_W1 & 1.659 & z\_W & 0.722 \\ g\_r & 4.040 & y\_W2 & 1.513 & W3\_W4 & 0.669 \\ r\_j & 4.006 & i\_Y & 1.441 & W4mag & 0.558 \\ r\_i & 3.780 & i\_z & 1.366 & H\_W3 & 0.408 \\ band\_num & 1.842 & y\_J & 1.187 & J\_H & 0.371 \\ \hline \hline \multicolumn{5}{c}{Radio detection (meta model: GBC)} \\ Feature & Importance & Feature & Importance & Feature & Importance \\ \hline W2\_W3 & 9.609 & y\_W1 & 7.150 & Wmag & 4.759 \\ y\_J & 8.102 & q\_r & 7.123 & K\_W4 & 2.280 \\ W1\_W2 & 8.010 & z\_W1 & 7.076 & J\_H & 1.283 \\ g\_i & 7.446 & r\_z & 6.981 & H\_K & 1.030 \\ K\_W3 & 7.357 & i\_z & 6.867 & band\_num & 1.018 \\ z\_y & 7.321 & r\_i & 6.588 & & \\ \hline \hline \multicolumn{5}{c}{Redshift prediction (meta model: ET)} \\ Feature & Importance & Feature & Importance & Feature & Importance \\ \hline y\_W1 & 35.572 & y\_J & 3.018 & i\_z & 1.215 \\ W1\_W2 & 13.526 & r\_z & 3.000 & J\_H & 1.162 \\ BZ\_W3 & 12.608 & r\_i & 2.896 & g\_W3 & 1.000 \\ band\_num & 6.358 & z\_y & 2.827 & K\_W3 & 0.925 \\ H\_K & 4.984 & Wmag & 2.784 & K\_W4 & 0.762 \\ g\_r & 4.954 & i\_y & 2.408 & & \\ \hline \end{tabular}
\end{table}
Table 16: Relative importances (rescaled to add to 100) for observed features from the three models combined between meta and base models.
Figure 10: Evolution of predicted probabilities (top: probability to AGN, middle: probability of AGNs to be detected in radio) and redshift values for radio-detectable AGNs (bottom panel) as a function of the number of observed bands for sources in test set. In top panel, sources have been divided between those predicted to be AGN and galaxy. In middle panel, sources are divided between predicted AGN that are predicted to be detected in radio and those predicted to not have radio detection. Background density plots (following colour coding in colour-bars) show location of predicted values. Overlaid boxplots display main statistics for each number of measured bands. Black rectangles encompass sources in second and third quartiles. Vertical lines show the place of sources from first and fourth quartiles. Orange lines represent median value of sample and dashed, green lines indicate their mean values. Dashed, grey lines show PR thresholds for AGN-galaxy and radio detection classifications. Close to each boxplot, written values correspond to the number of sources considered to create each set of statistics.
photometry is, however, significantly less sensitive than W1, W2, or PS1 (see Fig. 3), and a significant number of sources will be represented as upper limits in such a plot (see Table 2). From the importances in Table 16 and the values presented in Fig. 1 we infer that using optical colours could in principle create selection criteria with metrics equivalent to those shown in Table 15, but for a much larger number of sources (100 000 sources for colour plots using W3 vs. 4 700 000 sources for colours based on r, i, or z magnitudes). We tested this hypothesis and derived a selection criterion in the g - r vs W1 - W2 colour-colour plot shown in Fig. 11 using the labelled sources in the test subset of the HETDEX field. The results of applying this criterion to the testing data and to the labelled sources in S82 are presented in the last row of Table 15. Its limits are defined by the following expressions:
\[g-r > -0.76\,, \tag{21}\]
\[g-r < 1.8\,, \tag{22}\]
\[W1-W2 > 0.227\times(g-r)+0.43\,, \tag{23}\]
where W1, W2, g, and r are Vega magnitudes. Our colour criterion provides better and more homogeneous scores across the different metrics, with purity (precision) and completeness (recall) above 87%. By avoiding the longer WISE wavelengths (W3 and W4), the criterion can be applied to a much larger dataset.
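A minimal sketch of how the cut of Eqs. (21)-(23) could be applied to a photometric catalogue is shown below; the function name and the example magnitudes are ours and purely illustrative, not part of the published selection code.

```python
import numpy as np

def agn_colour_preselection(g, r, W1, W2):
    """Boolean mask for the g-r vs W1-W2 pre-selection of Eqs. (21)-(23) (Vega magnitudes)."""
    gr = np.asarray(g) - np.asarray(r)
    w12 = np.asarray(W1) - np.asarray(W2)
    return (gr > -0.76) & (gr < 1.8) & (w12 > 0.227 * gr + 0.43)

# Illustrative values only: a source with g-r = 0.5 and W1-W2 = 0.8 passes the cut.
print(agn_colour_preselection(g=20.1, r=19.6, W1=16.2, W2=15.4))  # -> True
```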
One of the main potential uses of the pipeline is its capability to pinpoint radio-detectable AGNs. The global feature analysis for the radio detection model shows a high dependence on the near- and mid-IR magnitudes and colours, especially those coming from WISE. As a useful outcome, similar to the AGN-galaxy classification, we can use the most relevant features to build plots for the pre-selection of these sources and to get insight into the origin of the radio emission. This is the case for the W4 histogram, shown in Fig. 12, where sources predicted to be radio-emitting AGNs extend to brighter measured W4 magnitudes. This added mid-IR flux might simply be due to an increased star formation rate (SFR) in these sources. In fact, the 24\(\mu\)m flux is often used, together with that of H\(\alpha\), as a proxy for SFR (Kennicutt et al., 2009). The radio detection for these sources might have a strong component linked to the ongoing SF, especially for the sources with real or predicted redshift below \(z\)\(\sim\)1.5. A detailed exploration of the implications that these dependencies might have for our understanding of the triggering of radio emission in AGNs, whether related to SF or jets, is left for a future publication (Carvajal et al. in preparation).
Finally, the redshift prediction model shows again that the final estimate is mostly driven by the results of the base learners, which account for \(\sim\)82% of the predicting power. The overall combined importance of features also shows, in this case, a strong dependence on several near-IR colours, of which y - W1 and W1 - W2 are the most relevant ones. The model still relies, to a lesser extent, on a broad range of optical features needed to trace the wide range of redshift possibilities (\(z\in[0,6]\)).
#### 5.3.2 Local feature importances: Shapley values
As opposed to the global (mean) assessment of feature importances derived from the decrease in impurity, local (i.e. source by source) information on the performance of such features can be obtained from Shapley values. This is a method from coalitional game theory that tells us how to fairly distribute the dividends (the prediction in our case) among the features (Shapley, 1953). The previous statement means that the relative influence of each property from the dataset can be derived for individual predictions in the decision made by the model (which is not the same as obtaining causal correlations between features and the target; Ma & Tourani, 2020). The combination of Shapley values with several other model explanation methods was used by
Figure 11: AGN classification colour-colour plot in the HETDEX field using CW (W1, W2) and PS1 (g, r) passbands. Grey-scale density plot include all CW detected and non-imputed sources. Red contours highlight the density distribution of the AGNs in the Million QSO catalogue (MQC) and blue contours show the density distribution for the galaxies from SDSS DR16. Contours are located at 1, 2, and 3 \(\sigma\) levels.
Figure 12: W4 magnitudes density distribution of the newly predicted radio AGNs (clean histograms) in HETDEX (ochre histograms) and S82 (blue histograms) and W4 magnitudes from predicted AGNs that are predicted to not have radio detection (dashed histograms).
\begin{table}
\begin{tabular}{c c c c} \hline \hline \multicolumn{4}{c}{AGN-galaxy model (CatBoost)} \\ Feature & Importance & Feature & Importance \\ \hline gbc & 49.709 & xgboost & 14.046 \\ et & 19.403 & rf & 8.981 \\ Remaining feature importances: & 7.861 \\ \hline \hline \multicolumn{4}{c}{Radio detection model (GBC)} \\ Feature & Importance & Feature & Importance \\ \hline rf & 12.024 & catboost & 7.137 \\ et & 7.154 & xgboost & 6.604 \\ Remaining importances: & 67.081 \\ \hline \hline \multicolumn{4}{c}{Redshift prediction model (E1)} \\ Feature & Importance & Feature & Importance \\ \hline xgboost & 25.138 & catboost & 21.072 \\ gbr & 21.864 & rf & 13.709 \\ \multicolumn{4}{c}{Remaining importances:} & 18.217 \\ \hline \end{tabular}
\end{table}
Table 17: Relative feature importances (rescaled to add to 100) for base algorithms in each prediction step.
Lundberg & Lee (2017) to create the SHapley Additive exPlanations (SHAP) values. In this work, SHAP values were calculated using the python package SHAP10 and, in particular, its module for tree-based predictors (Lundberg et al., 2020). To speed calculations up, the package FastTreeSHAP11(v0.1.2; Yang, 2021) was also used, which allows the user to run multi-thread computations.
Footnote 10: [https://github.com/slundberg/shap](https://github.com/slundberg/shap)
Footnote 11: [https://github.com/linkedin/fasttreeshap](https://github.com/linkedin/fasttreeshap)
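For reference, the basic pattern for obtaining per-source SHAP values from a tree-based model with the SHAP package is sketched below. The model and data are toy placeholders (not the pipeline's actual learners), and the exact return type depends on the shap version, as noted in the comments.

```python
import numpy as np
import shap  # https://github.com/slundberg/shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Toy data and model standing in for one of the pipeline's tree-based learners.
X, y = make_classification(n_samples=500, n_features=18, random_state=0)
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(rf)       # tree-specific, fast SHAP computation
shap_values = explainer.shap_values(X)   # local (per-source) feature contributions
# Depending on the shap version and the model, this is a single array or one array per class.
print(np.shape(shap_values))
```

Plots such as the decision plots discussed next (Figs. 13-15) can then be produced from these arrays, for instance with `shap.decision_plot`.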
One way to display these SHAP values is through so-called decision plots. They can show how individual predictions are driven by the inclusion of each feature. Besides determining the most relevant properties that help the model make a decision, it is possible to detect sources that follow different prediction paths, which could eventually, upon further examination, be labelled as outliers. An example of such a decision plot, linked to the AGN-galaxy classification, is shown in Fig. 13 for a subsample of the high-redshift (\(z\geq 4.0\)) spectroscopically classified AGNs in the HETDEX field (121 sources, regardless of whether they were part of any subset involved in the training or validation of the models). The different features used by the meta learner are stacked on the vertical axis with increasing weight, and these final weights are summarised in Table 18. Similarly, SHAP decision plots for the radio detection and redshift prediction are presented in Figs. 14 and 15, respectively.
As can be seen, for the three models, base learners are amongst the features with the highest influence. This result raises the question of what drives these individual base predictions. Appendix F includes SHAP decision plots for all base learners used in this work. Additionally, and to be able to compare these results with the feature importances from Sect. 5.3.1, we constructed Table 19, which displays the combined SHAP values of base and meta learners but, in this case, for the same 121 high-redshift confirmed AGNs (with 29 of them detected by LoTSS). Table 19 shows, as does Table 16, that the colour W1 - W2 is the most important discriminator between AGNs and galaxies for this specific set of sources. The importance of the rest of the features is mixed: similar colours are located in the top spots (e.g. g - r, W1 - W3 or r - i).
For the radio classification step of the pipeline, we find that features linked to those 121 high-\(z\) AGNs perform at the same level as for the overall population. The improved metrics with respect to those obtained from the no-skill selection do indicate that the model has learned some connections between the data and the radio emission. Feature importance has changed when compared to the overall population. If the radio emission observed from these sources were exclusively due to SF, this connection would imply SFR of several hundred \(M_{\odot}\,\mathrm{yr}^{-1}\). This explanation can not be completely ruled out from the model side but some contribution of radio emission from the AGN is expected. The detailed analysis of the exact contribution for the SF
Figure 14: Decision plot from the SHAP values for all features from the radio detection model in the 121 high redshift (\(z\geq 4\)) spectroscopically confirmed AGNs from HETDEX. Description as in Fig. 13.
Figure 13: Decision plot from SHAP values for AGN-galaxy classification from the 121 high redshift (\(z\geq 4\)) spectroscopically confirmed AGNs in HETDEX. Horizontal axis represents the model’s output with a starting value for each source centred on the selected naive threshold for classification. Vertical axis shows features used in the model sorted, from top to bottom, by decreasing mean absolute SHAP value. Each prediction is represented by a coloured line corresponding to its final predicted value as shown by the colourbar at the top. Moving from the bottom of the plot to the top, SHAP values for each feature are added to the previous value in order to highlight how each feature contributes to the overall prediction. Predictions for sources detected by LOFAR are highlighted with a dotted, dashed line.
\begin{table}
\begin{tabular}{c c c c} \hline \hline \multicolumn{4}{c}{AGN-galaxy model (CatBoost)} \\ Feature & SHAP value & Feature & SHAP value \\ \hline gbc & 36.250 & rf & 21.835 \\ et & 30.034 & xgboost & 7.198 \\ \multicolumn{3}{r}{Remaining SHAP values:} & 4.683 \\ \hline \hline \multicolumn{4}{c}{Radio detection model (GBC)} \\ Feature & SHAP value & Feature & SHAP value \\ \hline rf & 11.423 & catboost & 3.696 \\ xgboost & 7.741 & et & 5.115 \\ \multicolumn{3}{r}{Remaining SHAP values:} & 70.025 \\ \hline \hline \multicolumn{4}{c}{Redshift prediction model (ET)} \\ Feature & SHAP value & Feature & SHAP value \\ \hline xgboost & 41.191 & gbr & 13.106 \\ catboost & 20.297 & rf & 11.648 \\ \multicolumn{3}{r}{Remaining SHAP values:} & 13.758 \\ \hline \end{tabular}
\end{table}
Table 18: SHAP values (rescaled to add to 100) for base algorithms in each prediction step for observed features using 121 spectroscopically confirmed AGNs at high redshift values (\(z>4\)).
and AGN component will be left for a forthcoming publication (Carvajal et al. in preparation).
## 6 Summary and conclusions
With the ultimate intention of better understanding the triggering of radio emission in AGNs, in this paper we have shown that it is possible to build a pipeline to detect AGNs, determine their detectability in radio within a given flux limit, and predict their redshift value. Most importantly, we have described a series of methodologies to understand the driving properties of the different decisions, in particular for the radio detection, which is, to the best of our knowledge, the first attempt at doing so.
We have trained the models using multi-wavelength photometry from almost 120 000 spectroscopically identified infrared-detected sources in the HETDEX field and created stacked models with them. These models were applied, sequentially, to 15 018 144 infrared detections in the HETDEX Spring field, leading to the creation of 68 252 radio AGNs candidates with their corresponding predicted redshift values. Additionally, we applied the models to 3 568 478 infrared detections in the S82 field, obtaining 22 445 new radio AGNs candidates with their predicted redshift values.
We then applied a number of analyses to the models to understand the influence of the observed properties over the predictions and their confidence levels. In particular, the use of SHAP values gives the opportunity to extract the influence that the feature set has for each individual prediction. From the application of the prediction pipeline on labelled and unlabelled sources and the analysis of the predictions and the models themselves, the following conclusions can be drawn.
* Generalised stacking is a useful procedure which collects results from individual ML algorithms into a single model that can outperform each of the individual models, while preventing the inclusion of biases from individual algorithms. Proper selection of models and input features, together with detailed probability and threshold calibration, maximises the metrics of the final model.
* Classification between AGNs and galaxies derived from our model is in line with previous works. Our pipeline is able to retrieve a high fraction of previously classified AGNs from HETDEX (recall \(=0.9621\), precision \(=0.9449\)) and from the S82 field (recall \(=0.9401\), precision \(=0.9481\)).
* Radio detection classification for predicted AGNs has proven to be highly demanding in terms of the data needed for creating the models. Thanks to the use of the techniques shown in this article (i.e. feature creation and selection, generalised stacking, probability calibration, and threshold optimisation), we were able to retrieve previously known radio-detectable AGNs in the HETDEX field (recall \(=0.5216\), precision \(=0.3528\)) and in the S82 field (recall \(=0.5816\), precision \(=0.1229\)). These rates improve significantly upon a purely random selection (4 times better for the HETDEX field and 13 times better for S82), showing the power of ML methods for obtaining new RG candidates.
* The prediction of redshift values for sources classified as radio-detectable AGNs can deliver results that are in line with works that use either traditional or ML methods. The good quality of these predictions is achieved despite the fact of them being produced after two previous ML steps (the two classifications of the pipeline), which might introduce large uncertainties to their values.
* Our models (classification and regression) can be applied to areas of the sky that have different radio coverage from that used for training without a strong degradation of the prediction results. This feature can lead to the use of our pipeline over very distinct datasets (in radio and multi-wavelength coverage) expecting to recover the sources predicted to be radio-detectable AGNs with a high probability.
* Machine-learning models cannot be only used for a direct prediction of a value (or a set of values). They can also be subject to analyses that allow additional results to be extracted. We took advantage of this fact by using global and local feature importances to derive novel colour-colour AGN selection methods.
Figure 15: Decision plot from the SHAP values for all features from the redshift prediction model in the 121 high redshift (\(z\geq 4\)) spectroscopically confirmed AGNs from HETDEX. Description as in Fig. 13.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline \multicolumn{5}{c}{AGN-galaxy model} \\ Feature & SHAP value & Feature & SHAP value & Feature & SHAP value \\ \hline
W1\_W2 & 32.458 & i\_Y & 5.086 & z\_y & 1.591 \\
g\_r & 11.583 & y\_W1 & 4.639 & H\_W3 & 1.048 \\
9\_W1\_W3 & 8.816 & band\_num & 4.050 & W4mag & 0.514 \\ r\_i & 7.457 & y\_W2 & 3.228 & H\_K & 0.466 \\ i\_z & 6.741 & z\_W2 & 2.348 & W3\_W4 & 0.466 \\ r\_j & 6.613 & y\_J & 1.718 & J\_H & 0.178 \\ \hline \hline \multicolumn{5}{c}{Radio detection model} \\ Feature & SHAP value & Feature & SHAP value & Feature & SHAP value \\ \hline
9\_i & 14.120 & z\_J1 & 6.751 & W4mag & 2.691 \\ W2\_W3 & 13.201 & r\_i & 5.577 & band\_num & 2.661 \\ g\_r & 12.955 & r\_z & 5.161 & K\_W4 & 0.939 \\ y\_J & 8.224 & i\_z & 4.512 & H\_K & 0.719 \\ K\_W3 & 7.441 & z\_y & 4.121 & J\_H & 0.190 \\ W1\_W2 & 6.874 & y\_W1 & 3.864 & \\ \hline \hline \multicolumn{5}{c}{Redshift prediction model} \\ Feature & SHAP value & Feature & SHAP value & Feature & SHAP value \\ \hline
9\_r & 32.594 & z\_y & 3.557 & W4mag & 1.639 \\ y\_W1 & 20.770 & y\_J & 3.010 & g\_W3 & 1.479 \\ W2\_W3 & 12.462 & band\_num & 2.595 & K\_W3 & 0.853 \\ W1\_W2 & 5.692 & i\_y & 2.381 & K\_W4 & 0.451 \\ r\_i & 4.381 & H\_K & 2.230 & J\_H & 0.146 \\ r\_z & 3.755 & i\_z & 2.005 & \\ \hline \hline \end{tabular}
\end{table}
Table 19: Combined and normalised (rescaled to add to 100) mean absolute SHAP values for observed features from the three models using 121 spectroscopically confirmed AGNs at high redshift values (\(z\geq 4\)).
The next generation of observatories is already producing source catalogues with an order of magnitude better sensitivity over large areas of the sky than was previously the case. Some examples of these catalogues and surveys include the Rapid ASKAP Continuum Survey (RACS; McConnell et al. 2020), EMU (Norris et al. 2011), and the MeerKAT International GHz Tiered Extragalactic Exploration (MIGHTEE; Jarvis et al. 2016). With the increased number of radio detections, the need to understand the fraction of those detections related to AGNs and to determine counterparts across wavelengths is more necessary than ever.
Although we developed the pipeline as a tool to better understand the aforementioned issues, we foresee additional possibilities in which the pipeline can be of great use. The first possibility involves the use of the pipeline to assist with the selection of radio-detectable AGNs within any set of observations. This application might turn out to be particularly valuable in recent surveys carried out with MeerKAT (Jonas & MeerKAT Team 2016) or the future SKA, where the population of the faintest sources will be dominated by star-forming galaxies. Exploiting this possibility would, however, require including the corresponding data in the training set.
Future developments of the pipeline will concentrate on minimising the existent biases in the training sample as well as in increasing the coverage of the parameter space. We also plan to generalise the pipeline to make it useful for non-radio or galaxy-related research communities. These developments include, for instance, the capability to carry the full analysis for the galactic and stellar populations (i.e. models to determine if a galaxy can be detected in the radio and to predict redshift values for galaxies and non-radio AGNs).
In order to increase the parameter space of our training sets, we plan to include information from radio surveys with different properties in terms of covered area and multi-wavelength coverage. In particular, we aim to include far-IR, X-ray, and multi-survey radio measurements from larger areas. The inclusion of a larger, and possibly deeper, set of measurements is part of our goal to improve detections, not only in radio but also at additional wavelengths.
###### Acknowledgements.
We thank the anonymous referee for their valuable comments and constructive suggestions which have greatly improved the manuscript. The authors would also like to thank insightful comments from P. Papaderos and B. Arsichi. This work was supported by Fundagio para a Ciencia e a Tecnologia (FCT) through research grants PTDC/FIS-AST/29245/2017, EXP/FIS-AST/108582/2021, UID/FIS/044342/2019, UID/04434/2020, and UID/04434/2020. RC acknowledges support from the Fundagio para a Ciencia e a Tecnologia (FCT) through the Fellowship PD/BD/150455/2019 (PhD:SPACE Doctoral Network P000040212) and POC/FRES (EC). IM acknowledges support from ID. 57/2016 (P2461) from the "Departamento de Fisica, Faculdade de Ciencias da Universidade de Lisso". Aft acknowledges support from contract D. 57/2016/CP/3164/(CT002) and an FCT-CAPES funded "Transnational Cooperation project" "Strategic Partnership" in Astrophysics Portugal-Brazil". PACC acknowledges financial support by the Fundacao para a Ciencia e a Tecnologia (FCT) through the grant 2022.1477.BD. DB acknowledges support from the Fundacao para a Ciencia e a Tecnologia (FCT) through the Fundacao para a Ciencia e Tecnologia (FCT) through the Fundacao para a Ciencia e a Tecnologia (FCT) through the Fundacao para a Ciencia e a Tecnologia (FCT) through the work Contract No. 2020.03946.CEEICIND. CP acknowledges support from DL. 57/2016 (P2460) from the "Departamento de Fisica, Faculdade de Ciencias da Universidade de Lisso". This publication makes use of data products from the Wide-field Infrared Survey Explorer, which is a joint project of the University of California, Los Angeles, and the Jet Propulsion Laboratory/California Institute of Technology. funded by the National Aeronautics and Space Administration. LOFAR data products were provided by the LOFAR Surveys Key Science project (LSKSP12) and were derived from observations with the International LOFAR Telescope (ILT). LOFAR (van Haarlem et al. 2013) is the Low Frequency Array designed and constructed by ASTRON. It has observing, data processing, and data storage facilities in several countries, which are owned by various parties (each with their own funding sources), and which are collectively operated by the ILT foundation under a joint scientific policy. The efforts of the LSKSPF have benefited from funding from the European Research Council, NOVA, NWO, CNS-INNU, the SURF Co-operative, the UK Science and Technology Funding Council and the Julich Supercomputing Centre. The Pan-STARRS1 Surveys (PS1) and the PS1 public science archive have been made possible through contributions by the Institute for Astronomy, the University of Hawaii, the Pan-STARRS Project Office, the Max-Planck Society and is participating institutes, the Max Planck Institute for Astronomy, Heidelberg and the Max Planck Institute for Extraterrestrial Physics, Garching, The Johns Hopkins University, Durham University, the University of Edinburgh, the Queen's University Belfast, the Harvard-Smithsonian Center for Astrophysics, the Las Cumbres Observatory Global Telescope Network Incorporated, the National Central University of Taiwan, the Space Telescope Science Institute, the National Aeronautics and Space Administration under Grant No. NNX08AR220 issued through the Planetary Science Division of the NASA Science Mission Directorate, the National Science Foundation Grant No. AST-1288877, the University of Maryland, Eotvos Lorand University (ELTE), the Los Alamos National Laboratory, and the Gordon and Betty Moore Foundation. 
This publication makes use of data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation. This work made use of public data from the Sloan Digital Sky Survey, Data Release 16. Funding for the Sloan Digital Sky Survey IV has been provided by the Alfred P. Sloan Foundation, the U.S. Department of Energy Office of Science, and the Participating Institutions. SDSS-IV acknowledges support and resources from the Center for High Performance Computing at the University of Utah. The SDSS website is www.sdss.org. SDSS-IV is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS Collaboration including the Brazilian Participation Group, the Carnegie Institution for Science, Carnegie Mellon University, Center for Astrophysics 1 Harvard & Smithsonian, the Chilean Participation Group, the French Participation Group, Instituto de Astrofisica de Canarias, The Johns Hopkins University, Kavli Institute for the Physics and Mathematics of the Universe (IPMU) University of Tokyo, the Korean Participation Group, Lawrence Berkeley National Laboratory, Leibniz Institut fur Astrophysik Potsdam (AIP), Max-Planck-Institut fur Astronomie (MPIA Heidelberg), Max-Planck-Institut fur Astrophysik (MPR Garching), Max-Planck-Institut fur Extraterrestrische Physik (MPE), National Astronomical Observatories of China, New Mexico State University, New York University, University of Notre Dame, Observatiatique Nacional 7 MCTI, The Ohio State University, Pennsylvania State University, Shanghai Astronomical Observatory, United Kingdom Participation Group, Universidad Nacional Autonoma de Mexico, University of Arizona, University of Colorado Boulder, University of Oxford, University of Portsmouth, University of Utah, University of Virginia, University of Washington, University of Wisconsin, Vanderbilt University, and Yale University. This research has made use of NASA's Astrophysics Data System, TOPCAT (A'13) (Yapoor 2009; Taylor, Jupel,4)14 (Kluyver et al. 2016), Aladin sky atlas (v11.0.24; Bonmarel et al. 2000) developed at CDS, Strasbourg Observatory, France, and the Viriker catalogue access tool, CDS, Strasbourg, France (DOI: 10.26093/cds/vizier). The original description of the Viriker service was published in Ochsenbein et al. (2000). This work made extensive use of the Python packages PXC45 (v2.3.16; Ali 2020), scikit-learn (v2.3.2; Pedregosa et al. 2011), pandas16 (v1.4.2; Wes McKinney 2010), Astropy17, a community-developed core Python package for Astronomy (v5.6; Astropy Collaboration et al. 2013, 2018, 2022), Matplotlib (v3.5.1; Hunter 2007), betacal18 (v1.1.9), and CMasher19 (v1.6.3; van der Velden 2020).
Footnote 18: [http://www.star.bris.ac.uk/~mbt/topcat/](http://www.star.bris.ac.uk/~mbt/topcat/)
|
2309.07030 | Optimal transport distances for directed, weighted graphs: a case study
with cell-cell communication networks | Comparing graphs by means of optimal transport has recently gained
significant attention, as the distances induced by optimal transport provide
both a principled metric between graphs as well as an interpretable description
of the associated changes between graphs in terms of a transport plan. As the
lack of symmetry introduces challenges in the typically considered
formulations, optimal transport distances for graphs have mostly been developed
for undirected graphs. Here, we propose two distance measures to compare
directed graphs based on variants of optimal transport: (i) an earth movers
distance (Wasserstein) and (ii) a Gromov-Wasserstein (GW) distance. We evaluate
these two distances and discuss their relative performance for both simulated
graph data and real-world directed cell-cell communication graphs, inferred
from single-cell RNA-seq data. | James S. Nagai, Ivan G. Costa, Michael T. Schaub | 2023-09-13T15:36:39Z | http://arxiv.org/abs/2309.07030v3 | Optimal Transport Distances for Directed, Weighted Graphs: A Case Study with Cell-Cell Communication Networks
###### Abstract
Comparing graphs by means of optimal transport has recently gained significant attention, as the distances induced by optimal transport provide both a principled metric between graphs as well as an interpretable description of the associated changes between graphs in terms of a transport plan. As the lack of symmetry introduces challenges in the typically considered formulations, optimal transport distances for graphs have mostly been developed for undirected graphs. Here, we propose two distance measures to compare directed graphs based on variants of optimal transport: (i) an earth movers distance (Wasserstein) and (ii) a Gromov-Wasserstein (GW) distance. We evaluate these two distances and discuss their relative performance for both simulated graph data and real-world directed cell-cell communication graphs, inferred from single-cell RNA-seq data.
James S. Nagai\({}^{a}\) Ivan G. Costa\({}^{a}\) Michael T. Schaub\({}^{b}\)\({}^{\dagger}\)\({}^{a}\) Institute for Computational Genomics, RWTH Aachen Medical Faculty, Germany
\({}^{b}\)Department of Computer Science, RWTH Aachen University, Germany
Directed graphs, graph distances, optimal transport, cell-cell communication networks
## 1 Introduction
Exploring the similarities between graphs is a crucial primitive for comparing patterns in complex networks. A large variety of distance measures between graphs exist, often based on comparing features derived from (algebraic representations of) the graph structures. Examples include distance measures based on spectral features, or on the computation of certain subgraph statistics such as graphlets [1] or graph kernels [2]. While the comparison of such graph features is powerful, it often does not provide a notion of where precisely one graph differs from another, or of how two graphs align.
Optimal transport (OT) based graph distances [3, 4], which have risen to prominence recently, address these challenges. In a nutshell, OT-based graph distances associate with each graph a certain probability distribution. Two graphs can then be compared by finding a transport plan (a mapping) between those two probability distributions with the minimal transport cost [5]. The transport plan associated with the minimal cost can then be used to provide an interpretable and robust alignment of the two graphs considered, highlighting the changes between the graphs relevant to the computed distance.
To date, most OT-based methods for measuring network similarities have been proposed for undirected graphs. In many applications, however, we are interested in comparing directed graphs. Yet, extending OT-based distances to directed graphs is not simple, as a _symmetric_ distance metric, typically derived from the distances between nodes in the graph, is required within the cost function(s) typically employed within OT. To address this problem, here we consider two node-to-node distances, which have been developed for directed graphs, namely, the Generalized Effective Resistance (GRD) [6] and Markov chain hitting time (HTD) [7]. Employing these distance measures enables us to compute OT-based graph distances even for directed graphs. Specifically, we explore the use of these metrics for both Wasserstein (Earth Mover) and Gromov-Wasserstein based OT distances for graphs [5].
Having established our OT-based distance framework for directed graphs, we evaluate the relative performance of the proposed distance measures in the context of clustering cell-cell communication networks, which arise in the study of single-cell sequencing data. These networks are intrinsically directional and provide a challenging test case, as technical artifacts such as data dropouts (e.g., missing connections and entities), outliers, and noise are often present in them [8].
**Contributions** (i) We present and evaluate two OT formulations (Wasserstein and Gromov-Wasserstein) to compare directed graphs based on two node-to-node distances for directed graphs, namely, the Markov chain hitting time [7] and the Generalized Effective Resistance [6]. (ii) We evaluate the proposed approaches with simulated directed stochastic block models and on a case study of patient cell-cell communication networks.
**Outline** The remainder of the paper is structured as follows. We briefly outline some related work in Section 2. In Section 3, we then discuss two node-to-node distance measures for directed graphs that we employ to formulate two optimal transport distances between directed graphs. We illustrate the utility of these different formulations in Section 4, in which we provide some numerical illustrations of these distances for synthetic and real-world data. We close with a short discussion outlining future work.
## 2 Related Work
**Optimal transport distances between graphs.** One of the earliest OT-based distances for comparing undirected graphs considers vectorial embeddings of the graph adjacency matrices and then computes graph similarities using a Wasserstein metric (earth movers distance) between the embeddings [9]. A different perspective is taken by Graph Optimal Transport (GOT) [3]. Here, the pseudoinverse of the graph Laplacian is assumed to correspond to the covariance of a multivariate Gaussian distribution, and a multivariate Gaussian OT problem [10] is then solved to define the distance between two graphs.
In the context of directed graphs, far fewer distances between graphs exist. Recently, Silva and collaborators [11] have explored the use of Wasserstein distances between distributions of directed graphlets to compare directed graphs. Other OT formulations for the comparison of directed graphs are based on Gromov-Wasserstein formulations (GWOT) [12, 13], where the node-to-node distances within the graph are transported instead of the node embeddings.
**Node-to-node distance metrics for directed graphs.** When considering undirected graphs, the most commonly adopted node distance measures are the shortest-path distance or the resistance distance [14]. For directed graphs, however, defining a distance measure between nodes becomes a nontrivial task due to the non-symmetric nature of the graph. In the following, we leverage two formulations for a directed distance measure between nodes in a graph to define an optimal transport distance between graphs. By considering the Markov chain associated with a directed weighted graph, Young and collaborators [6] extended the resistance distance to directed and weighted graphs. A related formulation has been provided by [7], which introduced a node-to-node pseudo-metric based on Markov chain hitting times.
## 3 Methods
In this section, we define two distance measures between _directed, weighted graphs_ based on optimal transport theory. We consider a setup in which we are confronted with a set of directed weighted graphs \(\{\mathcal{G}^{1},...,\mathcal{G}^{p}\}\), where each \(\mathcal{G}^{i}\) denotes a digraph \(\mathcal{G}^{i}=(\mathcal{V}^{i},\mathcal{E}^{i},w^{i})\) consisting of a node set \(\mathcal{V}^{i}\), a set of directed edges \(\mathcal{E}^{i}\subset\mathcal{V}^{i}\times\mathcal{V}^{i}\), and an associated non-negative weight function \(w^{i}:\mathcal{E}^{i}\rightarrow\mathbb{R}_{\geq 0}\).
Our task is now to define a distance function \(d(\mathcal{G}^{k},\mathcal{G}^{j})\) between any two of those graphs by means of optimal transport. To obtain a well-defined optimal transport problem we need to define a geometry according to which the transport cost can be computed. In the following, we describe two ways to obtain such a geometry from directed weighted graphs, specifically from node-to-node distance metrics defined for directed graphs. We then employ these two node-to-node distances within two different optimal transport formulations to compare directed weighted graphs, yielding in total four possible distance measures.
### Node-to-Node distances for directed graphs
#### 3.1.1 Generalized Effective Resistance Distance (GRD)
The effective resistance or resistance distance is a well-known distance metric for nodes within undirected networks [15, 16]. To derive the resistance distance for undirected graphs, we consider a graph as a resistor network, where the weight of each edge corresponds to a conductance. The resistance distance between two nodes \(i\) and \(j\) in the graph is then equal to the potential difference that is induced by injecting a unit current between the two nodes [15, 16].
Formally, let \(A\in\mathbb{R}^{N\times N}\) be the adjacency matrix of a graph with \(N\) nodes, with entries \(A_{ij}=w_{ij}\) equal to the weight \(w_{ij}\) of the edge from node \(i\) to \(j\), and \(A_{ij}=0\) otherwise. The Laplacian of the graph is then given as \(L=D-A\), where \(D=\text{diag}(A\mathbf{1})\) is the diagonal matrix of (weighted) node degrees. The (square root of the) resistance distance between node \(i\) and \(j\) can then be computed as [15, 16]:
\[r_{ij}=\sqrt{(e_{i}-e_{j})^{\top}L^{\dagger}(e_{i}-e_{j})}, \tag{1}\]
where \(L^{\dagger}\) denotes the Moore-Penrose pseudoinverse of the Laplacian, and \(e_{i},e_{j}\) are the \(N\) dimensional indicator vectors associated to node \(i\) and node \(j\), respectively.
To generalize this notion of effective resistance to directed graphs, we now consider the following formulation, due to Young et al. [17, 6]. Let \(Q\in\mathbb{R}^{(N-1)\times N}\) be a (grounding) matrix that satisfies:

\[Q\mathbf{1}_{N}=0\qquad QQ^{\top}=I_{N-1}\qquad Q^{\top}Q=I-\mathbf{1}\mathbf{1}^{\top}/N. \tag{2}\]
A _grounded_ Laplacian matrix for the graph can now be computed via \(Q\) as follows:
\[\widetilde{L}=QLQ^{\top}. \tag{3}\]
Using the grounded Laplacian, the (square root of the) resistance distance can now be alternatively written as \(r_{ij}=[(e_{i}-e_{j})^{\top}Q^{\top}\widetilde{L}^{-1}Q(e_{i}-e_{j})]^{1/2}\), where the inverse of the grounded Laplacian is used in lieu of the pseudoinverse of the standard Laplacian matrix. Indeed, the action of the matrix \(Q\) can be understood in terms of removing the null-space of \(L\) by effectively "grounding" the resistor network, i.e., fixing a reference potential [18, 16].
Young et al. [17] now extended the notion of resistance distance by using an alternative characterization of the inverse \(\widetilde{L}^{-1}\) that appears in the definition of the effective resistance. Specifically, they define the _generalized effective resistance distance_ between node \(k\) and node \(j\) by:
\[d_{\text{GRD}}(k,j)=\sqrt{(e_{k}-e_{j})^{\top}X(e_{k}-e_{j})}. \tag{4}\]
Here the matrix \(X\) is defined via
\[X=2Q^{\top}\Sigma Q, \tag{5}\]
where \(\Sigma\) is the solution of the Lyapunov equation:
\[\widetilde{L}\Sigma+\Sigma\widetilde{L}^{\top}=I_{N-1}. \tag{6}\]
This solution is unique under the assumption that there exists a globally reachable node within the graph. Note that this formulation reduces to the classical resistance distance in the case of undirected graphs. For more details, we refer to [17, 6].
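The construction of Eqs. (3)-(6) translates almost directly into code. The sketch below is our own minimal NumPy/SciPy rendering of it, not the authors' implementation; the grounding matrix is obtained here from an orthonormal basis of the orthogonal complement of the all-ones vector, which satisfies Eq. (2).

```python
import numpy as np
from scipy.linalg import null_space, solve_continuous_lyapunov

def generalized_resistance_distances(A):
    """Pairwise generalized effective resistance for a weighted digraph.

    A[i, j] is the weight of the directed edge i -> j; a globally reachable
    node is assumed so that the Lyapunov equation (Eq. 6) has a unique solution.
    """
    N = A.shape[0]
    L = np.diag(A.sum(axis=1)) - A                              # directed Laplacian L = D - A
    Q = null_space(np.ones((1, N))).T                           # (N-1) x N grounding matrix, Eq. (2)
    L_tilde = Q @ L @ Q.T                                       # grounded Laplacian, Eq. (3)
    Sigma = solve_continuous_lyapunov(L_tilde, np.eye(N - 1))   # solves Eq. (6)
    X = 2.0 * Q.T @ Sigma @ Q                                   # Eq. (5)
    quad = np.diag(X)[:, None] + np.diag(X)[None, :] - X - X.T  # (e_k - e_j)^T X (e_k - e_j)
    return np.sqrt(np.maximum(quad, 0.0))                       # d_GRD(k, j), Eq. (4)
```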
#### 3.1.2 Hitting Time Based Distance (HTD)
A second type of node-to-node distance metric for directed graphs is the class of hitting time metrics developed by Boyd et al. [7].
Consider a discrete-time Markov chain \((X_{t})_{t\geq 0}\) on the space of the vertices \(\mathcal{V}=\{1,...,N\}\) of a strongly connected graph, with initial distribution \(\lambda\) and an irreducible transition matrix \(P=D^{-1}A\) such that:
\[P(X_{0}=i)=\lambda_{i}\qquad\text{and}\qquad P(X_{t+1}=j|X_{t}=i)=P_{i,j}. \tag{7}\]
Let \(\pi\in\mathbb{R}^{N}\) be the invariant distribution of the chain, i.e., \(\pi P=\pi\). For a starting point distributed according to \(\lambda\), the _hitting time_ from a starting vertex \(i\in\mathcal{V}\) is a random variable
\[\tau_{i}=\inf\{t\geq 1:X_{t}=i\}. \tag{8}\]
Following [7], we denote the probability that starting in a node \(i\) the hitting time of \(j\) is less than the time it takes to return back to \(i\) by
\[Q_{i,j}:=P_{i}[\tau_{j}\leq\tau_{i}]. \tag{9}\]
Based on the matrix \(Q\), a normalized hitting time matrix \(T^{(\beta)}\) can be defined in terms of its entries
\[T^{(\beta)}_{i,j}=\begin{cases}\frac{e_{i}^{\pi}}{\tau_{i}^{\pi}}Q_{i,j}&i\neq j, \\ 1&\text{otherwise},\end{cases} \tag{10}\]
where \(\beta\) is a scalar parameter that can be used to adjust the measure. If \(P\) is an irreducible stochastic matrix (i.e., the underlying graph is strongly connected), the Hitting Time Distance Matrix for \(\beta\in(0.5,1]\) can be obtained as:
\[d_{\mathrm{HTD}}^{(\beta)}(i,j)=HTD_{i,j}^{(\beta)}\quad\text{where}\quad HTD^{ (\beta)}=-\log(T^{(\beta)}) \tag{11}\]
**Remark.** (Not strongly connected graphs). Note that both distance functions discussed above make assumptions about the global reachability of (all) nodes in the graph. To make the above distance measures well defined in case these assumptions are not fulfilled, one option is to add a low-rank regularization term, as popularized within the context of the well-known PageRank algorithm [19, 20].
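As an illustration, the hitting probabilities \(Q_{i,j}\) of Eq. (9) can be computed by solving, for every ordered pair \((i,j)\), a small linear system for the probability of reaching \(j\) before returning to \(i\). The sketch below is our own naive implementation of that first-step analysis; it deliberately omits the \(\beta\)-normalization of Eqs. (10)-(11), which is applied on top of the resulting matrix.

```python
import numpy as np

def hitting_probabilities(P):
    """Q[i, j] = P_i[tau_j <= tau_i]: probability that a chain started at i
    reaches j before returning to i (irreducible transition matrix P assumed)."""
    N = P.shape[0]
    Q = np.eye(N)                                   # tau_i <= tau_i holds trivially
    for i in range(N):
        for j in range(N):
            if i == j:
                continue
            rest = [x for x in range(N) if x not in (i, j)]
            if rest:
                # h[x] = probability of hitting j before i when starting from x
                A = np.eye(len(rest)) - P[np.ix_(rest, rest)]
                h = np.linalg.solve(A, P[rest, j])
                Q[i, j] = P[i, j] + P[i, rest] @ h
            else:
                Q[i, j] = P[i, j]
    return Q
```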
### Optimal Transport distances for Directed Weighted Graphs
In the following, we will exploit the geometries defined by the above-defined node-to-node distance measures \(d_{\mathrm{GRD}}\) and \(d_{\mathrm{HTD}}^{(\beta)}\) to derive two optimal transport distances between directed graphs: 1) a Gromov-Wasserstein distance and 2) a Wasserstein distance.
#### 3.2.1 Gromov-Wasserstein distance for directed graphs
The intuitive idea underpinning Gromov-Wasserstein distances is to probabilistically map one geometric configuration (metric space) onto another, while distorting the distances as weakly as possible. In the context of graphs these ideas have been explored, e.g., in [12, 13]. To define a Gromov-Wasserstein distance for graphs, we thus need to consider the graph as a collection of points (corresponding to the nodes), with a (symmetric) metric distance defined between them. This is precisely what we will use the above-introduced node-to-node distance metrics for.
Formally, consider two graphs \(\mathcal{G}^{k}\) and \(\mathcal{G}^{l}\). To each graph, we associate the tuple \((\mathcal{C}^{k},\,\mathbf{p}^{k})\) consisting of a node-to-node distance matrix \(\mathcal{C}^{k}\) and a probability vector \(\mathbf{p}^{k}\in\mathbb{R}^{N}\) which describes the relative "mass" we associate to each node in that graph. Without further available information, we will typically use the uniform distribution \(\mathbf{p}^{k}=\mathbf{1}/N\) as an agnostic choice.
The Gromov-Wasserstein distance between two graphs \(\mathcal{G}^{k}\) and \(\mathcal{G}^{l}\) can then be obtained as the solution to the following minimization problem:
\[d_{\mathrm{GW}}(\mathcal{G}^{k},\mathcal{G}^{l})=\min_{\Gamma\in\Pi(\mathbf{p}^{k},\mathbf{p}^{l})}\ \ \sum_{x_{1},x_{2},y_{1},y_{2}}\mathcal{L}(\mathcal{C}^{k}_{x_{1},x_{2}},\mathcal{C}^{l}_{y_{1},y_{2}})\Gamma_{x_{1},y_{1}}\Gamma_{x_{2},y_{2}} \tag{12}\]
where \(\Pi(\mathbf{p}^{k},\mathbf{p}^{l})=\{\Gamma\in\mathbb{R}^{N\times N}:\Gamma_{ij}\geq 0,\Gamma\mathbf{1}=\mathbf{p}^{k},\Gamma^{\top}\mathbf{1}=\mathbf{p}^{l}\}\) is the set of all possible transport plans with marginal distributions \(\mathbf{p}^{k}\), \(\mathbf{p}^{l}\), and \(\mathcal{L}\) is a loss function defined over the distance matrices \(\mathcal{C}^{k}\), \(\mathcal{C}^{l}\). Commonly, the loss function \(\mathcal{L}(.,.)\) is simply an (elementwise) \(L_{2}\) norm.
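Problem (12) can be solved numerically with off-the-shelf optimal transport software. The sketch below is an assumption on our part rather than the authors' code; it uses the POT (Python Optimal Transport) library with uniform node masses and the squared loss, taking as input the node-to-node distance matrices (e.g. GRD) of the two graphs.

```python
import numpy as np
import ot  # POT: Python Optimal Transport

def gw_graph_distance(C_k, C_l):
    """Gromov-Wasserstein discrepancy (Eq. 12) between two directed graphs,
    given their node-to-node distance matrices (e.g. GRD or HTD)."""
    p = np.full(C_k.shape[0], 1.0 / C_k.shape[0])   # uniform node masses
    q = np.full(C_l.shape[0], 1.0 / C_l.shape[0])
    return ot.gromov.gromov_wasserstein2(C_k, C_l, p, q, loss_fun='square_loss')
```

Calling `ot.gromov.gromov_wasserstein` with the same arguments instead returns the optimal coupling \(\Gamma\), which can be inspected to align the two graphs.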
#### 3.2.2 Wasserstein Distance for directed graphs
To derive a Wasserstein distance for directed graphs, we switch our interpretation of the observed data. Instead of considering the geometry defined by each graph in our observation set \(\{\mathcal{G}^{1},...,\mathcal{G}^{p}\}\) separately, and transporting these geometries such that the distortion is minimized, we now interpret the edge weights of each weighted graph as a specific realization of a signal supported on the _same underlying space_. Stated differently, we consider the edge weights as a distribution supported on the same set of latent edges, and we aim to transport these edge distributions on top of each other optimally.
To this end, we collect the observed edge weights in a matrix \(\mathbf{P}\in\mathbb{R}^{E\times p}\), where \(E\) denotes the total number of observed edges, whose rows are indexed by the (directed) edges and whose columns by the respective graph, i.e., the column \(\mathbf{P}_{:,k}\) describes the edge weights of the \(k\)th graph \(\mathcal{G}^{k}\).
Using this definition, we build a _directed linegraph_\(\mathbf{L}=(\mathbf{V},\mathbf{E},\mathbf{W})\). The vertex set \(\mathbf{V}\) of the linegraph contains each possible edge \((j,k)\) contained in one of the graphs \(\mathcal{G}^{i}\). The edge set \(\mathbf{E}\) of the linegraph is defined as follows: an edge \(e^{\prime}=(u^{\prime},v^{\prime})\in\mathbf{E}\) exists if the target of \(u^{\prime}\) is the source of \(v^{\prime}\), i.e., the target node of edge \(u^{\prime}\) in the original graph, is the source node of edge \(v^{\prime}\) in the original graph. Note that, to build the line graph, the weights of the interactions are not considered (but are considered as distribution supported on the line-graph). Finally, the weight \(w_{u^{\prime}v^{\prime}}\) of edge \((u^{\prime},v^{\prime})\) in the line graph is simply the proportion of graphs containing both edges \(u^{\prime}\) to \(v^{\prime}\).
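For concreteness, this construction can be sketched with networkx as follows. The function name and the data layout (one set of directed edge tuples per observed graph) are our own assumptions; the weight rule follows the definition given above.

```python
import networkx as nx

def build_directed_linegraph(edge_sets):
    """Directed linegraph of the union of all observed edge sets.

    edge_sets: list of sets of (source, target) tuples, one per observed graph.
    A linegraph edge (u', v') exists when the target of u' is the source of v';
    its weight is the fraction of graphs containing both original edges.
    """
    union = nx.DiGraph()
    for edges in edge_sets:
        union.add_edges_from(edges)
    line = nx.line_graph(union)          # nodes of the linegraph are edges of the union graph
    n_graphs = len(edge_sets)
    for u, v in line.edges():
        count = sum(1 for edges in edge_sets if u in edges and v in edges)
        line[u][v]["weight"] = count / n_graphs
    return line
```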
Once we have a weighted, directed linegraph \(\mathbf{L}\), we can again associate a distance matrix with \(\mathbf{L}\), based on the generalized effective resistance or the hitting time. The Wasserstein distance between two graphs \(\mathcal{G}^{k}\) and \(\mathcal{G}^{l}\) can then be computed as follows:
\[d_{w}(\mathcal{G}^{k},\mathcal{G}^{l})=\min_{\Gamma\in\Pi(P_{:,k},P_{:,l})}\ \ \ \langle\Gamma,\mathcal{C}_{\mathbf{L}}\rangle_{F} \tag{13}\]
where \(\mathcal{C}_{\mathbf{L}}\) is one of the distance matrices (generalized effective resistance, hitting time distance) defined for directed graphs, as discussed above. Here \(\Pi(P_{:,k},P_{:,l})=\{\Gamma\in\mathbb{R}^{|\mathbf{V}|\times|\mathbf{V}|}:\Gamma_{ij}\geq 0,\Gamma\mathbf{1}=P_{:,k},\Gamma^{\top}\mathbf{1}=P_{:,l}\}\) is again the set of all admissible transport plans, whose marginal distributions are now, however, defined by the respective distributions of edge weights as encoded in the columns of the matrix \(\mathbf{P}\).
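A sketch of Eq. (13) using the POT library is shown below. Normalising the two weight columns so that they carry equal total mass is our own assumption, needed because the marginals of a transport plan must sum to the same value; the function and variable names are illustrative.

```python
import numpy as np
import ot  # POT: Python Optimal Transport

def wasserstein_graph_distance(C_line, P, k, l):
    """Earth mover's distance (Eq. 13) between graphs k and l.

    C_line: node-to-node distance matrix of the directed linegraph (GRD or HTD).
    P:      matrix whose columns hold the edge weights of each observed graph.
    """
    a = P[:, k] / P[:, k].sum()     # edge-weight distribution of graph k
    b = P[:, l] / P[:, l].sum()     # edge-weight distribution of graph l
    return ot.emd2(a, b, C_line)    # optimal transport cost under C_line
```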
Figure 1: **Optimal Transport distance between directed graphs.** We consider an illustrative example network, consisting of four directed “local” cycles (indicated by node color), which are again connected “globally” in a cyclic way. Left: original reference network (note that the network is strongly connected). Middle: We perturb the network by swapping the direction of one of the “local” edges within one of the four cycles. Right: We perturb the network by swapping the direction of one of the “global” edges connecting the four cycles. The table below displays the obtained distances considering the following three baselines: the Frobenius norm of the difference between the adjacency matrices; the optimal transport-based GOT distance [3], when considering the graph as undirected, and the optimal transport-based GWOT distance [13] (again considering the graph as undirected). We contrast these with the results obtained via our Gromov-Wasserstein and Wasserstein (earth mover's distance) formulations based on both generalized effective resistance and hitting time distance. Note that only our distance metrics that account for the directionality of the graph edges can distinguish these different cases.
## 4 Numerical Experiments
### Experiments on synthetic graphs: an illustrative example
To illustrate the behavior of our distance measures for directed weighted graphs, we consider a synthetic example in which we connected four cyclic graphs with four nodes each in a cyclic fashion -- see Figure 1 for an illustration. We now consider two different perturbations of these graphs and assess to what extent these changes can be detected. Both of these perturbations consist merely of a flip of the orientation of one of the edges, meaning that when the graph is considered undirected there is no apparent change in the graph structure. The first perturbation we consider is a flip in the orientation of one of the "local" edges within one of the cyclic four-node graphs (cf. Figure 1). The second perturbation we consider is a flip in the orientation of one of the "global" edges that connect two of the cyclic graphs (cf. Figure 1).
The distances between these perturbed graphs, relative to the original graph, are displayed in the table shown in Figure 1. We see that both of our distance metrics for directed graphs capture a difference between the two scenarios, irrespective of which directed node-to-node metric we use for them. In fact, in most cases, the "global" edge flip leads to a larger distance. If we consider the directed graph as a flow circulation pattern, it can be argued that, indeed, this single "global" flip leads to a larger perturbation of the overall flow pattern, in contrast to flipping a "local" edge, which leads only to a reversal of the flow within one of the cycles.
We also compare our distances to standard distance metrics for undirected graphs as baselines, i.e., the Frobenius norm and GOT [3]. Furthermore, a previously proposed method for comparing directed graphs, GWOT [13], was also considered in our benchmark. Interestingly, this approach also fails to distinguish the graph with the global edge flip from the original graph. Among the evaluated distances, the Frobenius norm identifies the two perturbed graphs to be equally far apart from our original graph. As there is no difference in the undirected graph structure, the GOT distance assigns a zero distance between all graphs.
Overall, we see that accounting for directions in the edges is thus important to obtain meaningful distances between the graphs.
### Case Study: single-cell RNAseq derived cell-cell interactions disease clustering
To illustrate the utility of our graph distances, we further consider a real-world analysis task, namely the comparison of cell-cell communication networks. Specifically, we consider a scenario in which we are given a patient cohort with \(p\) patients, from which we generate a set of cell-cell communication networks \(\mathbf{G}=[\mathcal{G}^{1},...,\mathcal{G}^{p}]\) using the ligand-receptor analysis method CrossTalkeR [8]. Our question here is whether the comparison of cell-cell networks can be used to delineate the disease stage or sub-types in these patients. To construct our set of graphs, we retrieved two publicly available single-cell RNA sequencing datasets. These datasets contain 20-35 samples each, annotated with the health status of the patients. Next, we compute the graph-to-graph distances using our introduced distance measures.
To assess how well we can recover a partition of the patient cohorts, we perform \(k\)-means with the estimated distance matrix as input. For simplicity, we set \(k\) to the (true) number of classes in the data, and use the Adjusted Rand index (ARI) [21] between the true labels and the clustering result to evaluate the clustering performance. We also include a few baseline methods. In the first baseline, we would like to compare our methods with a simplistic way of obtaining a distance using the graph set \(\mathbf{G}\). Here, the matrix \(\mathbf{P}\) is used as an input for a Principal Component Analysis, which can be obtained by computing an SVD of the centered \(\mathbf{P}\) matrix \(\tilde{\mathbf{P}}=U\Sigma V^{\top}\). Next, the matrix of the first \(p-1\) principal components is used to compute the pairwise Euclidean distance between the samples.
In the second baseline, we attempt to compare our methods (directed) with an undirected cost function. For that, we compute the correlation distance \(dCor(u,v)\) between each pair of rows from \(\mathbf{P}\).
\[dCor(u,v)=1-\frac{\langle(u-\bar{u}),(v-\bar{v})\rangle}{\|(u-\bar{u})\|_{2} \|(v-\bar{v})\|_{2}}, \tag{14}\]
where \(\bar{u}\) and \(\bar{v}\) denote the average of the vectors \(u\) and \(v\), respectively. The distance matrix resulting from that process is then used as a cost function for the Wasserstein approach described above.
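For completeness, Eq. (14) coincides with the 'correlation' metric implemented in SciPy, so the baseline cost matrix can be obtained in a few lines; the toy matrix below merely stands in for the edge-weight matrix \(\mathbf{P}\).

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

P = np.random.default_rng(0).random((6, 40))         # toy stand-in for the edge-weight matrix
C_corr = squareform(pdist(P, metric="correlation"))  # pairwise Eq. (14) distances between rows
```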
We display the results of these experiments in Table 1. As can be seen, our optimal transport-based distances outperform the baseline methods (correlation and PCA), which disregard edge directions. Interestingly, we find that in this task, the Wasserstein-based construction outperforms the Gromov-Wasserstein formulation. In contrast, the node-to-node distance metric used to induce a geometry for the directed graphs appears to play a far less significant role. These results indicate that the Wasserstein formulation might be better suited for sparse and noisy directed weighted graphs obtained from cell-cell communication networks from single-cell RNA sequencing. A potential explanation may be that the Wasserstein formulation is adaptive with respect to the whole ensemble of graphs considered, due to the specific construction of the linegraph.
## 5 Conclusions
We presented two OT-based distance measures to compare directed graphs, based upon node-to-node distance metrics for directed graphs. Our results highlight the importance of considering edge directions when comparing graphs. Indeed, as our example in Figure 1 shows, our approach can capture essential features related to the direction of each interaction, in contrast to our baseline models.
We further provided a case study of cell-cell communication networks as a real-world application of the proposed methods. Here, our methods enabled the identification of latent disease stages and potentially disease-related interaction groups.
In future work, we intend to improve the robustness of our structural comparison, e.g., by adding other synthetic directed weighted graph models. We will also aim to characterize in more detail the effects of noise and sparseness in our methods as well as explore approaches using the computed transport maps for explainability. For instance, in the context of our example application we may want to highlight which cell-cell pairs provide the core contributions to the observed differences between healthy and disease states.
\begin{table}
\begin{tabular}{l c c c c c c c} \hline Dataset & PCA(P) & correlation(P) & \(GW_{HTBP^{\prime}S}\) & \(GW_{GBD}\) & \(GW_{MTD^{\prime}}\) & \(Wasserstein_{GBD}\) & \(Wasserstein_{HTD^{\prime}}\) & \(Wasserstein_{HTD^{\prime}S}\) \\ \hline \hline Pancreas Cancer & 0.037006 & 0.109974 & 0.109974 & 0.033237 & 0.138969 & **0.884777** & **0.884777** & **0.884777** \\ Heart Myocardial Infarction & -0.055866 & 0.320291 & 0.312917 & 0.312917 & **0.455679** & **0.455679** & **0.455679** & **0.455679** \\ \hline \end{tabular}
\end{table}
Table 1: Adjusted Rand (AR) measuring the accuracy of DWG clustering. Values in bold indicate the outperforming method. |
2310.14891 | SpeakEasy: A Conversational Intelligence Chatbot for Enhancing College
Students' Communication Skills | Social interactions and conversation skills separate the successful from the
rest and the confident from the shy. For college students in particular, the
ability to converse can be an outlet for the stress and anxiety experienced on
a daily basis along with a foundation for all-important career skills. In light
of this, we designed SpeakEasy: a chatbot with some degree of intelligence that
provides feedback to the user on their ability to engage in free-form
conversations with the chatbot. SpeakEasy attempts to help college students
improve their communication skills by engaging in a seven-minute spoken
conversation with the user, analyzing the user's responses with metrics
designed based on previous psychology and linguistics research, and providing
feedback to the user on how they can improve their conversational ability. To
simulate natural conversation, SpeakEasy converses with the user on a wide
assortment of topics that two people meeting for the first time might discuss:
travel, sports, and entertainment. Unlike most other chatbots with the goal of
improving conversation skills, SpeakEasy actually records the user speaking,
transcribes the audio into tokens, and uses macros-e.g., sequences that
calculate the pace of speech, determine if the user has an over-reliance on
certain words, and identifies awkward transitions-to evaluate the quality of
the conversation. Based on the evaluation, SpeakEasy provides elaborate
feedback on how the user can improve their conversations. In turn, SpeakEasy
updates its algorithms based on a series of questions that the user responds to
regarding SpeakEasy's performance. | Hyunbae Jeon, Rhea Ramachandran, Victoria Ploerer, Yella Diekmann, Max Bagga | 2023-09-23T17:19:32Z | http://arxiv.org/abs/2310.14891v1 | ###### Abstract
Social interactions and conversation skills separate the successful from the rest and the confident from the shy. For college students in particular, the ability to converse can be an outlet for the stress and anxiety experienced on a daily basis along with a foundation for all-important career skills. In light of this, we designed SpeakEasy: a chatbot with some degree of intelligence that provides feedback to the user on their ability to engage in free-form conversations with the chatbot. SpeakEasy attempts to help college students improve their communication skills by engaging in a seven-minute spoken conversation with the user, analyzing the user's responses with metrics designed based on previous psychology and linguistics research, and providing feedback to the user on how they can improve their conversational ability. To simulate natural conversation, SpeakEasy converses with the user on a wide assortment of topics that two people meeting for the first time might discuss: travel, sports, and entertainment. Unlike most other chatbots with the goal of improving conversation skills, SpeakEasy actually records the user speaking, transcribes the audio into tokens, and uses macros--e.g., sequences that calculate the pace of speech, determine if the user has an over-reliance on certain words, and identifies awkward transitions--to evaluate the quality of the conversation. Based on the evaluation, SpeakEasy provides elaborate feedback on how the user can improve their conversations. In turn, SpeakEasy updates its algorithms based on a series of questions that the user responds to regarding SpeakEasy's performance. The source code for SpeakEasy can be found here: [https://github.com/HarryJeon24/SpeakEasy.git](https://github.com/HarryJeon24/SpeakEasy.git). See README for detailed instructions about setup.
## 1 Team Vision
Communication is essential in all facets of life. The way people talk, express themselves, and share ideas is at the forefront of our daily lives. Humans are a social species, which is why communicating effectively is key to maintaining relationships, getting a job, and increasing self-confidence. For instance, 95.9% of employers found written/oral communication essential for jobs, but estimated that a mere 41.6% of employees actually possessed this skill [17]. Furthermore, research suggests that improving communication and social skills can help alleviate the development of social anxiety [11]. This is where SpeakEasy comes into play.
### Overall Objective
The goal of SpeakEasy is to help people improve their communication skills through researched-backed metrics and tips. A dialogue system is perfect to achieve this goal, as it provides the perfect avenue for the user to practice and receive feedback on their communication without the social pressure of interacting with another human being. SpeakEasy first engages in a friendly conversation with the user. On top of providing great practice for small talk in a safe environment, this dialogue is used to collect information about how the user communicates and which areas can be strengthened. Such information can only be collected through a dialogue system. Then, the chatbot presents constructive feedback that the user can easily implement into future conversations. After using SpeakEasy, users will be prepared to build stronger relationships, land and keep jobs more easily, and build their confidence. As such, the overarching goal of SpeakEasy is to improve the user's overall quality of life.
### Target Audience
The target audience for our system is college students, both male and female. We decided to cater to this age group because communication skills are essential for personal growth and development. However, many individuals struggle to express themselves due to social anxiety or other speaking disorders. This has especially been heightened through the pandemic, with 88% of students suffering from stress, 44% from anxiety, and 36% from severe depression [7]. While an inability to communicate can be attributed to these disorders, Lee et al. [7] claim that a lack of communication can also be one of the determinants of the aforementioned disorders. In other words, the inability to communicate can cause anxiety and other disorders, which in turn worsen the student's communicative skills: a vicious cycle. With this in mind, we believed we optimized SpeakEasy's impact by designing it for college students.
An individual's experience in college often determines the course of the rest of their academic and personal lives. Thus, by catering our system to college students, we can help them become more confident in themselves and their abilities as they navigate different relationships, job opportunities, and personal growth. Individuals who can communicate effectively can express their feelings, needs, and ideas clearly. This, in turn, can improve their relationships and overall well-being. Studies evaluating students before, during, and after the university experience have established college as the most profound time in a person's life in which they can learn cooperative skills [8]. If learned successfully, this cooperative methodology serves as a building block from which future corporate and office skills develop. Consequently, it is imperative that college students successfully develop the communication skills necessary during the short time they have in university, and as a result, SpeakEasy likely has its most predominant impact when used by college students.
We specifically catered to university students by incorporating conversational topics that appeal to college students. Although we directed the conversation to engage with college students, everyone can benefit from interacting with our system. The ability to make conversation is a universal skill that people continue to improve on throughout the course of their lives.
## 2 Challenges
The main challenge with developing SpeakEasy was creating a chatbot that understood questions the user may ask. With the nature of SpeakEasy, it is expected that the user will try to have an engaging conversation since they know their conversational abilities will be judged. The best way to increase engagement is to ask questions - it is one of the quantitative metrics measured, after all. However, predicting every possible question the user may ask and generating a response is near impossible. To overcome this challenge, SpeakEasy has opinions and a personality. For example, this chatbot wants to live in Barcelona and loves the beach. Adding depth to SpeakEasy allows it to answer personal questions the user may ask, increasing engagement and interactivity. However, there may still be instances where the user's questions go unanswered. To guarantee that these instances were not negatively impacting the user experience too much, one aspect of the evaluation included how natural the conversation felt. Ideally, SpeakEasy is able to acknowledge what the user is saying and answer enough questions to resemble real human conversation.
Furthermore, understanding human emotion became a challenge as well. To ensure that our conversational AI operated fast enough to simulate as much of a proper conversation as possible, we attempted to minimize the number of GPT API calls. Consequently, we could not always flag the emotion of the user's input using the GPT API. Unfortunately, this results in some instances where SpeakEasy responds positively to something clearly negative, which would not happen in an actual conversation and detracts from the naturalness of the conversation. We attempted to circumvent this by designing a dialogue flow with more neutral responses. Because the entire aim of our project is to help the user improve their ability to converse in a realistic environment, maintaining the realism of our conversation was the most pertinent challenge. With this in mind, we attempted to ensure that the flow of the conversation was as seamless as possible: i.e., no abrupt transitions to other topics or off-topic dialogue; however, since we cannot account for all possible user inputs, ensuring that this was the case was no easy task. To avoid making abrupt transitions, we tried to preempt transitions to other topics by priming the user with
questions that could generate user responses that relate to the topic SpeakEasy intends to switch to.
A technical challenge arose when using Natex with the speech-to-text technology. These two technologies are incompatible, so regular expressions embedded within Macros were used to overcome this. Other technical challenges came from the differences between the Windows and Mac operating systems. For example, the string inputs for the system calls accessing the operating system, and thus the speakers, differ for each version. Consequently, we had to develop different versions for each kind of system. Additionally, package installation is far more difficult on Mac, as certain packages require complicated procedures in order to properly install the right version. Background noise also posed a huge technical challenge. If the system cannot properly pick up the audio over the background, then the string input for the GPT API calls is often garbled text--e.g., if it is noisy in the background, characters and words from other languages are sometimes picked up, which causes the GPT API to throw errors and terminate early. As a result, we tried to ensure that some of the error statements force the dialogue flow to ask further questions as opposed to simply terminating the code. Finally, SpeakEasy relies on the GPT API to process what the user is saying. Though this increases its language understanding capabilities immensely, the GPT API experiences major slowdowns at times, which negatively impacts user experience. Thankfully, the use of an API key mitigated this negative effect.
## 3 Dialogue Overview
The chatbot's dialogue flow creates a natural conversation that lasts for at least seven minutes (averages around ten minutes) and covers topics related to travel, health, and entertainment. The general idea is that two strangers meet for the first time and create small talk. Therefore, the dialogue will start with questions and phrases such as "Hello, what is your name?", "My name is...", "How are you doing?", "I am doing...". In the case that the person is a returning user, the system conducts a different greeting that incorporates a short evaluation of the user's prior experience with the bot. After the introduction, the chatbot guides the conversation into talking about health. We wanted to design the flow such that the user does not feel forced to talk about the topics; therefore, the chatbot uses the guiding questions "Is it part of your everyday routine?" and "Do you make it part of your daily schedule?" to transition into talking about health. During the conversation about health, the chatbot asks about and discusses the user's eating habits, workout routine, and work-life balance. Following these topics, the chatbot steers the conversation either towards talking about entertainment or about travel. The latter transition, for example, is done by pointing out the difficulties of maintaining a healthy diet while traveling.

During the travel conversation, the chatbot asks the user questions such as "Where are you from?", "Where would you rather live?" and "Are there any other places you would like to live in or visit?". Furthermore, the chatbot asks the user for recommendations on what to do in specific travel locations and whether they would ever consider solo travel. The chatbot is also able to share information about its persona. For example, the chatbot is able to tell the user that it is from Austria and that it wants to live in Barcelona. At the end of this conversation, the chatbot enters the feedback state of the conversation.

Figure 1: A diagram of the dialogue flow
When talking about entertainment, the chatbot is able to converse about movies and music. It first asks the user if they have a favorite movie or song, and if so, asks questions about it. For example, "Who is your favorite artist for that type of music?", "What genre is it?", and "Who is your favorite character in the movie?". The chatbot then moves on to recommending a song or movie to the user. If the user is satisfied, the chatbot either transitions over to talking about travel or ends the conversation and enters the feedback state of the conversation. The travel state can transition back over to the movie state in the event that a concept from a movie is brought up--e.g., did you know that this movie was filmed here--in order to prevent the conversation from being fully linear.
During the feedback state, the chatbot provides information about the user's conversation performance and previously determined metrics, such as how many questions the user asks, word choice, talking speed, and the level of attention that the user gives to their conversation partner. It also asks the user if it wants to hear the underlying metrics of the analysis: e.g., the specific statistics we incorporated from previous research in order to evaluate the conversation.
Lastly, the chatbot asks the user for feedback on its performance. The user is able to do so by answering questions like "Did I say anything wrong during the conversation?" and "Did you ever have a hard time understanding me?" in order to assess the chatbot's performance. These questions were chosen in line with the evidence presented in Section 5.
Figure 2 provides a sample dialogue conducted by the chatbot during the entertainment dialogue flow. Because the user has a favorite movie, the chatbot proceeds to ask questions about the movie. After some time, the chatbot ends the conversation and proceeds to give feedback to the user about their performance.

Figure 2: Sample dialogue conducted by the chatbot
## 4 Methodology
For the conversation framework, we are using EmoraSTDM [3], which allows developers to add custom macros and ontologies. One of the built-in features in EmoraSTDM is Natex, which functions similarly to regular expressions. However, we chose not to utilize Natex in our implementation. Instead, we relied on the GPT API [10] to understand user input. It is important to note that we did not use GPT to generate responses in this system. To improve conversational skills, we aimed to mimic a human conversation environment as closely as possible. Given that more people experience anxiety when talking rather than texting [13], we decided to implement speech-to-text and text-to-speech macros.
### Audio Function
In this project, we aimed to develop a chatbot capable of taking audio input and producing audio output. To achieve this, we utilized OpenAI's Whisper Automatic Speech Recognition (ASR) API [12] for transcribing audio to text and the gTTS (Google Text-to-Speech) package in Python for converting text to audio. This section details the scientific approach and methods employed in creating the chatbot. To enable the chatbot to accept audio input, we used the PyAudio library in Python to record the user's voice. The following parameters were used for recording:
* Format: 16-bit integer (paInt16)
* Channels: 1 (mono)
* Sample rate: 44,100 Hz
* Chunk size: 1,024 samples
The audio recording was saved as a WAV file, and a separate thread was created to handle the recording process. This allowed users to stop the recording by pressing the Enter key. The duration of the recording was calculated by measuring the time elapsed between the start and end of the recording process. After obtaining the audio input, we utilized OpenAI's Whisper API to transcribe the audio to text. The API was called with the Whisper ASR model "whisper-1" and the recorded audio file. The API returned a transcript of the audio, which was then assigned to a variable for further processing. For text-to-audio conversion, we employed the gTTS package in Python. The gTTS library converts text to audio using Google's Text-to-Speech API. To generate audio output, we passed the chatbot's response text to the gTTS function, specifying the following parameters:
* Text: The chatbot's response
* Language: English (en)
The gTTS function saved the synthesized audio as an MP3 file. To play the audio output, we used the "os.system()" function in Python. For Windows, we executed the "start" command, followed by the MP3 file's name, and used the "time.sleep()" function to pause the execution of the script for the duration of the audio playback, which was determined using the MP3 library to extract the length of the audio file. For macOS, we executed the "afplay" command, followed by the MP3 file's name, without the need for a sleep function. The chatbot's workflow was as follows:
1. Record the user's audio input using the MacroRecordAudio class.
2. Transcribe the audio input to text using OpenAI's Whisper API.
3. Process the text input and generate a response using the chatbot's logic.
4. Convert the text response to audio using the MacrogTTS class.
5. Play the audio output for the user.
This approach allowed us to create a chatbot capable of interacting with users through audio input and output, offering a more natural and accessible user experience.
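As a rough illustration of this workflow (not the project's actual MacroRecordAudio and MacrogTTS classes), the sketch below records audio with PyAudio using the parameters listed above, transcribes the WAV file with the Whisper API, and speaks a reply with gTTS. It assumes the pre-1.0 `openai` Python client and macOS's `afplay` player; on Windows the `start` command plus a sleep would be used instead, as described above, and all function names here are illustrative.

```python
import os
import time
import wave
import threading

import pyaudio
import openai
from gtts import gTTS

# Recording parameters taken from the description above.
FORMAT, CHANNELS, RATE, CHUNK = pyaudio.paInt16, 1, 44100, 1024

def record_until_enter(path="user_turn.wav"):
    """Record microphone audio to a WAV file until the user presses Enter."""
    pa = pyaudio.PyAudio()
    stream = pa.open(format=FORMAT, channels=CHANNELS, rate=RATE,
                     input=True, frames_per_buffer=CHUNK)
    stop = threading.Event()
    threading.Thread(target=lambda: (input("Press Enter to stop..."), stop.set()),
                     daemon=True).start()
    frames, start = [], time.time()
    while not stop.is_set():
        frames.append(stream.read(CHUNK, exception_on_overflow=False))
    duration = time.time() - start
    stream.stop_stream(); stream.close(); pa.terminate()
    with wave.open(path, "wb") as wf:
        wf.setnchannels(CHANNELS)
        wf.setsampwidth(pyaudio.get_sample_size(FORMAT))
        wf.setframerate(RATE)
        wf.writeframes(b"".join(frames))
    return path, duration

def transcribe(path):
    """Send the recorded WAV file to the Whisper ASR endpoint and return the transcript text."""
    with open(path, "rb") as audio_file:
        return openai.Audio.transcribe("whisper-1", audio_file)["text"]

def speak(text, path="bot_reply.mp3"):
    """Convert the chatbot's text response to speech and play it (macOS shown)."""
    gTTS(text=text, lang="en").save(path)
    os.system(f"afplay {path}")
```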
### Transitions
While creating the introduction state, we used GPT to get the user's name. From there we store all user names as keys in the USERS dictionary, which we use to check whether the user has previously used the system. If they are a returning user, we store their rating for their previous interaction with the bot as well as the rest of their feedback later in the dialogue flow. We also use GPT for retrieving information like how the user is doing and what they have been up to. In addition to this, we use a variety of GET and SET macros to store GPT responses in various variables, which we can use to retrieve the information later through our .pkl file.
For the Health transition, we again utilize GPT as well as several GET and SET macros to store information about the user's health (i.e. the user's lifestyle, whether or not they exercise, their eating habits, etc). We also utilized IF macros to determine whether or not a user indicated yes or no to a particular question.
The Entertainment section works in a similar way, using GPT and GET/SET macros to save user information. It also utilizes the Spotify API for song suggestions and The Movie Database (TMDB) API for movie recommendations.
Once again, GPT and macros were utilized to build the Travel portion of the conversation. SpeakEasy prompts the user to talk about where they are from and where they have traveled or want to travel. Beyond this, SpeakEasy can discuss favorite activities at these locations and the user's specific travel habits. To be able to acknowledge what the user is saying, many SET and GET macros were employed to store and retrieve responses. GPT was only used to process what the user is saying, not to generate responses. Beyond this, SpeakEasy's personality shines through in the travel portion. We made this chatbot have a home country, and even specific travel wishes.
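For illustration, the snippet below sketches what one of these SET/GET-style macros might look like. It assumes emora_stdm exposes a `Macro` base class with a `run(self, ngrams, vars, args)` hook (as in its public examples) and uses a hypothetical `users.pkl` file for the USERS dictionary; the actual macro names, stored fields, and wiring into the dialogue flow differ in the real implementation.

```python
import pickle
from emora_stdm import Macro

USERS_FILE = "users.pkl"  # hypothetical path for the persisted USERS dictionary

def load_users():
    """Load the USERS dictionary from disk, or start fresh if no file exists yet."""
    try:
        with open(USERS_FILE, "rb") as f:
            return pickle.load(f)
    except FileNotFoundError:
        return {}

class MacroIsReturningUser(Macro):
    """Check whether the name stored by the dialogue flow has been seen before."""
    def run(self, ngrams, vars, args):
        return vars.get("username") in load_users()

class MacroSaveUserFeedback(Macro):
    """Persist the current user's rating and feedback under their name in USERS."""
    def run(self, ngrams, vars, args):
        users = load_users()
        name = vars.get("username", "unknown")
        users.setdefault(name, {}).update({
            "last_rating": vars.get("rating"),
            "last_feedback": vars.get("feedback"),
        })
        with open(USERS_FILE, "wb") as f:
            pickle.dump(users, f)
        return True  # signal success to the Natex expression that called the macro
```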
### Feedback
A set of macros is designed to help SpeakEasy provide advice to users by evaluating their conversational skills. We focus on five aspects for evaluation: awkward transition, verbal tic, acknowledgment, number of questions, and number of words per minute. This evaluation matrix is based on three papers [4], [16], [15].
The work by Garrels [4] highlights the importance of intonation, smooth transitions, and asking questions for good conversation. Based on these findings, we created the MacroAwkward, MacroNumQuestions, and MacroAVGToken classes. The MacroAwkward class checks for awkward transitions by comparing the user's transitional words to a predefined list of awkward transitional words, which were suggested by ChatGPT and can be expanded [4]. If the user uses fewer than 10 awkward transitions, they are considered to be doing well. The MacroNumQuestions class counts the number of question marks in user responses and advises the user to ask more questions if the ratio of questions to total utterances is less than 0.39, as this was the high question rate that showed positive effects on liking towards the conversation partner [5]. The MacroAVGToken class measures the user's intonation by calculating the words per minute (WPM) and providing appropriate feedback based on a range of 120 to 150 WPM, which is the average American talking speed in conversation according to the cited source [2].
Research by Russell et al. [16] demonstrates the negative impact of overusing specific words. To address this aspect, we created the MacroTic class, which calculates the frequency of each token the user used, excluding common words like articles. The class provides feedback on the overuse of certain words based on the sorted frequency list of tokens.
Experiments conducted by Rubin and Martin [15] reveal the importance of acknowledging and showing empathy towards the speaking partner. To measure these elements, we implemented the MacroAcknow
class. One way to show acknowledgment and empathy is by mentioning what the speaking partner said previously. The MacroAcknow class calculates the language style matching (LSM) using a function words list commonly used in conversation, comparing the user's responses to SpeakEasy's responses. If the LSM is greater than or equal to 0.8, the user is considered to have successfully shown attention to their partner under symmetric, cooperative, social conditions, where the two people had to accomplish a task. This is in contrast to asymmetric, competitive, or negotiation conditions, where the LSM may not hold the same impact [14]. Otherwise, they are advised to show more attention by acknowledging what their partner said previously.
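A simplified version of the underlying calculations (not the macro classes themselves) is sketched below. The thresholds are the ones quoted above, while the tokenization, the stopword and function-word lists, and the exact LSM formula are assumptions made for the sketch.

```python
import re
from collections import Counter

QUESTION_RATIO_TARGET = 0.39     # high-question rate from [5]
WPM_RANGE = (120, 150)           # average conversational pace from [2]
LSM_TARGET = 0.8                 # attention threshold from [14]

def question_ratio(user_turns):
    """Fraction of user utterances that contain at least one question mark."""
    return sum("?" in turn for turn in user_turns) / max(len(user_turns), 1)

def words_per_minute(user_turns, total_seconds):
    """Approximate speaking pace from the total token count and recording duration."""
    n_words = sum(len(turn.split()) for turn in user_turns)
    return 60.0 * n_words / max(total_seconds, 1e-9)

def verbal_tics(user_turns, stopwords, top_k=5):
    """Most frequent non-stopword tokens, a rough proxy for over-relied-on words."""
    tokens = [w for turn in user_turns
              for w in re.findall(r"[a-z']+", turn.lower()) if w not in stopwords]
    return Counter(tokens).most_common(top_k)

def language_style_matching(user_text, bot_text, function_words):
    """Average agreement in function-word usage rates between the two speakers."""
    def rate(text, w):
        words = text.lower().split()
        return words.count(w) / max(len(words), 1)
    scores = [1 - abs(rate(user_text, w) - rate(bot_text, w))
              / (rate(user_text, w) + rate(bot_text, w) + 1e-9)
              for w in function_words]
    return sum(scores) / max(len(scores), 1)
```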
## 5 Evaluation
Because we all designed separate evaluation plans for Quiz 6, we chose one plan's statistics to present here and detail our reasoning behind the evaluation. Prior to defining the five categories for the evaluation, we gathered information about the diversity of the participants that evaluated our chatbot SpeakEasy. Each member was assigned to gather ten individuals to evaluate our system. Of the ten total participants in the context of this evaluation plan, six were female and four were male. The age of the participants ranged from 20-22 while the average age was 21.2 years old. All ten participants came from middle to upper-middle class backgrounds. Four of the participants were of Caucasian descent, five of Asian descent, and one of African American descent. The context of all ten conversations was casual, the purpose of the conversations was to improve conversational skills, and the power dynamics of the conversations were such that all participants were our peers.
The metric chosen was the Likert scale from 1-5. In class, we went over the Likert scale from 1-3; however, we decided to expand it to 1-5 because, according to one of the original papers using the Likert scale, the optimal metrics were (1) Strongly Disagree, (2) Disagree, (3) Undecided, (4) Agree, and (5) Strongly Agree. Jamieson [6] argues that the extra feeling of intensity (e.g., between disagree and strongly disagree) is important because it allows for better statistical analysis. Furthermore, according to their analysis, the difference in intensity between each category is roughly even, allowing for a well-distributed interval for the variable.
In terms of satisfaction in a conversation with a chatbot, according to Mohelska and Sokolova [9], the main attributes that must be fulfilled are whether the chatbot met expectations and left an impression. Consequently, the two questions we asked were "did the chatbot meet expectations" and "did the chatbot leave an impression". All ten participants gave a five to both these questions; however, from a qualitative perspective, it was less the dialogue flow and conversation, and more the voice technology that stunned the user. As a result, in the future, evaluations of conversation satisfaction should ask questions specific to dialogue flow. Mohelska and Sokolova [9] also recommend asking about the conversation duration and number of conversation turns, which we did. All but one participant strongly agreed that the conversation duration was appropriate, while the remaining user agreed that the conversation duration was appropriate. All ten people evaluating SpeakEasy said that the number of conversation turns was appropriate.
According to Vanjani et al. [18], the biggest limitation in evaluation techniques was evaluating how natural the chatbots are. They argue that chatbots can be conversationally unnatural in two ways: the chatbot can give garbled responses or give too accurate of an answer. As a result, we decided to ask, "did the chatbot give garbled responses" and "were the answers the chatbot gave unnaturally accurate". The answers for whether the chatbot gave garbled responses ranged from 2-4 with the average answer being 2.8. The biggest reasons why the participants responded undecided or disagree were that the chatbot failed to respond accurately to questions or ignored them completely. This is something that the chatbot needs to improve on going forward. Specifically, we can use the GPT API to first determine whether the user is asking a question, then use the GPT API to determine if the user is on topic, and finally randomly select a response from an answer bank that we construct.
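One way to prototype that fix is sketched below: two lightweight yes/no classification calls followed by a canned reply. It assumes the pre-1.0 `openai` chat API and a purely illustrative `ANSWER_BANK`; neither is part of the current system.

```python
import random
import openai

# Hypothetical fallback answers for on-topic questions SpeakEasy cannot answer directly.
ANSWER_BANK = [
    "That's a great question - I'd say it depends, but I'd love to hear what you think.",
    "I'm honestly not sure, but it's something I've wondered about too.",
]

def classify(prompt, user_utterance):
    """Ask the model a yes/no question about the user's utterance and return True for 'yes'."""
    reply = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "system", "content": prompt},
                  {"role": "user", "content": user_utterance}],
    )["choices"][0]["message"]["content"]
    return reply.strip().lower().startswith("yes")

def fallback_response(user_utterance, current_topic):
    """Return a bank response only when the user asked an on-topic question we cannot answer."""
    is_question = classify("Answer yes or no: is the following utterance a question?", user_utterance)
    on_topic = classify(f"Answer yes or no: is the following utterance about {current_topic}?", user_utterance)
    if is_question and on_topic:
        return random.choice(ANSWER_BANK)
    return None  # let the normal dialogue flow handle it
```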
When generating automatic conversation evaluators, Yi et al. [20] determined that the four most important determinants of chatbot coherence were whether the system response is comprehensible, on topic, interesting, and whether the user wanted to continue the conversation. As a result, we asked four questions: (1) "Are SpeakEasy's responses comprehensible", (2) "Are SpeakEasy's responses on topic", (3) "Are SpeakEasy's responses interesting", and (4) "Did you want to continue the conversation". The average response for comprehensibility was a 4.8. The average response for on topic was a 3.8, with the answers ranging from 2-5. The main qualitative issues were the aforementioned cases in which SpeakEasy fails to respond to questions, and cases in which SpeakEasy switches topics without fully closing out the previous topic. The average response for whether the conversation was interesting was a 4.2. Participants that did not give a strongly agree mainly stated that while SpeakEasy sometimes gave a response specific to the participant's response, there are some instances in which it could give a more user-centric response. All five participants that responded 5 to whether they wanted to continue the conversation gave qualitative feedback mostly stating that they loved being able to have a conversation in which they could actually speak to the chatbot.
When we designed SpeakEasy, we wanted it to be as empathetic as possible because we felt that a person cannot be good at conversation without introducing some degree of empathy into the conversation. The main question asked by a study evaluating chatbot empathy conducted by Agarwal et al. [1] was whether the chatbot disregarded any of the feelings of the user. As a result, we asked the following questions: "Did SpeakEasy disregard any emotion you interjected into the conversation" and "Did SpeakEasy reciprocate any emotions that you conveyed". For the former question, results varied from 1-5 with the average response being 3.1. A particularly concerning qualitative response was that, in response to the user saying that they were crying, SpeakEasy responded with "I enjoy crying". We must fix this in the future as this error can drive away potential users. For the latter question, the average response was a 3.8 with no specific qualitative concerns.

Wahde and Virgolin [19] argue that the interpretability of conversational AI depends on a variety of factors, most importantly whether the chatbot can explain its reasoning, whether the facts it presents make sense to the user, and whether it avoids unwanted bias. Therefore, we asked the following questions: (1) "was the chatbot's reasoning behind its feedback well-explained", (2) "were the facts presented in the feedback clear", and (3) "was there any bias in the analysis". All users but one gave a 5 for the first question, with the one dissenting participant giving it a 4 and stating that the LSM score was unclear. This score can be explained better going forward. For questions 2 and 3, all users gave a 5, saying that the feedback was accurate and easily understood.

Based on the aforementioned analyses, we need to improve on a few specific areas. Most importantly, we need to redesign SpeakEasy to be able to recognize and answer questions better. A more minor upgrade would be designing our chatbot to better recognize the specifics of the user's response and to change states (the number of which would increase) to respond to the user accordingly. Specifically, we need to recognize negative responses better and avoid responding in a positive tone, as that can appear condescending and scare off users. Finally, we should explain the LSM score better, and in future evaluations we should ask questions more explicitly about the dialogue flow.
## 6 Novelty
During free conversation with SpeakEasy, the chatbot saves the user's and the bot's conversation logs. These logs are then used to evaluate and provide feedback on the user's conversational skills, focusing on five aspects: awkward transition, verbal tic, acknowledgment, number of questions, and number of words per minute. This evaluation matrix is based on three papers [4], [16], [15].
The macros that implement this matrix (MacroAwkward, MacroNumQuestions, MacroAVGToken, MacroTic, and MacroAcknow), along with the thresholds they use, are the ones described in Section 4.3.
The idea of improving communication through technology is not new. However, there is a clear market gap for the use of dialogue systems to provide personalized feedback on conversation. There are two big competitors that share similarities with SpeakEasy, yet neither provides the same service. Rocky.AI is a personal development coaching app that provides help on improving communication. However, its main purpose is to help the user use self-awareness and reflection to reach their personal goals. Communication is just a small portion of this. On the other hand, SpeakEasy provides much more in-depth feedback and is able to narrow down exactly what the user needs to work on without relying on self-assessment questions. Rather, our metrics are based on research and use a quantitative approach to assessing communication skills. Furthermore, SpeakEasy provides the user with the opportunity to practice small talk, something Rocky.AI lacks. Rocky.AI is a self-help coach app that focuses on self-reflection, while SpeakEasy is a dialogue system that provides personalized feedback in hopes of improving the user's communication. Though both systems aim to improve communication, the approach Rocky.AI takes is entirely different from what SpeakEasy aims to do.

Yoodli is our most similar competitor. It is an AI speech coach that records the user while they are speaking in a professional setting, and then uses this information to provide personalized feedback on how the user can communicate more effectively. SpeakEasy and Yoodli share the same formulaic approach to generating feedback. That is, both systems assess the user's real dialogue to give appropriate feedback. The biggest difference is that Yoodli focuses on improving professional speech, while SpeakEasy aims to improve communication of all types. While superficially this difference may seem minute, it actually drives the novelty of SpeakEasy. Presentations, which Yoodli specializes in, rely more on capturing the audience's attention by changing the tone, volume, and pace of the speaking voice [14]. In other words, the content of the speaking is less important in public speaking: captivating the audience trumps all. SpeakEasy will generally improve all forms of communication, but focuses mainly on conversations. Engaging conversations do not revolve around the quality of speech but instead build on the creativity of the talk [4]. As a result, SpeakEasy focuses heavily on developing extensive ontologies to capture a wide array of conversations and then performing a series of analyses in order to assess the quality of the subjects discussed. This skill itself is more important than the public speaking skills developed by Yoodli, as conversations are essential to mental health [13]. In short, SpeakEasy assesses communication in a casual conversation setting, while Yoodli provides feedback on interviews and public speaking.
## 7 Contributions
The development of SpeakEasy can be essentially divided into three main processes: development of the input audio and transcribing process, construction of the analysis Macros, and developing each of the main conversational transition topics. However, in order to abide by the central scrum tenants of having working software at the end of each sprint and not working on the software one step of time to avoid integration errors, we worked on several component simultaneously with different members working on different elements. Hyunbae Jeon and Max Bagga were primarily responsible for developing the speech to text process and
developing the analysis macros. Hyunbae developed the Windows version of the software while Max developed the Mac version of the software. Hyunbae contributed primarily by developing the initial framework for the audio input and output and developing the analysis Macros. Max worked primarily by refining the audio component, integrating the dialogue flows, and working on the supplemental Macros needed to make the dialogue flow work. Victoria Ploerer, Rhea Ramachandran, and Yella Diekmann developed the conversational transitions. Specifically, Rhea created the introduction and health transitions, while Victoria completed the Travel transition, and Yella created the Entertainment transition. Upon completing each transition, we all worked together to connect them in order to carry out a smooth transition from one topic to another. Despite the given roles, everyone was very involved in all aspects, and each of us contributed 20 percent to guarantee the chatbot functioned as well as it could.
|
2309.04511 | Systematic Review of Techniques in Brain Image Synthesis using Deep
Learning | This review paper delves into the present state of medical imaging, with a
specific focus on the use of deep learning techniques for brain image
synthesis. The need for medical image synthesis to improve diagnostic accuracy
and decrease invasiveness in medical procedures is emphasized, along with the
role of deep learning in enabling these advancements. The paper examines
various methods and techniques for brain image synthesis, including 2D to 3D
constructions, MRI synthesis, and the use of transformers. It also addresses
limitations and challenges faced in these methods, such as obtaining
well-curated training data and addressing brain ultrasound issues. The review
concludes by exploring the future potential of this field and the opportunities
for further advancements in medical imaging using deep learning techniques. The
significance of transformers and their potential to revolutionize the medical
imaging field is highlighted. Additionally, the paper discusses the potential
solutions to the shortcomings and limitations faced in this field. The review
provides researchers with an updated reference on the present state of the
field and aims to inspire further research and bridge the gap between the
present state of medical imaging and the future possibilities offered by deep
learning techniques. | Shubham Singh, Ammar Ranapurwala, Mrunal Bewoor, Sheetal Patil, Satyam Rai | 2023-09-08T14:20:01Z | http://arxiv.org/abs/2309.04511v1 | # Systematic Review of Techniques in Brain Image Synthesis using Deep Learning
###### Abstract
**Abstract -- This review paper delves into the present state of medical imaging, with a specific focus on the use of deep learning techniques for brain image synthesis. The need for medical image synthesis to improve diagnostic accuracy and decrease invasiveness in medical procedures is emphasized, along with the role of deep learning in enabling these advancements. The paper examines various methods and techniques for brain image synthesis, including 2D to 3D constructions, MRI synthesis, and the use of transformers. It also addresses limitations and challenges faced in these methods, such as obtaining well-curated training data and addressing brain ultrasound issues. The review concludes by exploring the future potential of this field and the opportunities for further advancements in medical imaging using deep learning techniques. The significance of transformers and their potential to revolutionize the medical imaging field is highlighted. Additionally, the paper discusses the potential solutions to the shortcomings and limitations faced in this field. The review provides researchers with an updated reference on the present state of the field and aims to inspire further research and bridge the gap between the present state of medical imaging and the future possibilities offered by deep learning techniques.**
Medical Imaging, Deep learning, 3D Compounding, Image synthesis.
## I Introduction
The review discusses significance of medical imaging in the diagnosis and treatment planning of various diseases and the various imaging modalities utilized. Other topics discussed includes tools and architectures developed in deep learning around medical applications.
Medical imaging process can be dissected into several steps involving capturing the image, processing the image, synthesizing the image if necessary, and finally using an algorithm to run a diagnosis on it. The review breaks down these processes and talks about innovations in deep learning that are enhancing these processes. Additionally, the review highlights the challenges associated with these processes such as cost and associated risks. Conventional synthesis approaches utilize nonlinear models, such as dictionary learning and random forest, to process handcrafted medical image features selected by experts. However, these manual features have a limited ability to represent the complex information present in medical images, thus affecting synthesis performance.
Deep learning-based methods have been successful in addressing these limitations by automatically learning features with sufficient descriptive power through the training of mapping models. These advanced deep-learning models have improved the quality of medical image synthesis and decreased the associated costs. The literature review will examine recent research in this field and discuss the potential for deep learning-based medical image synthesis to improve diagnostic accuracy and reduce invasiveness in medical procedures.
In recent years, medical imaging field has seen significant advancements with the increasing use of deep learning techniques enhancing diagnostic accuracy and reducing the invasiveness of medical procedures. The studies reviewed in this section provide an overview of recent developments in deep learning applications for medical image analysis.
In the context of the brain, its applications include brain tumour segmentation, lesion identification, and radiation therapy planning [1]. Deep neural networks have come to outperform human capabilities in computer vision tasks such as disease diagnosis and are being explored to deliver precision medicine [10]. Several tools, architectures, and algorithms have been developed to aid research in the field. "A review of the application of deep learning in medical image classification and segmentation" discusses them in detail. One of the most popular structures for image classification is the convolutional neural network (CNN). AlexNet, a CNN-based deep learning model proposed in 2012, popularized CNNs. Other popular CNN structures include the network-in-network (NIN), GoogLeNet, VGGNet, SegNet, U-Net, and ResNet. Some popular deep-learning frameworks used in the
field of medical imaging include Caffe, TensorFlow, and PyTorch. Caffe is known for its high performance, seamless switching between CPU and GPU modes, and support for multiple platforms. TensorFlow is an open-source, widely used library that provides powerful visualization capabilities and support for heterogeneous distributed computing. PyTorch is specifically targeted at GPU-accelerated deep neural network programming, and it has a dynamic computation graph that can be changed in real-time. These advancements have greatly aided the field of deep-learning image research in medical imaging [14].
## 2 Existing works
### Image Processing
There are many state-of-the-art deep learning architectures for the segmentation and classification of medical images, which use techniques to extract information from images and present it in an efficient and effective form that can assist doctors in diagnosing and predicting the risk of diseases more accurately.

However, there are also challenges and research issues related to the use of deep learning in medical imaging, including the availability of big data, recent deep learning algorithms modelled on the human brain, and processing power [15]. To overcome the issue of big data unavailability, the field needs to shift from supervised to unsupervised or semi-supervised methods, as deep learning applications rely on extremely large datasets whose availability is not always guaranteed.
The study on the topic by Andreasen talks about the basic issues with the analysis of structural and functional imaging data in neuroimaging studies. Data transmission, boundary detection, volume estimates, 3-D reconstruction and presentation, surface and volume rendering, shape analysis, and picture overlay are some of these issues.
The study asserts that these problems require the application of different image analysis methods, implemented in a set of software programs, to conduct neuroimaging research using magnetic resonance imaging and single-photon emission computed tomography [1]. It also describes a group of software programs called BRAINS, designed to provide a comprehensive solution for these problems [1].
Noise reduction in medical images is important because noise can interfere with the interpretation of the image and lead to inaccurate diagnoses or treatment decisions. Noise may result from a variety of factors such as electronic noise from image sensors, quantization errors, and compression artifacts. Noise can make it difficult to distinguish between important structures or features in an image, making it harder to make a diagnosis or treatment decision. Additionally, noise can reduce the visibility of small or subtle structures or features in an image, which can be critical for accurate diagnoses or treatment decisions.
Furthermore, noise can also increase the risk of false positives or false negatives, which can have serious consequences for patients. Therefore, it is crucial to reduce noise in medical images to ensure accurate and reliable diagnoses or treatment decisions. The introduction of methods like fuzzy filters has provided a simple and efficient option for noise reduction of medical images [21].
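As a rough illustration of the idea (not the specific fuzzy filters evaluated in [21]), the sketch below applies a triangular fuzzy membership centered on each window's median, so that outlier pixels contribute little to the smoothed value.

```python
import numpy as np

def triangular_fuzzy_filter(image, size=3):
    """Weighted window average where weights fall off linearly with distance from the window median."""
    pad = size // 2
    padded = np.pad(image.astype(float), pad, mode="reflect")
    smoothed = np.empty(image.shape, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            window = padded[i:i + size, j:j + size]
            center = np.median(window)
            spread = max(float(np.max(np.abs(window - center))), 1e-12)
            weights = 1.0 - np.abs(window - center) / spread  # triangular membership in [0, 1]
            smoothed[i, j] = np.sum(weights * window) / (np.sum(weights) + 1e-12)
    return smoothed
```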
Qualitative and quantitative volumetric analysis of brain images can be carried out for the development, implementation, and validation of an image-processing system. The system allows for the visualization and quantitation of global and regional brain volumes and uses techniques such as automated adaptive Bayesian segmentation, normalization, and tessellation of segmented brain images into the Talairach space, and a hybrid method combining a region-of-interest approach and voxel-based analysis. The goal of this image-processing system is to measure the magnitude, rate, and regional pattern of longitudinal changes in the brain, and it could be useful in understanding and detecting the presence of abnormalities in brain structure due to aging or disease [10]. The figure below illustrates a system overview of the entire model.
The development and implementation of an automated image processing and quality control pipeline for brain images obtained from various modalities, using a common imaging protocol and processing pipelines, is described in [11]. The paper proposes several well-designed pipelines that help convert the raw data from the imaged subjects into useful summary information, and these pipelines are available for use by other researchers.
Figure 1: Noise reduction using fuzzy filters [21]
Fig 2. The image processing system
### _Addressing problems with ultrasound imaging_
Brain ultrasound scans are difficult to capture with accuracy simply because the skull blocks some of the sound signals, thus adding in noise. The image processing techniques discussed above are a crucial step to removing that noise.
There are several other modalities that try to tackle the problem in an alternative way.
Transcranial Doppler (TCD) ultrasonography has been used as a non-invasive method for measuring blood flow and cerebrovascular hemodynamic within the basal arteries of the brain. TCD has limitations such as its operator dependency and the inadequate acoustic windows prevalent in certain populations, which can hinder its more widespread use. It is concluded that TCD is an essential tool that can be used in the diagnosis and management of various cerebrovascular disorders and in research settings. (Purkayastha and Sorond et al., 2013).
Researchers reviewed the efforts to circumvent the blood-brain barrier (BBB) through the design of new drugs and the development of more sophisticated delivery methods, and additionally highlighted the recent advances in non-invasive, targeted drug delivery by MRI-guided ultrasound-induced BBB disruption. However, the review also acknowledged the limitations of focused ultrasound, such as the need for an acoustic window made by a craniotomy and the trade-off between frequency and focal spot size (Vykhodtseva, McDannold, and Hynynen, 2008).
### _2D to 3D constructions:_
After discussing image acquisition and image processing, a recent development has been the creation of 3D images from 2D medical images, which helps clinicians understand the images better and make better decisions. This is particularly difficult to do for a regular patient due to the need for several slowly captured images along a linear axial motion, which are then combined into a 3D scan. Several papers highlight this problem and try to solve it.

2D to 3D image reconstruction is a technique used in medical imaging to convert a series of 2D images into a single 3D image. The techniques used include multi-planar reformatting, volume rendering, maximum intensity projection, surface rendering, shape-from-shading, cone-beam CT, model-based algorithms, and deep learning-based methods. 2D to 3D reconstruction can be used in various medical imaging modalities such as CT, MRI, and ultrasound. The benefits include improved diagnostic accuracy and treatment planning, but it also has some limitations such as increased computation time, storage requirements, and reliance on the quality of the 2D images.
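Two of the simplest of these operations, maximum intensity projection and multi-planar reformatting, reduce to array manipulations once the slices are stacked into a volume. The sketch below assumes a NumPy volume of shape (slices, rows, cols) with aligned, evenly spaced slices; it is a toy illustration rather than a clinical implementation.

```python
import numpy as np

def stack_slices(slices_2d):
    """Stack a list of aligned 2D slices into a 3D volume of shape (n_slices, rows, cols)."""
    return np.stack(slices_2d, axis=0)

def maximum_intensity_projection(volume, axis=0):
    """Project the brightest voxel along the chosen axis (MIP)."""
    return volume.max(axis=axis)

def multiplanar_reformat(volume, index, plane="axial"):
    """Extract an axial, coronal, or sagittal plane from the volume (simple MPR)."""
    if plane == "axial":
        return volume[index, :, :]
    if plane == "coronal":
        return volume[:, index, :]
    if plane == "sagittal":
        return volume[:, :, index]
    raise ValueError(f"unknown plane: {plane}")
```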
Heimann et al. proposed several new ideas in the field, such as the use of statistical shape models (SSMs) as a robust tool for the segmentation of medical images. Their work considers the methods required to construct and use these 3D SSMs while concentrating on landmark-based shape representations and thoroughly examining the most popular variants of Active Shape and Active Appearance models. It also describes several alternative approaches to statistical shape modelling. The paper discusses shape representation and correspondence, model construction, local appearance models, and search algorithms, and provides an overview of the current state of the art in the field.
It highlights that establishing dense point correspondences between all shapes of the training set is the most challenging part of 3D model construction and one of the major factors influencing model quality. It also discusses that manual landmarking is getting increasingly unpopular due to the tedious and time-consuming expert work required and the lack of reproducibility of the results. A study on the topic by Krucker describes a new technique called 3D spatial compounding of ultrasound images using image-based nonrigid registration. The technique is used to overcome the limitations of resolution during the compounding of ultrasound images. The method uses volumetric ultrasound data acquired by scanning a linear matrix array probe in the elevational direction in a focal lesion phantom and in a breast in vivo. The study shows that the
technique is successful in enabling high spatial resolution in 3D spatial compounding of ultrasound images (Krucker and Charles et al., 2000).
### _Medical image Synthesis:_
A deep learning-based approach can be taken for MRI synthesis from brain computed tomography (CT) images for magnetic resonance (MR)-guided radiotherapy. A method can be employed for synthesizing brain MRI images from corresponding planning CT (pCT) images using deep learning methods. Several different deep learning models were applied to implement this task, including CycleGAN, the Pix2Pix model, and U-Net. The methods were evaluated using several metrics, including mean absolute error (MAE), mean squared error (MSE), structural similarity index (SSIM), and peak signal-to-noise ratio (PSNR). Overall, the authors conclude that the proposed method has the potential to improve patient positioning in radiotherapy by generating synthetic brain MRI images from CT (Li et al., 2020).
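The evaluation metrics mentioned above can be reproduced in a few lines. The sketch below uses scikit-image for SSIM and PSNR and assumes the synthetic and reference images are already co-registered NumPy arrays on the same intensity scale.

```python
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def synthesis_metrics(reference, synthetic):
    """Return MAE, MSE, SSIM and PSNR between a reference MRI and a synthesized MRI."""
    reference = reference.astype(float)
    synthetic = synthetic.astype(float)
    data_range = reference.max() - reference.min()
    return {
        "MAE": float(np.mean(np.abs(reference - synthetic))),
        "MSE": float(np.mean((reference - synthetic) ** 2)),
        "SSIM": float(structural_similarity(reference, synthetic, data_range=data_range)),
        "PSNR": float(peak_signal_noise_ratio(reference, synthetic, data_range=data_range)),
    }
```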
## III Brain imaging applications of Deep learning:
Deep learning has been widely used in a variety of medical imaging applications, including brain tumour detection and classification. Here's a list of the latest applications where deep learning has helped with the diagnosis and disease detection of the brain:
* Deep learning-based brain tumour classification (B Kokila et al, 2021).
* Brain tumour segmentation and grading of lower-grade glioma (Mohamed et. al, 2020).
* Classification of CT brain images to consolidate 2D images along spatial axial directions and 3D segmented blocks. (Gao et. al, 2016).
* Diagnosis of Alzheimer's disease via brain imaging (Dan et. al, 2020).
* Multimodal Brain Imaging Classification using EEG signals (Jiang et. al., 2020).
* Brain Cancer Classification (Haq et. al.,2022).
* Cerebellum Segmentation from Foetal Brain Images (Sreelakshmy et. al, 2022).
Fig 4: EP_IMG-GAN architecture (Luo et. al, 2021)
* Semi-supervised deep learning of brain tissue segmentation. [11].
* Developing a brain atlas. [12].
## IV Future of Transformers in Medical Imaging
Transformers are the latest buzz in deep-learning research, but their use is still in its nascent phase. Since the introduction of the transformer architecture in 2017 in the paper by Vaswani et al., it has become one of the most dominant deep-learning models in the research community. The following figure demonstrates the transformer architecture.
Transformer models have been gaining popularity in recent years and there is a growing interest in using them in medical imaging as well. The future of Transformers in medical imaging looks promising as they have the potential to
revolutionize the field by providing more accurate and efficient analysis of medical images. However, the use of Transformers in medical imaging is still in its infancy and there are several challenges that need to be addressed. They have become popular in medical imaging because of their ability to preserve contextual and edge information, which is critical for providing clinical information. Additionally, the use of self-supervised learning and multi-scale features in Transformer-based approaches has also shown promising results in cross-modality image synthesis.
The adoption of Transformers in medical imaging is limited due to the need for large amounts of well-curated training data. Obtaining labelled data for medical imaging can be expensive and difficult. The problem can be addressed by applying transfer learning aided with pretraining on a self-supervised task using large amounts of unlabelled medical data. Another challenge is the computational requirements of Transformers. These models are known to be computationally intensive and require significant computational resources. This can be a major obstacle to the widespread adoption of these models in medical imaging. Privacy concerns also need to be addressed when working with medical imaging data. Patient data is sensitive and must be always protected. This requires careful consideration of data storage, access, and sharing protocols.
The future of Transformers in medical imaging is promising and it will be interesting to see how these models continue to evolve and improve in the coming years. The use of self-supervised transfer learning with pretraining on large amounts of unlabelled medical data and the integration of edge-preserving mechanisms and multi-scale features have the potential to revolutionize medical imaging by providing a more accurate and efficient analysis of medical images.
A recent paper published in July 2022 proposes Brainformer as the latest fMRI-based hybrid Transformer Architecture for universal generalizable brain disease classification and shows good performance over different datasets and classification tasks. The results are achieved by first modelling the local cues within each brain region through 3D CNN, capturing the global relations among distant regions through two global attention blocks, and then applying a data normalization layer to handle the multisite data. [1].
## V Future Scope and discussion
Deep learning has the potential to revolutionize medical imaging by automating the analysis of medical images and improving diagnostic accuracy. Some specific areas where deep learning is being applied or has the potential to be applied include computer-aided diagnosis, image segmentation, object, or anomaly detection, etc.
Fig 5: Transformer architecture [23]

One of the most promising models for brain image synthesis can be the Transformer architecture, which is a popular choice for natural language processing tasks. The key innovation of the Transformer is its use of self-attention mechanisms, which allow the model to weigh the importance of different parts of the input when making a prediction. This contrasts with traditional recurrent neural networks (RNNs), which rely on fixed-length context windows and can struggle to handle inputs of variable length. The Transformer's ability to efficiently process long sequences of data and its ability to be parallelized make it well-suited to tasks such as machine translation. The Transformer architecture differs from other deep learning techniques in several key ways: 1. Self-Attention, 2. Parallelization, 3. Handling sequential data, 4. Pretraining, 5. Multi-Head Attention. The Transformer architecture has been extensively used in pre-training models such as BERT and GPT-3, which have been fine-tuned for various NLP tasks with good results.
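The self-attention operation at the heart of this architecture is compact enough to write out directly. The NumPy sketch below implements single-head scaled dot-product attention as defined by Vaswani et al., leaving out the learned projections, masking, and multi-head machinery.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Single-head attention: softmax(Q K^T / sqrt(d_k)) V for a sequence of n tokens with key dimension d_k."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                    # (n, n) pairwise relevance of every position to every other
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # row-wise softmax
    return weights @ V                                 # each output is a weighted mix of all value vectors

# Toy usage: a "sequence" of 4 tokens with 8-dimensional embeddings attending to itself.
x = np.random.randn(4, 8)
out = scaled_dot_product_attention(x, x, x)            # shape (4, 8)
```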
An approach that is becoming widely popular is the use of statistical shape models (SSMs) for 3D medical image segmentation. SSMs are considered a robust tool for the segmentation of medical images, and the authors reviewed the techniques around 3D SSMs. The approach uses landmark-based shape representations and active shape and active appearance models. They also described several alternative approaches to statistical shape modelling. However, they highlighted that establishing dense point correspondences between all shapes of the training set is generally the most challenging part of 3D model construction and one of the major factors influencing model quality, and that manual landmarking is becoming increasingly unpopular due to the tedious and time-consuming expert work required and the lack of reproducibility of the results.
Another approach discussed in this review that can be useful is the use of deep learning for cross-modality MRI synthesis. The paper "Edge-preserving MRI image synthesis via the adversarial network with iterative multi-scale fusion" proposed a novel approach for cross-modality MRI synthesis using a generative adversarial network (GAN) called EP_IMF-GAN. The GAN is designed to preserve edge information, which is critical for providing clinical information, by incorporating an auxiliary task of generating the corresponding edge image of the target modality. Results from experiments on the BRATS dataset showed that the proposed EP_IMF-GAN method outperforms state-of-the-art image synthesis approaches in both qualitative and quantitative measures.
While these models, approaches, and problems have shown promising results, they also have their limitations. The Transformer architecture's reliance on the quality of the 2D images and the increased computation time and storage requirements are some of the limitations of 2D to 3D image reconstruction techniques. Additionally, SSM's reliance on manual landmarking and the lack of reproducibility of the results are some of the limitations of SSM. The limitations of TCD such as its operator dependency and the inadequate acoustic windows prevalent in certain populations can hinder its more widespread use. It is understood that deep learning-based approaches have a huge potential to impact the field of medical image analysis by improving their speed, accuracy, and efficiency and greatly improving patients' lives in the future, but it needs to go a long way to become a blind-use tool for medical imaging.
## VI Conclusion
The development of advanced imaging techniques and the advent of deep learning has led to significant advancements in the field of medical imaging. In this review, we have discussed several models, approaches, and problems related to medical imaging that have been proposed in recent literature.
The review also discussed the significance of 2D to 3D image reconstruction techniques in medical imaging. The 2D to 3D image reconstruction techniques discussed include multi-planar reformatting (MPR), Volume rendering, Maximum intensity projection (MIP), Surface rendering, Shape-from-shading (SFS), Cone-beam CT, Model-based algorithms, and Deep Learning-based methods. These techniques provide a more comprehensive and detailed view of the anatomy being imaged, allowing for more accurate diagnosis and treatment planning. The benefits of 2D to 3D image reconstruction include improved diagnostic accuracy and treatment planning, but it also has some limitations such as increased computation time, storage requirements, and reliance on the quality of the 2D images.
It also discussed the application of deep learning in brain imaging. The authors discussed that deep learning-based techniques have been used for brain tumour detection and classification, brain tumour segmentation and grading of lower-grade glioma, classification of CT brain images, detection and diagnosis of Alzheimer's disease via brain imaging, multimodal brain imaging classification using EEG signals, brain cancer classification, classifying tumour brain images, cerebellum segmentation from foetal brain images, semi-supervised deep learning of brain tissue segmentation, and developing a brain atlas.
This review has highlighted the advances and recent developments in the field of brain imaging and medical imaging. The objective was to enable beginners in the field to obtain a quick overview of the progress so far and then look at the prospects of this work.
|
2301.00155 | Kibble-Zurek scaling in one-dimensional localization transitions | In this work, we explore the driven dynamics of the one-dimensional ($1$D)
localization transitions. By linearly changing the strength of disorder
potential, we calculate the evolution of the localization length $\xi$ and the
inverse participation ratio (IPR) in a disordered Aubry-Andr\'{e} (AA) model,
and investigate the dependence of these quantities on the driving rate. At
first, we focus on the limit in the absence of the quasiperiodic potential. We
find that the driven dynamics from both ground state and excited state can be
described by the Kibble-Zurek scaling (KZS). Then, the driven dynamics near the
critical point of the AA model is studied. Here, since both the disorder and
the quasiperiodic potential are relevant directions, the KZS should include
both scaling variables. Our present work not only extends our understanding of
the localization transitions but also generalize the application of the KZS. | Xuan Bu, Liang-Jun Zhai, Shuai Yin | 2022-12-31T08:51:16Z | http://arxiv.org/abs/2301.00155v1 | # Kibble-Zurek scaling in one-dimensional localization transitions
###### Abstract
In this work, we explore the driven dynamics of the one-dimensional (1D) localization transitions. By linearly changing the strength of the disorder potential, we calculate the evolution of the localization length \(\xi\) and the inverse participation ratio (IPR) in a disordered Aubry-Andre (AA) model, and investigate the dependence of these quantities on the driving rate. At first, we focus on the limit in the absence of the quasiperiodic potential. We find that the driven dynamics from both ground state and excited state can be described by the Kibble-Zurek scaling (KZS). Then, the driven dynamics near the critical point of the AA model is studied. Here, since both the disorder and the quasiperiodic potential are relevant directions, the KZS should include both scaling variables. Our present work not only extends our understanding of the localization transitions but also generalizes the application of the KZS.
## I Introduction
The physics of phase transitions between localized and metallic phases in disordered systems has attracted long-term attention since the pioneering work of Anderson [1; 2; 3; 4; 5; 6]. As a result of the destructive interference of scattered waves, the wave function can be localized at some isolated sites. Theoretically, it was shown that for one- and two-dimensional disordered systems, the localization transition happens for infinitesimal disorder strength, whereas for higher-dimensional systems, the localization transition happens for finite disorder strength [2; 3]. Moreover, universality classes of the Anderson transition have been categorized [7; 8; 9; 10; 11]. In addition, besides the disordered systems, it was shown that the localization can also happen in quasiperiodic systems [12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33]. For instance, it was shown that the Aubry-Andre (AA) model hosts a localization transition at a finite strength of the quasiperiodic potential [12; 13; 20; 21; 22; 23; 24; 25; 26]. Experimentally, the localization transition has been observed in various platforms [34; 35; 36; 37; 38; 39], such as cold atomic systems [34; 35], quantum optics [36; 37], acoustic waves [38], and electronic systems [39].
On the other hand, great progress has been made in controlling quantum matter with high precision in the last decades, inspiring investigations of the nonequilibrium dynamics of quantum systems [40; 41; 42; 43]. In particular, the driven dynamics across a critical point has attracted wide interest due to its potential application in adiabatic quantum computation [44]. A general theory describing the driven critical dynamics is the celebrated Kibble-Zurek scaling (KZS) [45; 46; 47; 48; 49; 50; 51; 52; 53]. When the distance to the critical point is changed linearly, the KZS states that the whole driven process can be divided into different stages. In the initial stage, the system evolves adiabatically along the equilibrium state. Then, the system enters an impulse region, in which the evolution of the system lags behind the external driving as a result of the critical slowing down. A full finite-time scaling form with the driving rate being a typical scaling variable has been proposed to characterize the nonequilibrium dynamics of the whole process [54; 55; 56]. This full scaling form has been verified in both classical and quantum phase transitions [57; 58; 59; 60; 61].
Recently, the nonequilibrium dynamics in the localization transition has also attracted increasing attention, which has extended our understanding of localization transitions and universality far from equilibrium [62; 63; 64; 65; 66; 67; 68; 69; 70; 71; 72]. For instance, in disordered systems, a dynamical phase transition characterized by peaks in the Loschmidt echo after a sudden quench was studied [66]. In addition, the KZS has been investigated in the localization transitions in the quasiperiodic AA model and its non-Hermitian variant by changing the quasiperiodic potential to cross the critical point [70; 71; 72]. However, it is still unknown whether the KZS is applicable when the disorder strength is changed.
In this work, we study the driven dynamics of localization transitions in one-dimensional (1D) disordered systems. We illustrate the dynamic scaling in a disordered AA model and focus on two cases. In the first case, there is no quasiperiodic potential and this model recovers the usual Anderson model. In the second case, the system is located near the AA critical point. For both cases, we change the disorder coefficient across the transition point and calculate the evolution of the localization length \(\xi\) and the inverse participation ratio (IPR). For the Anderson model, we find that the evolution of these quantities satisfies the usual KZS for both the ground state and the highest excited state; whereas for the disordered AA model, since the quasiperiodic potential is another relevant direction, the full scaling form should also include the contribution from this term. In particular, in the overlap region between the critical regions of the AA model and the Anderson transition, we show that the dynamic scaling behaviors can be described by both the AA critical
exponents and the critical exponents of the Anderson localization.
The rest of the paper is arranged as follows. The 1D disordered AA model and the characteristic quantities are introduced in Sec. II. In Sec. III.1, the driven dynamics in the Anderson model is studied. Then, we explore the driven dynamics near the AA critical point in Sec. III.2. A summary is given in Sec. IV.
## II Model and static scaling properties
The Hamiltonian of the disordered AA model reads [73]
\[H=-J\sum_{j}^{L}(c_{j}^{\dagger}c_{j+1}+h.c.)+(2J+\delta)\sum_{j}^{L}\cos\left[2\pi(\gamma j+\phi)\right]c_{j}^{\dagger}c_{j}+\varepsilon\sum_{j}^{L}w_{j}c_{j}^{\dagger}c_{j}, \tag{1}\]
in which \(c_{j}^{\dagger}(c_{j})\) is the creation (annihilation) operator of the hard-core boson at site \(j\), and \(J\) is the hopping amplitude between the nearest-neighboring sites and is chosen as the unit of energy, \((2J+\delta)\) measures the amplitude of the quasiperiodic potential, \(\gamma\) is an irrational number, \(\phi\) is the phase of the potential with a uniform distribution in \([0,1)\), \(w_{j}\) provides the quenched disorder distributed uniformly in the interval \([-1,1]\), and \(\varepsilon\) is the coefficient of the disorder. To satisfy the periodic boundary condition, \(\gamma\) has to be approximated by a rational number \(F_{n}/F_{n+1}\) where \(F_{n+1}=L\) and \(F_{n}\) are the Fibonacci numbers [71; 23].
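For readers who wish to reproduce the setup numerically, the following NumPy sketch builds the single-particle matrix of model (1). The helper name, the parameter values, and the use of exact diagonalization are illustrative assumptions, not the authors' code.
```
import numpy as np

def disordered_aa_hamiltonian(L, J=1.0, delta=0.0, eps=0.5, phi=0.0, seed=0):
    """Single-particle matrix of the disordered AA model, Eq. (1), with periodic
    boundary conditions; gamma is approximated by F_n / F_{n+1} with F_{n+1} = L."""
    fibs = [1, 1]
    while fibs[-1] < L:
        fibs.append(fibs[-1] + fibs[-2])
    assert fibs[-1] == L, "L must be a Fibonacci number"
    gamma = fibs[-2] / fibs[-1]

    rng = np.random.default_rng(seed)
    w = rng.uniform(-1.0, 1.0, size=L)                     # quenched disorder w_j
    j = np.arange(L)
    onsite = (2 * J + delta) * np.cos(2 * np.pi * (gamma * j + phi)) + eps * w

    H = np.diag(onsite)
    hop = -J * np.ones(L - 1)
    H += np.diag(hop, k=1) + np.diag(hop, k=-1)            # nearest-neighbor hopping
    H[0, L - 1] = H[L - 1, 0] = -J                         # periodic boundary term
    return H

# ground state via exact diagonalization (delta = -2J gives the Anderson limit)
H = disordered_aa_hamiltonian(L=233, delta=-2.0, eps=0.5)
energies, states = np.linalg.eigh(H)
psi_ground = states[:, 0]
```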
The phase diagram of model (1) is shown in Fig. 1. For \(\delta=-2J\), Eq. (1) recovers the Anderson model in which all states are localized for any finite \(\varepsilon\). Thus its critical point of the localization transition is at \(\varepsilon=0\). In the critical region, the localization length \(\xi\), defined as [70; 71; 73]
\[\xi=\sqrt{\sum_{j}^{L}[(j-j_{c})^{2}]P_{j}}, \tag{2}\]
with \(P_{j}\) being the probability of the wave function at site \(j\), and \(j_{c}\equiv\sum_{j}jP_{j}\) being the localization center, diverges as
\[\xi\propto\varepsilon^{-\nu}, \tag{3}\]
in which \(\nu=2/3\)[73; 74]. Another quantity to characterize the localization transition is the inverse participation ratio (IPR), which is defined as [75; 76]
\[\text{IPR}=\sum_{j=1}^{L}|\Psi(j)|^{4}, \tag{4}\]
where \(\Psi(j)\) is the wavefunction. For a localized state, the wave function is localized on some isolated sites, and \(\text{IPR}\propto L^{0}\), whereas \(\text{IPR}\propto L^{-1}\) for the delocalized states. Close to the critical point, IPR scales with \(\varepsilon\) as
\[\text{IPR}\propto\varepsilon^{s}, \tag{5}\]
with the critical exponent being \(s=2/3\)[73]. In addition, the dynamic exponent \(z\) for the Anderson model is \(z=2\)[74].
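Given a normalized wavefunction on the lattice, the two diagnostics of Eqs. (2) and (4) can be evaluated directly; the sketch below is a straightforward NumPy implementation assumed for illustration, not the authors' code.
```
import numpy as np

def localization_length(psi):
    """Localization length xi of Eq. (2) for a normalized wavefunction psi."""
    P = np.abs(psi) ** 2                  # probability P_j at each site
    j = np.arange(len(psi))
    j_c = np.sum(j * P)                   # localization center
    return np.sqrt(np.sum((j - j_c) ** 2 * P))

def inverse_participation_ratio(psi):
    """IPR of Eq. (4); order 1 for localized states, order 1/L for extended ones."""
    return np.sum(np.abs(psi) ** 4)
```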
For \(\varepsilon=0\), Eq. (1) recovers the AA model. It was shown that all the eigenstates are localized when \(\delta>0\), and all the eigenstates are delocalized when \(\delta<0\). In the critical region, the localization length \(\xi\) satisfies
\[\xi\propto\delta^{-\nu_{\delta}}, \tag{6}\]
with \(\nu_{\delta}=1\)[70; 74]. And the IPR obeys
\[\text{IPR}\propto\delta^{s_{\delta}}, \tag{7}\]
with \(s_{\delta}\approx 0.333\)[73]. Besides, the dynamic exponent \(z\) for the AA critical point is \(z_{\delta}\approx 2.37\)[70].
Moreover, previously we showed that the disorder \(\varepsilon\) also provides a relevant direction in the AA critical point. For \(\delta=0\), the localization length \(\xi\) obeys
\[\xi\propto\varepsilon^{-\nu_{\varepsilon}}, \tag{8}\]
with \(\nu_{\varepsilon}=0.46(1)\)[73]. Note that this exponent is remarkably different from \(\nu\) and \(\nu_{\delta}\). In addition, the IPR obeys
\[\text{IPR}\propto\varepsilon^{s_{\delta}\nu_{\varepsilon}/\nu_{\delta}}. \tag{9}\]
Figure 1: Sketch of the phase diagram of the disordered AA model. When \(\delta=-2J\) (denoted by the yellow point), this model recovers the Anderson model. The dark blue region (region A) denotes the critical region of the localization transition of the disordered AA model. The light green region (region B) denotes the critical region of the Anderson localization transition. Near the critical point of \(\delta=0\) and \(\varepsilon=0\), these critical regions overlap with each other.
## III The KZS in the localization transition
### KZS for the Anderson model
Here, we consider the driven dynamics of the Anderson model with \(\delta=-2J\) in Eq. (1). At first, we show the detailed driven process. Initially, the system is in the localization phase for a specific realization of \(w_{j}\) with coefficient \(\varepsilon_{0}>0\). Then \(\varepsilon\) is decreased according to
\[\varepsilon=\varepsilon_{0}-Rt, \tag{10}\]
to cross the critical point, while \(w_{j}\) is kept fixed. Then \(w_{j}\) is resampled for another run with the same initial \(\varepsilon_{0}\). Finally, the quantities are averaged over many disorder realizations to make the evolution curves smooth.
The KZS states that when \(\varepsilon>R^{1/\nu r}\) with \(r=z+1/\nu\), the system can evolve adiabatically since the state has enough time to adjust to the change in the Hamiltonian; in contrast, when \(\varepsilon<R^{1/\nu r}\), the system enters the impulse region and stops evolving as a result of the critical slowing down. However, investigations showed that the assumption that the system does not evolve in the impulse region is oversimplified. To improve it, a finite-time scaling theory has been proposed and demonstrates that the external driving provides a typical time scale of \(\zeta\propto R^{-z/r}\)[54; 55; 56]. In the impulse region, \(\zeta\) controls the dynamic scaling behaviors and macroscopic quantities can be scaled with \(\zeta\). For instance, for large enough system size, the full scaling form of the localization length \(\xi\) around the critical point reads [71]
\[\xi(\varepsilon,R)=R^{-1/r}f_{1}(\varepsilon R^{-1/r\nu}), \tag{11}\]
in which \(f_{1}\) is the scaling function. When \(\varepsilon>R^{1/r\nu}\), the evolution is in the adiabatic stage, in which \(f_{1}(\varepsilon R^{-1/r\nu})\sim(\varepsilon R^{-1/r\nu})^{-\nu}\). Accordingly, \(\xi\) satisfies Eq. (3) and does not depend on the driving rate \(R\). In contrast, near the critical point, when \(\varepsilon<R^{1/r\nu}\), \(f_{1}(\varepsilon R^{-1/r\nu})\) tends to a constant and \(\xi\propto R^{-1/r}\), demonstrating that the divergence of \(\xi\) at the critical point has been truncated by the external driving and that \(\xi\) decreases as \(R\) increases.
Similarly, the driven dynamics of the IPR around the critical point satisfies
\[\text{IPR}(\varepsilon,R)=R^{s/r\nu}f_{2}(\varepsilon R^{-1/r\nu}). \tag{12}\]
When \(\varepsilon>R^{1/r\nu}\), \(f_{2}(\varepsilon R^{-1/r\nu})\sim(\varepsilon R^{-1/r\nu})^{s}\) and Eq. (12) recovers Eq. (5). In contrast, near the critical point, when \(\varepsilon<R^{1/r\nu}\), \(\text{IPR}\propto R^{s/r\nu}\).
To verify the scaling functions of Eqs. (11) and (12), we numerically solve the Schrödinger equation for model (1), and calculate the dependence of \(\xi\) and IPR on \(\varepsilon\) for various driving rates \(R\). The finite difference method in the time direction is used, and the time interval is chosen as \(10^{-3}\). The lattice size is chosen as \(L=500\), which is large enough to ignore the finite-size effect. \(\varepsilon_{0}\) is set as \(\varepsilon_{0}=2\), which is far enough from the critical point at \(\varepsilon=0\).
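A minimal sketch of the driven evolution is given below. It reuses the disordered_aa_hamiltonian helper from the earlier snippet (so the lattice size is a Fibonacci number) and uses a Crank-Nicolson step as one possible finite-difference scheme; the paper specifies only the time interval, so the exact integrator and parameters here are assumptions.
```
import numpy as np

def drive_disorder(psi0, eps0=2.0, R=0.05, dt=1e-3, delta=-2.0, seed=0):
    """Evolve psi0 under model (1) while the disorder strength is ramped down
    as eps(t) = eps0 - R t, for one fixed disorder realization."""
    L = len(psi0)
    rng = np.random.default_rng(seed)
    w = rng.uniform(-1.0, 1.0, size=L)              # fixed disorder realization
    H0 = disordered_aa_hamiltonian(L, delta=delta, eps=0.0, seed=seed)  # clean part
    eye = np.eye(L)

    psi = psi0.astype(complex)
    eps_t = eps0
    while eps_t > 0.0:
        H = H0 + np.diag(eps_t * w)
        A = eye + 0.5j * dt * H                     # Crank-Nicolson matrices
        B = eye - 0.5j * dt * H
        psi = np.linalg.solve(A, B @ psi)
        eps_t -= R * dt                             # eps(t) = eps0 - R t
    return psi
```
The localization length and IPR snippets above can then be applied to the evolved state at each value of \(\varepsilon\), and the resulting curves averaged over disorder realizations.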
First, the initial state is chosen as the ground state of model (1) for \(\varepsilon=\varepsilon_{0}\). Figure 2 (a1) shows the evolution of the localization length \(\xi\) for different \(R\). Initially, one finds that \(\xi\) almost does not depend on \(R\), indicating that the system evolves adiabatically in this stage. Then, when \(\varepsilon\) approaches the critical point, the curves for different \(R\) begin to separate from each other, indicating that the system enters the impulse region. After rescaling \(\xi\) and \(\varepsilon\) as \(\xi R^{1/r}\) and \(\varepsilon R^{-1/\nu r}\), respectively, we find that the rescaled curves collapse onto each other near the critical point, as shown in Fig. 2 (a2). These results confirm Eq. (11). In particular, exactly at the critical point, i.e., \(\varepsilon=0\), Fig. 2 (a2) demonstrates \(\xi\propto R^{-1/r}\).
Figure 2: Driven dynamics in the Anderson model with the initial state being the ground state. The curves of \(\xi\) versus \(\varepsilon\) before (a1) and after (a2) rescaling for different \(R\). The curves of IPR versus \(\varepsilon\) before (b1) and after (b2) rescaling for different \(R\). The arrows in (a1) and (b1) indicate the quench direction.
Figure 3: Driven dynamics in the Anderson model with the initial state being the highest excited state. The curves of \(\xi\) versus \(\varepsilon\) before (a1) and after (a2) rescaling for different \(R\). The curves of IPR versus \(\varepsilon\) before (b1) and after (b2) rescaling for different \(R\). The arrows in (a1) and (b1) indicate the quench direction.
Similarly, Fig. 2 (b1) shows the evolution of IPR for different \(R\). After an initial adiabatic stage, in which the evolution of IPR is almost independent of \(R\), a hysteresis effect of the IPR appears near the critical point and the IPR increases as \(R\) increases. After rescaling IPR and \(\varepsilon\) as \(\text{IPR}\,R^{-s/\nu r}\) and \(\varepsilon R^{-1/\nu r}\), respectively, we find that the rescaled curves match with each other near the critical point, as shown in Fig. 2 (b2). These results confirm Eq. (12). In particular, exactly at the critical point, i.e., \(\varepsilon=0\), Fig. 2 (b2) demonstrates \(\text{IPR}\propto R^{s/\nu r}\). These results clearly demonstrate that the KZS is applicable in the localization transition of the Anderson model.
Moreover, different from the usual quantum phase transition which happens only in the ground state, here the Anderson localization happens in all eigenstates. It is interesting to explore the driven dynamics with the initial state being the excited state. To this end, we calculate the dynamics of \(\xi\) and IPR with the initial state being the highest excited state and show the results in Fig. 3. After rescaling the curves by \(R\), we find that the rescaled curves collapse onto each other, as shown in Fig. 3, verifying Eqs. (11) and (12) and demonstrating that the KZS is still applicable in the driven dynamics from the excited states.
### KZS for the disordered AA model
In this section, we consider the driven dynamics near the AA critical point with small \(\delta\) in model (1) by changing the coefficient of the disorder term. Note that, different from the Anderson model, there are two relevant directions near the critical point of the disordered AA model. One direction is the quasiperiodic potential, represented by \(\delta\); the other is the disorder term, represented by \(\varepsilon\). Thus, both relevant terms should be included in the full scaling form.
Figure 4: Driven dynamics near the AA critical point with fixed \(\delta R^{-1/r_{\varepsilon}\nu_{\delta}}=0.3\). The curves of \(\xi\) versus \(\varepsilon\) before (a1) and after (a2) rescaling for different \(R\). The curves of IPR versus \(\varepsilon\) before (b1) and after (b2) rescaling for different \(R\). The arrows in (a1) and (b1) indicate the quench direction.
Figure 5: Driven dynamics near the AA critical point with fixed \(\delta R^{-1/r_{\varepsilon}\nu_{\delta}}=-0.3\). The curves of \(\xi\) versus \(\varepsilon\) before (a1) and after (a2) rescaling for different \(R\). The curves of IPR versus \(\varepsilon\) before (b1) and after (b2) rescaling for different \(R\). The arrows in (a1) and (b1) indicate the quench direction.
Figure 6: Driven dynamics near the AA critical point with fixed \(\delta=-0.1\). The curves of \(\xi\) versus \(\varepsilon\) before (a1) and after (a2) rescaling for different \(R\). The curves of IPR versus \(\varepsilon\) before (b1) and after (b2) rescaling for different \(R\). The arrows in (a1) and (b1) indicate the quench direction.
In analogy to the analyses in Sec. III.1, the evolution of the localization length \(\xi\) should satisfy
\[\xi(\varepsilon,\delta,R)=R^{-1/r_{\varepsilon}}f_{3}(\varepsilon R^{-1/r_{\varepsilon}\nu_{\varepsilon}},\delta R^{-1/r_{\varepsilon}\nu_{\delta}}), \tag{13}\]
in which \(r_{\varepsilon}=z_{\delta}+1/\nu_{\varepsilon}\). For \(R\to 0\) and \(\delta=0\), \(f_{3}\sim(\varepsilon R^{-1/r_{\varepsilon}\nu_{\varepsilon}})^{-\nu_{\varepsilon}}\) and Eq. (13) restores Eq. (8). For \(R\to 0\) and \(\varepsilon=0\), \(f_{3}\sim(\delta R^{-1/r_{\varepsilon}\nu_{\delta}})^{-\nu_{\delta}}\) and Eq. (13) restores Eq. (6).
Similarly, under external driving, the IPR should satisfy
\[{\rm IPR}(\varepsilon,\delta,R)=R^{s_{\delta}/r_{\varepsilon}\nu_{\delta}}f_{4}(\varepsilon R^{-1/r_{\varepsilon}\nu_{\varepsilon}},\delta R^{-1/r_{\varepsilon}\nu_{\delta}}). \tag{14}\]
For \(R\to 0\) and \(\delta=0\), \(f_{4}\sim(\varepsilon R^{-1/r_{\varepsilon}\nu_{\varepsilon}})^{s_{\delta}\nu_{\varepsilon}/\nu_{\delta}}\) and Eq. (14) recovers Eq. (9). For \(R\to 0\) and \(\varepsilon=0\), \(f_{4}\sim(\delta R^{-1/r_{\varepsilon}\nu_{\delta}})^{s_{\delta}}\) and Eq. (14) restores Eq. (7).
Equations (13) and (14) should be applicable for any values of \(\delta\) and \(\varepsilon\) near the critical point of the AA model. Particularly, for \(\delta<0\), there is an overlap critical region between the critical region of the AA critical point and the critical region of the Anderson localization, as illustrated in Fig. 1. Therefore, in this overlap region, the driven critical dynamics should simultaneously satisfy Eqs. (11) and (13) for \(\xi\) and Eqs. (12) and (14) for the IPR.
We first examine Eqs. (13) and (14) for \(\delta>0\) with the initial state being the ground state. For a fixed \(\delta R^{-1/r_{\varepsilon}\nu_{\delta}}\), we calculate the evolution of \(\xi\) and IPR for various driving rates \(R\). After rescaling the evolution curves with \(R\), we find that the rescaled curves match with each other, as shown in Fig. 4, confirming Eqs. (13) and (14).
For \(\delta<0\), Fig. 5 shows the evolution of \(\xi\) and IPR with various driving rates \(R\) for a fixed \(\delta R^{-1/r_{\varepsilon}\nu_{\delta}}\). After rescaling the curves by \(R\) with the critical exponents of the AA critical point, we find that the curves collapse onto each other, as shown in Fig. 5, confirming Eqs. (13) and (14). Moreover, Fig. 6 shows the evolution of \(\xi\) and IPR for various driving rates \(R\) with a fixed \(\delta\), which is near the AA critical point. After rescaling the curves by \(R\) with the critical exponents of the Anderson model, we find that the curves also collapse onto each other, as shown in Fig. 6, obeying Eqs. (11) and (12). Thus, we confirm that for \(\delta<0\) the driven critical dynamics can simultaneously be described by Eqs. (11) and (13) for \(\xi\) and Eqs. (12) and (14) for the IPR.
Here we remark on the results. (a) Although here we only show the results with the initial state being the ground state, it is expected that these scaling analyses are also applicable for the excited states, similar to the results shown in Sec. III.1. (b) In Ref. [70], the driven dynamics in the AA model without the disorder term was studied by changing the quasiperiodic potential. Here we change the disorder strength to cross the AA critical point. Comparing these two cases, we find that although the scaling forms of the KZS are similar, the dimensions of the driving rate are different in the two cases. Combining these results, we find that the KZS applies to the localization transitions for different forms of driving.
## IV Summary
In summary, we have studied the driven dynamics in the localization transitions in the 1D disordered AA model. By changing the disorder coefficient to cross the critical point, we calculate the dynamics of the localization length \(\xi\) and the IPR. For both the critical point of the Anderson model and the AA model, we have verified that the KZS is applicable in characterizing the driven dynamics. Moreover, we have also generalized the KZS to describe the driven dynamics from the excited states. In addition, in the overlap critical region near the AA critical point, we have found that the driven dynamics can be simultaneously described by the KZS with both the critical exponents of the AA model and the critical exponents of the Anderson model. As one possible generalization, one can also investigate the driven dynamics in the many-body localization transition [77; 78; 79; 80; 81].
###### Acknowledgements.
B. X. and S. Y. are supported by the National Natural Science Foundation of China (Grant No. 12075324), the Science and Technology Projects in Guangzhou (Grant No. 202102020367) and the Fundamental Research Funds for Central Universities, Sun Yat-Sen University (Grant No. 22qntd3005). L.-J. Zhai is supported by the National Natural Science Foundation of China (Grant No. 11704161), the China Postdoctoral Science Foundation (Grant No. 2021M691535), and the Zhongwu Youth Innovation Talent Support Plan of Jiangsu University of Technology.
|
2309.06306 | CDL: A fast and flexible library for the study of permutation sets with
structural restrictions | In this paper, we introduce CDL, a software library designed for the analysis
of permutations and linear orders subject to various structural restrictions.
Prominent examples of these restrictions include pattern avoidance, a topic of
interest in both computer science and combinatorics, and "never conditions"
utilized in social choice and voting theory.
CDL offers a range of fundamental functionalities, including identifying the
permutations that meet specific restrictions and determining the isomorphism of
such sets. To facilitate exploration of large permutation sets or domains, CDL
incorporates multiple search strategies and heuristics. | Bei Zhou, Klas Markstrōm, Søren Riis | 2023-09-12T15:17:16Z | http://arxiv.org/abs/2309.06306v2 | # CDL: A fast and flexible library for the study of permutation sets with structural restrictions
###### Abstract
In this paper, we introduce CDL, a software library designed for the analysis of permutations and linear orders subject to various structural restrictions. Prominent examples of these restrictions include pattern avoidance, a topic of interest in both computer science and combinatorics, and "never conditions" utilized in social choice and voting theory.
CDL offers a range of fundamental functionalities, including identifying the permutations that meet specific restrictions and determining the isomorphism of such sets. To facilitate exploration across extensive domains, CDL incorporates multiple search strategies and heuristics.
## 1 Motivation and significance
Permutations, and equivalently linear orders, which avoid various small structures, show up in a wide range of research areas. In the theory of algorithms, they appeared in a 1968 result of Knuth (1968), which showed that a permutation can be sorted using a stack if and only if the permutation avoids the pattern 231, and later on Simion and Schmidt (1985) started an entire research area in combinatorics. In social sciences, sets of linear orders, called domains, appear as the sets of possible rankings of election candidates. Black (1948) gave an example of a structural restriction that gives a domain where majority voting behaves well, in contrast to Arrow's (1951) result, which shows that without such a restriction, majority voting can misbehave in various ways. Sen (1966) gave the general form for such voting restrictions as never conditions. The global structure and maximum possible size of such domains have been followed up in a large body of work, surveyed in Monjardet (2009); Karpov (2022); Puppe and Slinko (2022). The large and lively research fields stemming from these early examples have also overlapped, as computational social choice leads to intricate questions in computational complexity Elkind et al. (2022).
The CDL library, originally named Condorcet Domain Library, was first developed to study Condorcet domains, i.e. domains where majority voting leads to a transitive ranking of the candidates. However, since work on pattern-restricted permutations and Condorcet domains requires the same underlying functionality, the library was extended to support both research areas and include even more general functions than used there.
CDL can be used to easily find the domain consisting of all permutations that satisfy a prescribed set of constraints and look for constraints leading to domains with different properties. The library can also be used to analyse the structure of a domain, test if domains are isomorphic and visualize domains. For ease of use, a Python interface is available, while the speed-sensitive low-level functions are coded in C++.
## 2 Software description
The Condorcet Domain Library is a comprehensive and versatile tool designed to work with Condorcet domains and forbidden permutations. The library has interfaces for the C++ and Python programming languages, allowing users to seamlessly integrate the library into their projects while leveraging its power in their preferred programming environment. The library's modular design also makes customisation easy according to project-specific requirements. The low-level functions of CDL are implemented in C++ and follow best practices for efficient and reliable computation.
The basic objects for the library are _domains_, our common name for a set of permutations satisfying some constraints. The permutations are always on the set \(X_{n}=\{1,2,\ldots,n\}\) and we refer to the elements of this sets as either symbols or alternatives.
The most general form of our constraints is a _scheme_, a collection of \(k\)-tuples of the alternatives and, for each tuple, a collection of forbidden permutations of those alternatives. The collection attached to a tuple is called a _law_.
Pattern-avoiding permutations give the simplest form of a scheme. All \(k\)-tuples have a law corresponding to a single forbidden permutation of those \(k\) alternatives. That law comes from a single forbidden pattern, such as a monotone increasing sequence of length \(k\). For Condorcet domains, the laws are derived from _never conditions_. A never condition \(xNi\) for a triple of alternatives specifies that the \(i\)th element of the triple cannot appear in position \(x\) when the permutation is restricted to those three alternatives. For Condorcet domains, the never conditions typically vary between different triples. A never condition is commonly called a _never rule_ or a _rule_.
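To make the notion of a never condition concrete, the following small Python sketch (not part of CDL's API) checks whether a single linear order satisfies a rule such as "2N3" on a given triple, following the definition above; the function name and rule-string parsing are illustrative assumptions.
```
def satisfies_never_rule(order, triple, rule):
    """Check a never condition on one linear order.

    order  : a permutation of the alternatives, e.g. [3, 1, 4, 2]
    triple : a sorted triple of alternatives, e.g. (1, 2, 4)
    rule   : "xNi" meaning the i-th alternative of the triple must never
             appear in position x of the order restricted to the triple.
    """
    x, i = int(rule[0]), int(rule[2])
    restricted = [a for a in order if a in triple]   # order restricted to the triple
    return restricted[x - 1] != triple[i - 1]

# example: [3, 1, 4, 2] restricted to (1, 2, 4) is [1, 4, 2]; rule "2N3" forbids
# alternative 4 (the 3rd element of the triple) in position 2, so the rule is violated
print(satisfies_never_rule([3, 1, 4, 2], (1, 2, 4), "2N3"))  # False
```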
### Software architecture
The library contains two main classes: CondorcetDomain, which provides the functionality for Condorcet-domain-related computations and stores the triples and their rules in a TRS object, and ForbiddenPermutation, which handles forbidden permutations on \(k\)-tuples and stores the \(k\)-tuples and their laws in a TLS object. The functionalities offered by these two classes mirror each other apart from a few minor, necessary differences in their interfaces. The terminology we use in this section is consistent with that used in Zhou and Riis (2023); Akello-Egwell et al. (2023). We refrain from discussing the mathematical rationales behind the functions and instead focus on the implementation aspects. In this section, we show the core ingredients of this library and recommend readers visit our GitHub page for more information.
### Software functionalities
#### 2.2.1 Ordering \(k\)-tuples, rule initialization and assignment
When constructing the domain defined by a set of constraints, the ordering of the set of \(k\)-tuples is crucial. A well-chosen ordering can rapidly reduce the size of partial domains during the search. Different orders of these tuples possess distinct properties that can be exploited. To this end, the library inherently supports two orders for general \(k\)-tuples, namely the lexicographic (Lex) order (init_trs_lex) and the co-lexicographic (CoLex) order (init_trs_colex), as well as the RZ-order for triples (init_trs) proposed in Zhou and Riis (2023).
The lexicographic order dictates that the \(k\)-tuple \(\{x_{1},x_{2},\ldots,x_{k}\}\) is before \(\{y_{1},y_{2},\ldots,y_{k}\}\) if \(x_{i}<y_{i}\) at the first position \(i\) where the two tuples differ. The co-lexicographic order specifies that \(\{x_{1},x_{2},\ldots,x_{k}\}\) is before \(\{y_{1},y_{2},\ldots,y_{k}\}\) if \(x_{i}<y_{i}\) at the last position \(i\) where the two tuples differ. The CoLex order has the property that all triples that contain \(n+1\) come after the triples from \(\{1,2,\ldots,n\}\). The RZ-order for triples specifies that the triple \(\{x_{1},x_{2},x_{3}\}\) is before \(\{y_{1},y_{2},y_{3}\}\) if \(x_{1}<y_{1}\), or (\(x_{1}=y_{1}\) and \(x_{3}<y_{3}\)), or (\(x_{1}=y_{1}\) and \(x_{3}=y_{3}\) and \(x_{2}<y_{2}\)).
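As a quick illustration of the first two orderings (plain Python, not CDL code), the tuples can be generated and sorted with simple key functions:
```
from itertools import combinations

# 3-subsets of {1, ..., 5}; combinations already yields them in Lex order
tuples_lex = sorted(combinations(range(1, 6), 3))
# CoLex: compare from the last element backwards
tuples_colex = sorted(combinations(range(1, 6), 3),
                      key=lambda t: tuple(reversed(t)))

print(tuples_lex[:4])    # [(1, 2, 3), (1, 2, 4), (1, 2, 5), (1, 3, 4)]
print(tuples_colex[:4])  # [(1, 2, 3), (1, 2, 4), (1, 3, 4), (2, 3, 4)]
```
The CoLex output illustrates the property mentioned above: every triple containing the largest alternative appears after all triples drawn from the smaller ones.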
The three ways to initialize a list of tuples leave all tuples unassigned. The library provides two methods to associate a tuple with a rule. The first method uses assign_rule, which assigns a given rule to the specified tuple. The second method, assign_rule_by_index, assigns a given rule to the tuple with a specified index, where the index is the location of the tuple in the TRS.
CDL also provides the flexibility of defining customized schemes for initializing rules. Initializing rules with a scheme plays an important role in constructing large Condorcet domains, for instance, the alternating scheme Fishburn (1997) and the set-alternating scheme Karpov et al. (2023). The init_trs_by_scheme function initializes rules with a customized function that takes a triple as a parameter and returns the rule assigned according to the scheme.
For Condorcet domains, we have also implemented a new approach named dynamic_triple_ordering that dynamically determines the next triple to be assigned a rule. The aim is to find the triple
that leads to a (partial) CD with the smallest size possible among the unassigned ones. This approach entails keeping a list of the size of partial CDs for all the unassigned triples when assigned with one of the four rules that maximize the size of this partial CD and choosing the triple corresponding to the smallest domain size in that list.
#### 2.2.2 Domain construction and size calculation
Constructing a domain, or calculating its size, from a list of \(k\)-tuples with a rule assigned is one of the core functionalities provided by this library. Starting with a domain on 2 alternatives \(\{\{1,2\},\{2,1\}\}\), the domain function adopts a breadth-first search method that iteratively expands the domain by inserting a new alternative to every position in the permutations and discarding the ones that violate one of the laws.
Building a domain to determine its size is impractical when the number of alternatives \(n\) is large since the memory use is proportional to the size of the domain. The latter is often exponential in the parameter \(n\). To address this issue, we provide the size function, which employs depth-first search to count elements in the domain.
CDL has an extension function that takes a single permutation and a list of (partially) assigned \(k\)-tuples and returns the permutations with one added alternative that satisfy all the rules. Inside the size function, a global counter keeping track of the completed permutations is first set. Given a domain with two initial permutations, the extension function is applied to each permutation in the list and then recursively repeated for each resulting permutation. If a valid permutation on all \(n\) alternatives is found, then the size counter is increased by 1, and afterward, the permutation is discarded. When the computation is finished, the counter records the size of the resulting domain. Counting the size this way eliminates most of the memory usage incurred in the domain function and makes it possible to calculate the size for large numbers of alternatives in practice.
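The breadth-first construction described above can be paraphrased in a short Python sketch; the predicate satisfies_all_laws stands in for the library's internal rule checking and is purely illustrative.
```
def build_domain(n, satisfies_all_laws):
    """Grow a domain by inserting alternative m into every position of every
    permutation on {1, ..., m-1} and discarding insertions that break a law.

    satisfies_all_laws(order) -> bool is an illustrative placeholder for the
    library's internal check against the assigned rules/laws.
    """
    domain = [[1, 2], [2, 1]]                     # domain on two alternatives
    for m in range(3, n + 1):
        expanded = []
        for order in domain:
            for pos in range(len(order) + 1):
                candidate = order[:pos] + [m] + order[pos:]
                if satisfies_all_laws(candidate):
                    expanded.append(candidate)
        domain = expanded
    return domain
```
The size function follows the same insertion idea but explores the tree depth-first, so only one partial permutation needs to be held in memory at a time.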
Additionally, a reverse function domain_to_trs is also available, which given a domain, finds the rules satisfied by the list of \(k\)-tuples.
#### 2.2.3 Subset Functions
A domain on \(n\) alternatives can be restricted to a subset of \(t\) of the alternatives.
We define state as a numeric representation of a list of rules assigned to triples in an order and provide the trs_to_state function that converts the list of never rules in a TRS object to a list of numbers, as well as the state_to_trs function that achieves the opposite. Given a TRS object with \(n\) alternatives created by the init_trs function, we provide the subset_states function that takes it as input and returns a list of TRS objects restricted to \(t\)-alternative subsets, initialized with init_subset(sub_n=t).
For a list of tuples in lexicographic or co-lexicographic order or customized order, the subset_states_any_ordering function returns a list of subset states where the tuples are in the RZ-order.
#### 2.2.4 Hashing and identifying non-isomorphic domains
For Condorcet domains and more general domains from social choice and voting theory one often wants to identify isomorphic domains. Here, two domains \(D_{1}\) and \(D_{2}\) are isomorphic if there is a bijection from the set of alternatives of \(D_{1}\) to the set of alternatives of \(D_{2}\) which maps \(D_{1}\) to \(D_{2}\).
Using isomorphism also allows us to define a normal form for a domain \(D\). Here we take the normal form to be the lexicographically smallest domain which is isomorphic to \(D\).
This normal form is found by the isomorphic_hash function; two full domains are isomorphic if and only if they have the same hash value. The hash function utilizes inverse_cd, which transforms a domain \(\mathbf{A}\) with a permutation \(g\), yielding a domain \(\overline{\mathbf{A}}\) that is isomorphic to \(\mathbf{A}\). Given a domain \(\mathbf{A}\), the isomorphic_domains function generates all of its isomorphic domains. It applies the inverse_cd function to every permutation in \(\mathbf{A}\) and removes the domains that are identical, resulting in a list of domains that are isomorphic to it.
Built on top of the isomorphic_domains function, given a domain, the isomorphic_hash function gets the list of isomorphic domains, sorts them, and returns the smallest domain as its hash value.
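The relabelling idea behind these functions can be mimicked as follows; this is an illustrative re-implementation of the normal form on small domains, not the CDL algorithm.
```
from itertools import permutations

def relabel(domain, g):
    """Apply the bijection g (a dict on the alternatives) to every order."""
    return sorted(tuple(g[a] for a in order) for order in domain)

def normal_form(domain, alternatives):
    """Lexicographically smallest domain isomorphic to the given one."""
    best = None
    for image in permutations(alternatives):
        g = dict(zip(alternatives, image))
        candidate = relabel(domain, g)
        if best is None or candidate < best:
            best = candidate
    return best

# two relabellings of the same 3-alternative domain share the same normal form
d1 = [(1, 2, 3), (2, 1, 3), (2, 3, 1)]
d2 = [(3, 2, 1), (2, 3, 1), (2, 1, 3)]
print(normal_form(d1, [1, 2, 3]) == normal_form(d2, [1, 2, 3]))  # True
```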
With the hash function, eliminating the isomorphic domains from a list of domains seems trivial. We can apply the isomorphic_hash function to each domain and retain only one domain among those with the same hash value. This ensures that the domains in the list are pairwise non-isomorphic. But this is inefficient. We instead implement a faster algorithm in non_isomorphic_domains. Here, for every domain \(\mathbf{A}\) in the list, the function generates the isomorphic domain \(\overline{\mathbf{A}}\) for every permutation in \(\mathbf{A}\) using the inverse_cd function, and then deletes all domains in the list that are identical to \(\overline{\mathbf{A}}\), resulting in a list of non-isomorphic domains.
Additionally, the library provides another function named is_trs_isomorphic that, given a TRS object, checks if it is lexicographically minimal. We assume that the list of triples and their laws is given an ordering such that it can be sorted lexicographically; any (partial) domain can then be represented by a list of laws for triples, and two domains can be compared by comparing their lists of laws lexicographically. Taking a permutation \(g\), we can apply it to the alternatives in a triple and its assigned rule to get a new triple and a new rule for it. A triple \(\{i,j,k\}\) with a rule \(xNp\) is transformed to \(\{gi,gj,gk\}\) with rule \(gxNp\). Doing this to all triples and their assigned rules, and then listing the new triples according to the predefined order on them, leads to a new list of triples with their new rules, which can be compared with the original list.
The latter function can be used in searches to remove partial domains that must be isomorphic to a domain in another part of the search tree.
#### 2.2.5 Native support for general forbidden permutation domains
Both sets of pattern-restricted permutations and Condorcet domains are special cases of general forbidden permutation domains. For Condorcet domains, any never condition can be expressed as a pair of forbidden permutations. For example, the condition \(3N1\) can be translated into two forbidden permutations 312 and 321.
General forbidden-permutation domains are implemented via a class named ForbiddenPermutation that supports core functionalities and interfaces similar to those of CondorcetDomain.
Inside ForbiddenPermutation, every \(k\)-tuple can be assigned a list of forbidden permutations that we call laws, differing from CondorcetDomain, where every triple is associated with a single rule. Correspondingly, we have the TLS (denoting TupleLaws) class, whose objects store a list of \(k\)-tuples with their assigned laws.
## Illustrative Examples
In this section, we provide an overview of the core functionalities of this library with two examples using CondorcetDomain and ForbiddenPermutation classes, demonstrating a way to put them into practice. Readers can visit our GitHub page that houses this information for further details, examples, and explanations.
### Condorcet domains
In the code snippet below, the import statement on the first line loads all the classes and functions in CDL into the program. We then define the alternating scheme as a function that takes a triple as input and returns the rule assigned to it under the scheme. To work with domains on 8 alternatives, we initialize the CondorcetDomain object with the n parameter set to 8. The init_trs_by_scheme function, taking the alternating_scheme function as its parameter, creates a TRS object that stores a list of triples whose rules are assigned as per the alternating scheme.
The following domain and size functions build up the domain and calculate its size in two very different ways, as described in the section above. To modify the rule assigned to a triple, {2, 3, 4} for instance, we can use the assign_rule method to change its assigned rule from 2N3 to 3N1. The subset_states function finds all the subset states restricted to 6 alternatives, given that the sub_n value is set to 6 in the init_subset function.
In the end, we show that, given a list of domains, the non_isomorphic_domains function removes all the isomorphic domains and returns the non-isomorphic ones.
```
from cd import *

def alternating_scheme(triple):  # build the alternating scheme
    i, j, k = triple
    if j % 2 == 0:
        return "2N1"
    else:
        return "2N3"

cd = CondorcetDomain(n=8)  # initialize the Condorcet domain object
# initialize the trs with the predefined alternating scheme
trs = cd.init_trs_by_scheme(alternating_scheme)
domain = cd.domain(trs)  # construct the Condorcet domain
# calculate the size of the resulting Condorcet domain (222)
size = cd.size(trs)
assert len(domain) == size  # True

# change the rule assigned to the triple [2, 3, 4] from "2N3" to "3N1"
trs = cd.assign_rule(trs, [2, 3, 4], "3N1")
size = cd.size(trs)  # the size of the new domain is 210

# get a list of 28 subset states on 6 alternatives
sub_states = cd.subset_states(trs)

# build a list of domains
domains = [cd.domain(cd.init_trs_random()) for _ in range(100)]
# filter out the isomorphic domains
non_isomorphic_cds = cd.non_isomorphic_domains(domains)
```
### Pattern restricted permutations
The functionalities and naming conventions of the ForbiddenPermutation class are consistent with those of the CondorcetDomain class, except that the restrictions are laws consisting of a list of forbidden permutations. In the following code block, we give an example showing how to use the functions in the ForbiddenPermutation class to find the size of the domains for 5 to 10 alternatives on 5-tuples avoiding the forbidden permutation \(\{2,5,3,1,4\}\).
```
from cd import ForbiddenPermutation

for n in range(5, 11):
    # initialize the ForbiddenPermutation object for 5-tuples
    fp = ForbiddenPermutation(n, 5)
    tls = fp.init_tls()
    for tl in tls:
        # assign all the 5-tuples with the law [2, 5, 3, 1, 4]
        tls = fp.assign_laws(tls, tl.tuple, [[2, 5, 3, 1, 4]])
    print(fp.size(tls))
```
This code structure closely resembles the one used when working with Condorcet domains. The size and domain functions in the ForbiddenPermutation class have been used to verify known results for permutations that avoid any of the three inequivalent length-4 patterns (A022558, A061552, and A005802) and the patterns of length 5 presented in Clisby et al. (2021).
## 3 Impact
Karpov et al. (2023) introduced a new class of Condorcet domains constructed via a method called a set-alternating scheme. They used CDL, in particular the size function, to calculate the sizes of the resulting domains. Due to the volume of computation, the parallel functionalities of CDL were utilized on a Linux cluster.
Akello-Egwell et al. (2023) released a comprehensive list of non-isomorphic Condorcet domains on 7 alternatives. Their computational results were obtained without using CDL, but the isomorphic_hash function was used to give an independent verification of the isomorphism reduction for the final data set.
A search algorithm called Prioritised Restriction Search (PRS) Zhou and Riis (2023) was developed using CDL. This algorithm uses a user-defined score function to search for large Condorcet domains and led to the discovery of record-size Condorcet domains for 10 and 11 alternatives, shown in Table 1. The algorithm also recreated the largest Condorcet domain for n=8 found in Leedham-Green et al. (2023), using CDL's functions for parallelized search. Due to the effectiveness of PRS, it was later added to the CDL, allowing users to design customized score functions and integrate the search algorithm into their projects. When PRS was added to CDL, a depth-first search version of the original PRS algorithm was also added. This was done in order to address a limitation of the original PRS, where exhaustive searches sometimes consumed too much memory.
Using CDL, Zhou and Riis (2023) also tested the performance of a wide range of machine learning algorithms for constructing large Condorcet domains, including deep reinforcement learning algorithms, evolutionary algorithms, and local search algorithms. Performance data was presented in a way that makes it easy to evaluate new search algorithms.
## 4 Conclusion
This paper has introduced CDL as a tool for research projects on Condorcet domains and pattern-avoiding permutations.
The core functionalities of this library have been optimized and are stable; however, research in the areas for which CDL is intended is very active, and new concepts and algorithms appear regularly. In response, we intend to continuously add new functionalities to the library and keep it relevant to the research front of this thriving research field.
## Acknowledgements
This work was funded by the Chinese Scholarship Council (CSC). This research utilised Queen Mary's Apocrita HPC facility King et al. (2021), supported by QMUL Research-IT.
|
2309.16115 | Compositional Sculpting of Iterative Generative Processes | High training costs of generative models and the need to fine-tune them for
specific tasks have created a strong interest in model reuse and composition. A
key challenge in composing iterative generative processes, such as GFlowNets
and diffusion models, is that to realize the desired target distribution, all
steps of the generative process need to be coordinated, and satisfy delicate
balance conditions. In this work, we propose Compositional Sculpting: a general
approach for defining compositions of iterative generative processes. We then
introduce a method for sampling from these compositions built on classifier
guidance. We showcase ways to accomplish compositional sculpting in both
GFlowNets and diffusion models. We highlight two binary operations
$\unicode{x2014}$ the harmonic mean ($p_1 \otimes p_2$) and the contrast ($p_1
\unicode{x25D1}\,p_2$) between pairs, and the generalization of these
operations to multiple component distributions. We offer empirical results on
image and molecular generation tasks. | Timur Garipov, Sebastiaan De Peuter, Ge Yang, Vikas Garg, Samuel Kaski, Tommi Jaakkola | 2023-09-28T02:46:53Z | http://arxiv.org/abs/2309.16115v1 | # Compositional Sculpting of Iterative Generative Processes
###### Abstract
High training costs of generative models and the need to fine-tune them for specific tasks have created a strong interest in model reuse and composition. A key challenge in composing iterative generative processes, such as GFlowNets and diffusion models, is that to realize the desired target distribution, all steps of the generative process need to be coordinated, and satisfy delicate balance conditions. In this work, we propose Compositional Sculpting: a general approach for defining compositions of iterative generative processes. We then introduce a method for sampling from these compositions built on classifier guidance. We showcase ways to accomplish compositional sculpting in both GFlowNets and diffusion models. We highlight two binary operations -- the harmonic mean (\(p_{1}\bigotimes p_{2}\)) and the contrast (\(p_{1}\bigodot p_{2}\)) between pairs, and the generalization of these operations to multiple component distributions. We offer empirical results on image and molecular generation tasks. Project codebase: [https://github.com/timgaripov/compositional-sculpting](https://github.com/timgaripov/compositional-sculpting).
## 1 Introduction
Large-scale general-purpose pre-training of machine learning models has produced impressive results in computer vision [1, 2, 3], image generation [4, 5, 6], natural language processing [7, 8, 9, 10, 11], robotics [12, 13, 14] and basic sciences [15]. By distilling vast amounts of data, such models can produce powerful inferences that lead to emergent capabilities beyond the specified training objective [16]. However, generic pre-trained models are often insufficient for specialized tasks in engineering and basic sciences. Field-adaptation via techniques such as explicit fine-tuning on bespoke datasets [17], human feedback [18], or cleverly designed prompts [19, 20] is therefore often required. An alternative approach is to compose the desired distribution using multiple simpler component models.
Compositional generation [21, 22, 23, 24, 25, 26, 27] views a complex target distribution in terms of simpler pre-trained building blocks which it can learn to mix and match into a tailored solution to a specialized task. Besides providing a way to combine and reuse previously trained models, composition is a powerful modeling approach. A composite model fuses knowledge from multiple sources: base models trained for different tasks, enabling increased capacity beyond that of any of the base models in isolation. If each individual base model captures a certain property of the data, composing such models provides a way to specify distributions over examples that exhibit multiple properties simultaneously [28]. The need to construct complex distributions adhering to multiple constraints arises in numerous practical multi-objective design problems such as multi-objective molecule generation [29, 30, 31]. In the context of multi-objective generation, compositional modeling provides mechanisms for adjustment and control of the resulting distribution, which enables exploration of different trade-offs between the objectives and constraints.
Prior work on generative model composition [21, 23, 28] has developed operations for piecing together Energy-Based Models (EBMs) via algebraic manipulations of their energy functions. For example, consider two distributions \(p_{1}(x)\propto\exp\{-E_{1}(x)\}\) and \(p_{2}(x)\propto\exp\{-E_{2}(x)\}\) induced by energy functions \(E_{1}\) and \(E_{2}\). Their _product_\(p_{\text{prod}}(x)\propto p_{1}(x)p_{2}(x)\propto\exp\left(-\left(E_{1}(x)+E_{2 }(x)\right)\right)\) and _negation_\(p_{\text{neg}}(x)\propto p_{1}(x)/(p_{2}(x))^{\gamma}\propto\exp\left(-\left(E _{1}(x)-\gamma\,E_{2}(x)\right)\right)\) correspond to operations on the underlying energy functions.
Iterative generative processes including diffusion models [5, 32, 33, 34] and GFlowNets [35, 36] progressively refine coarse objects into cleaner ones over multiple steps. The realization of effective compositions of these models is complicated by the fact that simple alterations in their generation processes result in non-trivial changes in the distributions of the final objects. For instance, the aforementioned product and negation between EBMs cannot be realized simply by means of adding or subtracting associated score-functions. Prior work addresses these challenges by connecting diffusion models with EBMs through annealed Markov-Chain Monte-Carlo (MCMC) inference. However, Metropolis-Hastings corrections are required to ensure that the annealing process reproduces the desired distribution [27].
Jain et al. [31] develop Multi-Objective GFlowNets (MOGFNs), an extension of GFlowNets for multi-objective optimization tasks. The goal of a vanilla GFlowNet model is to capture the distribution induced by a single reward (objective) function \(p_{\theta}(x)\propto R(x)\) (see Section 2.1 for details of GFlowNet formulation). A Multi-Objective GFlowNet aims to learn a single conditional model that can realize distributions corresponding to various combinations (e.g. a convex combination) of multiple reward functions. While a single MOGFN effectively realizes a spectrum of compositions of base reward functions, the approach assumes access to the base rewards at training time. Moreover, MOFGNs require the set of possible composition operations to be specified at generative model training time. In this work, we address post hoc composition of pre-trained GFlowNets (or diffusion models) and provide a way to create compositions that need not be specified in advance.
In this work, we introduce Compositional Sculpting, a general approach for the composition of pre-trained models. We highlight two special examples of binary operations -- _harmonic mean_ (\(p_{1}\otimes p_{2}\)) and _contrast_ (\(p_{1}\bigodot p_{2}\)) -- between pairs of distributions, as well as the generalization of these operations to multiple component distributions.
This sequential generation process is controlled by a parameterized stochastic "forward policy" \(P_{F}(s^{\prime}|s;\theta)\) which for each state \(s\in S\setminus\mathcal{X}\) specifies a probability distribution over all possible successor states \(s^{\prime}:(s\to s^{\prime})\in\mathcal{A}\). Generation is performed by starting at \(s_{0}\) and sequentially sampling transitions from the forward policy \(P_{F}(\cdot|\cdot)\) until a terminal state is reached.
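In code, drawing a sample from a GFlowNet amounts to rolling out the forward policy until a terminal state is reached. The sketch below is purely illustrative: successors, is_terminal, and forward_policy are assumed helpers describing a concrete DAG and trained policy, not part of any specific library.
```
import numpy as np

def sample_trajectory(s0, successors, is_terminal, forward_policy, rng):
    """Roll out P_F from the initial state s0 until a terminal state is reached.

    successors(s)      -> list of states s' with an edge s -> s'
    is_terminal(s)     -> True if s is a terminal state (s in X)
    forward_policy(s)  -> probabilities over successors(s), summing to 1
    """
    s = s0
    trajectory = [s0]
    while not is_terminal(s):
        children = successors(s)
        probs = forward_policy(s)                 # P_F(s' | s; theta)
        s = children[rng.choice(len(children), p=probs)]
        trajectory.append(s)
    return trajectory  # trajectory[-1] is the generated object x
```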
### Diffusion Models
Diffusion models (see [5, 32, 33, 34, 37]) are a family of generative models developed for continuous domains. Given a dataset of samples \(\{\hat{x}_{i}\}_{i=1}^{n}\) forming the empirical distribution \(\hat{p}(x)=\frac{1}{n}\sum_{i}\delta_{\hat{x}_{i}}(x)\) in \(\mathcal{X}=\mathbb{R}^{d}\), diffusion models seek to approximate \(\hat{p}(x)\) via a generative process \(p(x)\), which can then be used to generate new samples.
Stochastic Differential Equation (SDE) perspective. We discuss diffusion models from the perspective of stochastic differential equations (SDE) [34]. A diffusion process is a noise process that gradually destroys the original "clean" data \(x\). It can be specified as a time-indexed collection of random variables \(\{x_{t}\}_{t=0}^{T}\) in \(\mathcal{X}=\mathbb{R}^{d}\). We use \(p_{t}(\cdot)\) to denote the density of the distribution of \(x_{t}\). The process interpolates between the data distribution \(p_{0}(x)=\hat{p}(x)\) at \(t=0\), and the prior distribution \(p_{T}(x)\) at \(t=T\), which is typically constructed to have a closed form (e.g. standard normal) to enable a simple sampling scheme. The evolution of \(x_{t}\) is described by the "forward SDE" \(dx_{t}=f_{t}(x_{t})\,dt+g_{t}\,dw_{t}\), where \(w_{t}\) is the standard Wiener process, the function \(f_{t}:\mathbb{R}^{d}\to\mathbb{R}^{d}\) is called the drift coefficient and \(g_{t}\in\mathbb{R}\) is called the diffusion coefficient. Specific choices of \(f_{t}\) and \(g_{t}\) completely determine the process and give rise to the transition kernel \(p_{st}(x_{t}|x_{s})\) for \(0\leq s<t\leq T\) (see [34] for examples).
Generative process.Song et al. [34] invoke a result from the theory of stochastic processes [38] which gives the expression for the reverse-time process or "backward SDE":
\[dx_{t}=\left[f_{t}(x_{t})-g_{t}^{2}\nabla_{x}\log p_{t}(x_{t})\right]dt+g_{t} \,d\overline{w}_{t}, \tag{1}\]
where \(\overline{w}_{t}\) is the standard Wiener process in reversed time.
The backward SDE includes the known coefficients \(f_{t}\), \(g_{t}\), and the unknown score function \(\nabla_{x}\log p_{t}(\cdot)\) of the marginal distribution \(p_{t}(\cdot)\) at time \(t\). This score function is estimated by a deep neural network \(s_{t}(x;\theta)\approx\nabla_{x}\log p_{t}(x)\) (called "score-network") with parameters \(\theta\). Once the score-network \(s_{t}(\cdot;\theta)\) is trained, samples can be generated via numerical integration of (1).
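As an illustration of this sampling step, the backward SDE (1) can be integrated with a simple Euler-Maruyama scheme. The sketch below is schematic rather than a production sampler: `score`, `f`, and `g` are user-supplied callables standing in for \(\nabla_{x}\log p_{t}(x)\), \(f_{t}\), and \(g_{t}\).

```python
import numpy as np

def reverse_sde_sample(score, f, g, x_T, T=1.0, n_steps=1000, rng=np.random.default_rng()):
    """Euler-Maruyama integration of the reverse-time SDE (1), from t = T down to t = 0."""
    dt = T / n_steps
    x = np.array(x_T, dtype=float)
    for i in range(n_steps, 0, -1):
        t = i * dt
        drift = f(x, t) - g(t) ** 2 * score(x, t)        # reverse-time drift
        noise = rng.standard_normal(x.shape)
        x = x - drift * dt + g(t) * np.sqrt(dt) * noise  # step backwards in time
    return x                                             # approximate sample from p_0
```

In practice the noise term is often omitted at the very last step; the sketch keeps it for brevity.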
### Classifier Guidance in Diffusion Models
Classifier guidance [32, 39] is a technique for controllable generation in diffusion models. Suppose that each example \(x_{0}\) is accompanied by a discrete class label \(y\). The goal is to sample from the conditional distribution \(p_{0}(x_{0}|y)\). The Bayes rule \(p_{t}(x_{t}|y)\propto p_{t}(x_{t})p_{t}(y|x_{t})\) implies the score-function decomposition \(\nabla_{x_{t}}\log p_{t}(x_{t}|y)=\nabla_{x_{t}}\log p_{t}(x_{t})+\nabla_{x_{t }}\log p_{t}(y|x_{t})\), where the first term is already approximated by a pre-trained unconditional diffusion model and the second term can be derived from a time-dependent classifier \(p_{t}(y|x_{t})\). Therefore, the stated goal can be achieved by first training the classifier \(p_{t}(y|x_{t})\) using noisy samples \(x_{t}\) from the intermediate steps of the process, and then plugging in the expression for the conditional score into the sampling process (1).
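In code, classifier guidance amounts to adding the gradient of the classifier's log-probability to the unconditional score before sampling. The minimal sketch below uses placeholder callables; the gradient of the time-dependent classifier would typically be obtained via automatic differentiation.

```python
def guided_score(score, classifier_log_prob_grad, y):
    """Score function of p_t(x|y) assembled from an unconditional score and a classifier.

    score(x, t)                        ~ grad_x log p_t(x)
    classifier_log_prob_grad(x, t, y)  ~ grad_x log p_t(y|x)
    """
    def score_xy(x, t):
        return score(x, t) + classifier_log_prob_grad(x, t, y)
    return score_xy
```

The resulting function can be plugged into a reverse-SDE integrator such as the sketch above in place of the unconditional score.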
### "Energy" Operations
Prior work introduced energy operations, "product" and "negation", for energy-based [21, 23] and diffusion [27] models. Given a pair of distributions \(p_{1}(x)\propto\exp\{-E_{1}(x)\}\), \(p_{2}(x)\propto\exp\{-E_{2}(x)\}\) corresponding to the respective energy functions \(E_{1}\) and \(E_{2}\), the "product" and "negation" operations are defined as
\[(p_{1}\,\text{prod}\,\,p_{2})(x)\propto\exp\left\{-\left(\,E_{1}(x)+E_{2}(x) \,\right)\,\right\}\propto p_{1}(x)p_{2}(x), \tag{2}\]
\[(p_{1}\,\text{neg}_{\gamma}\,p_{2})(x)\propto\exp\left\{-\left(\,E_{1}(x)-\gamma\,E_{2}(x)\,\right)\,\right\}\propto\frac{p_{1}(x)}{\left(p_{2}(x)\right)^{\gamma}}. \tag{3}\]
The product distribution \((p_{1}\,\text{prod}\,p_{2})(x)\): (a) assigns relatively high likelihoods to points \(x\) that have sufficiently high likelihoods under both base distributions at the same time; (b) assigns relatively low likelihoods to points \(x\) that have close-to-zero likelihood under one (or both) of \(p_{1},p_{2}\). The negation distribution \((p_{1}\,\text{neg}_{\gamma}\,p_{2})(x)\): (a) assigns relatively high likelihood to points \(x\) that are likely under \(p_{1}\) but unlikely under \(p_{2}\); (b) assigns relatively low likelihood to points \(x\) that have low likelihood under \(p_{1}\) and high likelihood under \(p_{2}\). The parameter \(\gamma>0\) controls the strength of the negation. Informally, the product concentrates on points that are common in both \(p_{1}\) and \(p_{2}\), and the negation concentrates on points that are common in \(p_{1}\) and uncommon in \(p_{2}\). If \(p_{1}\) and \(p_{2}\) capture objects demonstrating two distinct concepts
(e.g. \(p_{1}\): images of circles; \(p_{2}\): images of green shapes), it is fair to say (again, informally) that the product and the negation resemble the logical operations of concept-intersection ("circle" AND "green") and concept-negation ("circle" AND NOT "green") respectively.
The "product" and "negation" can be realized in a natural way in energy-based models through simple algebraic operations on energy functions. However, realizing these operations on diffusion models is not as straightforward. The reason is that sampling in diffusion models requires the coordination of multiple steps of the denoising process. The simple addition of the time-dependent score functions does not result in a score function that represents the diffused product distribution2. Du et al. [27] develop a method that corrects the sum-of-score-networks sampling via additional MCMC iterations nested under each step of the generation process.
Footnote 2: formally, \(\nabla_{x_{t}}\log\left(\int p_{0t}(x_{t}|x_{0})p_{1}(x_{0})p_{2}(x_{0})\,dx_{0}\right)\neq\nabla_{x_{t}}\log\left(\int p_{0t}(x_{t}|x_{0})p_{1}(x_{0})\,dx_{0}\right)+\nabla_{x_{t}}\log\left(\int p_{0t}(x_{t}|x_{0})p_{2}(x_{0})\,dx_{0}\right)\); we refer the reader to [27] for more details on the issue.
## 3 Related Work
Generative model composition.Hinton [28] developed a contrastive divergence minimization procedure for training products of tractable energy-based models. Learning mixtures of Generative Adversarial Networks has been addressed in [40], where the mixture components are learned simultaneously, and in [41], where the components are learned one by one in an adaptive boosting fashion. Grover and Ermon [42] developed algorithms for additive and multiplicative boosting of generative models. Following up on energy-based model operations [21, 28], Du et al. [23] studied the composition of deep energy-based models. Du et al. [27] developed algorithms for sampling from energy-based compositions (products, negations) of diffusion models, related to the focus of our work. The algorithm in [27] introduces additional MCMC sampling steps at each diffusion generation step to correct the originally biased sampling process (based on an algebraic combination of individual score functions) toward the target composition.
Our work proposes a new way to compose pre-trained diffusion models and introduces an unbiased sampling process based on classifier guidance to sample from the compositions. This avoids the need for corrective MCMC sampling required in prior work. Our work further applies to GFlowNets, and is, to the best of our knowledge, the first to address the composition of pre-trained GFlowNets.
This work focuses on the composition of pre-trained models. Assuming that each pre-trained model represents the distribution of examples demonstrating certain concepts (e.g. molecular properties), the composition of models is equivalent to concept composition (e.g. property "A" and property "B" satisfied simultaneously). The inverse problem is known as "unsupervised concept discovery", where the goal is to automatically discover composable concepts from data. Unsupervised concept discovery and concept composition methods have been proposed for energy-based models [24] and for text-to-image diffusion models [43].
Controllable generation.Generative model composition is a form of post-training control of the generation process, an established area of research in generative modeling. A simple approach to control is conditional generation, which can be achieved by training a conditional generative model \(p_{\theta}(x|c)\) on pairs \((x,c)\) of objects \(x\) and conditioning information \(c\). Types of conditioning information can include class labels [39] or more structured data such as text prompts [4, 6, 44], semantic maps, and other images for image-to-image translation [4]. This approach assumes that the generation control operations are specified at training time and the training data is annotated with conditioning attributes. Classifier guidance [32] provides a way to generate samples from conditional distributions that need not be specified at training time. The guidance is realized by a classifier that is trained on examples \(x_{t}\) (both clean and noisy) accompanied by conditioning labels \(c\). Dhariwal and Nichol [39] apply classifier guidance on top of unconditional or conditional diffusion models to improve the fidelity of generated images. Ho and Salimans [45] develop classifier-free guidance where the conditional and unconditional score functions are trained simultaneously and combined at inference time to guide the generation. In ControlNet [17], an additional network is trained to enable a pre-trained diffusion model to incorporate additional, previously unavailable, conditioning information. Meng et al. [46] and Couairon et al. [47] develop semantic image editing methods based on applying noise to the original image and then running the reverse denoising process to generate an edited image, possibly conditioned on a segmentation mask [47].
Similar to conditional diffusion models, conditional GFlowNets have been used to condition generation on reward exponents [36] or combinations of multiple predefined reward functions [31].
Note that the methods developed in this work can be combined with conditional generative models, for example, conditional diffusion models (or GFlowNets) \(p(x|c_{1})\),..., \(p(x|c_{m})\) can act as base generative models to be composed.
Compositional generalization.The notion of compositionality has a broad spectrum of interpretations across a variety of disciplines including linguistics, cognitive science, and philosophy. Hupkes et al. [48] collect a list of aspects of compositionality from linguistic and philosophical theories and design practical tests for neural language models
covering all aspects. Conwell and Ullman [49] empirically examine the relational understanding of DALL-E 2 [50], a text-guided image generation model, and point out limitations in the model's ability to capture relations such as "in", "on", "hanging over", etc. In this work, we focus on a narrow but well-defined type of composition where we seek to algebraically combine (compose) probability densities in a controllable fashion, such that we can emphasize or de-emphasize regions in the data space where specific base distributions have high density. Our methods are developed for the setting where we are given access to GFlowNets or diffusion models which can generate samples from the probability distributions we wish to compose.
Connections between GFlowNets and diffusion models.We develop composition operations and methods for sampling from composite distributions for both GFlowNets and diffusion models. The fact that similar methods apply to both is rooted in deep connections between the two modeling frameworks. GFlowNets were initially developed for generating discrete (structured) data [36] and diffusion models were initially developed for continuous data [5; 32]. Lahlou et al. [51] develop an extension of GFlowNets for DAGs with continuous state-action spaces. Zhang et al. [52] point out unifying connections between GFlowNets and other generative model families, including diffusion models. Diffusion models in a fixed-time discretization can be interpreted as continuous GFlowNets of a certain structure. Zhang et al. [52] notice that the discrete DAG flow-matching condition, central to mathematical foundations of GFlowNets [35], is analogous to the Fokker-Planck equation (Kolmogorov forward equation), underlying mathematical analysis of continuous-time diffusion models [34]. In this work, we articulate another aspect of the relation between GFlowNets and diffusion models: in Section 5.1 we derive the expressions for mixture GFlowNet policies and classifier-guided GFlowNet policies analogous to those derived for diffusion models in prior work [32; 39; 53; 54].
## 4 Compositional Sculpting of Generative Models
Consider a scenario where we can access a number of pre-trained generative models. Each of these "base models" gives rise to a generative distribution \(p_{i}(x)\) over a common domain \(\mathcal{X}\). We may wish to compose these distributions such that we can, say, draw samples that are likely to arise from \(p_{1}(x)\) and \(p_{2}(x)\), or that are likely to arise from \(p_{1}(x)\) but not from \(p_{2}(x)\). In other words, we wish to specify a composition whose generative distribution we can shape to emphasize and de-emphasize specific base models.
### Binary Composition Operations
For the moment, let us focus on controllably composing two base models. One option is to specify the composition as a weighted combination \(\widetilde{p}(x)=\sum_{i=1}^{2}\omega_{i}p_{i}(x)\) with positive weights \(\omega_{1},\omega_{2}\) which sum to one. These weights allow us to set the prevalence of each base model in the composition. However, beyond that our control over the composition is limited. We cannot emphasize regions where, say, \(p_{1}\) and \(p_{2}\) both have high density, or de-emphasize regions where \(p_{2}\) has high density.
A much more flexible method for shaping a prior distribution \(\widetilde{p}(x)\) to our desires is conditioning. Following Bayesian inference methodology, we know that when we condition \(x\) on some observation \(y\), the resulting posterior takes the form \(\widetilde{p}(x|y)\propto\widetilde{p}(y|x)\widetilde{p}(x)\). Points \(x\) that match observation \(y\) according to \(\widetilde{p}(y|x)\) will have increased density, whereas the density of points that do not match it decreases. Intuitively, the term \(\widetilde{p}(y|x)\) has shaped the prior \(\widetilde{p}(x)\) according to \(y\).
If we define the observation \(y_{1}\in\{1,2\}\) as the event that \(x\) was generated by a specific base model, we can shape the prior based on the densities of the base models. We start by defining a uniform prior over \(y_{1}\), and by defining the conditional density \(p(x|y_{1}=i)\) to represent the fact that \(x\) was generated from \(p_{i}(x)\). This gives us the following model:
\[\widetilde{p}(x|y_{1}=1)=p_{1}(x),\quad\widetilde{p}(x|y_{1}=2)=p_{2}(x),\quad \widetilde{p}(y_{1}=1)=\widetilde{p}(y_{1}=2)=\frac{1}{2},\quad\widetilde{p}( x)=\sum_{i=1}^{2}\widetilde{p}(x|y_{1}=i)\widetilde{p}(y_{1}=i), \tag{4}\]
Notice that under this model, the prior \(\widetilde{p}(x)\) is simply a uniform mixture of the base models. The posterior probability \(\widetilde{p}(y_{1}=1|x)\) implied by this model tells us how likely it is that \(x\) was generated by \(p_{1}(x)\) rather than \(p_{2}(x)\):
\[\widetilde{p}(y_{1}=1|x)=1-\widetilde{p}(y_{1}=2|x)=\frac{p_{1}(x)}{p_{1}(x) +p_{2}(x)}, \tag{5}\]
Note that \(\widetilde{p}(y_{1}=1|x)\) is the output of the optimal classifier trained to tell apart distributions \(p_{1}(x)\) and \(p_{2}(x)\).
The goal stated at the beginning of this section was to realize compositions which would generate samples likely to arise from both \(p_{1}(x)\) and \(p_{2}(x)\) or likely to arise from \(p_{1}(x)\) but not \(p_{2}(x)\). To this end we introduce a second observation \(y_{2}\in\{1,2\}\) such that \(y_{1}\) and \(y_{2}\) are independent and identically distributed given \(x\). The resulting full model and inferred
posterior are:
\[\widetilde{p}(x,y_{1},y_{2})=\widetilde{p}(x)\widetilde{p}(y_{1}|x)\widetilde{p} (y_{2}|x),\quad\widetilde{p}(x)=\frac{1}{2}p_{1}(x)+\frac{1}{2}p_{2}(x),\quad \widetilde{p}(y_{k}=i|x)=\frac{p_{i}(x)}{p_{1}(x)+p_{2}(x)},\ k,i\in\{1,2\}, \tag{6}\]
\[\widetilde{p}(x|y_{1}=i,y_{2}=j)\propto\widetilde{p}(x)\widetilde{p}(y_{1}=i |x)\widetilde{p}(y_{2}=j|x)\propto\frac{p_{i}(x)p_{j}(x)}{p_{1}(x)+p_{2}(x)}. \tag{7}\]
The posterior \(\widetilde{p}(x|y_{1}=i,y_{2}=j)\) shows clearly how conditioning on the observations \(y_{1},y_{2}\) has shaped the prior mixture into a new expression which accentuates regions in the posterior where the observed base models \(i,j\) have high density.
Conditioning on observations \(y_{1}=1\) ("\(x\) is likely to have been drawn from \(p_{1}\) rather than \(p_{2}\)") and \(y_{2}=2\) ("\(x\) is likely to have been drawn from \(p_{2}\) rather than \(p_{1}\)"), or equivalently \(y_{1}=2,y_{2}=1\), results in the posterior distribution
\[(p_{1}\otimes p_{2})(x):=\widetilde{p}(x|y_{1}=1,y_{2}=2)\propto\frac{p_{1}(x)p_{2}(x)}{p_{1}(x)+p_{2}(x)}. \tag{8}\]
We will refer to this posterior as the "harmonic mean of \(p_{1}\) and \(p_{2}\)", and denote it as a binary operation \(p_{1}\otimes p_{2}\). Its value is high only at points that have high likelihood under both \(p_{1}(x)\) and \(p_{2}(x)\) at the same time (Figure 1(c)). Thus, the harmonic mean is an alternative to the product operation for EBMs. The harmonic mean is commutative (\(p_{1}\otimes p_{2}=p_{2}\otimes p_{1}\)) and is undefined when \(p_{1}\) and \(p_{2}\) have disjoint supports, since then the RHS of (8) is zero everywhere.
Conditioning on observations \(y_{1}=1\) ("\(x\) is likely to have been drawn from \(p_{1}\) rather than \(p_{2}\)") and \(y_{2}=1\) (same) results in the posterior distribution
\[(p_{1}\ominus p_{2})(x):=\widetilde{p}(x|y_{1}=1,y_{2}=1)\propto\frac{\big(p_{1}(x)\big)^{2}}{p_{1}(x)+p_{2}(x)}. \tag{9}\]
We refer to this operation, providing an alternative to the negation operation in EBMs, as the "contrast of \(p_{1}\) and \(p_{2}\)", and will denote it with the binary operator \(\ominus\), i.e. \((p_{1}\ominus p_{2})(x)\). The ratio in equation (9) is strictly increasing as a function of \(p_{1}(x)\) and strictly decreasing as a function of \(p_{2}(x)\), so that the ratio is high when \(p_{1}(x)\) is high and \(p_{2}(x)\) is low (Figure 1(d)). The contrast is not commutative (\(p_{1}\ominus p_{2}\neq p_{2}\ominus p_{1}\), unless \(p_{1}=p_{2}\)); we refer to \(p_{2}\ominus p_{1}\), obtained by swapping the arguments, as the reverse contrast.
Note that the original distributions \(p_{1}\) and \(p_{2}\) can be expressed as mixtures of the harmonic mean and the contrast distributions:
\[p_{1}=Z_{\otimes}(p_{1}\otimes p_{2})+Z_{\ominus}(p_{1}\ominus p_{2}),\quad p_{2}=Z_{\otimes}(p_{1}\otimes p_{2})+Z_{\ominus}(p_{2}\ominus p_{1}),\quad Z_{\otimes}=\sum_{x}\frac{p_{1}(x)p_{2}(x)}{p_{1}(x)+p_{2}(x)}=1-Z_{\ominus}.\]
Controlling the individual contributions of \(p_{1}\) and \(p_{2}\) to the composition.We modify model (6) and introduce an interpolation parameter \(\alpha\) in order to have more control over the extent of individual contributions of \(p_{1}\) and \(p_{2}\) to the composition:
\[\widetilde{p}(x,y_{1},y_{2};\alpha)=\widetilde{p}(x)\widetilde{p}(y_{1}|x)\widetilde{p}(y_{2}|x;\alpha),\ \ \widetilde{p}(x)=\frac{1}{2}p_{1}(x)+\frac{1}{2}p_{2}(x), \tag{10a}\] \[\widetilde{p}(y_{1}=i|x)=\frac{p_{i}(x)}{p_{1}(x)+p_{2}(x)},\ \ \widetilde{p}(y_{2}=i|x;\alpha)=\frac{\big(\alpha p_{1}(x)\big)^{[i=1]}\cdot\big((1-\alpha)p_{2}(x)\big)^{[i=2]}}{\alpha p_{1}(x)+(1-\alpha)p_{2}(x)}, \tag{10b}\]
where \(\alpha\in(0,1)\) and \([\cdot]\) denotes the indicator function. Conditional distributions in this model give **harmonic interpolation3** and **parameterized contrast**:
Footnote 3: the harmonic interpolation approaches \(p_{1}\) when \(\alpha\to 0\) and \(p_{2}\) when \(\alpha\to 1\)
\[(p_{1}\otimes_{\alpha}p_{2})(x)\propto\frac{p_{1}(x)p_{2}(x)}{\alpha p_{1}(x)+(1-\alpha)p_{2}(x)},\quad(p_{1}\ominus_{(1-\alpha)}p_{2})(x)\propto\frac{\big(p_{1}(x)\big)^{2}}{\alpha p_{1}(x)+(1-\alpha)p_{2}(x)}. \tag{11}\]
Comparison with "energy" operations.The harmonic mean and contrast operations we have introduced here are analogous to the product and negation operations for EBMs respectively. Although the harmonic mean and product operations are quite similar in practice, unlike the negation operation our proposed contrast operation always results in a valid probability distribution. Figure 2 shows the results of these operations applied to two Gaussian distributions. The harmonic mean and product, shown in panel (b), are both concentrated on points that have high probability under both Gaussians. Figure 2(c) shows parameterized contrasts \(p_{1}\ominus_{(1-\alpha)}p_{2}\) at different values of \(\alpha\), and panel (d) shows negations \(p_{1}\,\text{neg}_{\gamma}\,p_{2}\) at different values of \(\gamma\). The effect of negation at \(\gamma=0.1\) resembles the effect of the contrast operation: the density retreats from the high likelihood region of \(p_{2}\). However, as \(\gamma\) increases to \(0.5\) the distribution starts to concentrate excessively on the values \(x<-3\). This is due to the instability of the division \(p_{1}(x)/(p_{2}(x))^{\gamma}\) in regions where \(p_{2}(x)\to 0\). Proposition B.1 in Appendix B shows that the negation \(p_{1}\,\text{neg}_{\gamma}\,p_{2}\) in many cases results in an improper (non-normalizable) distribution.
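The qualitative behavior in Figure 2 can be reproduced numerically on a 1D grid. The snippet below is a toy sketch (the Gaussian parameters are illustrative, not those used for the figure): it evaluates the product, negation, harmonic mean, and contrast of two Gaussian densities and renormalizes them on the grid; note that the grid truncation hides the non-normalizability that the negation can exhibit on the full real line.

```python
import numpy as np

x = np.linspace(-8.0, 8.0, 2001)

def normalize(q):
    return q / np.trapz(q, x)            # renormalize on the grid

def gaussian(mu, sigma):
    return normalize(np.exp(-0.5 * ((x - mu) / sigma) ** 2))

p1, p2 = gaussian(0.0, 1.0), gaussian(2.0, 1.0)    # illustrative base densities

product       = normalize(p1 * p2)                 # EBM product, eq. (2)
negation      = normalize(p1 / p2 ** 0.5)          # EBM negation with gamma = 0.5, eq. (3)
harmonic_mean = normalize(p1 * p2 / (p1 + p2))     # eq. (8)
contrast      = normalize(p1 ** 2 / (p1 + p2))     # eq. (9)
```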
**Operation chaining.** As the binary operations we have introduced result in proper distributions, we can create new \(N\)-ary operations by chaining binary (and \(N\)-ary) operations together. For instance, chaining binary harmonic means gives the harmonic mean of three distributions
\[((p_{1}\otimes p_{2})\otimes p_{3})(x)=(p_{1}\otimes(p_{2}\otimes p_{3}))(x) \propto\frac{p_{1}(x)p_{2}(x)p_{3}(x)}{p_{1}(x)p_{2}(x)+p_{1}(x)p_{3}(x)+p_{2} (x)p_{3}(x)}. \tag{12}\]
### Compositional Sculpting: General Approach
The approach we used above for specifying compositions of two base models controlled by two observations can be generalized to compositions of \(m\) base models \(p_{1}(x),\ldots,p_{m}(x)\) controlled by \(n\) observations. At the end of the previous section we showed that operator chaining can already realize compositions of \(m\) base models. However, our generalized method allows us to specify compositions more flexibly, and results in different compositions from operator chaining. We propose to define an augmented probabilistic model \(\widetilde{p}(x,y_{1},\ldots,y_{n})\) as a joint distribution over the original objects \(x\in\mathcal{X}\) and \(n\) observation variables \(y_{1}\in\mathcal{Y},\ldots,y_{n}\in\mathcal{Y}\) where \(\mathcal{Y}=\{1,\ldots,m\}\). By defining appropriate conditionals \(p(y_{k}|x)\) we will be able to controllably shape a prior \(\widetilde{p}(x)\) into a posterior \(\widetilde{p}(x|y_{1},\ldots,y_{n})\) based on the base models.
As in the binary case, we propose to use a uniformly-weighted mixture of the base distributions \(\widetilde{p}(x)=\frac{1}{m}\sum_{i=1}^{m}p_{i}(x)\). The support of this mixture is the union of the supports of the base models: \(\bigcup_{i=1}^{m}\operatorname{supp}\{p_{i}(x)\}=\operatorname{supp}\{\widetilde{p}(x)\}\). This is essential as the prior can only be shaped in places where it has non-zero density. As before we define the conditionals \(p(y_{k}=i|x)\) to correspond to the observation that \(x\) was generated by base model \(i\). The resulting full model is
\[\widetilde{p}(x,y_{1},\ldots,y_{n})=\widetilde{p}(x)\prod_{k=1}^{n}\widetilde {p}(y_{k}|x),\qquad\widetilde{p}(x)=\frac{1}{m}\sum_{i=1}^{m}p_{i}(x), \tag{13}\]
\[\widetilde{p}(y_{k}\!=\!i)=\frac{1}{m}\quad\forall k\in\{1,\ldots,n\},\qquad \widetilde{p}(y_{k}\!=\!i|x)=\frac{p_{i}(x)}{\sum_{j=1}^{m}p_{j}(x)}\quad \forall k\in\{1,\ldots,n\}. \tag{14}\]
Note that under this model the mixture can be represented as the marginal of the joint distribution \(\widetilde{p}(x,y_{k})=\widetilde{p}(x|y_{k})\widetilde{p}(y_{k})\), where \(y_{k}\in\{1,\ldots,m\}\), for any one of the observations \(y_{k}\).
The inferred posterior over \(x\) for this model is
\[\widetilde{p}(x|y_{1}\!=\!i_{1},\ldots,y_{n}\!=\!i_{n}) \propto\widetilde{p}(x)\widetilde{p}(y_{1}\!=\!i_{1},\ldots,y_{n}\! =\!i_{n}|x) \tag{15}\] \[\propto\widetilde{p}(x)\prod_{k=1}^{n}\widetilde{p}(y_{k}\!=\!i_{ k}|x)\propto\left(\prod_{k=1}^{n}p_{i_{k}}(x)\right)\bigg{/}\left(\sum_{j=1}^{m}p_{ j}(x)\right)^{n-1}. \tag{16}\]
The posterior \(\widetilde{p}(x|y_{1}\!=\!i_{1},\ldots,y_{n}\!=\!i_{n})\) is a composition of distributions \(\{p_{i}(x)\}_{i=1}^{m}\) that can be adjusted by choosing values for \(y_{1},\ldots,y_{n}\). By adding or omitting an observation \(y_{k}\!=\!i\) we can _sculpt_ the posterior to our liking, emphasizing or de-emphasizing regions of \(\mathcal{X}\) where \(p_{i}\) has high density. The observations can be introduced with multiplicities (e.g., \(y_{1}=1,y_{2}=1,y_{3}=2\)) to further strengthen the effect. Moreover, one can choose to introduce all observations simultaneously as in (15) or sequentially as in (16). As we show below (Section 5.1 for GFlowNets; Section 5.3 for diffusion models), the composition (15) can be realized by a sampling policy that can be expressed as a function of the pre-trained (base) sampling policies.
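On a discrete domain the composed posterior can be evaluated directly from the base probability vectors; the helper below is an illustrative sketch of equations (15)-(16) (the function name and the 0-based indexing convention are ours):

```python
import numpy as np

def compose(base_probs, observations):
    """Evaluate p~(x | y_1 = i_1, ..., y_n = i_n) on a discrete domain, eqs. (15)-(16).

    base_probs   : array of shape (m, |X|); rows are the base distributions p_1, ..., p_m
    observations : list of observed base-model indices [i_1, ..., i_n] (0-based)
    """
    base_probs = np.asarray(base_probs, dtype=float)
    numerator = np.prod(base_probs[observations, :], axis=0)          # prod_k p_{i_k}(x)
    denominator = base_probs.sum(axis=0) ** (len(observations) - 1)   # (sum_j p_j(x))^(n-1)
    posterior = numerator / denominator
    return posterior / posterior.sum()

# With m = 2: compose([p1, p2], [0, 1]) is the harmonic mean (8);
# compose([p1, p2], [0, 0]) is the contrast (9).
```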
Figure 2: **Compositional sculpting and energy operations applied to 1D Gaussian distributions.** (a) Densities of the base 1D Gaussian distributions \(p_{1}\) and \(p_{2}\). (b) Harmonic mean \(p_{1}\otimes p_{2}\) and product \(p_{1}\,\text{prod}\,p_{2}\). (c) Parameterized contrasts \(p_{1}\ominus_{(1-\alpha)}p_{2}\) at different values of \(\alpha\). (d) Negations \(p_{1}\,\text{neg}_{\gamma}\,p_{2}\) at different values of \(\gamma\).
Special instances and general formulation.The general approach outlined in this section is not limited to the choices we made to construct the model in equation (13), i.e. \(\widetilde{p}(x)\) does not have to be a uniformly weighted mixture of the base distributions, \(y_{1},\ldots,y_{n}\) do not have to be independent and identically distributed given \(x\), and different choices of the likelihood \(\widetilde{p}(y_{k}\!=\!i|x)\) are possible. For instance, the parameterized binary operations (11) are derived from a model where the likelihoods of the observations \(\widetilde{p}(y_{1}|x)\), \(\widetilde{p}(y_{2}|x)\) differ.
Sampling from conditional distributions via classifier guidance.In Section 5 we introduce a method that allows us to sample from compositions of distributions \(p_{1},\ldots,p_{m}\) implied by a chosen set of variables \(y_{1},\ldots,y_{n}\). To do this, we note the similarity between (15) and classifier guidance. Indeed, we can sample from the posterior by applying classifier guidance to \(\widetilde{p}(x)\). Classifier guidance can either be applied in a single shot as in (15), or sequentially as in (16). Any chain of operations can be realized via sequential guidance with a new classifier trained at each stage of the chaining. The classifier can be trained on samples generated from the pre-trained base models \(p_{1},\ldots,p_{m}\). We show how to apply this idea to GFlowNets (Sections 5.1, 5.2) and diffusion models (Sections 5.3, 5.4).
## 5 Compositional Sculpting of Iterative Generative Processes
### Composition of GFlowNets
We will now cover how the model above can be applied to compose GFlowNets, and how one can use classifier guidance to sample from the composition. Besides a sample \(x\) from \(p_{i}(x)\), a GFlowNet also generates a trajectory \(\tau\) which ends in the state \(x\). Thus, we extend the model \(\widetilde{p}(x,y_{1},\ldots,y_{n})\), described above, and introduce \(\tau\) as a variable with conditional distribution \(\widetilde{p}(\tau|y_{k}\!=\!i)=\prod_{t=0}^{|\tau|-1}p_{i,F}(s_{t+1}|s_{t})\), where \(p_{i,F}\) is the forward policy of the GFlowNet that samples from \(p_{i}\).
Our approach for sampling from the composition is conceptually simple. Given \(m\) base GFlowNets that sample from \(p_{1},\ldots,p_{m}\) respectively, we start by defining the prior \(\widetilde{p}(x)\) as the uniform mixture of these GFlowNets. Proposition 5.1 shows that this mixture can be realized by a GFlowNet policy which can be constructed directly from the forward policies of the base GFlowNets. We then apply classifier guidance to this mixture to sample from the composition. Proposition 5.2 shows that classifier guidance results in a new GFlowNet policy which can be constructed directly from the GFlowNet being guided.
**Proposition 5.1** (GFlowNet mixture policy).: _Suppose distributions \(p_{1}(x),\ldots,p_{m}(x)\) are realized by GFlowNets with forward policies \(p_{1,F}(\cdot|\cdot),\ldots,p_{m,F}(\cdot|\cdot)\). Then, the mixture distribution \(p_{M}(x)=\sum_{i=1}^{m}\omega_{i}p_{i}(x)\) with \(\omega_{1},\ldots,\omega_{m}\geq 0\) and \(\sum_{i=1}^{m}\omega_{i}=1\) is realized by the GFlowNet forward policy_
\[p_{M,F}(s^{\prime}|s)=\sum_{i=1}^{m}p(y=i|s)p_{i,F}(s^{\prime}|s), \tag{17}\]
_where \(y\) is a random variable such that the joint distribution of a GFlowNet trajectory \(\tau\) and \(y\) is given by \(p(\tau,y\!=\!i)=\omega_{i}p_{i}(\tau)\) for \(i\in\{1,\ldots,m\}\)._
The proof of Proposition 5.1 is provided in Appendix C.1.
**Proposition 5.2** (GFlowNet classifier guidance).: _Consider a joint distribution \(p(x,y)\) over a discrete space \(\mathcal{X}\times\mathcal{Y}\) such that the marginal distribution \(p(x)\) is realized by a GFlowNet with forward policy \(p_{F}(\cdot|\cdot)\). Further, assume that the joint distribution of \(x\), \(y\), and GFlowNet trajectories \(\tau=(s_{0}\rightarrow\ldots\to s_{n}=x)\) decomposes as \(p(\tau,x,y)=p(\tau,x)p(y|x)\), i.e. \(y\) is independent of the intermediate states \(s_{0},\ldots,s_{n-1}\) in \(\tau\) given \(x\). Then,_
1. _For all non-terminal nodes_ \(s\in\mathcal{S}\setminus\mathcal{X}\) _in the GFlowNet DAG_ \((\mathcal{S},\mathcal{A})\)_, the probabilities_ \(p(y|s)\) _satisfy_ \[p(y|s)=\sum_{s^{\prime}:(s\to s^{\prime})\in\mathcal{A}}p_{F}(s^{\prime}|s)p(y|s^{\prime}).\] (18)
2. _The conditional distribution_ \(p(x|y)\) _is realized by the classifier-guided policy_ \[p_{F}(s^{\prime}|s,y)=p_{F}(s^{\prime}|s)\frac{p(y|s^{\prime})}{p(y|s)}.\] (19)
Note that (18) ensures that \(p_{F}(s^{\prime}|s,y)\) is a valid policy, i.e. \(\sum_{s^{\prime}:(s\to s^{\prime})\in\mathcal{A}}p_{F}(s^{\prime}|s,y)=1\). The proof of Proposition 5.2 is provided in Appendix C.2.
Proposition 5.1 is analogous to results on mixtures of diffusion models (Peluchetti [53], Theorem 1; Lipman et al. [54], Theorem 1). Proposition 5.2 is analogous to classifier guidance for diffusion models [32, 39]. To the best of our knowledge, our work is the first to derive both results for GFlowNets.
Both equations (17) and (19) involve the inferential distribution \(p(y|s)\). Practical implementations of both mixture and conditional forward policies, therefore, require training a classifier on trajectories sampled from the given GFlowNets.
Theorem 5.3 summarizes our approach.
**Theorem 5.3**.: _Suppose distributions \(p_{1}(x),\ldots,p_{m}(x)\) are realized by GFlowNets with forward policies \(p_{1,F}(\cdot|\cdot),\ldots,\)\(p_{m,F}(\cdot|\cdot)\) respectively. Let \(y_{1},\ldots,y_{n}\) be random variables defined by (13). Then, the conditional \(\widetilde{p}(x|y_{1},\ldots,y_{n})\) is realized by the forward policy_
\[p_{F}(s^{\prime}|s,y_{1},\ldots,y_{n})=\frac{\widetilde{p}(y_{1},\ldots,y_{n}| s^{\prime})}{\widetilde{p}(y_{1},\ldots,y_{n}|s)}\sum_{i=1}^{m}p_{i,F}(s^{ \prime}|s)\widetilde{p}(y\!=\!i|s) \tag{20}\]
Note that the result of conditioning on observations \(y_{1},\ldots,y_{n}\) is just another GFlowNet policy. Therefore, to condition on more observations and build up the composition further, we can simply apply classifier guidance again to the policy constructed in Theorem 5.3.
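A sketch of how the composite policy (20) can be assembled from the base forward policies and a trained classifier is given below (all callables are placeholders; a practical implementation would batch these evaluations and work in log-space):

```python
import numpy as np

def composed_policy(s, children, base_policies, cls_marginal, cls_joint, ys):
    """Transition probabilities p_F(s'|s, y_1, ..., y_n) over `children`, following eq. (20).

    base_policies[i](s) -> array of P_{i,F}(s'|s) over children
    cls_marginal(s)     -> array of p~(y = i | s) for i = 1, ..., m
    cls_joint(s, ys)    -> scalar p~(y_1 = i_1, ..., y_n = i_n | s) for the chosen labels ys
    """
    weights = cls_marginal(s)                                         # mixture weights p~(y=i|s)
    mixture = sum(w * pol(s) for w, pol in zip(weights, base_policies))
    guidance = np.array([cls_joint(c, ys) for c in children]) / cls_joint(s, ys)
    probs = guidance * mixture
    return probs / probs.sum()   # renormalize numerically; exact classifiers satisfy (18)
```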
### Classifier Training (GFlowNets)
The evaluation of policy (20) requires knowledge of the probabilities \(\widetilde{p}(y_{1},\ldots,y_{n}|s)\). The probabilities \(\widetilde{p}(y|s)\) required for constructing the mixture can be derived from \(\widetilde{p}(y_{1},\ldots,y_{n}|s)\). These probabilities can be estimated by a classifier fitted to trajectories sampled from the base GFlowNets \(p_{1},\ldots,p_{m}\). Below we specify the sampling scheme and the objective for this classifier.
Let \(\widetilde{Q}_{\phi}(y_{1},\ldots,y_{n}|s)\) be a classifier with parameters \(\phi\) that we wish to train to approximate the ground-truth conditional: \(\widetilde{Q}_{\phi}(y_{1},\ldots,y_{n}|s)\approx\widetilde{p}(y_{1},\ldots,y_{n}|s)\). Note that \(\widetilde{Q}_{\phi}\) represents the joint distribution of \(y_{1},\ldots,y_{n}\) given a state \(s\). Under the model (13) the variables \(y_{1},\ldots,y_{n}\) are dependent given a state \(s\in S\setminus\mathcal{X}\), but are independent given a terminal state \(x\in\mathcal{X}\). This observation motivates separate treatment of terminal and non-terminal states.
Learning the terminal state classifier.For a terminal state \(x\), the variables \(y_{1},\ldots,y_{n}\) are independent and identically distributed. Hence we can use the factorization \(\widetilde{Q}_{\phi}(y_{1},\ldots,y_{n}|x)=\prod_{k=1}^{n}\widetilde{Q}_{\phi}(y_{k}|x)\). Moreover, all distributions on the _r.h.s._ must be the same. For the terminal classifier it is, therefore, enough to learn just \(\widetilde{Q}_{\phi}(y_{1}|x)\). This marginal classifier can be learned by minimizing the cross-entropy loss
\[\mathcal{L}_{\mathrm{T}}(\phi)=\mathop{\mathbb{E}}_{(\widehat{x},\widehat{y}_{1})\sim\widetilde{p}(x,y_{1})}\left[-\log\widetilde{Q}_{\phi}(y_{1}\!=\!\widehat{y}_{1}|x\!=\!\widehat{x})\right]. \tag{21}\]
Sampling from \(\widetilde{p}(x,y_{1})\) can be performed according to the factorization \(\widetilde{p}(y_{1})\widetilde{p}(x|y_{1})\). First, \(\widehat{y}_{1}\) is sampled from \(\widetilde{p}(y_{1})\), which is uniform under our choice of \(\widetilde{p}(x)\). Then, \(\widehat{x}|(y_{1}\!=\!\widehat{y}_{1})\) is generated from the base GFlowNet \(p_{\widehat{y}_{1}}\). For our choice of \(\widetilde{p}(x)\), we can derive from (13) that \(\widetilde{p}(x|y_{1}\!=\!\widehat{y}_{1})=p_{\widehat{y}_{1}}(x)\).
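A minimal sketch of this sampling scheme and the resulting Monte-Carlo estimate of (21); `base_samplers` and `classifier_probs` are placeholder callables standing in for the base GFlowNets and the classifier head:

```python
import numpy as np

def terminal_loss(base_samplers, classifier_probs, batch_size, rng=np.random.default_rng()):
    """Monte-Carlo estimate of the terminal-state loss L_T, eq. (21).

    base_samplers[i]()   -> a terminal state x sampled from the base GFlowNet p_i
    classifier_probs(x)  -> array of Q_phi(y_1 = i | x) over i = 1, ..., m
    """
    m, loss = len(base_samplers), 0.0
    for _ in range(batch_size):
        y = rng.integers(m)              # y_1 ~ uniform prior p~(y_1)
        x = base_samplers[y]()           # x | (y_1 = y)  ~  p_y
        loss -= np.log(classifier_probs(x)[y])
    return loss / batch_size
```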
Learning the non-terminal state classifier.Given a non-terminal state \(s\in S\setminus\mathcal{X}\), we need to model \(y_{1},\ldots,y_{n}\) jointly. In order to train the classifier one needs to sample tuples \((\widehat{s},\widehat{y}_{1},\ldots,\widehat{y}_{n})\). Non-terminal states \(s\) can be generated as intermediate states in trajectories \(\tau=(s_{0}\to s_{1}\rightarrow\ldots\to x)\). Given a sampled trajectory \(\widehat{\tau}\) and a set of labels \(\widehat{y}_{1},\ldots,\widehat{y}_{n}\) we denote the total cross-entropy loss of all non-terminal states in \(\widehat{\tau}\) by
\[\ell(\widehat{\tau},\widehat{y}_{1},\ldots,\widehat{y}_{n};\phi)=-\sum_{s\in\widehat{\tau}\setminus\{\widehat{x}\}}\log\widetilde{Q}_{\phi}(y_{1}\!=\!\widehat{y}_{1},\ldots,y_{n}\!=\!\widehat{y}_{n}|s), \tag{22}\]

where \(\widehat{x}\) denotes the terminal state of \(\widehat{\tau}\). The labels \(\widehat{y}_{2},\ldots,\widehat{y}_{n}\) are not directly observed for a sampled trajectory; their conditional probabilities given \(\widehat{x}\) are approximated by the terminal classifier itself, \(w_{i}(\widehat{x};\overline{\phi})=\widetilde{Q}_{\overline{\phi}}(y_{k}\!=\!i|x\!=\!\widehat{x})\), where \(\overline{\phi}\) is a target copy of the classifier parameters maintained as an exponential moving average of \(\phi\) (cf. Algorithm 1).
After putting all components together, the training loss for the non-terminal state classifier is
\[\mathcal{L}_{N}(\phi,\overline{\phi})=\mathop{\mathbb{E}}_{(\widehat{\tau},\widehat{y}_{1})\sim\widetilde{p}(\tau,y_{1})}\left[\sum_{\widehat{y}_{2}=1}^{m}\cdots\sum_{\widehat{y}_{n}=1}^{m}\left(\prod_{k=2}^{n}w_{\widehat{y}_{k}}(\widehat{x};\overline{\phi})\right)\ell(\widehat{\tau},\widehat{y}_{1},\ldots,\widehat{y}_{n};\phi)\right]. \tag{23}\]
We refer the reader to Appendix C.4 for a more detailed derivation of the loss (23).
Note that equation (23) involves summation over \(\widehat{y}_{2},\ldots,\widehat{y}_{n}\) with \(m^{n-1}\) terms in the sum. If the values of \(n\) and \(m\) are small, the sum can be evaluated directly. In general, one could trade off estimation accuracy for improved speed by replacing the summation with Monte Carlo estimation. In this case, the values \(\widehat{y}_{k}\) are sampled from the categorical distributions \(\widetilde{Q}_{\overline{\phi}}(y_{k}|x)\). Note that the labels can be sampled in parallel since the \(y_{k}\) are independent given \(x\).
Algorithm 1 shows the complete classifier training procedure.
```
1: Initialize \(\phi\) and set \(\overline{\phi}=\phi\)
2: for step \(=1,\ldots,\text{num\_steps}\) do
3:  for \(i=1,\ldots,m\) do
4:   Sample \(\widehat{\tau}_{i}\sim p_{i}(\tau)\)
5:  end for
6:  \(\mathcal{L}_{T}(\phi)=-\sum_{i=1}^{m}\log\widetilde{Q}_{\phi}(y_{1}=i|x=\widehat{x}_{i})\quad\{\text{Terminal state loss, eq. (21)}\}\)
7:  \(w_{i}(\widehat{x}_{j};\overline{\phi})=\widetilde{Q}_{\overline{\phi}}\left(y_{k}=i|x=\widehat{x}_{j}\right)\), \(i,j\in\{1,\ldots,m\}\quad\{\text{Probability estimates}\}\)
8:  \(\mathcal{L}_{N}(\phi,\overline{\phi})=\sum_{\widehat{y}_{1}=1}^{m}\cdots\sum_{\widehat{y}_{n}=1}^{m}\left(\prod_{k=2}^{n}w_{\widehat{y}_{k}}(\widehat{x}_{\widehat{y}_{1}};\overline{\phi})\right)\ell(\widehat{\tau}_{\widehat{y}_{1}},\widehat{y}_{1},\ldots,\widehat{y}_{n};\phi)\quad\{\text{Non-terminal state loss, eqs. (22)-(23)}\}\)
9:  \(\mathcal{L}(\phi,\overline{\phi})=\mathcal{L}_{T}(\phi)+\gamma(\text{step})\cdot\mathcal{L}_{N}(\phi,\overline{\phi})\)
10:  Update \(\phi\) using \(\nabla_{\phi}\mathcal{L}(\phi,\overline{\phi})\); update \(\overline{\phi}=\beta\overline{\phi}+(1-\beta)\phi\)
11: end for
```
**Algorithm 1** Compositional Sculpting: classifier training
### Composition of Diffusion Models
In this section, we show how the method introduced above can be applied to diffusion models. First, we adapt the model we introduced in (13)-(16) to diffusion models. A diffusion model trained to sample from \(p_{i}(x)\) generates a trajectory \(\tau=\{x_{t}\}_{t=0}^{T}\) over a range of time steps which starts with a randomly sampled state \(x_{T}\) and ends in \(x_{0}\), where \(x_{0}\) has distribution \(p_{i,t=0}(x)=p_{i}(x)\). Thus, we must adapt our model to reflect this. We introduce a set of mutually dependent variables \(x_{t}\) for \(t\in(0,T]\) whose conditional distribution is given by the transition kernel of the diffusion model, \(p_{i}(x_{t}|x_{0})\).
Given \(m\) base diffusion models that sample from \(p_{1},\ldots,p_{m}\) respectively, we define the prior \(\widetilde{p}(x)\) as a mixture of these diffusion models. Proposition 5.4 shows that this mixture is a diffusion model that can be constructed directly from the base diffusion models. We then apply classifier guidance to this mixture to sample from the composition. We present an informal version of the proposition below. The required assumptions and the proof are provided in Appendix C.5.
**Proposition 5.4** (Diffusion mixture SDE).: _Suppose distributions \(p_{1}(x),\ldots,p_{m}(x)\) are realized by diffusion models with forward SDEs \(dx_{i,t}=f_{i,t}(x_{i,t})\,dt+g_{i,t}\,dw_{i,t}\) and score functions \(s_{i,t}(\cdot)\), respectively. Then, the mixture distribution \(p_{M}(x)=\sum_{i=1}^{m}\omega_{i}p_{i}(x)\) with \(\omega_{1},\ldots,\omega_{m}\geq 0\) and \(\sum_{i=1}^{m}\omega_{i}=1\) is realized by a diffusion model with forward SDE_
\[dx_{t}=\underbrace{\left[\sum_{i=1}^{m}p(y\!=\!i|x_{t})f_{i,t}(x_{t})\right]}_ {f_{M,t}(x_{t})}dt+\underbrace{\sqrt{\sum_{i=1}^{m}p(y\!=\!i|x_{t})g_{i,t}^{2} }}_{g_{M,t}(x_{t})}dw_{t}, \tag{24}\]
_and backward SDE_
\[dx_{t}=\left[\sum_{i=1}^{m}p(y\!=\!i|x_{t})\Big{(}f_{i,t}(x_{t})-g_{i,t}^{2}s_ {i,t}(x_{t})\Big{)}\right]\,dt+\sqrt{\sum_{i=1}^{m}p(y\!=\!i|x_{t})g_{i,t}^{2} }\,d\overline{w}_{t}, \tag{25}\]
_with_
\[p(y\!=\!i|x_{t})=\frac{\omega_{i}p_{i,t}(x_{t})}{\sum_{j=1}^{m}\omega_{j}p_{j,t}(x_{t})}. \tag{26}\]
If the base diffusion models have a common forward SDE \(dx_{i,t}=f_{t}(x_{i,t})\,dt+g_{t}\,dw_{i,t}\), equations (24)-(25) simplify to
\[dx_{t}=f_{t}(x_{t})dt+g_{t}dw_{t},\quad dx_{t}=\left[f_{t}(x_{t})-g_{t}^{2} \left(\sum_{i=1}^{m}p(y\!=\!i|x_{t})s_{i,t}(x_{t})\right)\right]\,dt+g_{t}d \overline{w}_{t}. \tag{27}\]
Theorem 5.5 summarizes the overall approach.
**Theorem 5.5**.: _Suppose distributions \(p_{1}(x),\ldots,p_{m}(x)\) are realized by diffusion models with forward SDEs \(dx_{i,t}=f_{i,t}(x_{i,t})\,dt+g_{i,t}\,dw_{i,t}\) and score functions \(s_{i,t}(\cdot)\), respectively. Let \(y_{1},\ldots y_{n}\) be random variables defined by (13). Then, the conditional \(\widetilde{p}(x|y_{1},\ldots,y_{n})\) is realized by a classifier-guided diffusion with backward SDE_
\[dx_{t}=\left[\sum_{i=1}^{m}\widetilde{p}(y\!=\!i|x_{t})\Big{(} f_{i,t}(x_{t})-g_{i,t}^{2}\Big{(}s_{i,t}(x_{t})+\nabla_{x_{t}}\log \widetilde{p}(y_{1},\ldots,y_{n}|x_{t})\Big{)}\Big{)}\right]dt+\sqrt{\sum_{i= 1}^{m}\widetilde{p}(y\!=\!i|x_{t})g_{i,t}^{2}}\,d\overline{w}_{t}. \tag{28}\]
The proof of Theorem 5.5 is provided in Appendix C.6.
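When the base models share a common forward SDE, (28) reduces to guiding the classifier-weighted mixture of the base scores, cf. (27). The sketch below composes such a score function; it can be passed to a reverse-SDE integrator like the one sketched in Section 2.2 (all callables are placeholders):

```python
def composed_score(base_scores, classifier):
    """Score of p~(x_t | y_1, ..., y_n) for base models sharing one forward SDE, cf. (27)-(28).

    base_scores[i](x, t) ~ s_{i,t}(x)
    classifier(x, t)     -> (probs, grad): probs[i] ~ p~(y = i | x_t) and
                            grad ~ grad_x log p~(y_1, ..., y_n | x_t)
    """
    def score(x, t):
        probs, grad = classifier(x, t)
        mixture = sum(p * s(x, t) for p, s in zip(probs, base_scores))
        return mixture + grad
    return score
```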
### Classifier Training (Diffusion Models)
We approximate the inferential distributions in equations (27) and (28) with a time-conditioned classifier \(\widetilde{Q}_{\phi}(y_{1},\ldots,y_{n}|x_{t})\) with parameters \(\phi\). Contrary to GFlowNets, which employed a terminal and a non-terminal state classifier, here we only need a single time-dependent classifier. The classifier is trained with different objectives on terminal and non-terminal states. The variables \(y_{1},\ldots,y_{n}\) are dependent given a state \(x_{t}\) for \(t\in(0,T]\), but are independent given the terminal state \(x_{0}\). Thus, when training on terminal states we can exploit this independence. Furthermore, we generally found it beneficial to initially train only on terminal states. The loss for the non-terminal states depends on classifications of the terminal state of the associated trajectories; thus, by minimizing the classification error of terminal states first, we reduce noise in the loss calculated for the non-terminal states later.
For a terminal state \(x_{0}\), the classifier \(\widetilde{Q}_{\phi}(y_{1},\ldots,y_{n}|x_{0})\) can be factorized as \(\prod_{k=1}^{n}\widetilde{Q}_{\phi}(y_{k}|x_{0})\). Hence we can train \(\widetilde{Q}\) by minimizing the cross-entropy loss
\[\mathcal{L}_{\mathrm{T}}(\phi)=\mathop{\mathbb{E}}_{(\widehat{x}_{0},\widehat{y}_{1})\sim\widetilde{p}(x_{0},y_{1})}\left[-\log\widetilde{Q}_{\phi}(y_{1}\!=\!\widehat{y}_{1}|x_{0}\!=\!\widehat{x}_{0})\right]. \tag{29}\]
Samples from \(\widetilde{p}(x_{0},y_{1})\) can be generated according to the factorization \(\widetilde{p}(y_{1})\widetilde{p}(x_{0}|y_{1})\). First, \(\widehat{y}_{1}\) is sampled from \(\widetilde{p}(y_{1})\), which is uniform under our choice of \(\widetilde{p}(x_{0})\). Then, \(\widehat{x}_{0}|(y_{1}\!=\!\widehat{y}_{1})\) is generated from the reverse SDE of the base diffusion model \(p_{\widehat{y}_{1}}(x)\). Note that equation (14) implies that all observations have the same conditional distribution given \(x\). Thus, \(\widetilde{Q}_{\phi}(y_{1}|x_{0})\) is also a classifier for observations \(y_{2}\), ..., \(y_{n}\).
For a non-terminal state \(x_{t}\) with \(t\in(0,T]\), we must train \(\widetilde{Q}\) to predict \(y_{1},\ldots,y_{n}\) jointly. For a non-terminal state \(\widehat{x}_{t}\) and observations \(\widehat{y}_{1},\ldots,\widehat{y}_{n}\), the cross-entropy loss is
\[\ell(\widehat{x}_{t},\widehat{y}_{1},\ldots,\widehat{y}_{n};\phi)=-\log \widetilde{Q}_{\phi}(y_{1}\!=\!\widehat{y}_{1},\ldots,y_{n}\!=\!\widehat{y}_{n} |x_{t}\!=\!\widehat{x}_{t}). \tag{30}\]
Tuples \((\widehat{x}_{t},\widehat{y}_{1},\ldots,\widehat{y}_{n})\) are obtained as follows: 1) \(\widehat{y}_{1}\sim\widetilde{p}(y_{1})\); 2) a trajectory \(\widehat{\tau}=\{\widehat{x}_{t}\}_{t=0}^{T}\) is sampled from the reverse SDE of the diffusion model \(p_{\widehat{y}_{1}}\). At this point, we would ideally sample \(\widehat{y}_{2},\ldots,\widehat{y}_{n}\) given \(\widehat{x}_{0}\), but this requires access to \(\widetilde{p}(y_{k}\!=\!\widehat{y}_{k}|\widehat{x}_{0})\). Instead, we approximate this with \(w_{i}(\widehat{x}_{0};\overline{\phi})=\widetilde{Q}_{\overline{\phi}}(y_{1}\!=\!i|x_{0}\!=\!\widehat{x}_{0})\) and marginalize over \(\widehat{y}_{2},\ldots,\widehat{y}_{n}\) to obtain the cross-entropy loss
\[\mathcal{L}_{N}(\phi,\overline{\phi})=\mathop{\mathbb{E}}_{(\widehat{\tau},\widehat{y}_{1})\sim\widetilde{p}(\tau,y_{1})}\left[\sum_{\widehat{x}_{t}\in\widehat{\tau}\setminus\{\widehat{x}_{0}\}}\sum_{\widehat{y}_{2}=1}^{m}\cdots\sum_{\widehat{y}_{n}=1}^{m}\left(\prod_{k=2}^{n}w_{\widehat{y}_{k}}(\widehat{x}_{0};\overline{\phi})\right)\ell(\widehat{x}_{t},\widehat{y}_{1},\ldots,\widehat{y}_{n};\phi)\right]. \tag{31}\]
## 6 Experiments
### 2D Distributions via GFlowNet
We validate GFlowNet compositions obtained with our framework on the 2D grid domain [36]. The goal of this experiment is to validate our approach in a controlled setting, where the ground-truth composite distributions can be evaluated directly.
In the 2D grid domain, the states are the cells of an \(H\times H\) grid. The starting state is the upper-left cell \(s_{0}=(0,0)\). At each state, the allowed actions are: 1) move right; 2) move down; 3) a stop action that indicates termination of the trajectory at the current position. For this experiment, we first trained GFlowNets \(p_{i}(x)\propto R_{i}(x)\) with reward functions \(R_{i}(x)>0\) defined on the grid, and then trained classifiers and constructed GFlowNet compositions following Theorem 5.3.
Figure 3 (top row) shows the distributions obtained by composing two pre-trained GFlowNets (top row; left). The harmonic mean \(p_{1}\otimes p_{2}\) covers the regions that have high probability under both \(p_{1}\) and \(p_{2}\) and excludes locations where either of the distributions is low. The contrast \(p_{1}\ominus p_{2}\) resembles \(p_{1}\), but the relative masses of the modes of \(p_{1}\) are modulated by \(p_{2}\): regions with high \(p_{2}\) have lower probability under the contrast. The parameterized contrast \(p_{1}\ominus_{0.95}p_{2}\) with \(\alpha=0.05\) magnifies the contrasting effect: high \(p_{2}(x)\) implies very low \((p_{1}\ominus_{0.95}p_{2})(x)\).
The bottom row of Figure 3 shows the operations on 3 distributions. The conditional \(\widetilde{p}(x|y_{1}=1,y_{2}=2)\) is concentrated on the points that have high likelihood under both \(p_{1}\) and \(p_{2}\). Similarly, the value \(\widetilde{p}(x|y_{1}=1,y_{2}=2,y_{3}=3)\) is high if \(x\) is likely to be observed under all three distributions at the same time. The conditionals \(\widetilde{p}(x|y_{1}=2,y_{2}=2)\) and \(\widetilde{p}(x|y_{1}=2,y_{2}=2,y_{3}=2)\) highlight the points with high \(p_{2}(x)\) but low \(p_{1}(x)\) and \(p_{3}(x)\). Conditioning on three labels results in a sharper distribution compared to double-conditioning. Note that the operations can be thought of as generalized
Figure 4: **Reward distributions in the molecular generation domain. (a) Base GFlowNets at \(\beta=32\): \(p_{\text{SEH}}\) and \(p_{\text{SA}}\) are trained with \(R_{\text{SEH}}(x)^{32}\) and \(R_{\text{SA}}(x)^{32}\). (b) harmonic mean of \(p_{\text{SEH}}\) and \(p_{\text{SA}}\), (c) contrasts. (d) base GFlowNets at \(\beta=96\). (e) harmonic mean. The contours indicate the level sets of the kernel density estimates in the (\(R_{\text{SEH}}\), \(R_{\text{SA}}\)) plane.**
Figure 3: **Composed GFlowNets on \(32\times 32\) grid domain. Color indicates cell probability, darker is higher. (Top) operations on two distributions. (Bottom) operations on three distributions. The red circles indicate the high probability regions of \(p_{1}\), \(p_{2}\), \(p_{3}\).**
set-theoretic operations (set intersection and set difference). We provide quantitative results and further details in Appendix E.1. The classifier learning curves are provided in Appendix F.4.
### Molecule Generation via GFlowNet
Next, we evaluate our method for GFlowNet composition on a large and highly structured data space, and assess the effect that composition operations have on the resulting data distributions in a practical setting. To that end, we conducted experiments with GFlowNets trained for the molecular generation task proposed by Bengio et al. [36].
Domain.In the molecule generation task, the objects \(x\in\mathcal{X}\) are molecular graphs. The non-terminal states \(s\in S\setminus\mathcal{X}\) are incomplete molecular graphs. The transitions from a given non-terminal state \(s\) are of two types: 1) fragment addition \(s\to s^{\prime}\): new molecular graph \(s^{\prime}\) is obtained by attaching a new fragment to the molecular graph \(s\); 2) stop action \(s\to x\): if \(s\neq s_{0}\), then the generation process can be terminated at the molecular graph corresponding to the current state (note that new terminal state \(x\in\mathcal{X}\) is different from \(s\in S\setminus\mathcal{X}\), but both states correspond to the same molecular graph).
Rewards.We trained GFlowNets using 3 reward functions: **SEH**, a reward computed by an MPNN [56] that was trained by Bengio et al. [36] to estimate the binding energy of a molecule to the soluble epoxide hydrolase protein; **SA**, an estimate of synthetic accessibility [57] computed with tools from RDKit library [58]; **QED**, a quantitative estimate of drug-likeness [59] which is also computed with RDKit. We normalized all reward functions to the range \([0,1]\). Higher values of SEH, SA, and QED correspond to stronger binding, higher synthetic accessibility, and higher drug-likeness respectively. Following Bengio et al. [36], we introduced the parameter \(\beta\) which controls the sharpness (temperature) of the target distribution: \(p(x)\propto R(x)^{\beta}\), increasing \(\beta\) results in a distribution skewed towards high-reward objects. We experimented with two \(\beta\) values, 32 and 96 (Figure 4(a),4(d)).
Training and evaluation.After training the base GFlowNets with the reward functions described above, we trained classifiers with Algorithm 1. The classifier was parameterized as a graph neural network based on a graph transformer architecture [60]. Further details of the classifier parameterization and training are provided in Appendix E.2. Compared to the 2D grid domain (Section 6.1), we cannot directly evaluate the distributions obtained by our approach. Instead, we analyzed the samples generated by the composed distributions. We sampled 5 000 molecules from each composed distribution obtained with our approach as well as from the base GFlowNets. We evaluated the sample collections with the two following strategies. **Reward evaluation**: we analyzed the distributions of rewards across the sample collections. The goal is to see whether the composition of GFlowNets trained for different rewards leads to noticeable changes in the reward distribution. **Distribution distance evaluation**: we used the samples to estimate the pairwise distances between the distributions. Specifically, for a given pair of distributions represented by two collections of samples \(D_{A}=\{x_{A,i}\}_{i=1}^{n}\), \(D_{B}=\{x_{B,i}\}_{i=1}^{n}\) we computed the earth mover's distance \(d(D_{A},D_{B})\) with ground molecule distance given by \(d(x,x^{\prime})=(\max\{s(x,x^{\prime}),10^{-3}\})^{-1}-1\), where \(s(x,x^{\prime})\in[0,1]\) is the Tanimoto similarity over Morgan fingerprints of molecules \(x\) and \(x^{\prime}\).
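A sketch of this distance computation is given below; the RDKit calls compute Morgan fingerprints and Tanimoto similarities, and we assume the POT library (`ot`) for the exact earth mover's distance, which is our choice of tooling rather than the exact setup used for the reported numbers.

```python
import numpy as np
import ot  # POT: Python Optimal Transport (assumed tooling)
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

def morgan_fp(smiles, radius=2, n_bits=2048):
    return AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(smiles), radius, nBits=n_bits)

def sample_emd(smiles_a, smiles_b):
    """EMD between two molecule collections (uniform weights) with ground distance
    d(x, x') = 1 / max(Tanimoto(x, x'), 1e-3) - 1."""
    fps_a = [morgan_fp(s) for s in smiles_a]
    fps_b = [morgan_fp(s) for s in smiles_b]
    M = np.array([[1.0 / max(DataStructs.TanimotoSimilarity(fa, fb), 1e-3) - 1.0
                   for fb in fps_b] for fa in fps_a])
    a = np.full(len(fps_a), 1.0 / len(fps_a))
    b = np.full(len(fps_b), 1.0 / len(fps_b))
    return ot.emd2(a, b, M)
```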
| Model | low/low/low | low/low/high | low/high/low | low/high/high | high/low/low | high/low/high | high/high/low | high/high/high |
|---|---|---|---|---|---|---|---|---|
| \(p_{\text{SEH}}\) | 0 | 0 | 0 | 0 | **62** | **9** | **24** | **5** |
| \(p_{\text{SA}}\) | 0 | 0 | **73** | **4** | 0 | 0 | **18** | **5** |
| \(p_{\text{QED}}\) | 0 | **40** | 0 | **26** | 0 | **21** | 0 | **13** |
| (a) \(y=\{\text{seh, sa}\}\) | 1 | 0 | 16 | 2 | 6 | 3 | **54** | **18** |
| (b) \(y=\{\text{seh, qed}\}\) | 0 | 11 | 0 | 4 | 1 | **48** | 4 | **32** |
| (c) \(y=\{\text{sa, qed}\}\) | 0 | 15 | 1 | **42** | 0 | 8 | 2 | **32** |
| (d) \(y=\{\text{seh, sa, qed}\}\) | 0 | 7 | 2 | 11 | 2 | 19 | 10 | **49** |
| (e) \(y=\{\text{seh, seh, seh}\}\) | 0 | 0 | 0 | 0 | **63** | 9 | 24 | 4 |
| (f) \(y=\{\text{sa, sa, sa}\}\) | 0 | 0 | **74** | 5 | 0 | 0 | 17 | 4 |
| (g) \(y=\{\text{qed, qed, qed}\}\) | 0 | **40** | 0 | 23 | 0 | 23 | 0 | 14 |

Column labels give the (SEH / SA / QED) reward bins. In each row, the numbers show the percentage of the samples from the respective model that fall into one of the 8 bins according to rewards. The "low" and "high" categories are decided by thresholding SEH at 0.5, SA at 0.6, and QED at 0.25.

Table 1: Reward distributions of composite GFlowNets.
Results.Figure 4 shows reward distributions of base GFlowNets (trained with SEH, SA, and QED at \(\beta=32\)) and their compositions. Base GFlowNet distributions are concentrated on examples that score high in their respective rewards. For each model, there is considerable variation in the reward that was not used for training. The harmonic mean operation (Figures 4(b), 4(e)) results in distributions that are concentrated on the samples scoring high in both rewards. The contrast operation (Figure 4(c)) has the opposite effect: the distributions are skewed towards the examples scoring high in only one of the original rewards. Note that the tails of the contrast distributions are retreating from the area covered by the harmonic mean.
We show reward distribution statistics of three GFlowNets (trained with SEH, SA, and QED at \(\beta=32\)) and their compositions in Table 1. Each row of the table gives a breakdown (percentages) of the samples from a given model into one of \(2^{3}=8\) bins according to rewards. For all three base models, the majority of the samples fall into the "high" category according to the respective reward, while the rewards that were not used for training show variation. Conditioning on two different labels (e.g. \(y\!=\!\{\text{SEH},\text{QED}\}\)) results in concentration on examples that score high in two selected rewards, but not necessarily scoring high in the reward that was not selected. The conditional \(y\!=\!\{\text{SEH},\text{QED},\text{SA}\}\) shifts the focus to examples that have all three properties.
Figure 5 shows 2D embeddings of the distributions appearing in Table 1. The embeddings were computed with t-SNE based on the pairwise earth mover's distances. The configuration of the embeddings gives insight into the configuration of the base models and conditionals in the distribution space. We see that points corresponding to pairwise conditionals lie in between the two base models selected for conditioning. The conditional \(y\!=\!\{\text{SEH},\text{SA},\text{QED}\}\) appears to lie near the centroid of the triangle \((p_{\text{SEH}},p_{\text{SA}},p_{\text{QED}})\) and lies close to the pairwise conditionals. The distributions obtained by repeated conditioning on the same label value (e.g. \(y\!=\!\{\text{SEH},\text{SEH},\text{SEH}\}\)) are spread out to the boundary and lie closer to the respective base distributions while being relatively far from pairwise conditionals. We provide a complete summary of the distribution distances in Table F.5. The classifier learning curves are provided in Appendix F.4. The sample diversity statistics of base GFlowNets at different values of \(\beta\) are provided in Appendix F.5.
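The embedding step itself takes only a few lines. The sketch below assumes a symmetric matrix `D` of pairwise earth mover's distances (a placeholder, not the paper's actual values) and uses scikit-learn's t-SNE with a precomputed metric.

```python
import numpy as np
from sklearn.manifold import TSNE

def embed_distributions(D, random_state=0):
    # D: (m, m) symmetric matrix of pairwise earth mover's distances between m distributions.
    # With a precomputed metric, t-SNE requires a random (non-PCA) initialization.
    D = np.asarray(D)
    tsne = TSNE(n_components=2, metric="precomputed", init="random",
                perplexity=min(5, D.shape[0] - 1), random_state=random_state)
    return tsne.fit_transform(D)  # (m, 2) coordinates for plotting
```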
### Colored MNIST Generation via Diffusion Models
Finally, we empirically test our method for the composition of diffusion models on an image generation task.
In this experiment, we composed three diffusion models that are pre-trained to generate colored MNIST digits [61]. Model \(p_{1}\) was trained to generate cyan digits less than \(4\), \(p_{2}\) to generate cyan and beige digits less than \(2\), and \(p_{3}\) to generate cyan and beige even digits less than \(4\). In essence, each model was trained to generate digits with a specific property: \(p_{1}\) generates cyan digits, \(p_{2}\) generates digits less than \(2\), and \(p_{3}\) generates even digits.
We built the composition iteratively by factorizing the posterior as \(\widetilde{p}(x|y_{1},y_{2},y_{3})\propto\widetilde{p}(x)\widetilde{p}(y_{1}, y_{2}|x)\widetilde{p}(y_{3}|x,y_{1},y_{2})\). To this end, we first trained a classifier \(\widetilde{Q}(y_{1},y_{2}|x_{t})\) on trajectories sampled from the base models. This allows us to generate
Figure 6: **Composed diffusion models on colored MNIST. Samples from 3 pre-trained diffusion models and their various compositions.**
samples from \(\widetilde{p}(x|y_{1},y_{2})\). We then trained an additional classifier \(\widetilde{Q}(y_{3}|x_{t},y_{1},y_{2})\) on trajectories from compositions defined by \((y_{1},y_{2})\) to allow us to sample from \(\widetilde{p}(x|y_{1},y_{2},y_{3})\). Additional details can be found in Appendix E.3.
Figure 6 shows samples from the pre-trained models and from selected compositions. The negative effect of _not_ conditioning on observations is clearly visible in the compositions using two variables. For example, \(\widetilde{p}(x|y_{1}=1,y_{2}=1)\) only generates cyan 3 digits: since we do _not_ condition on \(p_{2}\) or \(p_{3}\), the composition excludes digits that have high probability under \(p_{2}\) or \(p_{3}\), i.e. those that are less than 2 or even. We can make a similar analysis of \(\widetilde{p}(x|y_{1}=1,y_{2}=3)\). Cyan even digits have high density under both \(p_{1}\) and \(p_{3}\), but because \(p_{2}\) is not conditioned on, the composition excludes digits less than two (i.e. cyan 0's). Finally, \(\widetilde{p}(x|y_{1}=1,y_{2}=2,y_{3}=3)\) generates only cyan 0 digits, on which all base models have high density.
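As a rough illustration of the two-stage guidance described above, the following schematic sketch adds classifier-gradient terms for \(\widetilde{Q}(y_{1},y_{2}|x_{t})\) and \(\widetilde{Q}(y_{3}|x_{t},y_{1},y_{2})\) to a score estimate during the reverse process. The placeholder modules `base_score`, `clf12`, and `clf3` are assumptions, and the sketch deliberately omits the exact mixture weighting of the base models' scores derived earlier in the paper; it shows generic classifier guidance only, not the paper's full composition rule.

```python
import torch

def guided_score(x_t, t, y12, y3, base_score, clf12, clf3):
    # base_score(x_t, t): score estimate of the unconditional (mixture) model at time t
    # clf12(x, t): logits over joint labels (y1, y2); clf3(x, t, y12): logits over y3
    x = x_t.detach().requires_grad_(True)
    log_q12 = clf12(x, t).log_softmax(dim=-1)[..., y12].sum()
    log_q3 = clf3(x, t, y12).log_softmax(dim=-1)[..., y3].sum()
    guidance = torch.autograd.grad(log_q12 + log_q3, x)[0]
    return base_score(x_t, t) + guidance
```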
## 7 Conclusion
We introduced Compositional Sculpting, a general approach for composing iterative generative models. Compositions are defined through "observations", which enable us to emphasize or de-emphasize the density of the composition in regions where specific base models have high density. We highlighted two binary compositions, harmonic mean and contrast, which are analogous to the product and negation operations defined on EBMs. A crucial feature of the compositions we have introduced is that we can sample from them directly. By extending classifier guidance we are able to leverage the generative capabilities of the base models to produce samples from the composition. Through empirical experiments, we validated our approach for composing diffusion models and GFlowNets on toy domains, molecular generation, and image generation.
## Acknowledgements
TG and TJ acknowledge support from the Machine Learning for Pharmaceutical Discovery and Synthesis (MLPDS) consortium, DARPA Accelerated Molecular Discovery program, the NSF Expeditions grant (award 1918839) "Understanding the World Through Code", and from the MIT-DSTA collaboration.
SK and SDP were supported by the Technology Industries of Finland Centennial Foundation and the Jane and Aatos Erkko Foundation under project Interactive Artificial Intelligence for Driving R&D, the Academy of Finland (flagship programme: Finnish Center for Artificial Intelligence, FCAI; grants 328400, 345604 and 341763), and the UKRI Turing AI World-Leading Researcher Fellowship, EP/W002973/1.
VG acknowledges support from the Academy of Finland (grant decision 342077) for "Human-steered next-generation machine learning for reviving drug design", the Saab-WASP initiative (grant 411025), and the Jane and Aatos Erkko Foundation (grant 7001703) for "Biodesign: Use of artificial intelligence in enzyme design for synthetic biology".
GY acknowledges support from the National Science Foundation under Cooperative Agreement PHY-2019786 (The NSF AI Institute for Artificial Intelligence and Fundamental Interactions, [http://iaifi.org](http://iaifi.org)).
We thank Sammie Katt and Pavel Izmailov for the helpful discussions and assistance in making the figures.
We thank NeurIPS 2023 anonymous reviewers for the helpful feedback on our work.
## Broader Impact
We proposed a mathematical framework and methods for the composition of pre-trained generative models. While the primary emphasis of our work is on advancing foundational research on generative modeling methodology and principled sampling techniques, our work inherits ethical concerns associated with generative models such as creation of deepfake content and misinformation dissemination, as well as reproduction of biases present in the datasets used for model training. If not carefully managed, these models can perpetuate societal biases, exacerbating issues of fairness and equity.
Our work contributes to research on the reuse of pre-trained models. This research direction promotes eco-friendly AI development, with the long-term goal of reducing energy consumption and carbon emissions associated with large-scale generative model training. |
2309.07018 | Perfect Roman Domination and Unique Response Roman Domination | The idea of enumeration algorithms with polynomial delay is to polynomially
bound the running time between any two subsequent solutions output by the
enumeration algorithm. While it is open for more than four decades if all
minimal dominating sets of a graph can be enumerated in output-polynomial time,
it has recently been proven that pointwise-minimal Roman dominating functions
can be enumerated even with polynomial delay. The idea of the enumeration
algorithm was to use polynomial-time solvable extension problems. We use this
as a motivation to prove that also two variants of Roman dominating functions
studied in the literature, named perfect and unique response, can be enumerated
with polynomial delay. This is interesting since Extension Perfect Roman
Domination is W[1]-complete if parameterized by the weight of the given
function and even W[2]-complete if parameterized by the number of vertices
assigned 0 in the pre-solution, as we prove. Otherwise, efficient solvability
of extension problems and enumerability with polynomial delay tend to go
hand-in-hand. We achieve our enumeration result by constructing a bijection to
Roman dominating functions, where the corresponding extension problem is
polynomial-time solvable. Furthermore, we show that Unique Response Roman
Domination is solvable in polynomial time on split graphs, while Perfect Roman
Domination is NP-complete on this graph class, which proves that both
variations, albeit coming with a very similar definition, do differ in some
complexity aspects. This way, we also solve an open problem from the
literature. | Henning Fernau, Kevin Mann | 2023-09-13T15:19:03Z | http://arxiv.org/abs/2309.07018v1 | # Perfect Roman Domination and Unique Response Roman Domination
###### Abstract
The idea of enumeration algorithms with polynomial delay is to polynomially bound the running time between any two subsequent solutions output by the enumeration algorithm. While it is open for more than four decades if all minimal dominating sets of a graph can be enumerated in output-polynomial time, it has recently been proven that pointwise-minimal Roman dominating functions can be enumerated even with polynomial delay. The idea of the enumeration algorithm was to use polynomial-time solvable extension problems.
We use this as a motivation to prove that also two variants of Roman dominating functions studied in the literature, named perfect and unique response, can be enumerated with polynomial delay. This is interesting since Extension Perfect Roman Domination is \(\mathsf{W}[1]\)-complete if parameterized by the weight of the given function and even \(\mathsf{W}[2]\)-complete if parameterized by the number of vertices assigned \(0\) in the pre-solution, as we prove. Otherwise, efficient solvability of extension problems and enumerability with polynomial delay tend to go hand-in-hand. We achieve our enumeration result by constructing a bijection to Roman dominating functions, where the corresponding extension problem is polynomial-time solvable. Furthermore, we show that Unique Response Roman Domination is solvable in polynomial time on split graphs, while Perfect Roman Domination is \(\mathsf{NP}\)-complete on this graph class, which proves that both variations, albeit coming with a very similar definition, do differ in some complexity aspects. This way, we also solve an open problem from the literature.
## 1 Introduction
### Roman Domination
Historically, Roman Domination is motivated by the defense strategy of the Roman Empire. The idea was to position armies on regions in such a way that either (1) there is an army in the region itself or (2) there are two armies on a neighboring region. Translated to graphs, we map each vertex to \(0\), \(1\) or \(2\). Such a function is called a Roman dominating function if each vertex with value \(0\) has a neighbor of value \(2\). Roman Domination has as input a graph \(G\) and a positive number \(k\), and the question is if there exists a Roman dominating
function such that the sum of the values of all vertices is at most \(k\). In the last decades, this problem received notable attention [10; 14; 18; 20; 28; 29; 30; 33; 34; 37].
As for dominating set, there are also many variants of Roman dominating functions which were considered in the literature. Examples are Roman-\(\{2\}\)-domination (also known as Italian domination) [16], double Roman domination [1; 7; 9] and total Roman domination [2].
### Perfect Variations
Here, we will consider two further variations of Roman domination, perfect and unique response Roman domination. A _perfect Roman dominating function_ (introduced by Henning _et al._[24]) is a Roman dominating function where each vertex with value \(0\) has _exactly_ one neighbor with value \(2\). If such a function additionally satisfies that all vertices with at least \(1\) as value have no neighbor with value \(2\), then it is a _unique response Roman dominating function_ (introduced by Rubalcaba and Slater [35]). Both variations can be seen as a way to translate the idea of perfect domination into the realm of Roman dominating functions. As a further motivation, we can also consider the idea of positioning armies on regions: If the armies are placed according to a perfect Roman dominating function and a region without any army is attacked, then it is clear from which region an army moves to secure the attacked region, so no time is wasted to first agree on who is the one to take action and move to the endangered region.
Supplementing results from the literature, we study the underlying minimization problems on split and on cobipartite graphs, which shows that these two seemingly very similar notions give raise to a different complexity behavior. However, the main focus of this paper is on enumeration, both from an input-sensitive and from an output-sensitive perspective.
### Enumeration
Enumeration is a wide area of research, as also testified by specialized workshops like [21]. For some examples, we refer to the survey [40]. From a practical point of view, enumeration can be interesting if not all aspects of the problem have been satisfyingly modeled. For instance, it is possible to enumerate all (inclusion-wise) minimal dominating sets of a graph of order \(n\) in time \(\mathcal{O}(1.7159^{n})\)[23]. As here only the size of the input graph is taken into consideration, one also speaks of an input-sensitive analysis. In contrast to classical complexity, one can often also give lower-bound examples, which are families of graphs that possess, when taking an \(n\)-vertex representative thereof, \(\Omega(1.5704^{n})\) many minimal dominating sets [23]. When lower and upper bounds match, we can consider this type of enumeration as being optimal. However, even such an optimal enumeration algorithm can be dissatisfying for a user, as she might have to wait exponential time between two output solutions. This motivates studying enumeration algorithms from an output-sensitive perspective. The most important notions have been introduced by D. S. Johnson _et al._[25]. We focus on the most restricted variant. An enumeration algorithm has _polynomial delay_ if the time between two
subsequent outputs of the algorithm can be bounded by some polynomial, as well as the time from the start to the first output and the time between the last output and the termination of the algorithm. This is a very desirable property if two processes work as in a production line: one process generates the solutions, while the other one works on the generated solutions. After finishing its work on one solution, the second process does not want to wait for a long time before it can start working on the next solution. It has been open for decades whether all minimal dominating sets can be enumerated with polynomial delay.
We motivate the enumeration of perfect/unique response Roman dominating functions by the main result of F. N. Abu-Khzam _et al._[3]. There, it is proven that all minimal Roman dominating functions of a graph of order \(n\) can be enumerated in time \(\mathcal{O}(1.9332^{n})\) with polynomial delay. In this case, minimality is defined with respect to a pointwise order, _i.e._, for \(f,g:V\rightarrow\{0,1,2\}\), \(f\leq g\) if and only if \(f(v)\leq g(v)\) for all \(v\in V\). To ensure polynomial delay for enumerating minimal Roman dominating functions, F. N. Abu-Khzam _et al._ used polynomial-time solvable extension problems.
### Extension Problems
For a general definition of extension problems, we refer to [12]. Here, we will only discuss the extension version of minimization problems on graphs. Therefore, we can be more specific. Depending on the concrete problem (we choose to illustrate this in the following with the classical problem Dominating Set in parentheses), a graph \(G=(V,E)\) defines the search space \(\mathsf{presol}(G)\) of _pre-solutions_ (in our example, \(\mathsf{presol}(G)=2^{V(G)}\)) and a set of _solutions_\(\mathsf{sol}(G)\subseteq\mathsf{presol}(G)\) (dominating sets). For the extension version, we also need to define a partial order \(\preceq\) on \(\mathsf{presol}(G)\) (which is \(\subseteq\) for domination). The notion of a _minimal solution_ is understood with respect to \(\preceq:s\in\mathsf{sol}(G)\) is called minimal if, for each \(p\in\mathsf{presol}(G)\setminus\{s\}\), \(p\preceq s\) implies \(p\notin\mathsf{sol}(G)\). An instance of the extension version consists, apart from the graph \(G\), in a pre-solution \(p\in\mathsf{presol}(G)\) (some set of vertices in our example). The question is if there exists a minimal solution \(s\in\mathsf{sol}(G)\) with \(p\preceq s\). Notice that a typical branching algorithm used for the enumeration of all minimal solutions will implicitly create a pre-solution \(p\), and then efficiently determining if any minimal solution exists that extends \(p\) would be very beneficial. In the special case of Extension Roman Domination, \(\mathsf{presol}(G)\) is the set of all mappings \(f:V\rightarrow\{0,1,2\}\) and \(\mathsf{sol}(G)\) is the set of all Roman dominating functions. Now, \(\preceq\) is given by the partial order \(\leq\) described above: \(f\leq g\) holds if \(f(v)\leq g(v)\) for each \(v\in V\).
### Organization of the Paper
We start by introducing important notations and definitions in section 2. Then we consider the optimization problems Perfect Roman Domination and Unique Response Roman Domination on split and cobipartite graphs. We show that Perfect Roman Domination is \(\mathsf{NP}\)-complete while Unique Response Roman Domination is polynomial-time solvable. To our knowledge,
this is the first graph class where this is shown. Section 4 provides a polynomial-delay enumeration algorithm for unique response Roman dominating functions. In section 5 we demonstrate that Extension Perfect Roman Domination is NP-complete, W[1]-complete when parameterized by the pre-solution size and W[2]-complete when parameterized by the number of vertices assigned \(0\) by the pre-solution. Nonetheless, we present a way to enumerate all minimal perfect Roman dominating functions with polynomial delay in section 6 by showing a one-to-one correspondence between minimal perfect Roman dominating functions and minimal Roman dominating functions.
## 2 Preliminaries
### General Notions
Let \(\mathbb{N}\) denote the set of all nonnegative integers (including \(0\)). For \(n\in\mathbb{N}\), we will use the notation \([n]\coloneqq\{1,\ldots,n\}\). Let \(G=(V,E)\) be a graph. \(N_{G}(v)\) describes the _open neighborhood_ of \(v\in V\) with respect to \(G\). The _closed neighborhood_ of \(v\in V\) with respect to \(G\) is defined by \(N_{G}[v]\coloneqq N_{G}(v)\cup\{v\}\). For a set \(A\subseteq V\) the open neighborhood is defined as \(N_{G}(A)\coloneqq\left(\bigcup_{v\in A}N_{G}(v)\right)\). The closed neighborhood of \(A\) is given by \(N_{G}[A]\coloneqq N_{G}(A)\cup A\). Furthermore, the private neighborhood of a vertex \(v\in A\) with respect to \(G\) and \(A\) is denoted by \(P_{G,A}(v)\coloneqq N_{G}[v]\setminus N_{G}[A\setminus\{v\}]\).
Let \(G=(V,E)\) be a graph. We say \(D\subseteq V\) is a dominating set if \(N[D]=V\). A set \(D\subseteq V\) is called _perfect dominating_ if \(D\) is a dominating set and for all \(v,u\in D\) with \(v\neq u\), \(N[v]\cap N[u]=\emptyset\).
Let \(A,B\) be two sets. \(B^{A}\) is the set of all functions \(f:A\to B\). For a function \(f:A\to\mathbb{N}\) on a finite set \(A\), we define \(\omega(f)=\sum_{a\in A}f(a)\). For \(A\subseteq B\), \(\chi_{A}:B\to\mathbb{N}\) denotes the characteristic function (\(\chi_{A}(a)=1\) if and only if \(a\in A\); \(\chi_{A}(a)=0\) otherwise).
### Basic Decision Problems
**Problem name:** Unique Response Roman Domination
**Given:** A graph \(G=(V,E)\) and \(k\in\mathbb{N}\)
**Question:** Is there a unique response Roman dominating function \(f\) on \(G\) with \(\omega(f)\leq k\)?
**Problem name:** Perfect Roman Domination
**Given:** A graph \(G=(V,E)\) and \(k\in\mathbb{N}\)
**Question:** Is there a perfect Roman dominating function \(f\) on \(G\) with \(\omega(f)\leq k\)?
Let \(u_{R}(G)\) denote the smallest weight of any unique response Roman dominating function \(f\) on \(G\). Furthermore, \(\gamma_{R}^{p}(G)\) denotes the smallest weight of any perfect Roman dominating function on \(G\). The following is known about the complexity of these decision problems.
* Unique Response Roman Domination is \(\mathsf{NP}\)-complete even on regular bipartite graphs [11]. Furthermore, Banerjee _et al._[6] showed that this problem is \(\mathsf{NP}\)-complete on chordal graphs and polynomial-time solvable on distance-hereditary and interval graphs. They also prove that there is no polynomial-time approximation algorithm for Unique Response Roman Domination within a factor of \(n^{1-\epsilon}\) for any constant \(\epsilon>0\) and any input graph of order \(n\), unless \(\mathsf{NP}=\mathsf{P}\).
* Perfect Roman Domination is \(\mathsf{NP}\)-complete on chordal graphs, planar graphs, and bipartite graphs and polynomial-time solvable on block graphs, cographs, series-parallel graphs, and proper interval graphs [8].
### Characterizing Perfect Roman Dominating Functions
Consider a graph \(G=(V,E)\) and a function \(f:V\rightarrow\{0,1,2\}\). We define \(V_{i}(f):=\{v\in V\mid f(v)=i\}\) for each \(i\in\{0,1,2\}\). If \(v\in V\) obeys \(f(v)=2\), then \(u\in N_{G}(v)\) with \(f(u)=0\) is called a _private neighbor_ of \(v\) (with respect to \(f\)) if \(|N_{G}(u)\cap V_{2}(f)|=1\).
**Observation 1**: _Let \(G=(V,E)\) be a graph and \(f:V\rightarrow\{0,1,2\}\)._
1. _If_ \(f\) _is a perfect Roman dominating function, then every neighbor_ \(u\) _with_ \(f(u)=0\) _of some_ \(v\) _with_ \(f(v)=2\) _is a private neighbor of_ \(v\)_._
2. _If_ \(f\) _is a Roman dominating function such that every neighbor_ \(u\) _with_ \(f(u)=0\) _of some_ \(v\) _with_ \(f(v)=2\) _is a private neighbor of_ \(v\)_, then_ \(f\) _is a perfect Roman dominating function._
Proof: Let us first prove the first item. If it were not true, then there would be some neighbor \(u\) with \(f(u)=0\) of some \(v\) with \(f(v)=2\) that is non-private, _i.e._, there exists some \(v^{\prime}\in N_{G}(u)\), \(v^{\prime}\neq v\), with \(f(v^{\prime})=2\). This contradicts the assumption that \(f\) is a perfect Roman dominating function. To see the second implication, observe that if \(f\) is a Roman dominating function that is not perfect, then there must be a vertex \(u\) with \(f(u)=0\) such that \(u\) has two neighbors \(v_{1},v_{2}\) with \(f(v_{1})=f(v_{2})=2\). Hence, \(u\) is not a private neighbor of \(v_{1}\).
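The defining condition can also be checked mechanically. The following small sketch (using networkx, with hypothetical helper names and an illustrative example graph) tests whether a function given as a dictionary is a perfect Roman dominating function.

```python
import networkx as nx

def is_perfect_rdf(G: nx.Graph, f: dict) -> bool:
    # every vertex with value 0 must have exactly one neighbor with value 2
    return all(sum(1 for u in G[v] if f[u] == 2) == 1
               for v in G if f[v] == 0)

# Example: on the path a-b-c, assigning 2 to the middle vertex and 0 to its neighbors works.
G = nx.path_graph(["a", "b", "c"])
assert is_perfect_rdf(G, {"a": 0, "b": 2, "c": 0})
```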
## 3 Optimization on Split and Cobipartite Graphs
A split graph \(G=(V,E)\) is a graph such that the vertex set \(V\) can be partitioned into \(C,I\), where \(C\) is a clique and \(I\) is an independent set of \(G\). It seems to be open to which complexity classes Perfect Roman Domination and Unique Response Roman Domination belong on split graphs, as explicitly asked in [6]. The main result of this section will give the answers.
### Unique Response Roman Domination on Split Graphs
Lemma 1: _Let \(G=(V,E)\) be a connected split graph with a clique \(C\) and an independent set \(I\) such that \(V=C\cup I\) and \(C\cap I=\emptyset\). For each unique response Roman dominating function \(f:V\rightarrow\{0,1,2\}\), one of the following conditions holds:_
* \(V_{2}(f)\cap I=\emptyset\) _and_ \(|V_{2}(f)\cap C|\leq 1\)_; or_
* \(V_{2}(f)\subseteq I\)_._
Proof: Suppose that \(V_{2}(f)\) is not a subset of \(I\). By the definition of unique response Roman dominating function, two vertices of \(C\) cannot have the value \(2\), since \(C\) is a clique. Hence, \(|V_{2}(f)\cap C|\leq 1\). For the sake of contradiction, assume there are \(v\in I\) and \(u\in C\) with \(f(u)=f(v)=2\). As \(f\) is a unique response Roman dominating function, \(u\) and \(v\) cannot be neighbors. Since \(\emptyset\subsetneq N_{G}(v)\subseteq C\) holds as \(G\) is connected, \(v\) has a neighbor \(w\in C\). This contradicts the assumption on \(f\), as \(|N_{G}(w)\cap V_{2}(f)|\geq 2\).
Let us take a look at the unique response Roman dominating function for the two cases of Lemma 1. Without loss of generality, we can assume that for the given split graph \(G=(V,E)\), we have a decomposition \(V=C\cup I\) into a clique \(C\) and an independent set \(I\) such that \(C\) is inclusion-wise maximal. Consider the unique response Roman dominating function \(f:V\rightarrow\{0,1,2\}\).
First assume \(f\) fulfills \(V_{2}(f)\cap I=\emptyset\) and \(|V_{2}(f)\cap C|\leq 1\). Since \(V_{2}(f)\cap C=\emptyset\) would imply \(f=1\) (constant) and \(\omega(f)=|C|+|I|\), let us assume \(\{v\}=V_{2}(f)\cap C\). Therefore, \(C\cap V_{1}(f)=\emptyset\) holds. Hence, \(\omega(f)=2+|I\setminus N_{G}(v)|\). This implies \(u_{R}(G)\leq\min_{v\in C}\left(2+|I\setminus N_{G}(v)|\right)\leq 2+|I|\).
Secondly, let us assume \(V_{2}(f)\subseteq I\). As \(I\) is an independent set and there is no vertex in \(C\) with the value \(2\), \(f(v)\geq 1\) for each \(v\in I\). This implies
\[\omega(f)=|I|+|V_{2}(f)|+|C\setminus N_{G}(V_{2}(f))|\geq|I|+1.\]
Equality holds if and only if there exists a vertex \(v\in I\) with \(N_{G}(v)=C\) and, for all \(u\in I\), \(f(u)=2\iff u=v\). In this case \(C\) is not maximal, contradicting our assumption. Thus, this second case can never give a smaller value than the first one, _i.e._, \(u_{R}(G)\leq\min_{v\in C}\left(2+|I\setminus N_{G}(v)|\right)\). In order to solve Unique Response Roman Domination on \(G\), we only have to find the vertex with the highest degree, as it will be in a maximal clique.
Corollary 1: Unique Response Roman Domination _can be solved in time \(\mathcal{O}(n+m)\) on split graphs._
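A minimal sketch of the linear-time computation behind Corollary 1 is given below. It assumes that the split partition \(V=C\cup I\) (with \(C\) a clique that is inclusion-wise maximal) is given and that the graph is connected; by the case analysis above, the optimum is the better of the constant-one function and the best placement of a single value-2 vertex in \(C\).

```python
import networkx as nx

def urrd_weight_split(G: nx.Graph, C: set, I: set) -> int:
    best = len(C) + len(I)                        # the constant function f = 1
    for v in C:
        covered = sum(1 for u in G[v] if u in I)  # |I ∩ N(v)|
        best = min(best, 2 + len(I) - covered)    # f(v) = 2, zeros on N(v), ones elsewhere
    return best
```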
### Perfect Roman Domination on Split Graphs
Interestingly, the seemingly very similar problem Perfect Roman Domination on split graphs is harder to solve. To see this, we will use a problem called Perfect Domination that is defined next.
**Problem name:** Perfect Domination
**Given:** A graph \(G=(V,E)\) and \(k\in\mathbb{N}\)
**Question:** Is there a perfect dominating set \(D\) on \(G\) with \(|D|\leq k\)?
It is known that Perfect Domination is NP-complete, see [19].
Theorem 3.1: Perfect Roman Domination _on split graphs is NP-complete._
Proof: Membership follows from Perfect Roman Domination on general graphs. To prove NP-hardness, we will use Perfect Domination. Let \(G=(V,E)\) be a graph and \(k\in\mathbb{N}\). To avoid trivialities, we assume \(k\leq|V|\).
Define \(t=3\cdot|V|+4\), \(k^{\prime}=k+|V|+4\) and \(G^{\prime}=(V^{\prime},E^{\prime})\) with
\[V^{\prime}= \{x,y\}\cup\{v^{\prime},v_{i}\mid v\in V,i\in[t]\}\cup\{a_{1},b_{ 1},\ldots,a_{t},b_{t}\},\] \[E^{\prime}= \{\{x,a_{i}\},\{y,b_{i}\}\mid i\in[t]\}\cup\{\{v^{\prime},u_{i} \}\mid v,u\in V,u\in N[v],i\in[t]\}\cup\] \[\binom{\{x,y\}\cup\{v^{\prime}\mid v\in V\}}{2}.\]
The instance \((G^{\prime},k^{\prime})\) of Perfect Roman Domination can be constructed in polynomial time. Moreover, \(G^{\prime}\) is easily seen to be a split graph. We still have to prove that \((G,k)\) is a yes-instance of Perfect Domination if and only if \((G^{\prime},k^{\prime})\) is a yes-instance of Perfect Roman Domination.
Let \(S\subseteq V\) be a perfect dominating set of \(G\) with \(|S|\leq k\). Define \(f\in\{0,1,2\}^{V^{\prime}}\) by the sets
\[V_{0}(f)= \{v_{i}\mid i\in[t],v\in V\}\cup\{a_{i},b_{i}\mid i\in[t]\},\] \[V_{1}(f)= \{v^{\prime}\mid v\notin S\},\] \[V_{2}(f)= \{x,y\}\cup\{v^{\prime}\mid v\in S\}.\]
This implies \(\omega(f)=|V\setminus S|+4+2\cdot|S|=|V|+4+|S|\leq k^{\prime}\). The set \(\{a_{1},\ldots,a_{t}\}\cup\{v^{\prime}\mid f(v^{\prime})\neq 2\}\) is only dominated by \(x\) and the vertices \(b_{1},\ldots,b_{t}\) are only dominated by \(y\). Assume there is a \(v\in V\) and \(i\in[t]\) such that \(|N_{G^{\prime}}(v_{i})\cap V_{2}(f)|\neq 1\). By \(N_{G^{\prime}}(v_{i})=\{u^{\prime}\mid u\in N_{G}[v]\}\), this implies \(|N_{G^{\prime}}(v_{i})\cap V_{2}(f)|=|N_{G}[v]\cap S|\neq 1\). This contradicts that \(S\) is a perfect dominating set.
Assume that \(f\in\{0,1,2\}^{V^{\prime}}\) is a minimum perfect Roman dominating function with \(\omega(f)\leq k^{\prime}\).
Claim: For each \(z\in\{v_{i},a_{i},b_{i}\mid i\in[t],v\in V\}\), it holds that \(f(z)=0\).
Proof: For the sake of contradiction, assume that there is some \(v\in V\) and \(i\in[t]\) with \(f(v_{i})=2\). If \(v_{i}\) did not have a private neighbor \(u^{\prime}\) with \(f\)-value \(0\), then \(f^{\prime}=f-\chi_{\{v_{i}\}}\) would be a smaller perfect Roman dominating function, contradicting the minimality of \(f\). Since \((\{x,y\}\cup\{z^{\prime}\mid z\in V\})\setminus\{u^{\prime}\}\subseteq N_{G^{\prime}}(u^{\prime})\) and \(v_{i}\) is the only neighbor of \(u^{\prime}\) with \(f\)-value \(2\), no vertex in \(\{x,y\}\cup\{z^{\prime}\mid z\in V\}\) has \(f\)-value \(2\). Thus, \(N_{G^{\prime}}(v_{i})\cap V_{2}(f)=\emptyset\). Therefore, \(f(v_{j})\neq 0\) for each \(j\in[t]\). Hence \(\omega(f)\geq t+1>k^{\prime}\), contradicting the choice of \(f\).
Assume there is some \(v\in V\) and \(i\in[t]\) with \(f(v_{i})=1\). Furthermore, we can assume \(|N_{G^{\prime}}(v_{i})\cap V_{2}(f)|\neq 1\), as otherwise \(f-\chi_{\{v_{i}\}}\) would be a smaller perfect Roman dominating function. As by construction \(N_{G^{\prime}}(v_{i})=N_{G^{\prime}}(v_{j})\) for each \(j\in[t]\), we find \(|N_{G^{\prime}}(v_{i})\cap V_{2}(f)|=|N_{G^{\prime}}(v_{j})\cap V_{2}(f)|\). If \(|N_{G^{\prime}}(v_{i})\cap V_{2}(f)|=0\), then \(f(v_{j})\neq 0\) holds for each \(j\in[t]\), since \(v_{j}\) would not be dominated otherwise. Hence, \(\omega(f)\geq t>k^{\prime}\), contradicting the choice of \(f\). Now assume \(|N_{G^{\prime}}(v_{i})\cap V_{2}(f)|\geq 2\). Thus, for each \(j\in[t]\), \(f(v_{j})\neq 0\), as otherwise \(f\) would not be perfect. This leads to the same contradiction as above.
We can discuss the claims concerning \(a_{i}\) and \(b_{i}\) along very similar lines.
Further, since \(f(a_{1})=0\) and \(N_{G^{\prime}}(a_{1})=\{x\}\), we know \(f(x)=2\); analogously, \(f(y)=2\). Define \(S=\{v\mid f(v^{\prime})=2\}\). As \(f(v_{i})=0\) for each \(v\in V,i\in[t]\), \(|N_{G^{\prime}}(v_{i})\cap V_{2}(f)|=1\). This implies that \(|N_{G}[v]\cap S|=1\) for each \(v\in V\). Therefore, \(S\) is a perfect dominating set.
Since \(f(x)=f(y)=2\), \(f(v^{\prime})\geq 1\) for all \(v\in V\). This implies that \(\omega(f)=|V|+|V_{2}(f)|+4=|V|+|S|+4\leq k^{\prime}\). Hence, \(|S|\leq k^{\prime}-4-|V|=k\).
Theorem 3.2: _\(\omega(f)\)-Perfect Roman Domination can be solved in FPT time on split graphs._
Proof: Let \(G=(V,E)\) be a split graph with the vertex set partition \(V=C\cup I\), where \(C\) is a clique and \(I\) an independent set. Further, let \(f:V\rightarrow\{0,1,2\}\) be a minimum perfect Roman dominating function on \(G\). Then \(V_{2}(f)\cap I=\emptyset\) or \(V_{2}(f)\cap C=\emptyset\): if there were \(v\in V_{2}(f)\cap I\) and \(c\in V_{2}(f)\cap C\), then \(f-\chi_{\{v\}}\) would be a perfect Roman dominating function with \(\omega\left(f-\chi_{\{v\}}\right)<\omega\left(f\right)\), contradicting the minimality of \(f\).
For \(V_{2}(f)\cap C=\emptyset\), we just branch on each vertex in \(I\) on whether it gets the value \(1\) or \(2\). The vertices cannot get the value \(0\) as they cannot be dominated by another vertex. After deciding this, we can delete the vertex and the parameter goes down by either \(1\) or \(2\). This gives us the branching vector \((1,2)\). Once we have branched on each vertex in \(I\), we assign \(0\) to each \(c\in C\) with \(|V_{2}(f)\cap N(c)|=1\) and \(1\) otherwise (this is important to get the perfect Roman dominating function property). Now, we only need to check if \(\omega\left(f\right)\) is at most the given parameter value.
Now, we consider the perfect Roman dominating functions \(f\in\{0,1,2\}^{V}\) with \(V_{2}(f)\cap I=\emptyset\). As \(C\) is a clique, if \(|V_{2}(f)\cap C|\geq 2\), then \(C\subseteq V_{1}(f)\cup V_{2}(f)\). First, we consider the perfect Roman dominating functions with \(|V_{2}(f)\cap C|=1\). Since these functions are unique response Roman dominating functions, this can be done in polynomial time. After this, we guess two vertices from \(C\) and assign the value \(2\) to these vertices. From now on, we can branch on the remaining vertices in \(C\) on whether they are assigned \(1\) or \(2\). Analogously to the case \(V_{2}(f)\cap C=\emptyset\), we then assign \(0\) to each \(v\in I\) with \(|V_{2}(f)\cap N(v)|=1\) and \(1\) otherwise. In this case, we also have to check if \(\omega\left(f\right)\) is at most the given parameter value.
Our algorithm runs in time \(\mathcal{O}^{*}(\Phi^{k})\), where \(\Phi\) is the golden ratio and \(k\) the parameter.
### Perfect Roman Domination on Cobipartite Graphs
For the remainder of this section, we consider cobipartite graphs, which are the complements of bipartite graphs. These graphs can also be characterized as the graphs whose vertex set can be partitioned into two cliques. On this graph class, Perfect Roman Domination is solvable in polynomial time.
Theorem 3.3: _Perfect Roman Domination is polynomial-time solvable on cobipartite graphs._
Proof: Let \(G=(V,E)\) be a cobipartite graph with the partition into the two cliques \(C_{1},C_{2}\subseteq V\). For \(v\in V\) define the perfect Roman dominating function
\[g_{v}:V\rightarrow\{0,1,2\},x\mapsto\begin{cases}0,&x\in N(v)\\ 1,&x\in V\setminus N[v]\\ 2,&x=v.\end{cases}\]
Let \(f\in\{0,1,2\}^{V}\) be a perfect Roman dominating function with \(|V_{2}(f)\cap C_{1}|\geq 2\) and \(v\in C_{2}\). For each \(u\in C_{1}\setminus V_{2}(f)\), \(|V_{2}(f)\cap N(u)|\geq 2\). Hence \(C_{1}\subseteq V_{1}(f)\cup V_{2}(f)\). Since \(C_{2}\subseteq N[v]=V_{0}(g_{v})\cup\{v\}\), \(\omega\left(g_{v}\right)\leq|C_{1}|+2\leq\omega\left(f\right)\). Symmetrically, \(\omega\left(g_{v}\right)\leq|C_{2}|+2\leq\omega\left(f\right)\) for \(v\in C_{1}\) and a perfect Roman dominating function \(f\) with \(|V_{2}(f)\cap C_{2}|\geq 2\). Let \(f\) be a minimum perfect Roman dominating function on \(G\). We can assume \(|V_{2}(f)\cap C_{i}|\leq 1\) for each \(i\in\{1,2\}\). This leaves only \(|V|^{2}+|V|\) many possibilities.
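The proof translates into a simple polynomial-time procedure, sketched below under the assumption that the partition into cliques \(C_{1},C_{2}\) is given: it suffices to try every candidate set \(V_{2}\) with at most one vertex per clique and complete it in the cheapest consistent way (vertices outside \(V_{2}\) get \(0\) exactly when they see one value-2 vertex, and \(1\) otherwise).

```python
from itertools import product
import networkx as nx

def prd_weight_cobipartite(G: nx.Graph, C1: list, C2: list) -> int:
    candidates = [()] + [(v,) for v in G] + list(product(C1, C2))
    best = float("inf")
    for cand in candidates:
        V2 = set(cand)
        # outside V2, a vertex may get 0 iff it sees exactly one value-2 vertex; otherwise it gets 1
        weight = 2 * len(V2) + sum(1 for u in G if u not in V2
                                   and sum(1 for w in G[u] if w in V2) != 1)
        best = min(best, weight)
    return best
```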
### Relating to 2-Packings
As mentioned before, Cabrera _et al._[11] proved that Unique Response Roman Domination is NP-complete on bipartite graphs. For cobipartite graphs, this problem is polynomial-time solvable. For this result, we use the idea of 2-packings. Let \(G=(V,E)\) be a graph. A set \(S\subseteq V\) is called _2-packing_ if the distance between any two distinct vertices in \(S\) is at least 3. Targhi _et al._[39] presented a proof for \(u_{R}(G)=\min\{2|S|+|V(G)\setminus N_{G}[S]|\mid S\) is a 2-packing}. Analogously to this, we can prove that, for each graph \(G=(V,E)\), there exists a bijection \(\psi_{G}\) between all 2-packings of the graph and all unique response Roman dominating functions. Here, for a 2-packing \(S\) and for \(x\in V\),
\[\psi_{G}(S)(x)=\begin{cases}0,&x\in N(S)\setminus S\\ 1,&x\in V\setminus N_{G}[S]\\ 2,&x\in S.\end{cases}\]
This also yields that, for each unique response Roman dominating function \(f\), \(V_{2}(f)\) is a 2-packing. This is the main idea of our algorithm.
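For illustration, the bijection \(\psi_{G}\) can be turned into a brute-force enumerator of all unique response Roman dominating functions. The sketch below is only intended for very small graphs; it is not the optimized branching algorithm developed in Section 4.

```python
from itertools import combinations
import networkx as nx

def psi(G: nx.Graph, S: set) -> dict:
    # the bijection: 2 on S, 0 on N(S), 1 on the remaining vertices
    return {x: 2 if x in S else (0 if any(u in S for u in G[x]) else 1) for x in G}

def all_urrdfs(G: nx.Graph):
    dist = dict(nx.all_pairs_shortest_path_length(G))
    nodes = list(G)
    for r in range(len(nodes) + 1):
        for S in combinations(nodes, r):
            # S is a 2-packing iff its vertices are pairwise at distance at least 3
            if all(dist[u].get(w, float("inf")) >= 3 for u, w in combinations(S, 2)):
                yield psi(G, set(S))
```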
Lemma 2: Unique Response Roman Domination _is polynomial-time solvable on cobipartite graphs. Furthermore, there are at most \(\frac{|V|^{2}}{4}\) many 2-packings, or unique response Roman dominating functions, on \(G=(V,E)\)._
Proof: Let \(G=(V,E)\) be a cobipartite graph, together with the partition into the two cliques \(C_{1},C_{2}\subseteq V\). Clearly, two vertices from the same clique cannot be in a 2-packing at the same time. Therefore, from each clique, there can be at most one vertex in the 2-packing. This implies that there are at most \(|C_{1}|\cdot|C_{2}|\) many 2-packings, or unique response Roman dominating functions. As \(C_{1},C_{2}\) is a partition of \(V\), this leads to \(|C_{1}|\cdot|C_{2}|=|C_{1}|\cdot|V\setminus C_{1}|=|C_{1}|\cdot(|V|-|C_{1}|)\) many 2-packings. This expression is maximal if \(|C_{1}|=|C_{2}|\), which implies that there are at most \(\frac{|V|^{2}}{4}\) many 2-packings or unique response Roman dominating functions on \(G\).
The bound \(\frac{|V|^{2}}{4}\) is tight: we get the number for the complement of a complete bipartite graph \(K_{t,t}\) where both classes have the same size \(t\). This is interesting as there could be exponentially many unique response Roman dominating functions even on connected split graphs, which is quite a related class of graphs. To this end, we only need to consider the split graph \(G=(V,E)\) with \(V\coloneqq C\cup I\), \(C\coloneqq\{c_{1},\ldots,c_{t}\}\), \(I\coloneqq\{v_{1},\ldots,v_{2t}\}\) and
\[E\coloneqq\binom{C}{2}\cup\{\{c_{i},v_{2i-1}\},\{c_{i},v_{2i}\}\mid i\in\{1, \ldots,t\}\}.\]
Clearly, \(|V|=3t\). By the arguments from Lemma 1, we know that for each unique response Roman dominating function \(f\) with \(|V_{2}(f)\cap C|=1\), \(|V_{2}(f)|=1\). There are \(t\) many such unique response Roman dominating functions. Let \(f\) be a unique response Roman dominating function with \(V_{2}(f)\cap C=\emptyset\). In this situation, for each \(i\in\{1,\ldots,t\}\), there are three ways to dominate \(c_{i}\):
* \(f(c_{i})=f(v_{2i-1})=f(v_{2i})=1\),
* \(f(c_{i})=f(v_{2i})=0\) and \(f(v_{2i-1})=2\),
* \(f(c_{i})=f(v_{2i-1})=0\) and \(f(v_{2i})=2\).
This implies that there are \(t+3^{t}=\frac{|V|}{3}+\sqrt[3]{3}^{|V|}\) many unique response Roman dominating functions. \(\Omega(\sqrt[3]{3}^{|V|})\) is even a tight bound for connected split graphs. This is the case as each unique response Roman dominating function on a graph without isolated vertices is a minimal Roman dominating function. Therefore, we could use the enumeration algorithm for minimal Roman dominating functions on split graphs from [5], which runs in \(\mathcal{O}(\sqrt[3]{3}^{|V|})\).
Remark 1: It should be mentioned that the polynomial-delay property of this algorithm is not inherited for enumerating unique response Roman dominating functions, as not each minimal Roman dominating function is a unique response Roman dominating function. Nonetheless, we will present a sketch of a polynomial-delay branching enumeration algorithm. For this purpose, we consider each 2-packing on a connected split graph and use the bijection between unique response Roman dominating functions and 2-packings. The measure treats each vertex the same.
Let \(G=(V,E)\) be a connected split graph with the partition \(V=C\cup I\) where \(C\) is a clique and \(I\) is an independent set. With the arguments from above, we can first enumerate each of the \(|C|\) many 2-packings \(S\) with \(S\cap C\neq\emptyset\). From now on, we branch on the vertices in \(I\) on whether they will be in \(S\) or not. If the vertex from \(I\) is not in \(S\), then we delete this vertex. If we put a vertex \(v\) from \(I\) into \(S\), then we delete, for each \(u\in N(v)\), the whole set \(\{u\}\cup(N(u)\cap I)\). Furthermore, we delete vertices from \(C\) which have no neighbors in \(I\). We branch on the vertex \(v\in I\) with the highest degree. If \(\deg(v)\geq 3\), then by putting the vertex in \(S\) we delete 4 vertices. This leads to a branching vector \((4,1)\) (the branching value is below \(1.3804\)). For \(\deg(v)=2\), we have to make a case distinction in the analysis: If \(v\) is the only neighbor in \(I\) for each of the neighbors of \(v\), then in both cases
these vertices will be deleted after this branch. This implies the branching vector \((3,3)\) (The branching value is \(\sqrt[3]{2}<\sqrt[3]{3}\)). Otherwise, at least 4 vertices will be deleted after putting \(v\) into \(S\). This would again lead to the branching vector \((4,1)\).
Therefore, we can assume that each vertex in \(I\) has exactly one neighbor. Let \(c\) be this neighbor. If \(|N(c)\cap I|\geq 3\), after putting \(v\) into \(S\), we would delete at least 4 vertices and the branching would be again \((4,1)\). For \(N(c)\cap I=\{v\}\), we would delete \(c\) in each case. This leads to the branching vector \((2,2)\) (the branching value is \(\sqrt{2}\leq\sqrt[3]{3}\)). The remaining case is \(|N(c)\cap I|=2\). This is considered in the \(\Omega(\sqrt[3]{3}^{|V|})\) example above. This implies the claimed running time of \(\mathcal{O}^{*}(\sqrt[3]{3}^{n})\).
This algorithm is a modification of the recursive backtracking algorithm of [31] and is polynomial delay with the same arguments. The algorithm of [31] is a general approach to enumerate all sets of \(\mathcal{F}\subseteq 2^{U}\) for a universe \(U\), where \(\mathcal{F}\) fulfills the downward closure property: if \(X\in\mathcal{F}\) and \(Y\subseteq X\), then \(Y\in\mathcal{F}\).
Corollary 2: _There exists an algorithm which enumerates all unique response Roman dominating functions of a connected split graph of order \(n\) in time \(\mathcal{O}^{*}(\sqrt[3]{3}^{n})\) with polynomial delay. Furthermore, there are connected split graphs of order \(n\) that have at least \(\Omega(\sqrt[3]{3}^{n})\) many unique response Roman dominating functions._
## 4 Enumerating All Unique Response Roman Dominating Functions
In this section, we will enumerate unique response Roman dominating functions on graphs without isolated vertices. The reason for this restriction is that there are two choices for a unique response Roman dominating function to dominate an isolated vertex (either 1 or 2). This would result in \(2^{n}\) many unique response Roman dominating functions. In this case, an algorithm which decides for each vertex if it assigns the value 2 or not (at the end, each vertex \(v\) without a value-2 vertex in its closed neighborhood will be assigned 1, and it will be assigned 0, otherwise) would already have an optimal running time. Furthermore, the value 2 on an isolated vertex is not a good idea, as in the Unique Response Roman Domination problem, we try to minimize the value of the function. In this case, each unique response Roman dominating function that assigns a 2 to an isolated vertex would give rise to a smaller unique response Roman dominating function which dominates the remaining vertices in the same way. Therefore, as we are going to enumerate all unique response Roman dominating functions in this section, not only the minimal ones, we are considering graphs without isolated vertices in the following.
Recall that Junosza-Szaniawski and Rzazewski [26] provided an enumeration algorithm for 2-packings on connected graphs of order \(n\) which runs in time \(\mathcal{O}(1.5399^{n})\). Remember that there exists a bijection between all 2-packings and all unique response Roman dominating functions of a graph. Even if this is an interesting algorithm for enumerating all unique response Roman dominating
functions on connected graphs, there are worse cases on graphs without isolated vertices:
On \(P_{2}=(\{v,u\},\{\{v,u\}\})\), there are the following three possible unique response Roman dominating functions: \(2\cdot\chi_{\{v\}},2\cdot\chi_{\{u\}}\) and \(\chi_{\{v,u\}}\).
Corollary 3: _There are graphs of order \(n\) without isolates that have at least \(\sqrt{3}^{n}\) many minimal unique response Roman dominating functions._
We will also need the next observation in the following.
The rest of this section is devoted to showing the following theorem, which basically proves that the simple example given above is optimal.
Theorem 4.1: _There is a polynomial-space algorithm that enumerates all unique response Roman dominating functions of a given graph (without isolated vertices) of order \(n\) with polynomial delay and in time \(\mathcal{O}^{*}(\sqrt{3}^{n})\)._
For the proof, we will construct a branching algorithm. To do this, we need the sets \(A,V_{0},V_{1},V_{2},\overline{V_{2}}\). In \(V_{0},V_{1},V_{2}\), we find the vertices with the respective values \(0\), \(1\), \(2\). Hence, \(V_{2}\) has to be a \(2\)-packing. \(\overline{V_{2}}\) contains the vertices which will not get the value \(2\) but for which it is not yet clear whether they get \(0\) or \(1\) (\(\overline{V_{2}}\cap(V_{0}\cup V_{1})=\emptyset\)). \(A\) contains the vertices which are completely undecided so far, which reflects the situation at the start of the algorithm. For the analysis, we use the measure \(\mu=|A|+\omega\cdot|\overline{V_{2}}|\). Notice that at the very beginning, \(V=A\), so that then \(\mu=|V|\).
For the polynomial delay part of the proof, we will consider the search tree representation of a run of a branching algorithm on a graph \(G\). This is a rooted tree where all edges are orientated away from the root. Each node represents a quintuple \((A,\overline{V_{2}},V_{0},V_{1},V_{2})\) of pairwise disjoint sets whose union is \(V\). There is an edge from one node to another one in this tree if the second node can be produced in one of the cases of the next branching step. Reduction rules can be executed in polynomial time and we hence stay within the same node of the search tree.
**Reduction Rule 1**: _If there are \(v,u\in V_{2}\), \(u\neq v\), with \(N_{G}[v]\cap N_{G}[u]\neq\emptyset\), then skip this branch._
In other words, we have detected a leaf in the search tree in which no solution is output. This reduction rule is sound as \(V_{2}\) would not be a \(2\)-packing anymore.
**Reduction Rule 2**: _If there is a vertex \(v\in A\) with \(N_{G}(v)\cap(\overline{V_{2}}\cup A)=\emptyset\), then put \(v\) into \(\overline{V_{2}}\)._
Lemma 3: _Reduction Rule 2 is sound._
Proof: Let \(f\) be a unique response Roman dominating function on \(G\) with \(V_{2}(f)\cap(V_{2}\cup\overline{V_{2}})=V_{2}\), \(V_{0}\subseteq V_{0}(f)\) and \(V_{1}\subseteq V_{1}(f)\). All neighbors of \(v\) lie in \(V_{0}\cup V_{1}\cup V_{2}\). Since a vertex is assigned \(0\) only if it already has a neighbor in \(V_{2}\), and since \(N_{G}(u)\cap V_{2}(f)\) must be empty for each \(u\in V_{1}(f)\cup V_{2}(f)\), \(v\in V_{2}(f)\) would contradict the properties of unique response Roman dominating functions.
**Reduction Rule 3**: _If there is a vertex \(v\in\overline{V_{2}}\) with \(N_{G}[v]\cap A=\emptyset\), then put \(v\) into \(V_{1}\)._
Lemma 4: _Reduction Rule 3 is sound._
Proof: Since \(N_{G}[v]\cap V_{2}(f)\) will be empty for a unique response Roman dominating function \(f\) with \(V_{2}\subseteq V_{2}(f)\) and \(\overline{V_{2}}\cap V_{2}(f)=\emptyset\), \(f(v)\) must be \(1\).
**Branching Rule 1**: _Let \(v\in A\) with \(|N_{G}(v)\cap A|\geq 2\). Then branch as follows:_
1. _Put_ \(v\) _in_ \(V_{2}\) _and all vertices of_ \(N_{G}(v)\) _in_ \(V_{0}\)_._
2. _Put_ \(v\) _in_ \(\overline{V_{2}}\)_._
Lemma 5: _The branching is a complete case distinction. Moreover, it leads at least to the following branching vector: \(\left(3,1-\omega\right).\)_
Proof: If a vertex is in \(V_{2}\), then each neighbor has to be in \(V_{0}\). Therefore, this is a complete case distinction. In the first case, we put three vertices from \(A\) into \(V_{2}\) or \(V_{0}\). Hence, the measure decreases by \(3\). The last case is decreasing the measure by \(1-\omega\), since we only move \(v\) from \(A\) to \(\overline{V_{2}}\).
**Branching Rule 2**: _Let \(v\in A\) with \(\left\{u\right\}=N_{G}(v)\cap A\) and \(\left\{v\right\}=N_{G}(u)\cap A\). Then branch as follows:_
1. _Put_ \(v\) _in_ \(V_{2}\) _and_ \(N_{G}(v)\) _in_ \(V_{0}\)_._
2. _Put_ \(u\) _in_ \(V_{2}\) _and_ \(N_{G}(u)\) _in_ \(V_{0}\)_._
3. _Put_ \(v,u\) _in_ \(V_{1}\)_._
Lemma 6: _The branching is a complete case distinction. Moreover, it leads at least to the following branching vector: \(\left(2,2,2\right).\)_
Proof: In this case \(v\) and \(u\) cannot be in \(V_{2}\) at the same time. If \(v,u\in\overline{V_{2}}\), then these vertices cannot be dominated, as \(N_{G}(v)\cap\left(V_{2}\cup A\right)=N_{G}(u)\cap\left(V_{2}\cup A\right)=\emptyset\). Since the values of \(v,u\) are fixed after this branch, the measure decreases by (at least) \(2\) in each case.
**Branching Rule 3**: _Let \(v\in A\) with \(N_{G}(v)\subseteq\overline{V_{2}}\). Then branch as follows:_
1. _Put_ \(v\) _in_ \(V_{2}\) _and all vertices of_ \(N_{G}(v)\) _in_ \(V_{0}\)_._
2. _Put_ \(v\) _in_ \(V_{1}\)_._
Lemma 7: _The branching is a complete case distinction. Moreover, it leads at least to the following branching vector: \(\left(1+\omega,1\right).\)_
Proof: Because of Reduction Rule 2, we know that \(v\) has at least one neighbor in \(\overline{V_{2}}\). If \(v\) is assigned to \(2\), then its neighbors have to be in \(V_{0}\) (this decreases the measure by at least \(1+\omega\)). \(v\in\overline{V_{2}}\) triggers Reduction Rule 3, which implies \(v\in V_{1}\). This reduces the measure by \(1\).
Proof (Theorem 4.1): First, we will show that the algorithm covers each case. Branching Rule 1 implies that each vertex in \(A\) has at most one neighbor in \(A\). By Branching Rule 2, there is no edge between two vertices in \(A\). If there is a vertex in \(A\) with a neighbor in \(\overline{V_{2}}\), then this triggers Branching Rule 3. For the remaining vertices in \(A\), we use Reduction Rule 2. If a vertex in \(\overline{V_{2}}\) has no neighbor in \(A\), we use Reduction Rule 3. Hence, the branching algorithm handles each case for \(V\nsubseteq V_{0}\cup V_{1}\cup V_{2}\). As each branching is a complete case distinction, the branching algorithm returns all unique response Roman dominating functions of \(G\). The running time of the algorithm follows from Table 1.
By an inductive argument, it can be shown that after putting a vertex into \(V_{0}\), \(V_{1}\) or \(V_{2}\) we will never remove it from these sets again. Further, after putting a vertex into \(\overline{V_{2}}\), it will only go into \(V_{0}\) or \(V_{1}\). Since we decide for \(A\) vertices to be in either \(V_{2}\) or \(\overline{V_{2}}\) (together with the use of Reduction Rule 3), this implies that if the algorithm returns two unique response Roman dominating functions, then they will not be the same.
Claim: Let \((A,\overline{V_{2}},V_{0},V_{1},V_{2})\) be a constellation which is represented by a node \(v\) in the branching tree. Then either we can use Reduction Rule 1, or there is a constellation \((\emptyset,\emptyset,V_{0}^{\prime},V_{1}^{\prime},V_{2}^{\prime})\) which is represented by a leaf of the branching tree below \(v\) such that the function \(f\in\{0,1,2\}^{V}\) defined by \(V_{i}(f)\coloneqq V_{i}^{\prime}\) for \(i\in\{0,1,2\}\) is a unique response Roman dominating function.
Proof: In each branching rule, the algorithm puts a vertex into \(V_{0}\) if and only if the vertex has a neighbor in \(V_{2}\). Further, the neighbors of \(V_{2}\) vertices are all in \(V_{0}\). The algorithm puts a vertex into \(V_{1}\) only if we use Reduction Rule 3. As the algorithm never puts a vertex from \(\overline{V_{2}}\) into \(V_{2}\), \(V_{1}\cap N_{G}(V_{2})\) is empty.
Define \(g:V\to\{0,1,2\}\) with \(V_{0}(g)\coloneqq V_{0}\), \(V_{1}(g)=A\cup\overline{V_{2}}\cup V_{1}\) and \(V_{2}(g)=V_{2}\). We get \(g\) by using the last case of each branching rule on \((A,\overline{V_{2}},V_{0},V_{1},V_{2})\). By our observations from above, \(g\) is a unique response Roman dominating function if and only if \(|N_{G}(v)\cap V_{2}|=1\) holds for each \(v\in V_{0}(g)=V_{0}\). If this did not hold, then we could use Reduction Rule 1. \(\Diamond\)
As we only consider each vertex at most once in a branching, we have to test Reduction Rule 1 at most \(|V|\) times. This test runs in polynomial time. Therefore, the branching algorithm runs with polynomial delay.
\begin{table}
\begin{tabular}{l|l|l} Branching Rule \# & Branching vector & Branching number \\ \hline Branching Rule 1 & \((3,1-\omega)\) & 1.7229 \\ Branching Rule 2 & \((2,2,2)\) & \(\sqrt{3}\) \\ Branching Rule 3 & \((1+\omega,1)\) & 1.7218 \\ \end{tabular}
\end{table}
Table 1: Collection of all branching vectors; the branching numbers are displayed for the different cases with \(\omega=0.6\).
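The branching numbers in Table 1 can be verified numerically. The sketch below (not part of the paper) computes the branching number of a vector \((a_{1},\ldots,a_{r})\) as the unique root \(x>1\) of \(\sum_{i}x^{-a_{i}}=1\) by bisection.

```python
def branching_number(vector, lo=1.0, hi=4.0, iters=100):
    # the left-hand side of sum_i x^(-a_i) = 1 is decreasing in x, so bisect on x
    for _ in range(iters):
        mid = (lo + hi) / 2
        if sum(mid ** (-a) for a in vector) > 1:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

w = 0.6
print(round(branching_number((3, 1 - w)), 4))   # Branching Rule 1: about 1.7229
print(round(branching_number((2, 2, 2)), 4))    # Branching Rule 2: sqrt(3), about 1.7321
print(round(branching_number((1 + w, 1)), 4))   # Branching Rule 3: about 1.7218
```

The largest of these values is \(\sqrt{3}\approx 1.7321\), matching the bound \(\mathcal{O}^{*}(\sqrt{3}^{n})\) of Theorem 4.1.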
## 5 Extension Perfect Roman Domination
In this section, we will consider Extension Perfect Roman Domination. More precisely, we will first provide a combinatorial result for deciding if a function is a (pointwise) minimal perfect Roman dominating function.
Lemma 8: _Let \(G=(V,E)\) be a graph. A function \(f:V\rightarrow\{0,1,2\}\) is a minimal perfect Roman dominating function if and only if the following conditions are true:_
1. \(\forall v\in V_{0}(f):\ |N_{G}(v)\cap V_{2}(f)|=1\)_,_
2. \(\forall v\in V_{1}(f):\ |N_{G}(v)\cap V_{2}(f)|\neq 1\)_,_
3. \(\forall v\in V_{2}(f):\ |N_{G}(v)\cap V_{0}(f)|\neq 0\)_._
Proof: Let \(f\) be a minimal perfect Roman dominating function. This implies the first condition. If there exists a \(v\in V_{1}(f)\) with \(|N_{G}(v)\cap V_{2}(f)|=1\), then \(f-\chi_{\{v\}}\) is also a perfect Roman dominating function, cf. Observation 1. For each \(v\in V_{2}(f)\), \(|N_{G}(v)\cap V_{0}(f)|\neq 0\), as otherwise \(f-\chi_{\{v\}}\) is also a perfect Roman dominating function.
Now assume \(f\) is a function that fulfills the three conditions. By the first condition, we know \(f\) is a perfect Roman dominating function. Let \(g\) be a minimal perfect Roman dominating function with \(g\leq f\). Therefore, \(V_{0}(f)\subseteq V_{0}(g)\) and \(V_{2}(g)\subseteq V_{2}(f)\) hold. Assume there exists a \(v\in V\) with \(g(v)<f(v)=2\). By the third condition, there exists some \(u\in N_{G}(v)\cap V_{0}(f)\). Because of condition 1, \(v\) is the only neighbor of \(u\) with value 2. This implies that \(N_{G}(u)\cap V_{2}(g)\subseteq N_{G}(u)\cap(V_{2}(g)\setminus\{v\})=\emptyset\). This contradicts the construction of \(g\) as a perfect Roman dominating function. Therefore, \(V_{2}(g)=V_{2}(f)\). If there were a \(v\in V\) with \(g(v)=0<1=f(v)\), then this contradicts that \(g\) is a perfect Roman dominating function, since \(|N_{G}(v)\cap V_{2}(g)|=|N_{G}(v)\cap V_{2}(f)|\neq 1\).
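The three conditions of Lemma 8 can be checked mechanically; the following sketch (networkx, with \(f\) given as a dictionary and hypothetical helper names) does exactly that.

```python
import networkx as nx

def is_minimal_perfect_rdf(G: nx.Graph, f: dict) -> bool:
    def neighbors_with(v, value):
        return sum(1 for u in G[v] if f[u] == value)
    return (all(neighbors_with(v, 2) == 1 for v in G if f[v] == 0)       # condition 1
            and all(neighbors_with(v, 2) != 1 for v in G if f[v] == 1)   # condition 2
            and all(neighbors_with(v, 0) != 0 for v in G if f[v] == 2))  # condition 3
```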
Remark 2: This lemma implies that the function \(\mathds{1}\colon V\rightarrow\{0,1,2\}\) is the maximum minimal perfect Roman dominating function with respect to \(\omega\) for each graph \(G=(V,E)\), similar to maximum minimal Roman dominating functions as discussed in [15]. This is the case as, in a minimal perfect Roman dominating function \(f\), each vertex \(v\in V\) of value 2 needs a neighbor \(u\) with value 0, and since such a \(u\) has exactly one neighbor in \(V_{2}(f)\), the value-2 vertices can be mapped injectively to value-0 vertices. Hence, \(\omega\left(f\right)\leq|V|\) for each minimal perfect Roman dominating function \(f\).
We will use this lemma to show some some hardness results for Extension Perfect Roman Domination.
**Problem name:** Extension Perfect Roman Domination
**Given:** A graph \(G=(V,E)\) and \(f\in\{0,1,2\}^{V}\).
**Question:** Is there a minimal perfect Roman dominating function \(g\) on \(G\) with \(f\leq g\)?
Remark 3: It should be mentioned that there are graphs \(G=(V,E)\) such that the set of perfect Roman dominating functions on \(G\) is not closed under \(\leq\). This means that there exists a perfect Roman dominating function \(f\in\{0,1,2\}^{V}\) and
a \(g\in\{0,1,2\}^{V}\) with \(f\leq g\) such that \(g\) is no perfect Roman dominating function. One example for this is the \(4\)-cycle
\[C_{4}=(\{v_{1},v_{2},v_{3},v_{4}\},\{\{v_{1},v_{2}\},\{v_{2},v_{3}\},\{v_{3},v_{4 }\},\{v_{4},v_{1}\}\})\]
with the two functions
\[\begin{array}{l}f(v_{1})=0,\,f(v_{2})=2,\,f(v_{3})=0,\,f(v_{4})=1,\\ g(v_{1})=0,\,g(v_{2})=2,\,g(v_{3})=0,\,g(v_{4})=2.\end{array}\]
This contradicts the idea of extension problems as they are defined in [12], because this violates the so-called upward closedness condition, but this condition is a matter of debate even in [12].
### W[1]-Hardness
In this subsection, we will show W[1]-hardness of Extension Perfect Roman Domination if parameterized by the weight of the pre-solution. As the reduction will also be polynomial, this will also imply NP-hardness of the underlying unparameterized problem. For the NP-membership, we simply guess the value of each vertex (at most \(|V|\) steps) and then check the conditions of Lemma 8. For the hardness result, we use Irredundant Set. A set \(I\subseteq V\) is called _irredundant_ if each vertex in \(I\) has a private neighbor.
**Problem name:** Irredundant Set
**Given:** A graph \(G=(V,E)\) and \(k\in\mathbb{N}\)
**Question:** Is there an irredundant set \(I\) on \(G\) with \(|I|=k\)?
In [17] it is shown that Irredundant Set, parameterized by \(k\), is W[1]-complete.
Theorem 5.2: Extension Perfect Roman Domination _is NP-complete. \(k\)-Extension Perfect Roman Domination is W[1]-hard (even on bipartite graphs)._
Proof: For the NP-membership, we simply guess the value of each vertex and check the conditions of Lemma 8.
Let \(G=(V,E)\) be a graph and \(k\in\mathbb{N}\). Define for each \(i\in[k+1]\), \(V_{i}=\{v_{i}\mid v\in V\}\) and \(G^{\prime}=(V^{\prime},E^{\prime})\) with
\[V^{\prime}= \{a,b,c,d\}\cup\{u_{1},\ldots,u_{k}\}\cup\bigcup_{i=1}^{k+1}V_{ i},\] \[E^{\prime}= \{\{a,b\},\{c,d\}\}\] \[\cup \{\{a,u_{i}\},\{v_{i},u_{i}\},\{v_{i},c\},\{v_{i},w_{k+1}\}\mid v \in V,w\in N_{G}[v],i\in[k]\}.\]
We also need \(f\in\{0,1,2\}^{V^{\prime}}\) with \(V_{0}(f)=\{b,d\}\cup\bigcup_{i=1}^{k+1}V_{i},V_{1}(f)=\{u_{1},\ldots,u_{k}\}\) and \(V_{2}(f)=\{a,c\}\). This implies \(\omega(f)=k+4\). \(G^{\prime}\) is a bipartite graph, as we can partition its vertex set into two independent sets \(I_{1}=\{a,d\}\cup\{v_{i}\mid v\in V,i\in[k]\}\) and \(I_{2}=\{b,c\}\cup\{u_{i}\mid i\in[k]\}\cup\{v_{k+1}\mid v\in V\}\).
Let \(I=\{w^{1},\ldots,w^{k}\}\subseteq V\) be an irredundant set with \(|I|=k\). Define \(g\in\{0,1,2\}^{V^{\prime}}\) with
\[V_{0}(g)= \{b,d\}\cup\{v_{k+1}\mid v\in V:|N_{G}[v]\cap I|=1\}\cup\{v_{i}\mid i \in[k],v\in V\setminus\{w^{i}\}\},\] \[V_{1}(g)= \{u_{1},\ldots,u_{k}\}\cup\{v_{k+1}\mid v\in V:|N_{G}[v]\cap I|\neq 1\},\] \[V_{2}(g)= \{a,c\}\cup\{w^{1}_{1},\ldots,w^{k}_{k}\}.\]
Clearly, \(f\leq g\) holds. The vertex \(b\) is a private neighbor of \(a\), and \(d\) is one of \(c\). For each \(i\in[k]\), \(g(w^{i}_{i})=2\) implies \(|N_{G^{\prime}}(u_{i})\cap V_{2}(g)|\geq 2\). Since \(I\) is irredundant, for each \(w^{i}\in I\), there is a private neighbor \(p^{i}\) in \(G\) and hence, for each \(w^{i}_{i}\), there is a private neighbor \(p^{i}_{k+1}\in\{v_{k+1}\mid v\in V\}\) in \(G^{\prime}\). As \(g\) clearly is a Roman dominating function, it is hence also a perfect Roman dominating function by Observation 1. Furthermore, for all \(i\in[k]\) and \(v\in V\setminus\{w^{i}\}\), \(\{c\}=N_{G^{\prime}}(v_{i})\cap V_{2}(g)\) holds. By the construction of \(g\), all vertices \(v_{k+1}\) with \(v\in V\) have the correct value for \(g\) to be a minimal perfect Roman dominating function.
Assume we have a minimal perfect Roman dominating function \(g\) on \(G^{\prime}\) with \(f\leq g\). For all \(i\in[k]\), \(N_{G^{\prime}}(u_{i})=\{a\}\cup\{v_{i}\mid v\in V\}\subseteq\{a\}\cup N_{G^{ \prime}}(c)\). As \(f(c)=g(c)=2\), this implies that \(g(u_{i})=1\) and that \(V_{i}\cap V_{2}(g)\) is not empty by Lemma 8. Let \(w^{i}_{i}\) be such a vertex for each \(i\in[k]\). Define \(I=\{w^{1},\ldots,w^{k}\}\subseteq V\). As for each \(i\in[k]\), all vertices in \(V_{i}\) are already dominated by \(c\), the private neighbor of \(w^{i}_{i}\) has to be \(p^{i}_{k+1}\in V_{k+1}\) and hence, \(p^{i}\) is a private neighbor of \(w^{i}\) in \(G\). Therefore, \(I\) is an irredundant set with \(|I|=k\).
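For illustration, the reduction above is easy to implement. The following sketch is our own encoding (vertex copies \(v_i\) are represented as pairs `(v, i)`, the four special vertices and the \(u_i\) as strings/pairs); the function name and the representation are ours, not the authors'.

```python
# A sketch of the reduction used in the hardness proof above (our own
# encoding, not the authors' code).

def extension_prd_instance_from_irredundant_set(adj, k):
    """Build (G', f) from an Irredundant Set instance (G, k)."""
    V = list(adj)
    Vp = {"a", "b", "c", "d"} | {("u", i) for i in range(1, k + 1)}
    Vp |= {(v, i) for v in V for i in range(1, k + 2)}
    edges = {frozenset(e) for e in [("a", "b"), ("c", "d")]}
    for i in range(1, k + 1):
        edges.add(frozenset(("a", ("u", i))))
        for v in V:
            edges.add(frozenset(((v, i), ("u", i))))
            edges.add(frozenset(((v, i), "c")))
            for w in adj[v] | {v}:                 # closed neighbourhood N_G[v]
                edges.add(frozenset(((v, i), (w, k + 1))))
    adj_p = {x: set() for x in Vp}
    for e in edges:
        x, y = tuple(e)
        adj_p[x].add(y)
        adj_p[y].add(x)
    # pre-solution f: all copies and b, d get 0, the u_i get 1, a and c get 2
    f = {x: 0 for x in Vp}
    f.update({("u", i): 1 for i in range(1, k + 1)})
    f["a"] = f["c"] = 2
    return adj_p, f
```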
This result implies that it is unlikely to have an \(\mathsf{FPT}\) algorithm for \(k\)-Extension Perfect Roman Domination (unless \(\mathsf{W}[1]=\mathsf{FPT}\)). Nonetheless, we can provide an \(\mathsf{XP}\) algorithm below.
As \(|V_{2}(f)|=2\) and \(|V_{1}(f)|=k\), we get the following corollary.
Corollary 4: \(|V_{1}(f)|\)_-Extension Perfect Roman Domination is \(\mathsf{W}[1]\)-hard and \(|V_{2}(f)|\)-Extension Perfect Roman Domination is \(\mathsf{para}\)-\(\mathsf{NP}\)-hard._
### \(\mathsf{XP}\)-Membership
Figure 1: Construction for Theorem 4.1, for each \(v\in V\) and \(i\in[k]\).
As we ruled out membership in \(\mathsf{FPT}\) under standard complexity assumptions in the previous subsection, the next best algorithmic fact one could hope for is membership in \(\mathsf{XP}\), a class that is also dubbed 'poor man's \(\mathsf{FPT}\)'. In the following, we will provide an explicit \(\mathsf{XP}\)-algorithm, allowing a running time of \(\mathcal{O}^{*}(n^{k})\) on graphs of order \(n\). Notice that in the next subsection, we even prove membership in \(\mathsf{W}[1]\). From this membership, one can also deduce membership in \(\mathsf{XP}\), but without having an implementable algorithm at hand. Further, the \(\mathsf{XP}\) algorithm implies some theoretical results which we need for the \(\mathsf{W}[1]\) membership proof. As testified in [36] for quite related problems (also confer the discussions in [22]), \(\mathsf{XP}\)-algorithms of the proposed form could be very helpful in practical implementations.
Lemma 9: _Let \(G=(V,E)\) be a graph and \(f\in\{0,1,2\}^{V}\) such that for each \(u\in V_{1}(f)\), \(|N_{G}(u)\cap V_{2}(f)|\neq 1\) and for each \(v\in V_{2}(f)\), there exists a private neighbor in \(V_{0}(f)\) with respect to \(G\) and \(V_{2}(f)\). Then there exists a minimal perfect Roman dominating function \(g\in\{0,1,2\}^{V}\) on \(G\) with \(f\leq g\) and \(V_{2}(f)=V_{2}(g)\)._
Proof: Let \(A=\{v\in V_{0}(f)\mid|N_{G}(v)\cap V_{2}(f)|\neq 1\}\). Define \(g=f+\chi_{A}\). Clearly, \(f\leq g\) and \(V_{2}(f)=V_{2}(g)\). This leaves to show that \(g\) is a minimal perfect Roman dominating function.
By definition of \(A\), for each \(v\in V_{0}(g)=V_{0}(f)\setminus A\), \(1=|N_{G}(v)\cap V_{2}(f)|=|N_{G}(v)\cap V_{2}(g)|\). Further, for all \(v\in V_{1}(g)=V_{1}(f)\cup A\), \(1\neq|N_{G}(v)\cap V_{2}(f)|\). Let \(v\in V_{2}(g)\). By the requirement on \(V_{2}(f)=V_{2}(g)\), for \(v\in V_{2}(g)\) there exists a private neighbor \(u\in V_{0}(f)\). Thus, \(|N_{G}(u)\cap V_{2}(f)|=1\) and \(u\in V_{0}(f)\setminus A=V_{0}(g)\). Hence, \(u\in N_{G}(v)\cap V_{0}(g)\) is a private neighbor of \(v\) for all \(v\in V_{2}(g)\). In total, \(g\) is a minimal perfect Roman dominating function.
The proof also gives an algorithm to compute a minimal perfect Roman dominating function from the given function \(f\). We only have to compute \(A\subseteq V_{0}(f)\) as specified in the proof, which can be done in polynomial time.
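A possible implementation of this completion step, under the same adjacency-set encoding as before and with a hypothetical function name of our choosing, could look as follows; it first tests the two preconditions of Lemma 9 and then returns \(g=f+\chi_{A}\).

```python
# A small sketch (our own illustration) of the completion described in the
# proof of Lemma 9: if f satisfies the two preconditions, raise every vertex
# of V_0(f) that does not see exactly one 2-vertex to value 1.

def complete_to_minimal_perfect_rdf(adj, f):
    """Return g = f + chi_A as in Lemma 9, or None if the preconditions fail."""
    V2 = {v for v in adj if f[v] == 2}
    # precondition on V_1(f): no vertex of value 1 sees exactly one 2-vertex
    if any(f[u] == 1 and len(adj[u] & V2) == 1 for u in adj):
        return None
    # precondition on V_2(f): every 2-vertex has a private neighbour in V_0(f)
    for v in V2:
        if not any(f[u] == 0 and adj[u] & V2 == {v} for u in adj[v]):
            return None
    A = {v for v in adj if f[v] == 0 and len(adj[v] & V2) != 1}
    return {v: (1 if v in A else f[v]) for v in adj}
```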
Lemma 10: _Let \(G=(V,E)\) be graph and \(f\in\{0,1,2\}^{V}\). Then there exists a minimal perfect Roman dominating function \(g\in\{0,1,2\}\) with \(f\leq g\) and \(V_{2}(f)=V_{2}(g)\) if and only if each \(v\in V_{2}(f)\) has a private neighbor in \(V_{0}(f)\) (with respect to \(G\) and \(V_{2}(f)\)) and \(|N_{G}(u)\cap V_{2}(f)|\neq 1\) for each \(u\in V_{1}(f)\)._
Proof: The if-part follows directly by Lemma 9.
For the only if part, assume there exists a minimal perfect Roman dominating function \(g\) on \(G\) with \(f\leq g\) and \(V_{2}(f)=V_{2}(g)\). Hence, \(V_{0}(g)\subseteq V_{0}(f)\) and \(V_{1}(f)\subseteq V_{1}(g)\). Since \(g\) is a minimal perfect Roman dominating function, Conditions 1 and 3 hold. This implies that each \(v\in V_{2}(g)=V_{2}(f)\) has a private neighbor in \(V_{0}(g)\subseteq V_{0}(f)\). For each \(u\in V_{1}(g)\), \(|N_{G}(u)\cap V_{2}(f)|=|N_{G}(u)\cap V_{2}(g)|\neq 1\). As \(V_{1}(f)\subseteq V_{1}(g)\), this also holds for all \(u\in V_{1}(f)\).
Now we can prove that Algorithm 1 is an \(\mathsf{XP}\) algorithm.
Theorem 4.1: _Algorithm 1 is an \(\mathsf{XP}\) algorithm for \(k\)-Extension Perfect Roman Domination for \(k\in\{\omega\left(f\right),|V_{1}(f)|\}\). Let \((G,f)\) with \(G=(V,E)\) and \(f\in\{0,1,2\}^{V}\) be an instance of Extension Perfect Roman Domination. If the algorithm returns a minimal perfect Roman dominating function \(g\), then \(|V_{2}(g)|\leq\omega\left(f\right)\)._
Proof: Let \(G=(V,E)\) be a graph and \(f\in\{0,1,2\}^{V}\). Since \(|V_{1}(f)|\leq\omega\left(f\right)\), we only need to consider \(|V_{1}(f)|\) as parameterization. At first, we explain the algorithm. The idea is to modify \(f\) such that it fulfills the conditions of Lemma 9. Therefore, we check whether each \(v\in V_{2}(f)\) has a private neighbor in \(V_{0}(f)\). If this is not the case, then there exists no minimal perfect Roman dominating function bigger than \(f\). If each \(v\in V_{2}(f)\) has a private neighbor in \(V_{0}(f)\), we check whether there exists a \(u\in V_{1}(f)\) with \(|N_{G}(u)\cap V_{2}(f)|=1\). If this is not the case, then \(f\) itself already fulfills the conditions of Lemma 9. Otherwise, we go through all vertices \(v\in N_{G}[u]\setminus V_{2}(f)\) and try the algorithm with
\[f_{v}\colon V\to\{0,1,2\},\,x\mapsto\begin{cases}2,&x=v\\ f(x),&x\neq v\end{cases},\]
_i.e._, \(f_{v}=f+(2-f(v))\cdot\chi_{\{v\}}\). Since we only return \(\mathsf{yes}\) (or a minimal perfect Roman dominating function) if \(f\) satisfies the conditions of Lemma 9 and we only increase \(f\), there exists a minimal perfect Roman dominating function \(g\in\{0,1,2\}^{V}\) with \(f\leq g\) whenever we return \(\mathsf{yes}\) (or a minimal perfect Roman dominating function).
Assume there exists a minimal perfect Roman dominating function, say \(g\), with \(f\leq g\). This implies \(V_{2}(f)\subseteq V_{2}(g)\) and \(V_{0}(g)\subseteq V_{0}(f)\). We prove that the algorithm will return a minimal perfect Roman dominating function by an induction argument on \(\omega(g-f)\).
If \(\omega(g-f)=0\), \(f=g\) holds. Then each vertex in \(V_{2}(f)\) has a private neighbor in \(V_{0}(f)\). Otherwise, it would contradict Lemma 8, Condition 3. Further, for \(f=g\)
there exists no vertex \(v\in V_{1}(f)\) with \(|N_{G}(v)\cap V_{2}(f)|=1\) because of Condition 2 of Lemma 8.
Assume \(\omega(g-f)=1\). Then there exists exactly one vertex \(v\in V\) with \(f(v)\neq g(v)\). More precisely, \(f(v)=g(v)-1\). First, we consider the case \(f(v)=0\). This implies that \(V_{2}(f)=V_{2}(g)\). By Lemma 10, \(f\) fulfills the conditions of Lemma 9. Now assume \(f(v)=1\). This implies \(V_{0}(f)=V_{0}(g)\) and \(V_{2}(g)=V_{2}(f)\cup\{v\}\). Therefore, for each \(u\in V_{2}(f)\subseteq V_{2}(g)\), there exists a private neighbor in \(N_{G}(u)\cap V_{0}(g)=N_{G}(u)\cap V_{0}(f)\). If each \(u\in V_{1}(f)\) fulfills \(|N_{G}(u)\cap V_{2}(f)|\neq 1\), then our algorithm would make use of Lemma 9. Assume there exists a \(u\in V_{1}(f)\) with \(\{w\}=N_{G}(u)\cap V_{2}(f)\). If \(u\notin N_{G}(v)\), then \(|N_{G}(u)\cap V_{2}(g)|=1\) would contradict the minimality of \(g\). Therefore, \(v\in N_{G}(u)\). Since \(|N_{G}(u)\cap V_{2}(g)|=1\) holds, our algorithm would try, for each \(x\in N_{G}[u]\setminus\{w\}\), if there exists a minimal perfect Roman dominating function \(h\in\{0,1,2\}^{V}\) such that \(f_{x}\leq h\). Hence, our algorithm would also consider \(f_{v}=g\) (see above).
Let \(\omega(g-f)\geq 2\) and assume that \(f\) does not fulfill the conditions of Lemma 9. Hence, \(f\neq g\). Recall \(V_{2}(f)\subseteq V_{2}(g)\) and \(V_{0}(g)\subseteq V_{0}(f)\). Therefore, for each \(v\in V_{2}(f)\subseteq V_{2}(g)\), there exists a \(u\in N_{G}(v)\cap V_{0}(g)\subseteq N_{G}(v)\cap V_{0}(f)\) with \(N_{G}(u)\cap(V_{2}(f)\setminus\{v\})\subseteq N_{G}(u)\cap(V_{2}(g)\setminus\{v\})=\emptyset\). This implies that each \(v\in V_{2}(f)\) has a private neighbor in \(V_{0}(f)\). Since the algorithm does not return a minimal perfect Roman dominating function, there exists a \(u\in V_{1}(f)\) with \(\{v\}=N_{G}(u)\cap V_{2}(f)\). As \(g\) is minimal, there must exist a \(w\in(N_{G}[u]\cap V_{2}(g))\setminus\{v\}\). Therefore, we consider \(f_{x}\) for each \(x\in N_{G}[u]\setminus\{v\}\) (so also for \(w\)). By induction and \(\omega(g-f_{w})<\omega(g-f)\leq\omega(g-f_{w})+2\), it follows that we find a minimal perfect Roman dominating function \(f^{\prime}\) on \(G\) with \(f\leq f^{\prime}\). Hence, the algorithm runs correctly.
Now, we consider the running time of the algorithm. Checking if each vertex in \(V_{2}(f)\) has a private neighbor in \(V_{0}(f)\) can be done in polynomial time (in time \(\mathcal{O}(|V|^{3})\) with a naive algorithm). Further, we can test in polynomial time if there exists a \(u\in V_{1}(f)\) with \(|N_{G}(u)\cap V_{2}(f)|=1\) (in time \(\mathcal{O}(|V|^{2})\) with a naive algorithm). If this is the case, then we go through \(w\in N_{G}[u]\setminus V_{2}(f)\) and run the algorithm on \(f_{w}\). Clearly, \(V_{0}(f_{w})\subseteq V_{0}(f)\), \(V_{1}(f_{w})\subseteq V_{1}(f)\) and \(V_{2}(f)\cup\{w\}=V_{2}(f_{w})\). Together with \(|N_{G}(u)\cap V_{2}(f_{w})|>1\), this implies that \(|N_{G}(u)\cap V_{2}(h)|\neq 1\) will hold for all \(h\in\{0,1,2\}^{V}\) with \(f_{w}\leq h\). Thus, we add only one vertex to \(V_{2}(f)\) per vertex in \(V_{1}(f)\). As we never add a vertex to \(V_{1}(f)\) (unless we use Lemma 9, but then we already know that there is a solution), the recursion tree has at most \(|V_{1}(f)|\leq k\) many nodes between the root and a leaf. This also proves the bound on the size of \(V_{2}(h)\) for a solution \(h\) returned by the algorithm. As there are at most \(|V|\) choices for \(w\), we call the recursive function at most \(n^{k}\) times. Therefore, it is an XP algorithm.
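Since Algorithm 1 itself is not reproduced in this excerpt, the following Python sketch is our own reconstruction of the branching procedure described in the proof (check the privacy condition, branch on a vertex of \(V_{1}(f)\) with exactly one neighbour in \(V_{2}(f)\), and otherwise complete \(f\) via Lemma 9); details may differ from the authors' pseudocode.

```python
# A simplified reconstruction (ours) of the branching procedure described in
# the proof above; it is not the paper's literal Algorithm 1.

def extend_to_minimal_perfect_rdf(adj, f):
    """Return a minimal perfect rdf g >= f on the graph adj, or None."""
    V2 = {v for v in adj if f[v] == 2}
    # every 2-vertex needs a private neighbour in V_0(f); otherwise reject
    for v in V2:
        if not any(f[u] == 0 and adj[u] & V2 == {v} for u in adj[v]):
            return None
    # look for a 1-vertex that sees exactly one 2-vertex
    bad = next((u for u in adj if f[u] == 1 and len(adj[u] & V2) == 1), None)
    if bad is None:
        # Lemma 9: raise the 0-vertices that do not see exactly one 2-vertex
        return {v: (1 if f[v] == 0 and len(adj[v] & V2) != 1 else f[v])
                for v in adj}
    # branch over N[bad] \ V_2(f): some vertex there must receive value 2
    for w in (adj[bad] | {bad}) - V2:
        f_w = dict(f)
        f_w[w] = 2
        g = extend_to_minimal_perfect_rdf(adj, f_w)
        if g is not None:
            return g
    return None
```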
The proof of the last theorem implies the following corollary.
Corollary 5: _Let \(G=(V,E)\) be a graph and \(f\in\{0,1,2\}^{V}\) with \(k\coloneqq\omega\left(f\right)\). If there exists a minimal perfect Roman dominating function \(h\) on \(G\) with \(f\leq h\), then there exists a minimal perfect Roman dominating function \(g\) with \(|V_{2}(g)\setminus V_{2}(f)|\leq|V_{1}(f)|\), \(|V_{2}(g)|\leq k\) and \(f\leq g\)._
The very existence of an \(\mathsf{XP}\)-algorithm with respect to \(|V_{1}(f)|\) is interesting, as the very similarly looking parameterized problem \(|V_{1}(f)|\)-Extension Roman Hitting Set is \(\mathsf{para}\)-\(\mathsf{NP}\)-hard (see [22]). Moreover, we need this corollary in order to prove \(\mathsf{W}[1]\)-membership in the next subsection, so that we cannot derive \(\mathsf{XP}\)-membership without the considerations of this subsection.
### W[1]-Membership
We not only provide a hardness result (presented in the previous subsection) but also a complete classification of \(\omega\left(f\right)\)-Extension Perfect Roman Domination.
Lemma 11: _Let \(G=\left(V,E\right)\) be a graph and \(f\in\left\{0,1,2\right\}^{V}\) with \(k\coloneqq\omega\left(f\right)\). There exists a minimal perfect Roman dominating function \(g\) on \(G\) with \(f\leq g\) if and only if there exists a \(V^{\prime}\subseteq V\) with \(|V^{\prime}|\leq k\) and \(V_{2}(f)\subseteq V^{\prime}\) such that each \(v\in V^{\prime}\) has a private neighbor in \(V_{0}(f)\setminus V^{\prime}\) with respect to \(G\) and \(|N_{G}(u)\cap V^{\prime}|\neq 1\) for each \(u\in V_{1}(f)\setminus V^{\prime}\)._
Proof: Let \(g\) be a minimal perfect Roman dominating function on \(G\) with \(f\leq g\). By Corollary 5, there exists a minimal perfect Roman dominating function \(g^{\prime}\) on \(G\) with \(f\leq g^{\prime}\) and \(|V_{2}(g^{\prime})|\leq k\). As \(f\leq g^{\prime}\), \(V_{2}(f)\subseteq V^{\prime}\coloneqq V_{2}(g^{\prime})\), \(V_{0}(g^{\prime})\subseteq V_{0}(f)\setminus V^{\prime}\) and \(V_{1}(f)\setminus V^{\prime}\subseteq V_{1}(g^{\prime})\). Since \(g^{\prime}\) is minimal perfect Roman dominating function, each \(v\in V^{\prime}\) has a neighbor in \(V_{0}(g^{\prime})\subseteq V_{0}(f)\setminus V^{\prime}\). By the definition of perfect Roman dominating function, this neighbor is private. For \(u\in V_{1}(f)\setminus V^{\prime}\subseteq V_{1}(g^{\prime})\), \(|N_{G}(u)\cap V^{\prime}|\neq 1\). Therefore \(V^{\prime}\) fulfills the conditions.
Let \(V^{\prime}\) be a set that verifies the condition. Then \(f^{\prime}:V\rightarrow\left\{0,1,2\right\}\) with \(V_{0}(f^{\prime})=V_{0}(f)\setminus V^{\prime}\), \(V_{1}(f^{\prime})=V_{1}(f)\setminus V^{\prime}\) and \(V_{2}(f^{\prime})=V^{\prime}\) fulfills the conditions of Lemma 10. Hence, there exists a minimal perfect Roman dominating function \(g\) on \(G\) with \(f\leq f^{\prime}\leq g\).
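The characterization of Lemma 11 also yields a naive \(\mathcal{O}^{*}(n^{k})\) procedure that simply enumerates all candidate sets \(V^{\prime}\). The sketch below is our own code (with \(k=\omega(f)\) passed explicitly); it spells out exactly the combinatorial test that the Turing machine constructed below will verify after guessing \(V^{\prime}\).

```python
# A brute-force check of the characterization in Lemma 11 (our own sketch).
from itertools import combinations

def has_extension(adj, f, k):
    """Decide Extension Perfect Roman Domination via Lemma 11 (k = omega(f))."""
    V = list(adj)
    V2 = {v for v in V if f[v] == 2}
    rest = [v for v in V if v not in V2]
    for extra in range(0, k - len(V2) + 1):
        for add in combinations(rest, extra):
            Vp = V2 | set(add)
            # each v in V' needs a private neighbour in V_0(f) \ V'
            ok = all(any(f[u] == 0 and u not in Vp and adj[u] & Vp == {v}
                         for u in adj[v]) for v in Vp)
            # each u in V_1(f) \ V' must not see exactly one vertex of V'
            ok = ok and all(len(adj[u] & Vp) != 1
                            for u in V if f[u] == 1 and u not in Vp)
            if ok:
                return True
    return False
```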
We will use this result to show \(\mathsf{W}[1]\)-membership by a reduction to Short Non-Deterministic Turing Machine Computation.
**Problem name:**Short Non-Deterministic Turing Machine Computation
**Given:** A nondeterministic one-tape Turing machine TM, a word \(w\) and \(k\in\mathbb{N}\)
**Parameter:**\(k\)
**Question:** Does TM accept \(w\) in at most \(k\) steps?
Theorem 5.3: \(k\)_-Extension Perfect Roman Domination is \(\mathsf{W}[1]\)-complete._
Proof: We only need to prove \(\mathsf{W}[1]\)-membership. Therefore, let \(G=\left(V,E\right)\) be a graph and \(f\in\left\{0,1,2\right\}^{V}\) be a function with \(V_{1}(f)\coloneqq\left\{u^{1},\ldots,u^{\ell}\right\}\) and \(V_{2}(f)=\left\{v^{1},\ldots,v^{\ell^{\prime}}\right\}\) (\(\ell+2\ell^{\prime}=k\) and \(0\leq\ell,\ell^{\prime}\)).
The idea of our nondeterministic Turing machine (NTM for short) is to guess at most \(k\) new vertices in \(V_{2}(g)\) for a perfect Roman dominating function \(g\) with \(f\leq g\). After this, we check if the guessed vertices satisfy the conditions of Lemma 10. For the private neighborhood condition, we guess for each guessed vertex a candidate private neighbor and then verify that this candidate really is a private neighbor and that it lies in \(V_{0}(f)\). For the second condition, we go, for each \(u\in V_{1}(f)\), through the guessed vertices and count the number of neighbors of \(u\) among them, up to two.
Now we will describe how the NTM works. The input alphabet is given by the vertex set. Then, \(\Gamma\coloneqq\{b\}\cup V\cup\{v_{i}\mid v\in V,i\in[k]\}\) (\(|\Gamma|=|V|\cdot(k+1)\)) denotes the tape alphabet for which \(b\) is the blank symbol. Define the set of states by
\[Q \coloneqq\{q_{f}\}\cup\{s_{1},\ldots,s_{\ell^{\prime}}\}\cup\{q_{ \operatorname{fill},\ell^{\prime}+1},\ldots,q_{\operatorname{fill},k+1}\} \cup\{q_{w}^{j,i}\mid w\in V,\,i,j\in[k],\,i<j\}\] \[\cup\{q_{L}^{j,i}\mid i,j\in[k],i<j\}\cup\{q_{j}^{0},q_{j}^{1},q_ {L,j}\mid j\in[\ell]\}.\]
There is only the final state \(q_{f}\) and \(s_{1}\) is the start state. The number of states is at most \(k^{2}\cdot(|V|+1)+\mathcal{O}(k)\). The input word is just \(\ldots,b,v^{1},\ldots,v^{\ell^{\prime}},b,\ldots\). Now we will present all transitions:
1. \(((s_{i},v^{i}),(s_{i+1},v_{i}^{i},R))\) for all \(v\in V_{2}(f)\) and \(i\in[\ell^{\prime}-1]\),1 Footnote 1: Let us briefly explain how we write down the transitions of a Turing machine using this example. This transition can be activated if the TM is in state \(s_{i}\) and currently reads the symbol \(v^{i}\). It can then move to state \(s_{i+1}\), replace the symbol \(v^{i}\) by \(v_{i}^{i}\) and then move to the right.
2. \(((s_{\ell^{\prime}},v^{\ell^{\prime}}),(q_{\operatorname{fill},\ell^{\prime}+ 1},v_{\ell^{\prime}}^{\ell^{\prime}},R))\) for all \(i\in[k]\setminus[\ell^{\prime}]\) and \(v\in V\),
3. \(((q_{\operatorname{fill},i},b),(q_{\operatorname{fill},i+1},v_{i},R))\) for all \(i\in[k]\setminus[\ell^{\prime}]\) and \(v\in V\),
4. \(((q_{\operatorname{fill},i},b),(q_{L}^{i-1},b,L))\) for all \(i\in[k+1]\setminus\{1\}\),
5. \(((q_{L}^{i},v_{p}),(q_{L}^{i},v_{p},L))\) for all \(v\in V\) and \(i,p\in[k]\) with \(p\leq i\),
6. \(((q_{L}^{i},b),(q_{w}^{1,i},b,R))\) for all \(w\in V\) and \(i\in[k]\),
7. \(((q_{w}^{t,i},v_{t}),(q_{w}^{t,i},v_{t},R))\) for all \(\{v,w\}\in E\) and \(t,i\in[k]\) with \(t\leq i\),
8. \(((q_{w}^{t,i},v_{j}),(q_{w}^{t,i},v_{j},R))\) for all \(v,w\in V\) and \(i,j,t\in[k]\) with \(v\notin N_{G}[w]\) and \(j,t\leq i\),
9. \(((q_{w}^{t,i},b),(q_{L}^{t,i},b,L))\) for all \(w\in V\setminus V_{1}(f)\) and \(i,t\in[k]\) with \(t\leq i\),
10. \(((q_{L}^{t,i},v_{j}),(q_{L}^{t,i},v_{j},L))\) for all \(v\in V\) and \(i,j,t\in[k]\) with \(j,t\leq i\),
11. \(((q_{L}^{t,i},b),(q_{w}^{t+1,i},b,R))\) for all \(w\in V\) and \(i,t\in[k]\) with \(t<i\),
12. \(((q_{L}^{i,i},b),(q_{1}^{0},b,R))\) for all \(w\in V\) and \(i\in[k]\),
13. \(((q_{t}^{0},v_{j}),(q_{t}^{0},v_{j},R))\) for all \(v\in V\), \(t\in[\ell]\) and \(j\in[k]\) with \(v\notin N_{G}[u^{t}]\),
14. \(((q_{t}^{0},v_{j}),(q_{t}^{1},v_{j},R))\) for all \(v\in V\), \(t\in[\ell]\) and \(j\in[k]\) with \(\{v,u^{t}\}\in E\),
15. \(((q_{t}^{0},v_{j}),(q_{L,t},v_{j},L))\) for all \(v\in V\), \(t\in[\ell]\) and \(j\in[k]\) with \(v=u^{t}\),
16. \(((q_{t}^{0},b),(q_{L,t},b,L))\) for all \(t\in[\ell]\),
17. \(((q_{t}^{1},v_{j}),(q_{t}^{1},v_{j},R))\) for all \(v\in V\), \(t\in[\ell]\) and \(j\in[k]\) with \(v\notin N_{G}[u^{t}]\),
18. \(((q_{t}^{1},v_{j}),(q_{L,t},v_{j},L))\) for all \(v\in V\), \(t\in[\ell]\) and \(j\in[k]\) with \(v\in N_{G}[u^{t}]\),
19. \(((q_{L,t},v_{j}),(q_{L,t},v_{j},L))\) for all \(v\in V\), \(t\in[\ell]\) and \(j\in[k]\),
20. \(((q_{L,t},b),(q_{t+1}^{0},b,R))\) for all \(t\in[\ell]\),
21. \(((q_{L,\ell},b),(q_{f},b,L))\).
For the proof we will divide a run of the NTM into three phases and prove claims for each phase separately. The first phase is from the beginning to the (only) use of the transition 6. In this phase, we guess \(V_{2}(g)\) of our would-be solution \(g\).
At the beginning of the run, we are in the state \(s_{1}\) and at the leftmost position of the tape with a non-blank symbol (\(v^{1}\) is in this position). By an easy inductive argument, we can prove that after \(\ell^{\prime}\) steps, \(\ldots,b,v^{1}_{1},\ldots,v^{\ell^{\prime}}_{\ell^{\prime}},b,\ldots\) is written on the tape and the tape head is to the right of \(v^{\ell^{\prime}}_{\ell^{\prime}}\) and the TM is in the state \(q_{\mathrm{fill},\ell^{\prime}+1}\). Keep in mind that the states \(s_{1},\ldots,s_{\ell^{\prime}}\) occur only in the transitions 1, being mentioned in increasing order. Therefore, we will never get into these states again.
The next step is the first nondeterministic step. Here the Turing machine guesses, if it uses a transition of 4 or uses for a \(v\in V\) the transition
\[\left((q_{\mathrm{fill},\ell^{\prime}},b),(q_{\mathrm{fill},\ell^{\prime}+1}, v_{\ell^{\prime}+1},R)\right).\]
If the Turing machine uses the second case, _i.e._, when a new (encoded) vertex has been written on the tape, then it has to guess again if it moves into the state \(q^{i}_{L}\) or if it writes a new vertex on the tape (\(\ell^{\prime}\leq i\leq k\)). After at most \(k+1\) steps, we have to use a transition from 4, as for \(q_{\mathrm{fill},k+1}\) there is only one transition.
Assume that after \(i\) steps (with \(\ell^{\prime}\leq i\leq k\)) we use a transition from 4. Then we are in the state \(q^{i}_{L}\) and \(\ldots,b,v^{1}_{1},\ldots,v^{i}_{i},b\ldots\) is written on the tape. More precisely, the initial string \(v^{1}\cdots v^{\ell^{\prime}}\) that was the input of the TM has been converted into the prefix \(v^{1}_{1}\cdots v^{\ell^{\prime}}_{\ell^{\prime}}\), while the suffix \(v^{\ell^{\prime}+1}_{\ell^{\prime}+1}\cdots v^{i}_{i}\) is the part that was nondeterministically guessed. Since the NTM is never going into the states \(s_{1},\ldots,s_{\ell^{\prime}}\) again and 2 and 3 are the only transitions with a \(q_{\mathrm{fill},j}\) on the right side for \(j\in[k]\setminus[\ell^{\prime}]\), the Turing machine never goes back into these states. From now on, the next \(i\) steps are deterministic. By an easy inductive argument again, the tape head moves to the position to the left of \(v^{1}_{1}\). At this point, the first phase ends. This phase needed \(2i+1\leq 2k+1\) steps. As we never use the transitions from 1 to 6 again, the symbols on the tape do not change anymore, _i.e._, during the remaining run, \(\ldots,b,v^{1}_{1},\ldots,v^{i}_{i},b\ldots\) will stay on the tape, where \(v^{1},\ldots,v^{i}\in V\) are not necessarily different vertices. Since the states \(q_{\mathrm{fill},j}\) may write any vertex from \(V\), \(V^{\prime}\coloneqq\{v^{1},\ldots,v^{i}\}\) could be any set of size at most \(i\) with \(V_{2}(f)\subseteq V^{\prime}\).
The next phase deals with the privacy condition of the vertices on the tape. This will also ensure that no vertex is written twice on the tape. This phase will end with the use of a transition from 12. The transitions in 6 and 11 are nondeterministic, as the Turing machine guesses the vertex \(w\). Let \(t\in[i]\) and \(w\in V_{1}(f)\). The idea of the state \(q^{t,i}_{w}\) is to check if \(w\) is the private neighbor of \(v^{t}\) (and \(v^{t}\neq w\)) with respect to \(V^{\prime}\). Let \(j\in[i]\). If \(j=t\), then there is a deterministic step if and only if \(w\in N_{G}(v_{t})\) (see 7). Otherwise, the Turing machine stops (in a non-final state), as there is no transition the NTM can use. For \(j\neq t\) the Turing machine will only stop (in a non-final state) if and only if \(w\in N_{G}[v_{j}]\). Otherwise, it performs a deterministic step and goes on with the next symbol (vertex) \(v_{j+1}\) (see 8). In other words, the Turing machine stops in the state \(q^{t,i}_{w}\) if and only if \(w\) is not a private neighbor of \(v_{t}\) and \(v_{t}\neq w\). This can be proven by induction again. If the Turing machine made \(i\) steps in the state \(q^{t,i}_{w}\), the head is at the end of the word \(v^{1}_{1},\ldots,v^{i}_{i}\) and goes back with the state \(q^{t,i}_{L}\) to the beginning (see 9 and 10). Then, the state changes to \(q^{t+1,i}_{w}\) for \(t\in[i-1]\) (see 11)
or it goes into the new state \(q_{1}^{0}\) for \(t=i\) (see 12). Therefore, after \(2i^{2}\) steps in this phase, the Turing machine is still running if and only if each vertex \(V^{\prime}\) has a private neighbor in \(V\setminus(V_{1}(f)\cap V^{\prime})\). This ends the second phase.
The third phase will check if each \(u\in V_{1}(f)\setminus V^{\prime}\) satisfies \(|N_{G}(u)\cap V^{\prime}|\neq 1\). Then Lemma 11 implies that there exists a minimal perfect Roman dominating function \(g\) on \(G\) with \(f\leq g\). The idea of the state \(q_{t}^{z}\) for \(t\in[\ell]\) and \(z\in\{0,1\}\) is that the index \(z\) counts how many neighbors of \(u^{t}\) we have seen so far.
Let \(t\in[\ell]\). Assume we are in the state \(q_{t}^{0}\) and at the current position on the tape, there is \(v_{j}^{j}\) for \(j\in[i]\). If \(v^{j}=u^{t}\), then \(u^{t}\in V^{\prime}\cap V_{1}(f)\). Thus, we do not need to consider \(u^{t}\) anymore and we switch into the state \(q_{L,t}\) (see 15). If \(v^{j}\in N_{G}(u^{t})\), the Turing machine has seen a neighbor and goes on with the state \(q_{t}^{1}\) (see 14). Otherwise, we just go to the right, staying in the same state (see 13). In the case when the tape head is at the right end of the word in the state \(q_{t}^{0}\), this implies that \(u^{t}\) has no neighbor in \(V^{\prime}\). Therefore, the NTM switches the state to \(q_{L,t}\) (see 16).
Assume we are in the state \(q_{t}^{1}\). Keep in mind that the NTM goes into the state \(q_{t}^{1}\) only after having seen a vertex in the neighborhood of \(u^{t}\) under the head. For \(v^{j}\in N_{G}[u^{t}]\), either \(u^{t}=v^{j}\in V^{\prime}\cap V_{1}(f)\) or \(|N_{G}(u^{t})\cap V^{\prime}|>1\). In both cases, we need not consider \(u^{t}\) anymore and the Turing machine can switch into the state \(q_{L,t}\) (see 18). If \(v^{j}\notin N_{G}(u^{t})\), the NTM only goes to the right on the tape (see 17). By an inductive argument, we can show that the NTM ends in the state \(q_{t}^{1}\) on a cell containing \(b\) if and only if \(u^{t}\) has exactly one neighbor in \(V^{\prime}\). In this case, the Turing machine would stop in a non-final state. This implies that the NTM uses the transition 21 after at most \(2\ell\cdot i\) steps in the last phase if and only if for each \(t\in[\ell]\), \(|N_{G}(u^{t})\cap V^{\prime}|\neq 1\).
In total, the Turing machine reaches \(q_{f}\) in at most \(2i+1+2i^{2}+2\ell\cdot i\leq k^{\prime}\coloneqq 4k^{2}+2k+1\) steps if and only if there exists a vertex set \(V^{\prime}\supseteq V_{2}(f)\) with \(|V^{\prime}|\leq k\) such that each \(v\in V^{\prime}\) has a private neighbor in \(V_{0}(f)\setminus V^{\prime}\) and for each \(u\in V_{1}(f)\setminus V^{\prime}\), \(|N_{G}(u)\cap V^{\prime}|\neq 1\). By Lemma 11, the Turing machine ends in a final state (after at most \(k^{\prime}\) steps) if and only if there exists a minimal perfect Roman dominating function \(g\) on \(G\) with \(f\leq g\).
### \(|V_{0}(f)|\)-Extension Perfect Roman Domination
The main goal of this subsection is to prove the \(\mathsf{W}[2]\)-completeness of this parameterized problem. We start with some small observations.
Lemma 12: \(|V_{0}(f)\cup V_{1}(f)|\)_-Extension Perfect Roman Domination, \(\omega\left(2-f\right)\)-Extension Perfect Roman Domination\(\in\) FPT._
The proof works analogously to Theorem 5.5 of [22].
Proof: For the proof, we go through all possible functions \(g\in\{0,1,2\}^{V}\) with \(f\leq g\) and check if \(g\) is a minimal perfect Roman dominating function (such a check runs in polynomial time). Let \(v\in V\). If \(f(v)=2\), then \(g(v)=2\). For \(v\in V_{1}(f)\), there are two choices for \(g(v)\). If \(f(v)=0\), then \(g(v)\in\{0,1,2\}\). Since \(|V_{0}(f)\cup V_{1}(f)|\leq\omega\left(2-f\right)\) and for each vertex in \(V_{0}(f)\cup V_{1}(f)\) there are at most \(3\) choices, there are up to \(3^{k}\) possibilities to check for \(k\in\{|V_{0}(f)\cup V_{1}(f)|,\omega\left(2-f\right)\}\). Hence, there is an \(\mathsf{FPT}\) algorithm for both parameterizations.
For \(\omega\left(2-f\right)\) as parameterization, there are even at most \(2^{\omega\left(2-f\right)}\) many possibilities. For the details, take a look into [22].
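The brute-force argument of this proof translates directly into code; the following sketch (ours, with a compact inline version of the Lemma 8 check so that the snippet is self-contained) enumerates the at most \(3^{k}\) candidate extensions.

```python
# A direct implementation (our own sketch) of the brute-force argument in the
# proof of Lemma 12: enumerate all g >= f and test the conditions of Lemma 8.
from itertools import product

def is_minimal_perfect_rdf(adj, g):
    V2 = {v for v in adj if g[v] == 2}
    for v in adj:
        twos = len(adj[v] & V2)
        if (g[v] == 0 and twos != 1) or (g[v] == 1 and twos == 1):
            return False
        if g[v] == 2 and not any(g[u] == 0 and adj[u] & V2 == {v}
                                 for u in adj[v]):
            return False
    return True

def extension_by_brute_force(adj, f):
    """FPT in |V_0(f) u V_1(f)|: at most 3^k candidate extensions are tested."""
    free = [v for v in adj if f[v] <= 1]
    for values in product(*[range(f[v], 3) for v in free]):
        g = dict(f)
        g.update(zip(free, values))
        if is_minimal_perfect_rdf(adj, g):
            return g
    return None
```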
Lemma 13: _Let \(G=\left(V,E\right)\) be a graph and \(f\in\{0,1,2\}^{V}\) a function. For a minimal perfect Roman dominating function \(g\) on \(G\) with \(f\leq g\), \(|V_{2}(g)|\leq|V_{0}(f)|\)._
Proof: As mentioned before, \(V_{0}(g)\subseteq V_{0}(f)\). Further, \(|N_{G}(v)\cap V_{2}(g)|=1\) for each \(v\in V_{0}(g)\), by Lemma 8. Therefore, there exists a function \(\phi:V_{0}(g)\to V_{2}(g)\), which maps a vertex to its unique neighbor in \(V_{2}(g)\). By Lemma 8, each vertex in \(V_{2}(g)\) has at least one neighbor in \(V_{0}(g)\). Thus, \(\phi\) is surjective. Hence, \(|V_{2}(g)|\leq|V_{0}(g)|\leq|V_{0}(f)|\).
From this lemma, we can provide a simple \(\mathsf{XP}\) algorithm with the parameterization \(|V_{0}(f)|\). We know that if there exists a minimal perfect Roman dominating function \(g\) with \(f\leq g\), then \(|V_{2}(g)|\leq|V_{0}(f)|\). Therefore, we guess the at most \(|V_{0}(f)|\) many vertices in \(V_{2}(g)\) and use Lemma 10 on the new function.
Nonetheless, Algorithm 1 is also an \(\mathsf{XP}\) algorithm with respect to the parameterization \(|V_{0}(f)|\). The running time result follows as on one path of the branching tree we can only add \(|V_{0}(f)|\) vertices to \(V_{2}(f)\); otherwise, not every vertex in \(V_{2}(f)\) would have a private neighbor in \(V_{0}(f)\). Therefore, the depth of the branching tree is at most \(|V_{0}(f)|\).
For the membership, we use a reduction to the problem Short Blind Non-Deterministic Multi-Tape Turing Machine Computation, which was introduced by Cattaneo and Perdrix in [13]. In that paper, they have also shown \(\mathsf{W}[2]\)-completeness of this Turing machine problem. The difference between a _blind_ multi-tape nondeterministic and a normal multi-tape nondeterministic Turing machine is that the transitions can be independent of the symbols in the cells under the current head positions, _i.e._, the Turing machine may, but need not, read the cell contents, and in this sense, it may be blind.2
Footnote 2: Possibly, _oblivious_ would have been a better term for this property, but we stick to the notion _blind_ as introduced in the mentioned paper.
**Problem name:**Short Blind Non-Deterministic Multi-Tape Turing Machine Computation
**Given:** A nondeterministic multi-tape Turing machine TM, a word \(w\) and \(k\in\mathbb{N}\)
**Parameter:**\(k\)
**Question:** Does TM accept \(w\) in at most \(k\) steps?
Theorem 8: \(|V_{0}(f)|\)_-Extension Perfect Roman Domination\(\in\mathsf{W}[2]\)._
Proof: For the membership, we only sketch the proof, as most parts are analogous to the proof of Theorem 7. In this proof, we have \(|V_{1}(f)|+1\) many tapes and add the symbol \(\#\) to our work alphabet. On the first tape, the set \(V^{\prime}\) will
be enumerated (as in proof of Theorem 4.1, so that \(V_{2}(f)\subseteq V^{\prime}\) and hopefully \(V^{\prime}=V_{2}(g)\) for a perfect rdf \(g\) that extends \(f\)). Each remaining tape will represent a vertex in \(V_{1}(f)\coloneqq\{u^{1},\ldots,u^{\ell}\}\). At the beginning, \(\ldots,b,\#,b,\ldots\) is on each of these tapes and the head is on the \(b\)-occurrence immediately to the right of \(\#\).
In the first \(2i+1+2i^{2}\) (\(i\) is the cardinality of \(V^{\prime}\)) steps, we simulate the NTM of the proof of Theorem 4.1 on the first tape. The other tapes stay the same. Hence, before we start the third phase, we know that the first tape contains a list of vertices (without repetitions) that meet the privacy condition. The third phase is different. We go once again through the word on the first tape. Let \(v_{j}^{j}\) for \(j\in[i]\) be the symbol on the current cell. Then proceed for the \((t+1)^{\mathrm{st}}\) tape (for \(t\in[|V_{1}(f)|]\)) as follows:
* If \(v^{j}=u^{t}\), then write two \(\#\) on the \((t+1)^{\mathrm{st}}\) tape, moving to the right after writing \(\#\).
* If \(v^{j}\in N_{G}(u^{t})\), then write one \(\#\) on the \((t+1)^{\mathrm{st}}\) tape and go one step to the right.
* If \(v^{j}\notin N_{G}[u^{t}]\), then do nothing on the \((t+1)^{\mathrm{st}}\) tape.
When we have gone through the first tape, we (blindly) move left twice on each tape but the first. \(b\) is now in the current cell of such a tape if and only if the corresponding \(u^{t}\) has exactly one neighbor in \(V^{\prime}\) and is not in \(V^{\prime}\). So by Lemma 11, we go into the final state if and only if each head is on a \(\#\).
Altogether, the described NTM would make at most \(2i+1+2i^{2}+i+2\) many steps. As \(i=|V_{2}(g)|\), the claim follows with Lemma 11.
For proving \(\mathsf{W}[2]\)-hardness, we use \(k\)-Multicolored Dominating Set which is known to be \(\mathsf{W}[2]\)-complete, see [27].
**Problem name:**\(k\)-Multicolored Dominating Set
**Given:** A graph \(G=(V,E)\), \(k\in\mathbb{N}\) and a partition \(W_{1},\ldots,W_{k}\) of \(V\)
**Parameter:**\(k\)
**Question:** Is there a dominating set \(D\subseteq V\) with \(|W_{i}\cap D|=1\) for each \(i\in[k]\)?
Theorem 4.2: \(|V_{0}(f)|\)-Extension Perfect Roman Domination _is \(\mathsf{W}[2]\)-complete even on bipartite graphs._
Proof: Let \(G=(V,E)\) be a graph with the vertex set partition \(W_{1},\ldots,W_{k}\). Define \(X\coloneqq\{x_{1},\ldots,x_{k}\}\) and \(\widetilde{G}=(\widetilde{V},\widetilde{E})\) with \(a,b\notin V\),
\[\widetilde{V}\coloneqq \{v_{1},v_{2}\mid v\in V\}\cup X\cup\{a,b\}\] \[\widetilde{E}\coloneqq \{\{a,b\}\}\cup\{\{a,v_{2}\}\mid v\in V\}\cup\{\{v_{1},x_{j}\} \mid v\in W_{j}\}\cup\] \[\{\{v_{1},u_{2}\}\mid v,u\in V,u\in N_{G}[v]\}.\]
The graph is also visualized in Figure 2. \(\widetilde{G}\) is bipartite with two color classes \(A\coloneqq V_{1}\cup\{a\}\) and \(B\coloneqq X\cup V_{2}\cup\{b\}\). To make it easier to verify for the reader,
the vertices of \(A\) are mentioned first in the definition of \(\widetilde{E}\). Further, define \(f:\widetilde{V}\rightarrow\{0,1,2\}\) with \(V_{0}(f)\coloneqq X\cup\{b\}\), \(V_{1}(f)\coloneqq\{v_{1},v_{2}\mid v\in V\}\) and \(V_{2}(f)\coloneqq\{a\}\). Clearly, this is even a polynomial-time reduction and \(|V_{0}(f)|=k+1\).
Let \(D\) be a dominating set of \(G\) with \(|W_{i}\cap D|=1\) for each \(i\in[k]\). For \(i\in[k]\), \(u^{i}\) denotes the unique vertex in \(W_{i}\cap D\). Define \(U=\{u_{1}^{1},\ldots,u_{1}^{k}\}\subseteq V_{1}\) and \(g\in\{0,1,2\}^{V}\) such that \(V_{0}(g)\coloneqq V_{0}(f)\), \(V_{1}(g)\coloneqq V_{1}(f)\setminus U\) and \(V_{2}(g)\coloneqq\{a\}\cup U\).
By the use of Lemma 8, we will show that \(g\) is a minimal perfect Roman dominating function. First of all, \(N_{G}(a)\cap V_{0}(g)=\{b\}\) and \(N_{G}(b)\cap V_{2}(g)=\{a\}\). Furthermore, for each \(i\in[k]\), \(N_{G}(u_{1}^{i})\cap V_{0}(g)=\{x_{i}\}\) and \(N_{G}(x_{i})\cap V_{2}(g)=\{u_{1}^{i}\}\). This implies the first and last condition of Lemma 8. Let \(v\in V\). If \(v_{1}\in V_{1}(g)\), then \(N_{G}(v_{1})\cap V_{2}(g)=\emptyset\). As \(D\) is a dominating set, \(D\cap N_{G}[v]\) is not empty. This implies that \(|N_{G}(v_{2})\cap V_{2}(g)|>1\). Thus, \(g\) is a minimal perfect Roman dominating function.
Let \(g\in\{0,1,2\}^{V}\) be a minimal perfect Roman dominating function on \(\widetilde{G}\) with \(f\leq g\). Since \(N_{G}(v_{2})\cap V_{0}(g)\subseteq N_{G}(v_{2})\cap V_{0}(f)=\emptyset\) for all \(v\in V\), \(\{v_{2}\mid v\in V\}\subseteq V_{1}(g)\). \(V_{0}(f)\cap N_{G}(a)=\{b\}\) implies \(g(b)=0\). Define \(W^{\prime}_{i}\coloneqq\{v_{1}\mid v\in W_{i}\}\) for \(i\in[k]\). As \(N_{G}(w_{1})\cap V_{0}(f)=\{x_{i}\}\) holds for all \(i\in[k]\) and \(w_{1}\in W^{\prime}_{i}\), \(|V_{2}(g)\cap W^{\prime}_{i}|\leq 1\) for all \(i\in[k]\). Define \(D\coloneqq\{w\in V\mid\exists i\in[k]:\{w_{1}\}=V_{2}(g)\cap W^{\prime}_{i}\}\). Hence, \(|D\cap W_{i}|\leq 1\) for each \(i\in[k]\). Let \(v\in V\). Since \(g(v_{2})=1\) and \(a\in N_{G}(v_{2})\cap V_{2}(g)\), \(N_{G}(v_{2})\cap(V_{2}(g)\setminus\{a\})\) is not empty. \(N_{G}(v_{2})\setminus\{a\}\subseteq\{v_{1}\mid v\in V\}\) implies \(N_{G}[v]\cap D\neq\emptyset\). Therefore, \(D\) is a dominating set. By adding an arbitrary vertex from each \(W_{i}\) with \(W^{\prime}_{i}\cap V_{2}(g)=\emptyset\) to \(D\), we get a solution to the \(k\)-Multicolored Dominating Set instance.
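As an illustration, the construction of \((\widetilde{G},f)\) can be coded as follows (our own encoding: the copies \(v_{1},v_{2}\) become pairs `(v, 1)`, `(v, 2)`, the vertices \(x_{j}\) become pairs `("x", j)`, and the colour classes are passed as a list of vertex sets); it is a sketch, not the authors' implementation.

```python
# A sketch (ours) of the reduction from k-Multicolored Dominating Set.

def w2_hardness_instance(adj, partition):
    """Build (G~, f) from a graph adj and a list `partition` of colour classes."""
    k = len(partition)
    Vt = {("x", j) for j in range(1, k + 1)} | {"a", "b"}
    Vt |= {(v, 1) for v in adj} | {(v, 2) for v in adj}
    adj_t = {x: set() for x in Vt}

    def add(x, y):
        adj_t[x].add(y)
        adj_t[y].add(x)

    add("a", "b")
    for v in adj:
        add("a", (v, 2))
        for u in adj[v] | {v}:                 # u in N_G[v]
            add((v, 1), (u, 2))
    for j, Wj in enumerate(partition, start=1):
        for v in Wj:
            add((v, 1), ("x", j))
    # pre-solution f: X and b get 0, all copies get 1, a gets 2
    f = {x: 1 for x in Vt}
    f.update({("x", j): 0 for j in range(1, k + 1)})
    f["b"] = 0
    f["a"] = 2
    return adj_t, f
```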
## 6 Minimal Perfect Roman Domination and Minimal Roman Domination
In this section, we will take a look at the connection between minimal Roman dominating functions and minimal perfect Roman dominating functions. To this end, we will use the following theorem from [4].
Theorem 6.1: _Let \(G=(V,E)\) be a graph, \(f:\,V\rightarrow\{0,1,2\}\) be a function and let \(G^{\prime}\coloneqq G\left[V_{0}\left(f\right)\cup V_{2}\left(f\right)\right]\). Then, \(f\) is a minimal Roman dominating function if and only if the following conditions hold:_
Figure 2: Construction for Theorem 6.1
1. \(N_{G}\left[V_{2}\left(f\right)\right]\cap V_{1}\left(f\right)=\emptyset\)_,_
2. \(\forall v\in V_{2}\left(f\right):\:P_{G^{\prime},V_{2}\left(f\right)}\left(v \right)\nsubseteq\left\{v\right\}\)_, and_
3. \(V_{2}\left(f\right)\) _is a minimal dominating set on_ \(G^{\prime}\)_._
Let \(G=\left(V,E\right)\) be a graph. Define the sets of functions \(\mu-\mathcal{RDF}\left(G\right)=\left\{f:V\rightarrow\left\{0,1,2\right\} \mid f\text{ is a minimal Roman dominating function}\right\}\) and \(\mu-\mathcal{PRDF}\left(G\right)=\left\{f:V\rightarrow\left\{0,1,2\right\} \mid f\text{ is a minimal perfect Roman dominating function}\right\}\).
Theorem 3.1: _Let \(G=\left(V,E\right)\) be a graph. There is a bijection \(B\colon\mu-\mathcal{RDF}\left(G\right)\rightarrow\mu-\mathcal{PRDF}\left(G\right)\). Furthermore, \(B(f)\) and \(B^{-1}(g)\) can be computed in polynomial time (with respect to \(G\)) for each \(f\in\mu-\mathcal{RDF}\left(G\right)\) and \(g\in\mu-\mathcal{PRDF}\left(G\right)\)._
Proof: Let \(f\in\mu-\mathcal{RDF}\left(G\right)\) and \(g\in\mu-\mathcal{PRDF}\left(G\right)\). Define \(B(f)\) by the three sets
\[V_{0}(B(f))= \left\{v\in V_{0}(f)\mid\left|N_{G}(v)\cap V_{2}(f)\right|=1 \right\},\] \[V_{1}(B(f))= \left\{v\in V_{0}(f)\mid\left|N_{G}(v)\cap V_{2}(f)\right|\geq 2 \right\}\cup V_{1}(f),\] \[V_{2}(B(f))= \left.V_{2}(f).\right.\]
By definition of \(B\), \(\left|N_{G}(v)\cap V_{2}(f)\right|=1\) holds for all \(v\in V_{0}(B(f))\). Since each \(v\in V\) with \(f(v)=2\) needs a private neighbor other than itself by Condition 2 of Theorem 6.1, \(N_{G}(v)\cap V_{0}(B(f))\neq\emptyset\) holds. As any \(v\in V_{1}(f)\) has no neighbor in \(V_{2}(f)\) and any \(v\in V_{1}(B(f))\setminus V_{1}(f)\) has at least two neighbors in \(V_{2}(f)\), all conditions of Lemma 8 are met. Therefore, \(B(f)\in\mu-\mathcal{PRDF}\left(G\right)\).
Define \(B^{-1}(g)\) by the three sets
\[V_{0}(B^{-1}(g))= V_{0}(g)\cup\{v\in V_{1}(g)\mid\left|N_{G}(v)\cap V_{2}(g)\right|\geq 2\},\] \[V_{1}(B^{-1}(g))= \left\{v\in V_{1}(g)\mid\left|N_{G}(v)\cap V_{2}(g)\right|=0\right\},\] \[V_{2}(B^{-1}(g))= V_{2}(g).\]
By definition of \(B^{-1}\), \(N_{G}(v)\cap V_{2}(B^{-1}(g))=\emptyset\) for each \(v\in V_{1}(B^{-1}(g))\). By Lemma 8, each \(v\in V_{0}(g)\) has exactly one neighbor in \(V_{2}(g)\). Therefore, each vertex in \(V_{0}(B^{-1}(g))\) has a neighbor in \(V_{2}(B^{-1}(g))\) and \(V_{2}(B^{-1}(g))\) is a dominating set on \(G[V_{0}(B^{-1}(g))\cup V_{2}(B^{-1}(g))]\). Lemma 8 implies that each \(v\in V_{2}(g)\) has a \(u\in N_{G}(v)\cap V_{0}(g)\) with \(\{v\}=N_{G}(u)\cap V_{2}(g)\). Hence, \(u\) is a private neighbor of \(v\) and \(B^{-1}(g)\in\mu-\mathcal{RDF}\left(G\right)\).
This leaves to show that \(B^{-1}(B(f))=f\) and \(B(B^{-1}(g))=g\). Trivially, \(V_{2}(B^{-1}(B(f)))=V_{2}(f)\) and \(V_{2}(B(B^{-1}(g)))=V_{2}(g)\) hold. Let \(v\in V_{1}(f)\). By Condition 1 of Theorem 6.1, we know \(N_{G}(v)\cap V_{2}(f)=\emptyset\). This implies \(v\in V_{1}(B(f))\cap V_{1}(B^{-1}(B(f)))\). Let \(v\in V_{0}(f)\) with \(\left|N_{G}(v)\cap V_{2}(f)\right|=1\). Thus, \(v\in V_{0}(B(f))\cap V_{0}(B^{-1}(B(f)))\). Let \(v\in V_{0}(f)\) with \(\left|N_{G}(v)\cap V_{2}(f)\right|\geq 2\). Therefore, \(v\in V_{1}(B(f))\) holds. The definition of \(B^{-1}\) implies \(v\in V_{0}(B^{-1}(B(f)))\). Hence, \(B^{-1}(B(f))=f\). Assume \(v\in V_{0}(g)\). Lemma 8 implies \(\left|N_{G}(v)\cap V_{2}(g)\right|=1\). Therefore, \(v\in V_{0}(B(B^{-1}(g)))\cap V_{0}(B^{-1}(g))\). For \(v\in V_{1}(g)\) with \(N_{G}(v)\cap V_{2}(g)=\emptyset\), the construction of the functions \(B,B^{-1}\) implies \(v\in V_{1}(B(B^{-1}(g)))\cap V_{1}(B^{-1}(g))\). Let \(v\in V_{1}(g)\) with \(\left|N_{G}(v)\cap V_{2}(g)\right|\geq 2\). This leads to \(v\in V_{0}(B^{-1}(g))\) and \(v\in V_{1}(B(B^{-1}(g)))\). Hence, \(B(B^{-1}(g))=g\). Therefore, the theorem holds.
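The two maps are straightforward to implement; the sketch below (our own transcription of the definitions of \(B\) and \(B^{-1}\), under the usual adjacency-set encoding) can be used, e.g., to check the bijection property on small instances.

```python
# A sketch (ours) of the bijection B and its inverse from the proof above;
# graphs are adjacency-set dictionaries, functions are vertex -> {0, 1, 2}.

def B(adj, f):
    """Map a minimal Roman dominating function f to a minimal perfect one."""
    V2 = {v for v in adj if f[v] == 2}
    return {v: (2 if v in V2 else
                1 if f[v] == 1 or len(adj[v] & V2) >= 2 else
                0)
            for v in adj}

def B_inverse(adj, g):
    """Map a minimal perfect Roman dominating function g back to f."""
    V2 = {v for v in adj if g[v] == 2}
    return {v: (2 if v in V2 else
                0 if g[v] == 0 or (g[v] == 1 and len(adj[v] & V2) >= 2) else
                1)
            for v in adj}
```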
This bijection is a so-called _parsimonious reduction_. This is a class of reductions designed for enumeration problems. For more information, we refer to [38]. Even without this paper, Theorem 11 implies some further results thanks to [4].
Corollary 6: _There are graphs of order \(n\) that have at least \(\sqrt[5]{16}^{n}\in\Omega(1.7411^{n})\) many minimal perfect rdf._
Theorem 12: _There is a polynomial-space algorithm that enumerates all minimal perfect rdf of a given graph of order \(n\) with polynomial delay and in time \(\mathcal{O}^{*}(1.9332^{n})\)._
According to [5], we get some further results for special graph classes: Let \(G=(V,E)\), with \(n\coloneqq|V|\) being the order of \(G\). If \(G\) is a forest or an interval graph, then there is an enumeration algorithm for minimal perfect Roman dominating functions that runs in time \(\mathcal{O}^{*}(\sqrt{3}^{n})\) with polynomial delay. For both graph classes, a graph with many isolated edges is an example of a graph with \(\sqrt{3}^{n}\) many minimal perfect Roman dominating functions. This is also the worst-case example for chordal graphs that is known so far. We can enumerate all minimal perfect Roman dominating functions of chordal graphs in time \(\mathcal{O}(1.8940^{n})\). If \(G\) is a split or cobipartite graph, then we can enumerate all minimal perfect Roman dominating functions of \(G\) in \(\mathcal{O}^{*}(\sqrt[3]{3}^{n})\) with polynomial delay. Further, both classes include graphs of order \(n\) with \(\Omega(\sqrt[3]{3}^{n})\) many minimal perfect Roman dominating functions. There is also a polynomial-time recursive algorithm to count all minimal perfect Roman dominating functions of paths.
Remark 4: It should be mentioned that the function \(B\) preserves minimality, but this does not mean that if \(f\) is a minimum Roman dominating function, then \(B(f)\) is a minimum perfect Roman dominating function. For this, we consider the graph \(G=(V,E)\) with \(V\coloneqq\{v_{1},\ldots,v_{8}\}\) and
\[E\coloneqq\{\{v_{1},v_{3}\},\{v_{2},v_{3}\},\{v_{4},v_{3}\},\{v_{5},v_{3}\}, \{v_{4},v_{6}\},\{v_{5},v_{6}\},\{v_{7},v_{6}\},\{v_{8},v_{6}\}\}.\]
Since \(v_{1},v_{2}\) and \(v_{7},v_{8}\) are pairs of false twins and \(v_{1},v_{7}\) have no common neighbors, for each Roman dominating function \(f\) on \(G\), \(\omega\left(f\right)\geq 4\). Therefore, \(f\in\{0,1,2\}^{V}\) with \(V_{0}(f)=\{v_{1},v_{2},v_{4},v_{5},v_{7},v_{8}\}\), \(V_{1}(f)=\emptyset\) and \(V_{2}(f)=\{v_{3},v_{6}\}\) is a minimum Roman dominating function. \(B(f)\) is given by \(V_{0}(B(f))=\{v_{1},v_{2},v_{7},v_{8}\}\)
Figure 3: Counter-example of Remark 4
\(V_{1}(B(f))=\{v_{4},v_{5}\}\) and \(V_{2}(B(f))=\{v_{3},v_{6}\}\). The weight is \(\omega\left(B(f)\right)=6\). The minimal perfect Roman dominating function \(g\in\{0,1,2\}^{V}\) with \(V_{0}(g)=\{v_{1},v_{2},v_{4},v_{5}\}\), \(V_{1}(g)=\{v_{6},v_{7},v_{8}\}\) and \(V_{2}(g)=\{v_{3}\}\) fulfills \(\omega\left(g\right)=5\). Hence, \(B(f)\) is not a minimum perfect Roman dominating function.
Let \(h\in\{0,1,2\}^{V}\) be a minimal perfect Roman dominating function. For \(h(v_{3})=h(v_{6})=2\), \(h=B(f)\), as \(v_{1},v_{2},v_{7},v_{8}\in N(V_{2}(h))\) are pendant. Assume \(h(v_{3})=2\neq h(v_{6})\). If \(h(v_{4})=2\) (respectively \(h(v_{5})=2\)), then \(h(v_{7})=h(v_{8})=1\) and \(\omega\left(h\right)\geq 6\). For \(h(v_{7})=2\) (respectively \(h(v_{8})=2\)), \(h(v_{8})=1\) (respectively \(h(v_{7})=1\)) and \(\omega\left(h\right)\geq 5=\omega\left(g\right)\). As \(v_{1},v_{2}\notin V_{2}(h)\) for \(h(v_{3})=2\), the remaining possibility is \(h=g\). Thus, \(\omega\left(g\right)=5\leq\omega\left(h\right)\) holds for each minimal perfect Roman dominating function with \(h(v_{3})=2\neq h(v_{6})\). Analogously, for perfect Roman dominating functions \(h\) with \(h(v_{3})\neq 2=h(v_{6})\), \(\omega\left(g\right)=5\leq\omega\left(h\right)\). Let \(h(v_{3})\neq 2\) and \(h(v_{6})\neq 2\). Since \(N(v_{i})\subseteq\{v_{3},v_{6}\}\) for \(i\in\{1,2,4,5,7,8\}\), \(v_{1},v_{2},v_{4},v_{5},v_{7},v_{8}\notin V_{0}(h)\). Hence, \(\omega\left(h\right)\geq 6\) and \(g\) is a minimum perfect Roman dominating function. Furthermore, \(B^{-1}(g)=g\) is not a minimum Roman dominating function.
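The weights in this remark are easy to reproduce numerically; the following small script (our own, applying the definition of \(B\) directly to this 8-vertex graph) prints the weights \(4\), \(6\) and \(5\) of \(f\), \(B(f)\) and \(g\).

```python
# Numerical check (ours) of Remark 4: build the 8-vertex graph, apply the
# definition of B to the minimum Roman dominating function f, and compare
# weights with the minimal perfect Roman dominating function g.
edges = [(1, 3), (2, 3), (4, 3), (5, 3), (4, 6), (5, 6), (7, 6), (8, 6)]
adj = {v: set() for v in range(1, 9)}
for x, y in edges:
    adj[x].add(y)
    adj[y].add(x)

f = {v: 2 if v in (3, 6) else 0 for v in adj}               # omega(f) = 4
V2 = {3, 6}
Bf = {v: (2 if v in V2 else 1 if len(adj[v] & V2) >= 2 else 0) for v in adj}
g = {v: 2 if v == 3 else (0 if v in (1, 2, 4, 5) else 1) for v in adj}

print(sum(f.values()), sum(Bf.values()), sum(g.values()))   # 4 6 5
```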
## 7 Conclusion
We presented polynomial-time algorithms for Unique Response Roman Domination on cobipartite and split graphs and one for Perfect Roman Domination on cobipartite graphs. On split graphs, Perfect Roman Domination is NP-complete but we provided an FPT-algorithm, parameterized by solution size. Then we gave an \(\mathcal{O}^{*}\left(\sqrt[3]{3}^{n}\right)\) enumeration algorithm for unique response Roman dominating functions on graphs of order \(n\) without isolated vertices. This is an optimal algorithm as we also found a family of graphs without isolated vertices of order \(n\) and \(\sqrt[3]{3}^{n}\) many unique response Roman dominating functions. Although the extension version of Perfect Roman Domination is NP-complete, proven to be even \(\mathsf{W}[1]\)-complete if parameterized by \(\omega\left(f\right)\) and \(\mathsf{W}[2]\)-complete if parameterized by \(|V_{0}(f)|\), we showed that all minimal perfect Roman dominating functions of a graph of order \(n\) can be enumerated in \(\mathcal{O}^{*}(1.9332^{n})\) with polynomial delay. This is interesting, as most often polynomial delay is linked to a polynomial-time decision algorithm for the extension version of the corresponding property. The main technique is to devise bijections to objects that can be enumerated with polynomial delay, as (in our case) minimal Roman dominating functions. This technique can also be applied to enumerate all minimal unique response strong Roman dominating functions, as introduced in [32].
It could also be interesting to consider enumeration algorithms for other variations of Roman dominating functions such as double/connected/total Roman dominating functions. In particular, it would be interesting to study under which circumstances polynomial-delay enumeration is possible. Another interesting research direction is to look into other graph classes which we did not consider in this paper, now focusing on an input-sensitive analysis. For example, we do not know of any enumeration algorithm for minimal Roman dominating functions/perfect Roman dominating functions on bipartite graphs which is better
than the general one, although this class looks similar to the ones of cobipartite and split graphs where we could achieve considerable improvements over the general case, see [5].
|
2305.19695 | Causal discovery for time series with constraint-based model and PMIME
measure | Causality defines the relationship between cause and effect. In multivariate
time series field, this notion allows to characterize the links between several
time series considering temporal lags. These phenomena are particularly
important in medicine to analyze the effect of a drug for example, in
manufacturing to detect the causes of an anomaly in a complex system or in
social sciences... Most of the time, studying these complex systems is made
through correlation only. But correlation can lead to spurious relationships.
To circumvent this problem, we present in this paper a novel approach for
discovering causality in time series data that combines a causal discovery
algorithm with an information theoretic-based measure. Hence the proposed
method allows inferring both linear and non-linear relationships and building
the underlying causal graph. We evaluate the performance of our approach on
several simulated data sets, showing promising results. | Antonin Arsac, Aurore Lomet, Jean-Philippe Poli | 2023-05-31T09:38:50Z | http://arxiv.org/abs/2305.19695v1 | # Causal discovery for time series with constraint-based model and PMIME measure
###### Abstract.
Causality defines the relationship between cause and effect. In multivariate time series field, this notion allows to characterize the links between several time series considering temporal lags. These phenomena are particularly important in medicine to analyze the effect of a drug for example, in manufacturing to detect the causes of an anomaly in a complex system or in social sciences... Most of the time, studying these complex systems is made through correlation only. But correlation can lead to spurious relationships. To circumvent this problem, we present in this paper a novel approach for discovering causality in time series data that combines a causal discovery algorithm with an information theoretic-based measure. Hence the proposed method allows inferring both linear and nonlinear relationships and building the underlying causal graph. We evaluate the performance of our approach on several simulated data sets, showing promising results.
Causality, Constraint-based causal discovery, Information theory, Time series
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
+
Footnote †: journal: Computer Science
Under the assumption of causal stationarity [8], which states that all causal relationships remain constant in direction throughout time, the full time causal graph can be reduced to a window causal graph. This representation only provides a temporal section of the full time causal graph, obtained after selecting a maximal time lag \(\tau_{max}\) in the past and a maximal time ahead. The last well-known way to represent multivariate time series is the summary causal graph, where all causal relationships are shown by keeping only the variables and the edges between them. Thus, in this type of graph, the lag between a cause \(X\) and its effect \(Y\) is not shown. However, the summary causal graph has the advantage of being less impacted by noisy data, as it requires fewer computations. Figure 1 presents a window causal graph on the left and its corresponding summary causal graph on the right.
Under specific assumptions (presented in [6; 9]), those graphs encode conditional independence, leading to Directed Acyclic Graphs (DAGs). In a DAG \(\mathcal{G}\), arrows connecting two nodes stand for direct dependency, while no arrow between two nodes shows either independence or conditional independence. However, an issue that may occur is that two different DAGs can encode the same dependencies. In this context, we say that those graphs belong to the same Markov equivalence class [10].
Searching for causal relationships in data can be seen as a statistical estimation of parameters describing a causal structure. Such a problem is categorized under the term _causal discovery_, which aims at using observational data to analyse and identify properties or causal relationships of a system. When dealing with temporally varying systems, causality is generally associated with Granger causality [11]. It states that variable \(A\) Granger-causes variable \(B\) if the prediction of the future of \(B\) is improved when the knowledge of the past of \(A\) becomes available.
A major restriction of this model is that it focuses on linear relations. Another issue with Granger causality is that, by definition, it does not account for instantaneous effects. Despite this, there have been various formulations of Granger causality, and methods have been developed to express this notion on the basis of observed time series. In a typical approach to check for Granger causality, two models are fitted: a full model that includes the past of every time series involved, and a reduced model that excludes the past of the investigated time series. The two models are then compared with respect to a measure of prediction quality. If the full model performs better than the reduced one, causality is inferred. This approach is called Pairwise Granger Causality (PWGC), and extensions exist to process multivariate time series [12; 13]. Other methods have been developed and are grouped under three families: _functional causal models_ (FCMs), _score-based approaches_ and _constraint-based approaches_.
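To make the full/reduced model comparison concrete, the following is a minimal sketch of pairwise Granger causality built on the F-test provided by statsmodels; the function name, the significance level and the toy data are illustrative assumptions rather than the implementations used in the works cited above.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

def pwgc(cause, effect, max_lag=3, alpha=0.03):
    """Return True if `cause` Granger-causes `effect` at some lag <= max_lag."""
    # grangercausalitytests expects a 2D array whose second column is the
    # candidate cause of the first column (full vs. reduced VAR comparison).
    data = np.column_stack([effect, cause])
    results = grangercausalitytests(data, maxlag=max_lag)
    # Keep the smallest p-value of the SSR-based F-test across the tested lags.
    p_values = [results[lag][0]["ssr_ftest"][1] for lag in range(1, max_lag + 1)]
    return min(p_values) < alpha

rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = np.concatenate([np.zeros(2), x[:-2]]) + 0.1 * rng.normal(size=500)  # y depends on x lagged by 2
print(pwgc(x, y))   # expected: True
print(pwgc(y, x))   # expected: False in general
```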
The FCMs are based on Structural Equation Models [1]. Their objective is to make a correspondence between a graph \(\mathcal{G}\) and a system of equations in which each variable is expressed in terms of its direct causes and an additional noise. In the frame of time series data, some of the most popular algorithms are the VarLiNGAM [14] and TiMINo [15]. The first one is an extension of LiNGAM [16] for time series, with the use of auto-regressive vectors, while the second one discovers causality through statistical tests to look for independence between residuals and noise of a certain fitted time series model. Both methods are originally developed considering simplifying assumptions on the relationships between variables.
Score-based models start with an empty graph and iteratively add needed edges (or remove unnecessary ones). They consist of searching for graphs that maximize the goodness of fit to the data distribution. For time series, a recent algorithm, DYNOTEARS [17], considers dynamic Bayesian networks in which variables are time series. The major issue is that this approach does not guarantee that the inferred graph belongs to an equivalence class.
Lastly, constraint-based approaches try to efficiently search for graphs belonging to a Markov equivalence class that best fits the set of conditional independence relations found in the data. The two main algorithms among constraint-based methods are the PC algorithm and the Fast Causal Inference (FCI) algorithm [6]. The former assumes causal sufficiency while the latter does not. We will further explore the PC algorithm.
The PC algorithm starts with a complete undirected graph. It ends with an oriented graph in which only edges connecting causally related nodes are kept. This algorithm is composed of three different parts. The first one estimates the skeleton of the graph by testing (conditional) independence between variables: if two variables are independent or conditionally independent, the edge between them is removed. The second one identifies the \(v\)-structures in the graph and directs the concerned edges, using the separation sets found in the first phase. The last one uses the knowledge brought by the previous steps to finish directing the graph. The PC algorithm never returns a DAG, but a completed partially directed acyclic graph (CPDAG) where some edges may remain unoriented, which is guaranteed to belong to the right Markov equivalence class.
For time series, several methods have been proposed based on the PC algorithm, such as oCSE [18] or PCMCI [19] and its extensions, using the (Conditional) Mutual Information ((C)MI) to measure dependencies between variables. Indeed, the information theory framework limits the assumptions required on the data distribution as well as on the relationships between variables. As such, some information theoretic measures have been developed specifically to process temporal variables, such as the Transfer Entropy [20] (TE), generalized as the partial Transfer Entropy [21] (PTE), or the Partial Mutual Information from Mixed Embedding [7] (PMIME).
PMIME is an asymmetric and non-parametric measure designed to detect direct couplings in time series. It is derived from an embedding scheme based on a selection criterion, the conditional
Figure 1. On the left (a), a window causal graph and its corresponding summary causal graph on the right (b).
mutual information. In the multivariate case, to assess whether a variable \(X\) drives a variable \(Y\), conditional on a set of variables \(Z=\{Z^{0},Z^{1},\ldots,Z^{g-2}\}\), it iteratively builds an embedding vector \(\mathbf{w}\) from the lagged components extracted from \((X,Y,Z)\) that best explains the future of \(Y\), noted \(Y_{t}^{T}=(Y_{t+1},...,Y_{t+T})\). Each iteration is called an embedding cycle and uses a stopping criterion to accept or reject a component. A component is accepted if the information it brings strictly increases the information already contained in the embedding vector.
Hence \(\mathbf{w}\) is formed from \(k\) lagged variables, selected by the CMI and can be decomposed as \(\mathbf{w}_{t}=(w_{t}^{x},w_{t}^{y},w_{t}^{Z})\), where \(w_{t}^{x}\) are the components of \(X\) selected in the process, \(w_{t}^{y}\) those from \(Y\) and all the remaining ones are denoted as \(w_{t}^{Z}\), see more details on the embedding process in (Dong et al., 2017; Wang et al., 2018).
We then quantify the causal effect from \(X\) to \(Y\), conditional on \(Z\), by:
\[R_{X\to Y|Z}=\frac{I(Y_{t}^{T};\mathbf{w}_{t}^{x}|\mathbf{w}_{t}^{y},\mathbf{w}_{t}^{Z})}{I(Y_{t}^{T};\mathbf{w}_{t})}.\]
The embedding process alone could already serve as a measure of causality: if \(\mathbf{w}_{t}^{x}\) is empty, it means that \(X\) has no influence on \(Y\), which translates into the measure \(R\) as \(R=0\). Furthermore, this measure is bounded between 0 and 1, where 0 means independence and 1 means that the future of \(Y\) is totally driven by \(X\).
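A minimal sketch of how the ratio \(R_{X\to Y|Z}\) can be computed once the mixed embedding vector has been built is given below; the helper `knn_cmi(a, b, cond, k)` stands for a k-nearest-neighbour (Kraskov/Frenzel-Pompe style) conditional mutual information estimator and, like the splitting convention of the embedding vector, is an assumption for illustration rather than the reference implementation.

```python
import numpy as np

def pmime_ratio(y_future, w_x, w_y, w_z, k, knn_cmi):
    """R = I(y_future; w_x | w_y, w_z) / I(y_future; w), bounded in [0, 1]."""
    if w_x.size == 0:
        # No lagged component of X was selected during the embedding cycles,
        # so X brings no extra information about the future of Y.
        return 0.0
    w = np.hstack([w_x, w_y, w_z])
    conditioning = np.hstack([w_y, w_z]) if (w_y.size or w_z.size) else None
    numerator = knn_cmi(y_future, w_x, conditioning, k)   # I(Y^T; w^x | w^y, w^Z)
    denominator = knn_cmi(y_future, w, None, k)           # I(Y^T; w)
    return float(np.clip(numerator / max(denominator, 1e-12), 0.0, 1.0))
```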
## 3. Proposed Approach
### Motivations and assumptions
In this work, we consider multivariate time series. To avoid restricting real-world assumptions, we focus on variables holding linear and non-linear, possibly lagged, relationships. Additionally, the time series data are not assumed to follow any particular probability distribution. Lastly, we assume that all causes of each effect are observed, which is known as causal sufficiency. As such, most of the state-of-the-art algorithms are not adapted to our framework. In a recent survey (Zhou et al., 2017), all algorithms, except PCMCI, tsFCI (Zhou et al., 2017) and oCSE, work under linear relationships or with a particular data distribution (e.g. Gaussian or vector auto-regressive models). Among the remaining ones, oCSE considers only a maximum lag of 1, which might not be sufficient to fully grasp temporal relationships in real-world time series. Based on the FCI algorithm, tsFCI aims at discovering hidden confounders, which is not in line with our causal sufficiency assumption. Therefore, the methods that best meet our requirements are PCMCI and its derivatives. Those are constraint-based methods, so they require a suited causality inference measure. One issue is that PCMCI uses either Partial Correlation or Mutual Information to find independence and conditional independence. Partial Correlation assumes linear links and is therefore not suitable to our framework, and (Conditional) Mutual Information alone might not be able to fully detect lagged relationships. Another issue is that PCMCI uses a window causal graph, which can be more sensitive to noise and costly due to the need to identify all causal relations at all lags in the window.
To address these problems, a combination of the PC algorithm, a constraint-based method, and the PMIME measure is developed to infer causality in time series data in our framework. Indeed, the PMIME measure is adapted to quantify temporal links with lagged relations. It assumes no particular probability distribution of the system, can process linear and non-linear links, and has few free parameters to adjust.
The Partial Transfer Entropy also satisfies the preceding properties, but PMIME has been shown to be more efficient than the PTE for nonlinear systems (Zhou et al., 2017). Moreover, the fact that the PMIME measure is bounded has two advantages: firstly, it does not require an additional significance test; secondly, this bounded score makes the results easier to interpret.
The PC-PMIME method retains some important assumptions. The first major one is causal sufficiency, due to the limits of the PC algorithm: it cannot discover hidden confounders or selection-bias variables. Additionally, PMIME requires stationary time series. Lastly, the estimation of the entropy, and through it of the CMI, requires long time series. Also, due to the form of the PMIME measure, the PC-PMIME algorithm does not compute auto-correlation. Indeed, in PMIME, if \(X=Y\), then the variables of \(X\) are the same as those from \(Y\) in the embedding vector, thus \(w^{X}=w^{Y}\) and \(R_{X\to Y}=R_{Y\to Y}=0\). The measure of auto-correlation could be integrated in future work.
### PC-PMIME algorithm
To find the causal structure in time series data, a causal discovery algorithm, the PC algorithm, is merged with the non-parametric measure PMIME. Only the first phase of the PC algorithm is used: starting from a fully connected graph, it finds the skeleton by successively testing every edge between each pair of nodes. For instance, an edge between \(X\) and \(Y\) is removed if \(R_{X\to Y}=0\), where \(R\) is the PMIME measure. When all edges have been tested and some have been removed, the algorithm continues, for the remaining edges, by checking whether the two connected nodes are conditionally independent. The conditioning set (or separation set) is first composed of one additional variable connected to \(X\) or \(Y\), and its size increases until conditional independence is found or until all edges linked to \(X\) and \(Y\) have been tested. As the PMIME is asymmetric, the algorithm tests both directions, from \(X\) to \(Y\) and from \(Y\) to \(X\), to make sure it does not create spurious links. In our implementation, an edge is not removed as soon as \(R=0\), but only after the algorithm has tested all edges for one size of the conditioning set. This is known as the PC-stable method (Zhou et al., 2017), which prevents the PC algorithm from being order-dependent.
The algorithm is described in Algorithm 1; a condensed sketch of the skeleton phase is also given after the footnote below. PC-PMIME1 takes as inputs the data (\(g\) time series of length \(n\)), a maximal lag \(\tau_{max}\), \(k\) the number of nearest neighbors for the estimation of the CMI, and \(A\), the value of the stopping criterion in the building of the embedding vector. The algorithm returns \(\mathcal{G}\), the estimated oriented causal graph. We observe that, in general, \(R\) is not exactly equal to 0 in case of independence, but rather around \(10^{-15}\), due to estimation errors. We therefore consider that there is independence when \(R\) is close to 0 (we note \(R\approx 0\Leftrightarrow R<10^{-10}\)).
Footnote 1: PMIME and PC-PMIME are implemented in [https://github.com/AArasac/CD_for_TS_with_CSM_and_PMIME](https://github.com/AArasac/CD_for_TS_with_CSM_and_PMIME)
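The sketch below condenses the PC-stable skeleton phase described above. The function `pmime(data, i, j, cond, ...)` is assumed to return \(R_{X^{j}\to X^{i}|cond}\); it is a stand-in for the actual implementation, and the independence threshold mirrors the \(R<10^{-10}\) rule mentioned above.

```python
from itertools import combinations

def pc_pmime_skeleton(data, pmime, tau_max=3, k=None, A=0.03, eps=1e-10):
    g = data.shape[1]                                   # number of time series
    adj = {i: set(range(g)) - {i} for i in range(g)}    # fully connected graph
    cond_size = 0
    while any(len(adj[i]) > cond_size for i in adj):
        to_remove = []                                  # PC-stable: apply removals after the sweep
        for i in range(g):
            for j in list(adj[i]):
                neighbours = (adj[i] | adj[j]) - {i, j}
                for cond in combinations(sorted(neighbours), cond_size):
                    r_ij = pmime(data, i, j, cond, tau_max=tau_max, k=k, A=A)
                    r_ji = pmime(data, j, i, cond, tau_max=tau_max, k=k, A=A)
                    if r_ij < eps and r_ji < eps:       # (conditional) independence in both directions
                        to_remove.append((i, j))
                        break
        for i, j in to_remove:
            adj[i].discard(j)
            adj[j].discard(i)
        cond_size += 1
    return adj   # undirected skeleton; edges are then oriented with the asymmetric R values
```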
Although the PC-PMIME algorithm is not designed to search for latent variables, the implementation is made in such a way that if \(R_{X^{j}\to X^{i}}>0\) and \(R_{X^{i}\to X^{j}}>0\), a double-headed arrow \((X^{j}\leftrightarrow X^{i})\) is produced. This indicates that the two variables are mutually correlated and that there is potentially a common confounder between them.
## 4. Experiments
Our implementation of PC-PMIME is tested on simulated data from a recent survey (Kang et al., 2017). In this survey, the authors simulated basic causal structures often encountered in time series. It contains a total of 5 different structures, each simulated 10 times over 4000 observations. From the five structures, only four are retained, as the last one contains latent variables. Those four are the _Fork_, the \(v\)_-structure_, the _Mediator_, and the _Diamond_ structure, as shown in Figure 2. The Fork structure corresponds to a common confounder, while the \(v\)-structure is self-explanatory. The Mediator corresponds to a collider with one of its parents causing the other. Finally, the Diamond structure is a common confounder leading to a \(v\)-structure. The data are simulated with linear relations for auto-correlation and nonlinear links between different time series, through simple nonlinear functions.
On each structure, PC-PMIME as well as four other methods are run. Those methods are the Pairwise Granger Causality (PWGC2), VarLiNGAM3, DYNOTEARS4 and PCMCI5 with Partial Correlation as the measure of conditional independence. These serve only as baselines for comparison; other algorithms exist but are not tested here. For each algorithm, including PC-PMIME, the maximal lag is set to \(\tau_{max}=3\), selected empirically on these datasets. For PWGC, the statistical test comparing the full and the restricted models is the \(F\)-test, with a significance level set at \(\alpha=0.03\). VarLiNGAM uses a Lasso penalization for the estimation of the structural VAR model, whose parameters are selected by the Bayesian Information Criterion (BIC). The parametrization of DYNOTEARS is done with the recommended values \(\lambda_{w}=0.05=\lambda_{a}\) and \(w\_threshold=0.01\) from (Kang et al., 2017). Lastly, the significance value for the Partial Correlation measure in PCMCI is set to \(\alpha=0.03\). For PC-PMIME, the number of nearest neighbors \(k\) used in the estimation of the CMI is set to \(k=0.01n\) (see (Kang et al., 2017; Kang et al., 2017)). After several tests, it appears that if the stopping criterion \(A\) of the embedding cycles in PMIME is close to 0, such as \(A=0.01\), it is too conservative, while if it is greater than 0.05, it is too permissive. Thus, the stopping criterion is fixed to \(A=0.03\).
Footnote 2: PWGC algorithm can be found on [https://www.statsmodels.org/](https://www.statsmodels.org/)
(under \(n=1000\)). This is due to the PMIME measure being based on an estimation of the mutual information. Indeed, the mutual information, and more precisely the \(knn\) estimation of the entropy, is asymptotically robust and hence behaves well for larger data sizes.
## 5. Conclusion and Future Work
In this paper, we present a method to infer causal relationships between multivariate time series while making few assumptions about the data. As such, PC-PMIME is presented as a method to build a causal graph from nonlinear multivariate time series. It provides promising results on simulated data with no latent variables. However, it still has several limitations to address, such as improving the orientation phase of the edges, computing instantaneous relationships, taking into account hidden common confounders, and testing on real data.
In the current PC-PMIME algorithm, edges are only oriented from the result of the asymmetrical PMIME measure. However, it may not catch true causality and may also lead to spurious conclusions in the building of the causal graph. Thus, adapting the orientation rules of the PC algorithm to time series might be a better approach and should lead to an improved version of our method.
Figure 3. \(F1\)-score of the 5 methods as a function of \(n\), the length of the time series. Each graph corresponds to one basic causal structure.
Figure 2. Simulated basic causal structures
Then, to measure instantaneous relationships, the idea could be to work with a different type of causal graph. In this study, summary causal graphs are used, but extended summary causal graphs might be more appropriate [29] to consider those relationships. With such graphs, the PMIME measure could infer the lagged causal relationships, while a simple measure of (conditional) dependence could handle the instantaneous causes.
Lastly, taking into account latent variables may be the greatest challenge. The PC algorithm cannot be used anymore, as it works under the assumption of causal sufficiency. To confirm this, we run the different methods on the last dataset proposed in [23], which contains two hidden confounders and hence no longer respects the causal sufficiency constraint. The parameters for each method are the same as before. Figure 4 presents the true graph and the results obtained on these simulations. As expected, the scores obtained by the different algorithms are below those found on the other datasets with no latent variables. The one with the highest score is still the PC-PMIME algorithm, barely reaching a score of 0.6. This is likely due to its ability to represent mutual correlations, as exposed at the end of Section 3, but it is still not fully satisfying and needs to be improved.
Thus, handling latent variables requires the use of another algorithm, such as the FCI algorithm, adapted to time series. The idea is then to merge an FCI-like algorithm with the PMIME measure in future work.
|
2309.10202 | Stabilizing RLHF through Advantage Model and Selective Rehearsal | Large Language Models (LLMs) have revolutionized natural language processing,
yet aligning these models with human values and preferences using RLHF remains
a significant challenge. This challenge is characterized by various
instabilities, such as reward hacking and catastrophic forgetting. In this
technical report, we propose two innovations to stabilize RLHF training: 1)
Advantage Model, which directly models advantage score i.e., extra reward
compared to the expected rewards and regulates score distributions across tasks
to prevent reward hacking. 2) Selective Rehearsal, which mitigates catastrophic
forgetting by strategically selecting data for PPO training and knowledge
rehearsing. Our experimental analysis on public and proprietary datasets
reveals that the proposed methods not only increase stability in RLHF training
but also achieve higher reward scores and win rates. | Baolin Peng, Linfeng Song, Ye Tian, Lifeng Jin, Haitao Mi, Dong Yu | 2023-09-18T23:06:32Z | http://arxiv.org/abs/2309.10202v1 | # Stabilizing RLHF through Advantage Model and Selective Rehearsal
###### Abstract
Large Language Models (LLMs) have revolutionized natural language processing, yet aligning these models with human values and preferences using RLHF remains a significant challenge. This challenge is characterized by various instabilities, such as reward hacking and catastrophic forgetting. In this technical report, we propose two innovations to stabilize RLHF training: (_i_) _Advantage Model_, which directly models advantage score _i.e._, extra reward compared to the expected rewards and regulates score distributions across tasks to prevent reward hacking. (_ii_) _Selective Rehearsal_, which mitigates catastrophic forgetting by strategically selecting data for PPO training and knowledge rehearsing. Our experimental analysis on public and proprietary datasets reveals that the proposed methods not only increase stability in RLHF training but also achieve higher reward scores and win rates1.
Footnote 1: Work in progress
## 1 Introduction
Large language models (LLMs) have become a fundamental element in advancing natural language processing (NLP) and artificial intelligence (AI), showcasing an impressive ability to generate text that is both semantically and contextually relevant (OpenAI, 2023; Kopf et al., 2023; Touvron et al., 2023). Despite these advancements, LLMs risk engaging in undesirable behaviors, such as fabricating information or producing biased, toxic, or even dangerous content, since LLMs are trained on a wide array of data, which can include low-quality sources. This has highlighted the necessity of aligning LLMs with human values, intentions, and preferences (Brown et al., 2020; Ouyang et al., 2022; Bai et al., 2022a; Glaese et al., 2022).
Many approaches have been put forth to address the challenge of LLM alignment (Bai et al., 2022; OpenAI, 2023; Askell et al., 2021). Among these approaches, Reinforcement Learning from Human Feedback (RLHF) has demonstrated its efficacy in aligning language models with human preferences. RLHF serves as a key component of training SoTA LLMs, including exemplars such as OpenAI's GPT-4 (OpenAI, 2023), Anthropic's Claude (Bai et al., 2022a), Google's Sparrow (Glaese et al., 2022), Bard, and Meta's Llama 2-Chat (Touvron et al., 2023). RLHF elevates the capabilities of LLMs beyond the mere modeling of the distribution of their training data. It endows LLMs with the capacity to adapt their text generation distribution in a manner that is preferred by humans.
However, training LLMs using RLHF is undoubtedly challenging, as it demands an accurate and reliable reward model that approximates human judges, and a robust PPO algorithm for sustained policy improvements. Even with meticulous configurations, _instabilities_, _e.g._, gibberish (but high-reward) responses (Stiennon et al., 2020; Skalse et al., 2022) and forgetting learned knowledge, are usually observed during training, which leads to recurring failures. These instabilities have several causes: (_i_) different reward score distributions are learned for various categories by the reward model, potentially leading to _reward hacking_ issues (Skalse et al., 2022), a phenomenon where the model finds unintended ways to maximize the reward. As depicted in Figure 1a, the reward model learns noticeably disparate reward score distributions for the Code Generation and QA tasks, 2 out of 61 tasks present in the preference data. Even with reward score normalizations, the fluctuating means and variances can induce unexpected model behaviors, such as transferring the response patterns of Code Generation to QA examples due to the higher reward scores. (_ii_) over-optimizing with PPO on examples that were well-aligned with humans in the Supervised Fine-Tuning (SFT) stage triggers _catastrophic forgetting_ issues (McCloskey & Cohen, 1989; Gupta et al., 2023; Khetarpal et al., 2022). Models tend to overlook what was learned during the SFT stage, _i.e.,_ the PPO model underperforms the SFT model on expert-aligned examples 2, as shown in Figure 1b.
Footnote 2: Expert-aligned Examples are data samples that meet the standards and criteria delineated by experts and closely align with human preferences. These examples are used for SFT model training and evaluation.
Accordingly, in this technical report, we introduce two techniques to enhance the stability and effectiveness of RLHF training. Firstly, we propose the _Advantage Model_ to balance the reward score distributions across various categories, thus averting the reward hacking dilemma that is often induced by noticeable differences in score distributions. This is achieved by directly modeling the advantage score, _i.e.,_ the extra reward one response can obtain compared with the expected reward, and regulating the advantage score distribution dynamically during training, ensuring that the variances and means are maintained within a reasonable range. Secondly, we introduce _Selective Rehearsal_ to alleviate the catastrophic forgetting issue. We posit that not all data should be optimized equally in PPO training. As such, we propose a robust and effective data selector that automatically identifies which examples should be utilized for PPO training and which should be used to rehearse knowledge accumulated in the SFT stage, preventing the depreciation of the model's performance on expert-aligned examples over time. Experiments on both public and proprietary data have demonstrated that our Advantage Model successfully balances reward score distributions across various examples while preserving ranking precision, and guides PPO training to achieve a higher reward score and win rate compared to the SFT model. Furthermore, Selective Rehearsal is able to avoid over-optimizing by selecting the most suitable examples for PPO training, thereby sustaining the performance on expert-aligned examples.
Our contributions are summarized as follows:
* We analyze and identify several causes of instability in RLHF training, namely, imbalanced learned reward score distributions and over-optimization of certain PPO training data, which lead to reward hacking and catastrophic forgetting issues.
* We introduce the _Advantage Model_ to balance reward score distributions across various categories, and the _Selective Rehearsal_ strategy to discern which examples should be used
Figure 1: _Left: The distribution of reward scores for both the QA and Code Generation tasks. There is a noticeable disparity in the learned reward score distributions between the two tasks, despite the expectation that the distributions should be similar. Right: The win/loss rate over the SFT model on the forget set exhibits a significant decline. This drop in the win rate can be attributed to reward hacking and the phenomenon of catastrophic forgetting._
for PPO training and which should be reserved for rehearsing knowledge accrued in the SFT stage.
* Through extensive experiments on both public and proprietary datasets, we demonstrate that the _Advantage Model_ and _Selective Rehearsal_ are able to stabilize RLHF training, achieving higher reward scores and win rates.
## 2 Preliminary
In recent machine learning research, RLHF (Ouyang et al., 2022; Bai et al., 2022a) has emerged as a pivotal strategy for aligning LLMs with human goals (e.g. being helpful and harmless). RLHF typically follows the SFT phase, where SFT aligns an LLM with human objectives using teacher forcing on (prompt, response) pairs. However, despite this alignment, the LLM may still struggle with generalization when faced with unseen tasks.
Learning a reward function from the interaction between LLMs and humans, and optimizing LLMs with the learned reward function using reinforcement learning, has been shown to be an effective approach to solving the LLM alignment problem. Leike et al. (2018); Stiennon et al. (2020); Ouyang et al. (2022) proposed methods involving reinforcement learning from human feedback, where RMs are trained on a dataset of comparisons between two model outputs generated from the same input. The goal is to assign higher rewards to outputs preferred by human labelers over others. Typically, this is achieved by adding a value head that outputs a scalar value on top of a pre-trained transformer-based LM with the last unembedding layer removed. Specifically, the reward modeling loss is as follows:
\[\mathcal{L}_{\text{RM}}=-E_{(x,y_{c},y_{r})\sim D^{\text{RM}}}[\log(\sigma(r_{\theta}(x,y_{c})-r_{\theta}(x,y_{r})))] \tag{1}\]
where \(r_{\theta}(x,y)\) denotes the reward score for prompt \(x\) and response \(y\) with parameters \(\theta\), \(y_{c}\) is the preferred response of the pair \(y_{c}\) and \(y_{r}\), and \(D^{\text{RM}}\) is the pairwise comparison dataset.
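A minimal PyTorch sketch of such a reward model, a scalar value head on top of a pre-trained LM backbone trained with the pairwise ranking loss of Eq. (1), is shown below; the backbone loading, the last-token pooling, and all names are illustrative assumptions rather than the exact architecture used here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from transformers import AutoModel

class RewardModel(nn.Module):
    def __init__(self, backbone_name):
        super().__init__()
        self.backbone = AutoModel.from_pretrained(backbone_name)
        self.value_head = nn.Linear(self.backbone.config.hidden_size, 1)

    def forward(self, input_ids, attention_mask):
        hidden = self.backbone(input_ids, attention_mask=attention_mask).last_hidden_state
        last_idx = attention_mask.sum(dim=1) - 1                  # index of the last non-padding token
        pooled = hidden[torch.arange(hidden.size(0)), last_idx]   # one vector per sequence
        return self.value_head(pooled).squeeze(-1)                # r_theta(x, y), shape [batch]

def reward_ranking_loss(model, chosen, rejected):
    # Eq. (1): -log sigmoid(r(x, y_c) - r(x, y_r)), averaged over the batch.
    return -F.logsigmoid(model(**chosen) - model(**rejected)).mean()
```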
In light of these considerations, we introduce the Advantage Model (AM) for reward modeling. Analogous to the concept of the advantage function in reinforcement learning, the Advantage Model, denoted as \(a(x,y)\), quantifies the additional reward that response \(y\) can achieve over the expected reward \(e\) for prompt \(x\). This is formally defined as:
\[a_{\theta}(x,y)=r_{\theta}(x,y)-\mathbb{E}_{y\sim r^{\prime}(x)}[\frac{\pi_{ \phi}(y|x)}{\pi^{\prime}(y|x)}r_{\theta}(x,y)] \tag{5}\]
Here, the notation \(y\sim\pi^{\prime}(x)\) signifies all possible responses generated by a policy \(\pi^{\prime}(x)\) when given the input prompt \(x\). Since the comparison data is typically collected in many batches with different SFT or PPO models, we introduce \(\frac{\pi_{\phi}(y|x)}{\pi^{\prime}(y|x)}\), the importance weight term, to negate the bias introduced by the policy distribution shift. Intuitively, the extra reward gains of a good response \(y_{c}\) and the reward losses of a bad response \(y_{r}\) should be bounded by a margin \(m\). As such, the training objective of AM consists of two parts: a _ranking loss_ that aligns with the formulation in Equation 1, and a _bounding loss_ to ensure the well-calibrated bounding of AM scores. It is formally defined as follows:
\[\begin{split}\mathcal{L}_{\text{AM}}=-E_{(x,y_{c},y_{r})\sim D^{\text{RM}}}[\log(\sigma(a_{\theta}(x,y_{c})-a_{\theta}(x,y_{r})))\\ +\log(\sigma(m(x)-a_{\theta}(x,y_{c})))+\log(\sigma(m(x)+a_{\theta}(x,y_{r})))]\end{split} \tag{6}\]
where \(m(x)\)3 is the function that defines the permitted margin for prompt \(x\). However, it is infeasible to list every potential response to calculate the expected reward. To address this, we propose parameterizing the expected reward of the current policy, denoted as:
Footnote 3: We think that \(m(x)\) may have a connection with the complexity or difficulty involved in learning the reward function for prompts similar to \(x\). However, this is speculative and requires further investigation. We leave this aspect as a topic for future study and exploration. Throughout our experiments, we set \(m(x)\) as 2.5.
\[e_{\tau}(x)=\mathbb{E}_{y\sim\pi_{\phi}(x)}[r_{\theta}(x,y)] \tag{7}\]
By integrating the term representing the importance weight, we can reformulate the equation as follows:
\[a_{\theta}(x,y)=r_{\theta}(x,y)-\tfrac{N-K}{N}e_{\tau}(x)-\sum_{k=1}^{K}\tfrac{1}{N}\tfrac{\pi_{\phi}(y|x)}{\pi^{\prime}_{k}(y|x)}r_{\theta}(x,y) \tag{8}\]
where \(N\) serves as a hyperparameter that harmonizes the emphasis placed on the current policy model relative to alternate policy models. \(K\) specifies the number of alternate policy models utilized for comparison data collection. Additionally, \(\pi^{\prime}_{k}(y|x)\) indicates the probability derived from the \(k\)th policy model.
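A sketch of the AM objective in Eq. (6) is given below, assuming the advantage scores `a_chosen` and `a_rejected` have already been computed from the reward scores and the expected-reward correction of Eq. (8); the fixed margin follows the choice \(m(x)=2.5\) used in our experiments, and all names are illustrative.

```python
import torch
import torch.nn.functional as F

def advantage_model_loss(a_chosen, a_rejected, margin=2.5):
    """Eq. (6): ranking term plus two bounding terms keeping scores within +/- m(x)."""
    ranking = -F.logsigmoid(a_chosen - a_rejected)      # prefer y_c over y_r
    bound_upper = -F.logsigmoid(margin - a_chosen)      # push a(x, y_c) below m(x)
    bound_lower = -F.logsigmoid(margin + a_rejected)    # push a(x, y_r) above -m(x)
    return (ranking + bound_upper + bound_lower).mean()
```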
### PPO with Selective Rehearsal
In addition, we propose Selective Rehearsal to maintain the skills that are already acquired before RLHF. Selective rehearsal takes two major steps: representative example discovery and rehearsal training.
Representative example discovery. Given the policy \(\pi_{\phi}\) and PPO training prompts with policy outputs \(D^{\text{PPO}}=[(x_{1},y_{1}),(x_{2},y_{2})\dots]\), our goal is to select high-quality \((x,y)\) pairs from \(D^{\text{PPO}}\) that cover as many skills (e.g., solving algebra problems and writing resumes) as possible. In order to let the selected \((x,y)\) pairs represent as many skills as possible, we first adopt a clustering algorithm (e.g. KMeans or Gaussian mixture) to separate \(D^{\text{PPO}}\) into \(c\) clusters. To ensure the representativeness and quality of the selected data, we only keep certain \((x,y)\) pairs within each cluster that satisfy certain criteria regarding aspects such as advantage (reward) model score, entropy (low entropy indicates high confidence), human satisfaction rate or response length (higher length may indicate redundancy).
Here we adopt the SimCSE (Gao et al., 2021) sentence embedding4 to represent the query \(x\) of each \((x,y)\) pair, and then run a KMeans algorithm on these embeddings to group them into \(c\) clusters. We briefly study the influence of the cluster number \(c\) in Section 4.3. Within each cluster, we simply choose the top-\(k\) \((x,y)\) pairs with the highest advantage model score (Section 3.1). We leave other strategies (e.g. combining the advantage score with the entropy score) for future work.
Footnote 4: [https://huggingface.co/princeton-nlp/sup-simcse-roberta-base](https://huggingface.co/princeton-nlp/sup-simcse-roberta-base)
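A minimal sketch of this selection step is shown below: embed each PPO query with the SimCSE checkpoint from the footnote, cluster the embeddings with KMeans, and keep the top-\(k\) pairs per cluster by advantage model score. The CLS-token pooling and the helper structure are illustrative assumptions.

```python
import numpy as np
import torch
from sklearn.cluster import KMeans
from transformers import AutoModel, AutoTokenizer

def select_rehearsal_examples(pairs, am_scores, n_clusters=8, top_k=2):
    """pairs: list of (x, y) tuples; am_scores: advantage model score of each pair."""
    name = "princeton-nlp/sup-simcse-roberta-base"
    tok, enc = AutoTokenizer.from_pretrained(name), AutoModel.from_pretrained(name)
    with torch.no_grad():
        batch = tok([x for x, _ in pairs], padding=True, truncation=True,
                    return_tensors="pt")
        emb = enc(**batch).last_hidden_state[:, 0].numpy()          # CLS embeddings of the queries
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(emb)
    selected = []
    for c in range(n_clusters):
        idx = np.where(labels == c)[0]
        best = idx[np.argsort(-np.asarray(am_scores)[idx])[:top_k]]  # top-k by AM score
        selected.extend(pairs[i] for i in best)
    return selected
```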
One reason we select our rehearsal data from the PPO training data, with each response \(y\) being generated by the initial policy model, is to enable a fairer and more nuanced comparison, as no additional information is introduced. In other scenarios, the rehearsal \((x,y)\) pairs could come from other important data sources representing specific skills (e.g. math-problem solving) that the main policy is not expected to forget.
Rehearsal training. After obtaining the rehearsal \((x,y)\) pairs of all clusters, we shuffle them together to form the rehearsal dataset \(D_{R}\) and compute an NLL loss on \(D_{R}\) as a supplement to the standard PPO loss defined in Equation 2:
\[\mathcal{L}_{\text{PPO-SR}}=\mathcal{L}_{\text{PPO}}+\gamma\mathbb{E}_{(x,y)\sim D_{R}}\sum_{t=1}^{|y|}\log\pi_{\phi}(y_{t}|y_{<t},x) \tag{9}\]
where the coefficient for the NLL loss \(\gamma\) is empirically set to \(0.01\).
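The following is a sketch of the combined objective, written in the usual loss-minimization convention so that the \(\gamma\)-weighted rehearsal term enters as a token-level negative log-likelihood over \(D_{R}\); `ppo_loss` and the per-token log-probabilities are assumed to be supplied by the surrounding training loop.

```python
import torch

def ppo_sr_loss(ppo_loss, rehearsal_logprobs, response_mask, gamma=0.01):
    """rehearsal_logprobs: log pi_phi(y_t | y_<t, x) for rehearsal tokens, shape [B, T]."""
    # Negative log-likelihood over rehearsal responses, averaged per sequence.
    nll = -(rehearsal_logprobs * response_mask).sum(dim=-1) / response_mask.sum(dim=-1)
    return ppo_loss + gamma * nll.mean()
```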
Rehearsal training is similar to rejection sampling and reinforced self-training (Gulcehre et al., 2023) in that self-generated \(y\)s with high reward model scores are used for supervised training. However, rehearsal training captures multiple important aspects (e.g., diversity), while rejection sampling and reinforced self-training only consider the reward model score.
Alternatively, one can view selective rehearsal as a means of amplifying the weight of the KL-divergence term in PPO training (Eq. 2) for crucial instances and their related counterparts.
## 4 Experiments
### Datasets and Models
RM datasets. We conducted experiments on both English and Chinese datasets. For the English experiments, we utilized the HH-RLHF dataset (Bai et al., 2022; Ganguli et al., 2022), which comprises 118k helpful and 42k harmless examples for training, and 8.5k for testing. It is worth noting that many studies train different RMs separately for helpful and harmless examples to achieve better performance. However, in our experiments, we did not distinguish between helpful and harmless examples.
For the Chinese dataset, we collected comparison examples with quantities similar to those used in LLaMA 2 (Touvron et al., 2023). Our annotation procedure operates as follows: first, we ask annotators to generate prompts based on a task spectrum. Next, we sample five responses from the same SFT model using varied sampling hyper-parameters. Finally, we distribute these responses to five annotators for ranking based on the provided criteria. Following Bai et al. (2022), the annotation criteria focus on helpfulness and harmlessness.
PPO dataset. We sampled queries from two popular domain-general datasets, COIG5 and firefly6, to form our PPO dataset. Particularly, we obtained 64,364 and 2,623 for PPO training and testing, respectively7. There is no intersection between the training and testing sets. Additionally, we selected 1,704 examples from the SFT test data to create a _forget test set_, enabling us to evaluate the model's ability to retain learned knowledge.
Footnote 5: [https://huggingface.co/datasets/BAAI/COIG](https://huggingface.co/datasets/BAAI/COIG)
Footnote 6: [https://huggingface.co/datasets/YeungNLP/firefly-train-1.1M](https://huggingface.co/datasets/YeungNLP/firefly-train-1.1M)
Footnote 7: The PPO training and testing query sets could be shared upon request.
Models. We employed BLOOMZ (Muennighoff et al., 2022) as our pre-trained model backbone. More specifically, BLOOMZ\({}_{\text{7B}}\) was used for reward modeling and BLOOMZ\({}_{\text{176B}}\) was used for SFT and RLHF training.
### Training Setups
We initialized our models using pre-trained checkpoints. The architectural configuration and hyper-parameters were kept consistent with those of the pre-trained models, except that a value head is
added to produce a scalar reward. A learning rate of 5e-6 was employed, coupled with a warm-up strategy covering the initial 10% of training steps and a cosine learning rate schedule decreasing to 10% of the initial learning rate. For the English dataset, a global batch size of 180 was employed, whereas for the Chinese dataset, the batch size was set to 480. Overfitting is generally observed after models are trained for one epoch. As such, we fixed the number of training epochs to 1 for all the experiments. For PPO training, a learning rate of \(5\times 10^{-7}\) and a global batch size of 256 are employed. The actor model is trained for 100 steps in all experiments. The SFT model is trained on the proprietary dataset. We omit these details since they are not the focus of this paper.
### Evaluation
AM Evaluation Results. Firstly, we present the overall accuracy and Expected Calibration Error (ECE) for both RM and AM on each dataset. For the English dataset, we additionally compare our method with the publicly available OpenAssistant (Kopf et al., 2023), which utilized DeBERTa (He et al., 2020) for reward modeling. Table 1 lists all the results. We observe that AM achieves slightly higher accuracy but significantly lower ECE on all the datasets. This indicates that AM is capable of maintaining the same level of ranking accuracy while providing reliable and well-calibrated scores. A detailed analysis of calibration is provided in the following sections. We attribute this phenomenon to the fact that AM is formulated to directly model additional rewards, _i.e._, advantages, making it more stable and less prone to yielding high-variance scores. Additionally, the accuracy on the proprietary data is much higher than that on HH-RLHF. We speculate that the trade-off between helpfulness and harmlessness objectives is more pronounced in HH-RLHF, possibly due to the limited presence of harmful examples in our proprietary data.
Calibrations of AM. The reward model score of a response should accurately reflect the probability that humans prefer it. These probabilities must be precise; in other words, the scores should be
\begin{table}
\begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{Model} & \multicolumn{2}{c}{HH-RLHF} & \multicolumn{2}{c}{Proprietary Data} \\ \cline{2-5} & Accuracy \(\uparrow\) & ECE \(\downarrow\) & Accuracy \(\uparrow\) & ECE \(\downarrow\) \\ \hline OpenAssistant Köpf et al. (2023) & 69.24 & - & - & - \\ \hline Reward Model & 69.25 & 4.70 & 74.75 & 5.35 \\ Advantage Model & 69.43 & 3.48 & 75.28 & 3.83 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Evaluation results on HH-RLHF and our proprietary data. Note that maximizing accuracy is not the exclusive objective in AM optimization. The aim also extends to reducing ECE to improve reliability, whilst sustaining or improving the level of ranking accuracy compared with RM.
Figure 2: Ranking accuracy is shown as a function of the difference in scores between higher and lower ranked responses. The orange lines indicate the calibrated prediction of accuracy \(1/(1+e^{-\Delta})\) in which \(\Delta\) denotes the score difference. On the left, we show calibration of RM and AM on HH-RLHF data while on the right we show results for our proprietary data. We observe that AM calibration is better than RM’s.
well-calibrated. This is crucial since these scores will serve as reward signals to guide PPO training (Bai et al., 2022). To assess whether our AM is well calibrated, in Figure 2 we depict the ranking accuracy as a function of the score difference assigned to pairs of samples. An orange line representing perfect calibration is also included. Our observations indicate that AM exhibits significantly lower ECE and is better calibrated than RM on both datasets, whereas RM tends to be overconfident in most cases. We further show the distribution of scores for both good and bad examples in Figure 3. While in general both RM and AM are able to assign higher scores to good examples, AM exhibits a more distinct distribution pattern.
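A sketch of this calibration check is given below: preference pairs are bucketed by the score gap \(\Delta\) between the higher- and lower-ranked responses, and the empirical ranking accuracy in each bucket is compared with the calibrated prediction \(1/(1+e^{-\Delta})\); the quantile binning is an illustrative choice.

```python
import numpy as np

def calibration_curve(score_chosen, score_rejected, n_bins=10):
    """Return (mean gap, empirical accuracy, calibrated accuracy) per bin."""
    delta = np.abs(score_chosen - score_rejected)            # score gap per pair
    correct = (score_chosen > score_rejected).astype(float)  # higher score given to the preferred response
    edges = np.quantile(delta, np.linspace(0.0, 1.0, n_bins + 1))
    rows = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (delta >= lo) & (delta <= hi)
        if mask.any():
            gap = delta[mask].mean()
            empirical = correct[mask].mean()        # observed ranking accuracy in the bucket
            predicted = 1.0 / (1.0 + np.exp(-gap))  # calibrated prediction
            rows.append((gap, empirical, predicted))
    return rows
```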
Means and variances of AM. During PPO training, RLHF exhibits instability, largely owing to unpredictable fluctuations in reward estimation scales. Directly modeling the advantage, as our AM does, can potentially alleviate this issue. To validate AM's efficacy in stabilizing score scales and ranges, we calculated the AM scores for individual examples and analyzed the mean and variance across the task spectrum. This analysis is depicted in Figure 4a. We observe markedly different means for each task in the case of RM. Such significant disparities in means can potentially give rise to reward hacking issues (Skalse et al., 2022) and result in repeated failures during PPO training. In addition, Figure 4b illustrates the standard deviations of both AM and RM, with AM consistently operating at a stable scale. These results endorse AM as a strategy designed to normalize reward scores at the individual example level while enhancing ranking accuracy.
PPO training results. We conducted a comparative analysis of PPO training with different scoring models in terms of their performance on both the main test set and the forget test set. The learning curve
Figure 4: Mean and standard deviation for each task categorized by a task spectrum on the in-house data.
Figure 3: Distributions of RM and AM scores for pairs of good and bad examples from the proprietary data.
is shown in Figure 5. We observe that AM-PPO outperformed RM-PPO on the main set, achieving higher rewards and a superior win rate over the SFT model. In addition, RM-PPO faces significant reward hacking issues, evidenced by a drop in the win rate evaluated by GPT-4, shown in Figure 5b, despite a rise in RM scores. Despite utilizing a moving average for score normalization, RM-PPO w/ MA encounters instabilities during PPO training. Conversely, AM-PPO exhibits resistance to such problems, maintaining stable GPT-4 outcomes. This emphasizes AM's stability and alignment efficiency over RM. The forget test set results reveal RM-PPO's substantial susceptibility to catastrophic forgetting, with a noticeable performance drop. In contrast, AM-PPO avoids significant drops and remains stable. Incorporating selective rehearsal, the AM-PPO-SR variant demonstrates an improved win rate on both sets, underscoring the role of selective rehearsal in alleviating catastrophic forgetting and enhancing model efficacy.
Analysis on Selective Rehearsal. We also conduct an in-depth examination of the impact of the number of clusters, denoted as \(c\), in the context of selective rehearsal during PPO training. As illustrated in Figure 6, our results reveal a relatively consistent variance of approximately 0.05 points in test-set rewards across various cluster numbers \(c\). While our findings highlight the robustness of the selective rehearsal technique, we recommend conducting a thorough analysis of this aspect when applying selective rehearsal to different datasets, as domain-specific variations can have a notable impact.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{Model} & \multicolumn{3}{c}{Main Test Set} & \multicolumn{3}{c}{Forget Test Set} \\ \cline{2-7} & \multicolumn{1}{c}{\(\mathtt{Win}\uparrow\)} & \multicolumn{1}{c}{\(\mathtt{Lose}\downarrow\)} & \multicolumn{1}{c}{\(\mathtt{Tie}\)} & \multicolumn{1}{c}{\(\mathtt{Win}\uparrow\)} & \multicolumn{1}{c}{\(\mathtt{Lose}\downarrow\)} & \multicolumn{1}{c}{\(\mathtt{Tie}\)} \\ \hline RM-PPO & 12.72 & 12.62 & 74.66 & 16.87 & 29.28 & 53.84 \\ AM-PPO & 14.87 & 10.38 & 74.74 & 9.70 & 8.44 & 81.86 \\ AM-PPO-SR & 15.78 & 9.77 & 74.45 & 10.30 & 7.95 & 81.75 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Comparison results of different models over the SFT model.
Figure 5: PPO training curves on the Main Test Set with different scoring models. RM-PPO and AM-PPO denote PPO trained with the Reward Model and the Advantage Model, respectively. AM-PPO-SR is additionally equipped with Selective Rehearsal.
Figure 6: The AM-PPO-SR training curves on the Main Test Set with different numbers of clustering groups \(c\) for selective rehearsal.
## 5 Related Work
LLM Alignments with Human Preferences. LLMs are typically pre-trained on extensive datasets and can be adapted to a wide variety of downstream tasks. One critical aspect of utilizing LLMs effectively is ensuring their alignment with human preferences, which helps in averting responses that are unsafe, toxic, sexually explicit, biased, or criminal (Leike et al., 2018). A predominant strategy for achieving this is RLHF. This involves training a reward model based on human feedback and utilizing PPO to fine-tune LLMs (Christiano et al., 2017; Bai et al., 2022; Glaese et al., 2022; Bai et al., 2022; Stiennon et al., 2020; Qiu et al., 2022).
Instabilities in RLHF. Despite its success, the RLHF approach is inherently complex and poses significant challenges, thereby encouraging the exploration of simpler methods to align LLMs with human preferences. In this context, Cobbe et al. (2021) introduced best-of-n sampling, which reinforces LLMs by choosing the response with the highest reward score from a set of n responses. A similar pathway was pursued by RAFT (Dong et al., 2023), which focuses on selecting high-quality samples for fine-tuning to enhance the model's performance. Moreover, the RRHF strategy (Yuan et al., 2023) evaluates sampled responses from various sources using the logarithm of conditional probabilities. It then aligns these probabilities with human preferences by applying a ranking loss, fostering a more refined alignment process. Furthermore, Rafailov et al. (2023) introduced the concept of Direct Preference Optimization (DPO). This approach leverages a relationship between reward functions and optimal policies to address a constrained reward maximization problem through a single stage of policy training. In a similar vein, Preference Ranking Optimization (PRO) (Song et al., 2023) sidesteps the necessity for Reinforcement Learning (RL) training. Instead, it directly aligns LLMs with human preferences using the Bradley-Terry comparison -- a method that involves the probability ranking of n responses generated by the LLM, ensuring they are consistent with human preference rankings.
Data Curation for LLM Alignments. Many approaches have been devised to curate high-quality, instruction-following datasets to fine-tune LLMs (Wang et al., 2022; 2023; Taori et al., 2023; Chiang et al., 2023; Peng et al., 2023). For instance, the LIMA study (Zhou et al., 2023) underscores that even a limited set of carefully curated and high-quality examples can be utilized to fine-tune a strong pre-trained language model, enabling it to deliver competitive results across a diverse array of prompts. Similarly, Wei et al. (2023) introduced a versatile and straightforward data selector designed to autonomously curate a subset from the original fine-tuning dataset, adhering to specific principles for training vision-language models. While these strategies converge on the shared objective of data curation for LLM fine-tuning, our approach is uniquely centered on data curation for PPO training. This strategy diverges fundamentally from others that emphasize the SFT stage, thereby addressing a distinct problem.
## 6 Conclusion
In this report, we identified and analyzed critical impediments in RLHF training of LLMs, namely reward hacking and catastrophic forgetting. These issues emerge due to the variances in learned reward score distributions and the over-optimization of specific training examples, resulting in instabilities in RLHF training. To alleviate these issues, we introduced the _Advantage Model_ and _Selective Rehearsal_--innovative strategies formulated to stabilize the RLHF training process. The Advantage Model aims to maintain balanced reward score distributions across diverse categories and examples, thereby averting complications arising from reward hacking. On the other hand, Selective Rehearsal identifies optimal examples for PPO training, encouraging the retention of crucial knowledge from the SFT stage and preventing the depreciation of performance over time. Empirical analyses conducted on a range of datasets substantiated the efficacy of our proposed techniques, which not only enhanced stability in RLHF training but also led to improved reward scores and win rates over the SFT models.
|
2302.14316 | Near-field localization of the boson peak on tantalum films for
superconducting quantum devices | Superconducting circuits are among the most advanced quantum computing
technologies, however their performance is currently limited by losses found in
surface oxides and disordered materials. Here, we identify and spatially
localize a near-field signature of loss centers on tantalum films using
terahertz scattering-type scanning near-field optical microscopy (s-SNOM).
Making use of terahertz nanospectroscopy, we observe a localized excess
vibrational mode around 0.5 THz and identify this resonance as the boson peak,
a signature of amorphous materials. Grazing-incidence wide-angle x-ray
scattering (GIWAXS) shows that oxides on freshly solvent-cleaned samples are
amorphous, whereas crystalline phases emerge after aging in air. By localizing
defect centers at the nanoscale, our characterization techniques and results
will inform the optimization of fabrication procedures for new low-loss
superconducting circuits. | Xiao Guo, Zachary Degnan, Julian Steele, Eduardo Solano, Bogdan C. Donose, Karl Bertling, Arkady Fedorov, Aleksandar D. Rakić, Peter Jacobson | 2023-02-28T05:14:57Z | http://arxiv.org/abs/2302.14316v1 | # Near-Field Localization of the Boson Peak on Tantalum Films for Superconducting Quantum Devices
###### Abstract
Superconducting circuits are among the most advanced quantum computing technologies, however their performance is currently limited by losses found in surface oxides and disordered materials. Here, we identify and spatially localize a near-field signature of loss centers on tantalum films using terahertz scattering-type scanning near-field optical microscopy (s-SNOM). Making use of terahertz nanospectroscopy, we observe a localized excess vibrational mode around 0.5 THz and identify this resonance as the boson peak, a signature of amorphous materials. Grazing-incidence wide-angle x-ray scattering (GIWAXS) shows that oxides on freshly solvent-cleaned samples are amorphous, whereas crystalline phases emerge after aging in air. By localizing defect centers at the nanoscale, our characterization techniques and results will inform the optimization of fabrication procedures for new low-loss superconducting circuits.
Keywords: tantalum, scanning near-field optical microscopy, terahertz spectroscopy, grazing incidence wide angle x-ray scattering, superconducting quantum devices, amorphous materials
The current central goal of superconducting quantum computing is to improve the fidelity of qubits enough to implement operations with minimal error correction [1]. Qubit fidelity, and in turn the progress of quantum computing, is currently limited by decoherence due to the dissipative coupling of qubit modes to electric dipoles in amorphous materials and defect states [2; 3]. These loss channels concentrate at amorphous oxides present at device interfaces, such as the metal-air interface [4], and their removal significantly improves the performance of planar superconducting devices in the low-temperature and low-power regimes [5]. It is therefore critical to devise methods to minimize oxidation and/or control the chemical reactivity of metal surfaces found in quantum devices [6; 7].
Tantalum has been identified as the leading material system for the fabrication of superconducting devices with state-of-the-art performance, showing notable improvements over aluminum or niobium-based devices [8; 9]. While initial results are promising, surface oxides remain the primary factor limiting device performance [9; 10]. Tantalum oxide (Ta\({}_{2}\)O\({}_{5}\)) has well-documented crystalline and amorphous phases, and both phases possess similar coordination shells below 4 A [11; 12; 13; 14]. The main structural difference is a reduced Ta coordination number in the amorphous phase (\(a\)-Ta\({}_{2}\)O\({}_{5}\)) [14]. For native amorphous oxide layers atop the Ta surfaces, it is unclear whether the structural trends observed for bulk oxide phases persist at the metal-air interface [15]. While the structural motifs at these interfaces are less explored, electronic structure measurements using (soft) x-ray photoelectron spectroscopy after near-ambient oxidation indicate pentoxide and suboxide phases are present, increasing the chemical complexity of this amorphous oxide [16].
Amorphous materials have well-documented anomalous thermodynamic behaviors at low temperatures such as deviations from the Debye model [17; 18; 19]. This behavior was explained independently by Phillips and Anderson _et al._, who proposed a model of tunneling two-level systems (TLS; Figure 1a) [20; 21]. For superconducting devices and hardware (resonators, Josephson junctions etc.), TLSs are a ubiquitous source of noise and fidelity loss [22]. However, removing TLS is not straightforward as TLS are heterogeneous with structural motifs ranging from single atoms to groups of atoms, which tunnel between configurations of differing energy. Major successes of the TLS model include capturing the effect of strain on individual TLSs in active superconducting devices [4; 23], and reproducing peaks in the heat capacity at low temperatures due to an excess vibrational density of states, i.e. the boson peak [19]. The boson peak is a universal signature of amorphous states of matter -- ranging from disordered aggregates of colloidal nanoparticles [24] and organic materials [25; 26], to metallic glasses [27]. Importantly, as TLS and the boson peak are ultimately lattice vibration phenomena, it can be detected spectroscopically and at elevated temperatures [19]. Probes sensitive to the boson peak include inelastic neutron scattering [28; 29; 30], (hyper) Raman spectroscopy [31; 32], helium atom scattering [33; 34], and terahertz (THz) time-domain spectroscopy [35; 36; 37; 38]. However, these methods are spatially averaged with responses restricted by (at best) the diffraction limit, precluding an understanding of the local spatial variation of TLS across microscopic surface regions. Due to their direct influence on qubit performance, the local identification of amorphous oxide phases on tantalum-based quantum devices represents a scientifically challenging and technologically important _terra incognita_.
Here, we study oxide phases at Ta surfaces in the near-field. The coexistence of two surface phases is observed with both IR and THz near-field responses in scattering-type scanning near-field optical microscopy (s-SNOM) [39; 40]. THz nanospectroscopy shows that flat regions exhibit a low-frequency spectral signature characteristic of the boson peak, confirming it as an amorphous phase, whereas 1D nanoridges show characteristic signatures of dielectric relaxation. The presence of thin amorphous and crystalline oxides is corroborated by grazing-incidence wide-angle x-ray scattering (GIWAXS), while topographic and chemical changes associated with etching treatments are tracked by atomic force microscope (AFM) and x-ray photoelectron spectroscopy (XPS). THz nanospectroscopy after etching indicates that the amorphous oxide has been altered or removed. Our experiments demonstrate that THz s-SNOM in tandem with GIWAXS can differentiate between crystalline and amorphous phases at the nanoscale, thereby informing the processing of materials for superconducting quantum computing.
We begin by isolating the topographic and near-field optical differences on Ta films using a combination of surface-sensitive probes. Figure 1a schematically illustrates the s-SNOM measurement principle and the connection to TLS. In s-SNOM, a metallic probe tip periodically taps the sample surface, which is simultaneously illuminated by an electromagnetic stimulus, e.g., THz radiation. The probe tip is transiently polarized by the incident illumination and thereby forms a highly concentrated electric field -- a nanofocus -- near its apex. With nanometer precision in the positioning of the nanofocus, and a probe tip radius around 60 nm, THz s-SNOM is able to bypass the diffraction limit and resolve the nanoscale THz response as well as the sample topography in AFM. Therefore, near-field imaging provides a spatial mapping of the spectrally-averaged optical response, while THz nanospectroscopy yields a frequency-dependent imprint of the material within the nanofocus [39; 40].
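The harmonic demodulation underlying the \(S_{n}\) signals can be illustrated with a short numerical sketch. This is not the instrument's actual signal chain; the tip-sample distance model and the exponential near-field decay length are assumptions used only to show why demodulating the detector signal at harmonics \(n\geq 2\) of the tapping frequency suppresses the far-field background.

```python
# Sketch: lock-in style demodulation of an s-SNOM signal at harmonics of the
# tip tapping frequency (toy model, not the instrument's pipeline).
import numpy as np

f_tap = 70e3                       # tapping frequency in Hz (assumed)
fs    = 20e6                       # sampling rate in Hz (assumed)
t     = np.arange(0, 2e-3, 1/fs)   # 2 ms record

# Tip-sample distance z(t) = A (1 + cos(2*pi*f*t)), assumed 60 nm amplitude.
z = 60e-9 * (1 + np.cos(2*np.pi*f_tap*t))

# Toy detector signal: a sharply nonlinear near-field term (fast decay with z)
# plus a large, slowly varying far-field background.
near_field = np.exp(-z / 20e-9)
background = 50.0 * (1 + 0.1*np.cos(2*np.pi*f_tap*t))
signal = near_field + background

def demodulate(sig, n):
    """Return |S_n|: magnitude of the component at the n-th tapping harmonic."""
    ref = np.exp(-1j * 2*np.pi * n * f_tap * t)
    return np.abs(np.mean(sig * ref))

for n in range(1, 5):
    print(f"S_{n} = {demodulate(signal, n):.3e}")
# In this toy model S_1 still contains background, while S_2..S_4 are
# dominated by the nonlinear near-field term.
```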
AFM of the Ta film reveals the surface consists of two distinct regions, an elevated striped region (A), and a low-lying region (B) (Figure 1b). In near-field images (Figures 1c,d), two regions of strong optical scattering contrast with sharp boundaries are observed using both mid-IR and THz excitations. The strongest scattered amplitude coincides with region B, while the stripes of region A show less prominent scattering indicating a difference in the dielectric response and implying a difference between the regions.
To elucidate the origins of the difference between regions, THz nanospectroscopy was performed on both solvent-cleaned and reactive ion etched (RIE) Ta samples (Figure 2). To suppress background noise in s-SNOM scattering spectra, higher-harmonic signals (\(n\geq 2\)) are used for s-SNOM vector calibration to quantitatively retrieve absorption coefficients, \(\alpha(\nu)\), for region A and B (Figure S4) [42]. Normalized absorption coefficients (\(\alpha(\nu)/\nu^{2}\)) are obtained and compared with the Debye model, which appears as a horizontal line (Figure 1a, green dashed line). For the solvent-cleaned sample, we observe a diverging response for regions A and B below 0.65 THz and these trends are consistent for higher-order harmonic signals (\(S_{3}\) - \(S_{5}\)) in both regions indicating exemplary signal to noise characteristics. The relative permittivity (\(\epsilon_{r}\)) of Ta oxide (\(>30\)) is substantially greater than other common dielectrics (e.g. SiO\({}_{2}\sim 5\)) over this range [43], facilitating the strong tip-scattered THz signals in s-SNOM.
The absorption in region A steeply increases with decreasing frequency, roughly following a power law dependence, indicating a reorientation of dipoles within a highly polarizable dielectric. In contrast, the absorption rapidly decreases in region B following a log-normal distribution at low frequency which is a characteristic boson peak signature [37]. For Ta samples processed by RIE (Figure 2b), the results are markedly different with an apparent overall reduction in the normalized absorption coefficient, attributed to the removal of dielectric material and a more metallic surface, as supported by XPS measurements and discussed below. Critically, this processing removes the spectroscopic signature due to the boson peak and the post-RIE response of both regions can be ascribed to dipole reorientations.
While there is robust debate on the precise cause and interpretation of the boson peak, it is understood as a universal indicator of materials with glassy or amorphous structures [44; 45; 46; 47; 48; 49; 50]. From the linear response theory for disordered materials, the absorption coefficient is proportional to the vibrational density of states (vDOS: \(g(\nu)\)) and the boson peak is usually characterized by \(g(\nu)/\nu^{2}\)[28; 32]. Recently, the boson peak has been observed and characterized using the normalized absorption coefficients \(\alpha(\nu)/\nu^{2}\) from THz time-domain spectroscopic measurements on amorphous or glass-like materials [35; 36; 37; 38]. As Ta\({}_{2}\)O\({}_{5}\) has well-documented amorphous phases, the spectroscopic signature at 0.5 THz in region B fits with our understanding of the universal nature of the boson peak. Therefore, our THz nanospectroscopy results indicate that \(a\)-TaO\({}_{x}\) exists at discrete locations on the surface of the film.
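The distinction drawn in Figure 2 between a power-law rise (region A) and a log-normal boson-peak feature (region B) in \(\alpha(\nu)/\nu^{2}\) can be cast as a simple model comparison. The sketch below uses synthetic data; the two functional forms are the log-normal and power-law shapes referred to in the text, while the parameter values and the use of `scipy.optimize.curve_fit` are assumptions for illustration only.

```python
# Sketch: fit alpha(nu)/nu^2 with a log-normal (boson peak) and a power law,
# then compare residuals. Synthetic data; parameter values are illustrative.
import numpy as np
from scipy.optimize import curve_fit

def log_normal(nu, a, nu_bp, sigma):
    return a * np.exp(-np.log(nu / nu_bp)**2 / (2 * sigma**2))

def power_law(nu, a, p):
    return a * nu**(-p)

rng = np.random.default_rng(1)
nu = np.linspace(0.2, 1.0, 60)                          # frequency axis in THz
data = log_normal(nu, 1.0, 0.5, 0.4) + 0.02*rng.normal(size=nu.size)

p_ln, _ = curve_fit(log_normal, nu, data, p0=[1.0, 0.5, 0.4])
p_pl, _ = curve_fit(power_law, nu, data, p0=[0.5, 1.0])

rss_ln = np.sum((data - log_normal(nu, *p_ln))**2)
rss_pl = np.sum((data - power_law(nu, *p_pl))**2)
print(f"log-normal: peak at {p_ln[1]:.2f} THz, RSS = {rss_ln:.4f}")
print(f"power law : exponent {p_pl[1]:.2f},   RSS = {rss_pl:.4f}")
# A lower residual for the log-normal model corresponds to the boson-peak
# behaviour reported for region B; region A follows the power law instead.
```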
As RIE processing removes the boson peak signature but retains the spatially varying near-field optical response, further surface treatments common to superconducting devices were evaluated. High-aspect ratio AFM tips, in contrast to the larger radius tips used for THz and IR measurements, were employed to investigate the surface structure after piranha, BOE, and RIE (Figure 3). On the piranha-cleaned sample, region A sits \(\sim 3\) nm higher than region B (versus \(\sim 1\) nm on solvent-cleaned), indicating that piranha cleaning removes material from region B. The lateral spacing between parallel stripes in region A is \(\sim 70\) nm and changing etching method (BOE, RIE) does not affect the structure. Additionally, comparable surface coverages of region A (BOE, 24%; piranha, 35%; RIE, 28%) are observed across treatments.
While region A is unchanged across etched and
Figure 1: (a) Schematic operational principles of probing nanoscale light-matter interactions employing an s-SNOM (left): a broadband pulse (mid-IR/THz) is collimated onto a metallic probe tip periodically tapping on the sample surface with a nanofocus generated around the tip end to surpass the diffraction limit. The interrogated sample information is encoded into the tip-scattered field through the dipole-dipole interaction between the tip-sample pair. The demodulated scattering field at high-order harmonics (\(n\)-th) of the tip tapping frequency (\(S_{n}\)) highlights localized near-field responses at the nanoscale. The structure of an amorphous oxide (top middle) containing TLSs (red) in the form of individual/groups of atoms tunneling between two configurations. The TLSs are modeled by a double-well potential with an energy difference \(E=\sqrt{\Delta_{0}^{2}+\epsilon^{2}}\) between eigenstates, where \(\Delta_{0}\) is the inter-well tunneling rate and \(\epsilon\) is the asymmetry bias (top right) [41]. Depiction of a boson peak with the characteristic log-normal distribution (solid purple) from THz nanospectroscopy, and a typical vibrational density of states curve for a Debye solid \(\propto\nu^{2}\) (dashed green) (bottom right) [37]. The surface of a solvent-cleaned Ta film interrogated by s-SNOM with (b) AFM phase, (c, d) the amplitude of third-harmonic s-SNOM scattering signals (\(S_{3}\)) from mid-IR and THz nanoimaging in the white-light mode.
solvent-cleaned samples, we do observe topographic changes in region B. Starting with the piranha-cleaned sample, region B is less rough than the solvent-cleaned sample and develops a weaker stripe pattern (\(\sim 1\) nm corrugation) in place of the disordered structures observed in region B of the solvent-cleaned sample (Figure 3a, line profile). Using the stripes of region A as a point of reference, these weaker stripes run nearly perpendicular to the 1D stripes. We observe the same structure on BOE and RIE-treated samples, with stripes of similar corrugation and periodicity appearing in region B. As all surface treatments lead to two surface regions with consistent topography - including RIE which removes material - the 1D stripes in region A represent structures formed during film nucleation.
The prominent 1D stripes are consistent with growth modes observed in other bcc metals such as W where 1D 'nanoridges' are the result of anisotropic diffusion of adatoms at bcc (110) surfaces during film growth [51]. This mass transport mechanism is corroborated by domain intersections, where the 1D stripes meet at angles of \(\sim 110\) degrees, as expected for preferential diffusion on low index bcc surfaces (Figure S5). However, mass transport alone cannot explain the persistent near-field contrast. Surface defects such as step edges or kinks are well-known reactive sites due to the reduced surface coordination and presence of adsorption sites not present on flat surfaces [52]. Highly stepped or microfaceted surfaces are often regions with enhanced chemical reactivity and more prone to oxidation through dissociation of molecular oxygen or water. Therefore, we attribute the strong near-field contrast at the surface of our films to the preferential oxidation of these 1D nanostructures.
Due to a variety of possible oxidation states (1+ to 5+), Ta surfaces exhibit complicated structural evolutions when exposed to ambient oxygen, kinetically transforming from a pure metal into a variety of intermediate metastable suboxides (TaO\({}_{x}\)) before ultimately forming the thermodynamically stable pentoxide (Ta\({}_{2}\)O\({}_{5}\)). The detection and identification of (sub)oxide phases subsequently require a phase-sensitive probe. To elucidate the oxidation profile of our Ta film, we employ surface-sensitive GIVAXS (Figure S6) and scan through relatively shallow incident angles (\(\alpha_{i}<0.3^{\circ}\)) to formulate a surface-subsurface structural model. Conveniently, given that TaO\({}_{x}\) is relatively less dense than Ta, even a very thin oxide layer (\(<5\) nm) can be studied due to the amplifying effects of surface refraction.
Figure 3d compares representative 2D GIVAXS images recorded from the solvent-cleaned and aged solvent-cleaned Ta films, with their corresponding integrated profiles contained in Figure 3e. Examining the full 1D profile (Figure S7), the fresh solvent-cleaned Ta material is shown to contain only a minor portion of metastable \(\beta\)-Ta that resides near the surface of a predominantly \(\alpha\)-Ta film (Figure S8). The Ta film is highly oriented with respect to the planar surface (Figure S9), a feature that is consistent across the entire film. Conversely, following 2 weeks of exposure to air, several new Bragg peaks are introduced within the scattering range between \(q=18-33\) nm\({}^{-1}\) (Figure 3e) and we assess its development by comparing it to thermally driven oxidation [53]. Here, the relatively prominent peak residing at 21.5 nm\({}^{-1}\) is consistent with diffraction from the (110)/(200) scattering planes of the orthorhombic \(\beta\)-Ta\({}_{2}\)O\({}_{5}\) structure, while splitting of the \(\alpha\)-Ta (110) peak indicates partial oxidation of Ta metal (TaO\({}_{x}\)). An angle of incidence scan further demonstrates these signals
Figure 2: THz nanospectroscopy on solvent-cleaned and RIE-treated Ta surface. (a, b) Normalized absorption coefficient, \(\alpha(\nu)/\nu^{2}\), from calibrated THz nanospectroscopy with multiple high-order harmonic signals (\(S_{n}\), \(n\geq 2\)): The localized THz spectral absorption of region B (blue) follows a log-normal distribution as a characteristic boson peak feature for the unetched sample (a) and turns to obey the power law after RIE surface etching (b). Localized THz spectral absorption in region A (red) follows a power law (red dashed lines, guided for eyes) towards decreasing THz frequencies. The spectral absorption feature for high-resistivity (non-metallic) silicon (black) is plotted for comparison. AFM phase images of the solvent-cleaned Ta surface (a), and after reactive ion etching (RIE) (b) are inset.
arise from the uppermost surface of the film (Figure S8), and lateral sampling (\(\sim 2\) mm) of different areas captures a variety of metastable surface oxides (Figure 3e). The thermodynamically preferred \(\beta\)-Ta\({}_{2}\)O\({}_{5}\) phase is restricted to specific regions in the 2D GIWAXS pattern, indicating this phase grows in a highly faceted manner (Figure 3d).
The surface chemical composition and oxide thickness after etching procedures were tracked by XPS (Figure 4). Within the probed depth of the solvent-cleaned sample, metallic Ta accounts for \(\sim 30\%\) of the signal with the remaining intensity due to oxides; a similar trend is observed for the piranha and BOE-treated samples (Figure S1). However, significant changes in the Ta 4f core level are seen after RIE treatments, notably a strong increase in the metallic Ta component. The enhancement of the metallic Ta peak is accompanied by a long shoulder at intermediate binding energies due to
Figure 3: AFM and GIWAXS. AFM height images of Ta films after (a) piranha treatment, (b) buffered oxide etching (BOE), and (c) RIE. Line profiles are displayed below each AFM image, with the direction of each scan shown by the corresponding arrow in the image above. The blue scans traverse the 1D stripes of region A, and the orange scans move across the perpendicular stripes of region B. (d) 2D GIWAXS patterns recorded from a solvent-cleaned Ta film and an aged film exhibiting surface oxidation, due to exposure to ambient oxygen. (e) Integrated scattering signals (\(q_{xyz}\)) corresponding to the frames shown in (d), along with four additional examples (#1-4) of varied TaO\({}_{x}\)-related surface signals. These patterns have been expanded and rescaled known to contain TaO\({}_{x}\) peaks, with complete scattering patterns contained in Figure S7 for completeness. The asterisk (\(*\)) indicates an integration artifact, introducing a slight dip in the background.
suboxides, consistent with previous reports [16; 54; 55]. To gauge the efficacy of these treatments, we estimate the surface oxide thickness after each surface treatment by employing a simple model considering Ta, TaO\({}_{x}\), and Ta\({}_{2}\)O\({}_{5}\). From this model, we determine a total oxide thickness of 2.9 nm for the solvent-cleaned sample and 2.8 nm for both the piranha and BOE-treated samples. For the RIE-treated sample, the oxide thickness is significantly reduced to 0.8 nm.
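As a rough illustration of how an oxide thickness can be extracted from relative Ta 4f intensities, the sketch below uses a standard single-overlayer attenuation model. This is not the three-component (Ta, TaO\({}_{x}\), Ta\({}_{2}\)O\({}_{5}\)) model used here; the attenuation lengths, density ratio, and intensity ratios are assumed values for illustration only.

```python
# Sketch: single-overlayer oxide thickness from the oxide/metal XPS intensity
# ratio, d = L_ox * sin(theta) * ln(1 + R * (N_m*L_m)/(N_ox*L_ox)).
# All numerical inputs below are assumptions, not values from this work.
import math

def oxide_thickness(I_ratio, theta_deg=90.0, L_ox=2.0, L_m=1.9, N_ratio=1.3):
    """I_ratio = I_oxide / I_metal; L_* are effective attenuation lengths (nm);
    N_ratio approximates (N_metal * L_m) / (N_oxide * L_ox)."""
    theta = math.radians(theta_deg)              # take-off angle from surface
    return L_ox * math.sin(theta) * math.log(1 + I_ratio * N_ratio)

# Example intensity ratios (illustrative, not fitted values):
print(f"mostly oxidized surface: d ~ {oxide_thickness(70/30):.1f} nm")
print(f"mostly metallic surface: d ~ {oxide_thickness(15/85):.1f} nm")
```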
While the Ta 4f level has a complex lineshape, the O 1s level is comparatively simple (Figures 4c, d). Comparing the solvent cleaned and RIE processed samples, the O 1s retains a similar profile at normal emission, albeit with a reduced relative intensity (\(-22\%\)). However, by performing grazing angle XPS (Figure S2), we observe a pronounced increase in the surface hydroxyl component for the solvent-cleaned sample. In these more surface sensitive measurements, BOE shows a modest hydroxyl enhancement, while piranha and RIE show no significant increase. The C 1s peak, a general indicator of surface cleanliness, decreases after both piranha (\(-22\%\)) and RIE (\(-8\%\)) treatments (Figure S3).
The dramatic change in the Ta 4f core level after RIE processing indicates the removal of surface oxides, a critical step in the fabrication of superconducting devices. Within a device, the intrinsic losses can be expressed as \(F\tan\delta\), where \(\tan\delta\) is the loss tangent of the material (\(\epsilon^{\prime\prime}/\epsilon^{\prime}=\tan\delta\)), and \(F\) is the filling factor (or participation ratio), defined as the fraction of the total capacitive energy stored in the constituent materials. Amorphous oxides have particularly large loss tangents in comparison to their crystalline counterparts. For example, the loss tangent of amorphous Al\({}_{2}\)O\({}_{3}\) at cryogenic temperatures is \(\sim 1.6\times 10^{-3}\), while crystalline sapphire (\(\alpha\)-Al\({}_{2}\)O\({}_{3}\)) under comparable conditions has a loss tangent of \(\sim 2\times 10^{-8}\)[56; 57].
Figure 4: Monochromatic Al K\(\alpha\) XPS spectra. Ta 4f (a, b) and O 1s (c, d) levels of the solvent cleaned and RIE processed samples collected at normal emission, and the relative intensities of components used for fitting. Experimental spectra (a, c) are shown as offset solid blue lines, and the fitting envelopes (background subtracted) as dashed blue lines. The Ta 4f core level consists of two dominant sets of spin-orbit split peaks, where the low binding energy doublet (Ta 4f\({}_{7/2}\), 21.6 eV) corresponds to metallic Ta, while the higher binding energy doublet (Ta 4f\({}_{7/2}\), 26.7 eV) is Ta\({}_{2}\)O\({}_{5}\). Suboxide components make up a small contribution at intermediate binding energies for most samples. To avoid overfitting, a single symmetric doublet is used to model the suboxide contributions. The O 1s level was fit with contributions from lattice oxygen, surface hydroxyls (-OH) and oxygen-containing organics (C=O).
While there are currently no literature reports on the low-temperature dielectric loss of crystalline or amorphous TaO\({}_{x}\), we expect a similar trend to Al\({}_{2}\)O\({}_{3}\). The filling factor (\(F\)) can be engineered to minimize the participation of lossy regions by, for instance, thinning oxide layers at the metal-air interface [58]. As the metal-air interface is a region subjected to large electric fields, losses due to surface oxides play an outsized role in the final device performance, and any possible reduction in the oxide participation is crucial [56]. On the solvent-cleaned sample, direct measurement of the boson peak in region B, together with the absence of crystalline oxide phases in GIWAXS, confirms the existence of lossy \(a\)-TaO\({}_{x}\). Furthermore, XPS measurements give evidence for surface-bound hydroxyls, which are likely TLS candidates. On the RIE-treated sample we observe a reduction in both the oxide layer thickness and the removal of amorphous material, confirming RIE as an effective technique to prepare Ta films for use in superconducting devices.
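The role of the \(F\tan\delta\) product can be made concrete with a small numerical example based on the standard relation \(1/Q_{\mathrm{TLS}}=\sum_{i}F_{i}\tan\delta_{i}\). The participation ratios and most loss tangents below are illustrative placeholders (only the amorphous Al\({}_{2}\)O\({}_{3}\) and sapphire loss tangents quoted above come from the text); the point is simply that a thin, lossy metal-air oxide can dominate the total loss budget.

```python
# Sketch: TLS-limited quality factor from interface participation ratios and
# loss tangents, 1/Q = sum_i F_i * tan(delta_i). Values are illustrative.
interfaces = {
    #                  F (participation)   tan(delta)
    "metal-air":            (3e-3,          1.6e-3),  # amorphous Al2O3 value from the text, as a stand-in
    "substrate-air":        (2e-3,          2e-4),    # assumed
    "metal-substrate":      (1e-3,          3e-4),    # assumed
    "bulk substrate":       (0.9,           2e-8),    # crystalline sapphire value from the text
}

loss = {name: F * tan_d for name, (F, tan_d) in interfaces.items()}
total = sum(loss.values())
print(f"TLS-limited Q ~ {1/total:.2e}")
for name, l in sorted(loss.items(), key=lambda kv: -kv[1]):
    print(f"  {name:>16}: {100*l/total:5.1f}% of the loss budget")
```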
In conclusion, our experiments reveal a strong and unexpected near-field optical contrast on Ta films. We attribute these differences to the dissimilar oxidation behavior of 1D nanoridges and flat regions. Probing the two regions with THz nanospectroscopy, we observe an absorption peak across multiple harmonics at 0.5 THz in flat regions, which corresponds to the boson peak, a universal signature of amorphous materials. This is supported by GIWAXS measurements indicating a lack of crystalline oxide phases on fresh Ta films. Our observation of the boson peak and its localization to a particular surface region is a rare opportunity to visualize microscopic sources of decoherence. The boson peak in amorphous materials is directly connected to the TLS invoked in superconducting quantum devices by bistable defects, the major loss channel in qubits. These defect centers have, to the best of our knowledge, only been observed indirectly via area averaging techniques or in _operando_ devices, limiting our microscopic understanding of structure-property relationships. Our findings further demonstrate that THz s-SNOM can inform and guide the processing of materials for quantum computing [59]. The direct observation and identification of the boson peak using THz nanospectroscopy opens new doors to explore amorphous materials at the nanoscale. Finally, we note that these methods are particularly relevant to 2D materials, where reduced dimensionality strongly enhances the near-field optical response, and may be applied to disordered or amorphous forms of these materials such as bilayer SiO\({}_{2}\)[60], h-BN [61], graphene, and the metal dichalcogenides [62; 63].
## Acknowledgements
The authors acknowledge that UQ operates on the land of the Jagera and Turrbal peoples. We pay our respects to their Ancestors and their descendants who continue cultural and spiritual connections to Country. The authors acknowledge the facilities, and the scientific and technical assistance, of the Microscopy Australia Facility at the Centre for Microscopy and Microanalysis, The University of Queensland. This work used the Queensland node of the NCRIS-enabled Australian National Fabrication Facility (ANFF). Financial support was provided by the Australian Research Council's Discovery Projects' funding scheme (No. DP210103342), the ARC Centre of Excellence for Engineered Quantum Systems (EQUS, No. 286 CE170100009), and the Foundational Questions Institute Fund (Grant No. FQXi-IAF19-04). JS acknowledges support from Maarten B. J. Roeffaers and from the Research Foundation - Flanders (FWO: Grant No. 12Y7221N, V400622N). The authors thank the staff of the BL11 NCD-SWEET beamline at ALBA Synchrotron for their assistance in recording the GIWAXS data.
## Author contributions
XG, ZD, BCD, JS, and ES performed the experiments: XG, THz s-SNOM measurements; ZD, XPS and sample etching; BCD, AFM and nano-FTIR measurements; JS, GIWAXS measurements. XG, ZD, PJ and JS analyzed the data. ZD, XG and PJ prepared the manuscript, with contributions from all authors. PJ and ADR supervised the project. All authors discussed the results and reviewed the manuscript.
## Data availability
The data that support the findings of this study are available from the corresponding authors upon reasonable request.
|
2309.06460 | Widely Interpretable Semantic Representation: Frameless Meaning
Representation for Broader Applicability | This paper presents a novel semantic representation, WISeR, that overcomes
challenges for Abstract Meaning Representation (AMR). Despite its strengths,
AMR is not easily applied to languages or domains without predefined semantic
frames, and its use of numbered arguments results in semantic role labels,
which are not directly interpretable and are semantically overloaded for
parsers. We examine the numbered arguments of predicates in AMR and convert
them to thematic roles that do not require reference to semantic frames. We
create a new corpus of 1K English dialogue sentences annotated in both WISeR
and AMR. WISeR shows stronger inter-annotator agreement for beginner and
experienced annotators, with beginners becoming proficient in WISeR annotation
more quickly. Finally, we train a state-of-the-art parser on the AMR 3.0 corpus
and a WISeR corpus converted from AMR 3.0. The parser is evaluated on these
corpora and our dialogue corpus. The WISeR model exhibits higher accuracy than
its AMR counterpart across the board, demonstrating that WISeR is easier for
parsers to learn. | Lydia Feng, Gregor Williamson, Han He, Jinho D. Choi | 2023-09-12T17:44:40Z | http://arxiv.org/abs/2309.06460v1 | Widely Interpretable Semantic Representation: Frameless Meaning Representation for Broader Applicability
###### Abstract
This paper presents a novel semantic representation, WISeR, that overcomes challenges for Abstract Meaning Representation (AMR). Despite its strengths, AMR is not easily applied to languages or domains without predefined semantic frames, and its use of numbered arguments results in semantic role labels which are not directly interpretable and are semantically overloaded for parsers. We examine the numbered arguments of predicates in AMR and convert them to thematic roles which do not require reference to semantic frames. We create a new corpus of 1K English dialogue sentences annotated in both WISeR and AMR. WISeR shows stronger inter-annotator agreement for beginner and experienced annotators, with beginners becoming proficient in WISeR annotation more quickly. Finally, we train a state-of-the-art parser on the AMR 3.0 corpus and a WISeR corpus converted from AMR 3.0. The parser is evaluated on these corpora and our dialogue corpus. The WISeR model exhibits higher accuracy than its AMR counterpart across the board, demonstrating that WISeR is easier for parsers to learn.
## 1 Introduction
Abstract Meaning Representations (AMRs, Banarescu et al., 2013) represent the meaning of a natural language sentence as a singly rooted, directed acyclic graph in which nodes correspond to concepts and edges correspond to relations between them. They are typically displayed using the human-readable PENMAN notation (Matthiessen and Bateman, 1991), as shown in Figure 1a. A central feature of AMR is its use of PropBank (Palmer et al., 2005; Bonial et al., 2014), a corpus of frames that assigns a specific argument structure to every predicate sense in the form of a list of _numbered arguments_ (:ARG\(n\)).
There are several advantages of AMR, most notably its abstractness--meaning components need not be directly anchored to parts of the text (i.e., morphemes, lexical items, or MWEs). In this respect, AMR differs from other graphical representation languages such as Elementary Dependency Structures (Oepen and Lonning, 2006), Prague Semantic Dependencies (Miyao et al., 2014), and Universal Conceptual Cognitive Annotation (Abend and Rappoport, 2013) which all feature some degree of anchoring (Oepen et al., 2019, 2020). Although this design feature has been argued to be a potential downside of AMR (Bender et al., 2015), it renders it particularly well-suited for parsing dialogue since it is able to represent the meaning of natural language expressions that are not syntactically well-formed, obviating problems that arise from many types of production error. It also has a sizable corpus of annotation (Knight et al., 2014, 2017, 2020), and a significant amount of research has been conducted to enhance AMR's representation of phenomena such as quantifier scope (Pustejovsky et al., 2019; Lai et al., 2020; Bos, 2020), tense/aspect (Donatelli et al., 2018, 2019), intensional operators (Williamson et al., 2021), and speech acts (Bonial et al., 2020). Lastly, numerous state-of-the-art AMR parsers have been developed with promising results (Cai and Lam, 2020; Oepen et al., 2020; Xu et al., 2020; Lee et al., 2020; Bevilacqua et al., 2021).
Nonetheless, AMR has a few disadvantages. It relies on PropBank for predicate argument structures, presupposing the existence of semantic frames for predicate senses. This makes it less versatile for languages or domains with frequent novel predicate senses, due to high upfront labor costs in creating many new frames.1 Some studies that have adapted AMR to other languages have noted that AMR needs to be augmented to accommodate
language-specific features. For instance, Migueles-Abraira et al. (2018) introduce novel predicates such as sinnombre ("nameless"), to annotate prodrop in Spanish.
Another problem for AMR is that numbered arguments are semantically opaque without reference to the frames. There is no consistent mapping from numbered arguments to traditional thematic roles that is applicable to all senses besides perhaps :ARG0 and :ARG1, which correspond to prototypical agent and patient respectively. For instance, :ARG2 of tell-01 in Figure 1a is the entity the telling is directed at, while :ARG2 of dislodge-01 is the initial position of the dislodged entity. Meanwhile, the initial position of the entity stepping-down is the :ARG1 of step-down-01. This inconsistent correspondence between numbered arguments and thematic roles makes semantic role labels uninterpretable for parsing models during training.
Section 2 discusses the drawbacks of AMR in detail. Section 3 introduces a novel annotation scheme, WISeR (Widely Interpretable Semantic Representation), designed to overcome these challenges. In contrast to AMR, WISeR does not utilize frames, instead maintaining a one-to-one correspondence between argument labels and thematic roles. It also has the benefit of permitting the introduction of novel predicates on an ad-hoc basis. Section 4 presents our new corpus comprising 1,000 English dialogue sentences annotated in both WISeR and AMR, making fair comparisons between the schemes for annotation adaptability and quality. Section 5 compares a seq-to-seq parsing model (Bevilacqua et al., 2021) trained on the AMR 3.0 corpus and a WISeR corpus converted from AMR 3.0. Parsing models are evaluated on those corpora as well as our new dialogue corpora.2
Footnote 2: Our resources including the converted WISeR corpus and the new dialogue WISeR corpus are publicly available: [https://github.com/emorynlp/wiser](https://github.com/emorynlp/wiser)
## 2 Inside AMR
### Predicates in AMR
AMR annotation begins by identifying disambiguated predicate senses from PropBank frames. Although providing frames as a reference to annotators is designed to ensure consistency during annotation, sense disambiguation is often difficult for annotators, leading to low agreement levels in word sense disambiguation tasks (Ng et al., 1999; Lopez de Lacalle and Agirre, 2015). In addition, PropBank occasionally conflates senses (e.g., put-on-08 is used for the sense in _'put on some clothes'_ as well as _'put on a show'_). AMR's reliance on PropBank means that it is constrained to only a few languages for which frames exist (Palmer et al., 2005; Xue and Palmer, 2005; Palmer et al., 2006; Zaghouani et al., 2010; Vaidya et al., 2011; Duran and Aluisio, 2011; Haverinen et al., 2015; Sahin and Adali, 2018) and it often lacks domain-specific predicates in specialized domains.
However, AMR contains several predicate senses that are not found in PropBank. These senses often represent idioms or multi-word constructions created ad-hoc during annotation (e.g., throw-under-bus-08, pack-sand-00). Furthermore, there are 9 senses in AMR that have additional numbered arguments not featured in their respective PropBank frames.3
Footnote 3: The 9 senses with additional arguments in AMR: bind-01: ARG4, damage-01: ARG3, late-02: ARG3, misconduct-01: ARG1, oblige-02: ARG2, play-11: ARG3, raise-02: ARG3, rank-01: ARG5, unique-01: ARG3-4.
Table 1 shows the statistics of PropBank4 and the AMR 3.0 release (Knight et al., 2020). We calculate the number of frames in AMR 3.0 by
Figure 1: AMR and WISeR graphs for the sentence _’The woman told the man she will step down from the role when she dislodges the boss from the board’_.
combining information in the release text file5 with the annotation corpus since there are frames in the text file that are not in the corpus, and vice versa. Out of 9,090 senses in AMR 3.0, only 556 are unique to AMR. In other words, 8,534 senses in AMR 3.0 (i.e., 94%) are based on PropBank frames, emphasizing the extent to which AMR annotation depends on PropBank.
Footnote 5: AMR frames are included in LDC2020T02 as propbank-amr-frame-arg-descr.txt
### Numbered Arguments in AMR
The argument structure of a predicate sense in PropBank is a set of numbered arguments. As shown in Table 2, the thematic role of benefactive or attribute may be encoded by either :ARG2 or :ARG3.
Consequently, there is no one-to-one correspondence between numbered arguments and thematic roles. For instance, :ARG0/ARG1 largely correspond to the thematic roles of prototypical agent/patient respectively. However, even this correspondence is occasionally lost. As such, the numbered arguments do not directly encode meaning relations. Rather, their semantics is given through two other resources in PropBank: function tags and VerbNet roles (Kipper et al., 2002; Kipper, 2005; Loper et al., 2007; Kipper et al., 2008). Table 3 shows the function tags used in PropBank.
Table 4 shows the distribution of function tags over numbered arguments, highlighting that every numbered argument is semantically opaque without reference to the PropBank frame. As a result, numbered argument role labels make the task of automatic parsing more difficult for machines.
As mentioned, numbered arguments are occasionally annotated with VerbNet roles. Table 5 shows the distribution of VerbNet thematic roles (in rows) over the numbered arguments (in columns) in PropBank frames. Unfortunately, the coverage of PropBank frames associated with VerbNet classes is incomplete, with 25.5% of PropBank frames not covered. Even among the PropBank frames that are associated with VerbNet classes there are mismatches; an argument described in one resource may be omitted from the other, or a single argument may be split into multiple arguments. These mismatches reflect both practical and theoretical differences in the resources, and as a result, only 40.6% of arguments in PropBank are mapped to VerbNet roles.
## 3 Inside WISeR
### Annotation Scheme
Here, we present the WISeR annotation scheme, which is designed to rectify the weaknesses of AMR in Section 2. The complete WISeR annotation guidelines are provided in Appendix A. WISeR does not rely on frames, discarding numbered arguments and predicate sense IDs. In this
\begin{table}
\begin{tabular}{l|c} \hline
**Label** & **Thematic Role** \\ \hline ARG0 & agent \\ ARG1 & patient \\ ARG2 & instrument, benefactive, attribute \\ ARG3 & starting point, benefactive, attribute \\ ARG4 & ending point \\ \hline \end{tabular}
\end{table}
Table 2: Numbered arguments and corresponding thematic roles in the PB guidelines (Bonial et al., 2015).
\begin{table}
\begin{tabular}{l|c c} \hline
**Tag** & **Description** & **Tag** & **Description** \\ \hline PPT & Prototypical Patient & EXT & Extent \\ PAG & Prototypical Agent & CAU & Cause \\ GOL & Goal & COMM & Comitative \\ PRD & Secondary Predication & PRP & Purpose \\ MNR & Manner & TMP & Temporal \\ DIR & Directional & ADJ & Adjectival \\ VSP & Verb-specific & ADV & Adverbial \\ LOC & Locative & REC & Reciprocal \\ \hline \end{tabular}
\end{table}
Table 3: Descriptions of the function tags in PropBank.
\begin{table}
\begin{tabular}{l|c c c c c c|c} \hline & A0 & A1 & A2 & A3 & A4 & A5 & A6 & \(\Sigma\) \\ \hline PPT & 38 & 8,593 & 1,249 & 49 & 4 & 0 & 0 & 10,284 \\ PAG & 8,412 & 664 & 28 & 1 & 0 & 0 & 0 & 9,105 \\ GOL & 2 & 503 & 1,436 & 238 & 214 & 2 & 0 & 2,395 \\ PRD & 0 & 79 & 701 & 231 & 85 & 10 & 0 & 1,106 \\ MNR & 2 & 10 & 808 & 159 & 8 & 11 & 0 & 998 \\ DIR & 18 & 147 & 518 & 270 & 14 & 4 & 0 & 971 \\ VSP & 1 & 58 & 338 & 214 & 48 & 19 & 0 & 678 \\ LOC & 6 & 196 & 268 & 43 & 25 & 4 & 0 & 542 \\ EXT & 1 & 5 & 244 & 25 & 3 & 5 & 6 & 289 \\ CAU & 75 & 22 & 140 & 30 & 0 & 0 & 0 & 267 \\ COM & 0 & 83 & 100 & 9 & 4 & 0 & 0 & 196 \\ PRP & 0 & 6 & 74 & 32 & 5 & 1 & 0 & 118 \\ TMP & 0 & 3 & 15 & 3 & 6 & 1 & 0 & 28 \\ ADJ & 0 & 5 & 10 & 4 & 0 & 0 & 0 & 19 \\ ADV & 0 & 2 & 4 & 5 & 1 & 0 & 0 & 12 \\ REC & 0 & 1 & 2 & 1 & 0 & 0 & 0 & 4 \\ \hline \(\Sigma\) & 8,906 & 10,377 & 5,935 & 1,314 & 417 & 57 & 6 & 27,012 \\ \hline \end{tabular}
\end{table}
Table 4: Distribution of function tags (rows) over numbered arguments (columns) in PropBank.
respect, WISeR annotation is similar to Open Information Extraction (OpenIE, Banko et al., 2007; Yates et al., 2007; Fader et al., 2011; Angeli et al., 2015), the extraction of simple predications from large, diverse corpora without the need for a pre-defined vocabulary. However, WISeR does make use of a set of thematic roles similar to VerbNet (Kipper, 2005) and meaning representations built on VerbNet, such as the Discourse Representation Structures used in the Parallel Meaning Bank (Abzianidze et al., 2017). WISeR represents thematic relations directly as edge labels, similar to the PENMAN Sentence Plan Language (Kasper, 1989) and an earlier version of AMR prior to the incorporation of PropBank (Langkilde and Knight, 1998). This particular design choice has strengths and weaknesses. On the one hand, Dowty (1991) famously argued that a closed set of discrete thematic roles is theoretically questionable, inspiring attempts to perform semantic role labelling without them (e.g., Reisinger et al., 2015; White et al., 2017). However, using thematic roles has practical benefits, allowing the categorization of arguments into natural classes and facilitating NLI about notions such as causation, direction of movement, secondary predication etc., as well as quantifying the frequency and distribution of thematic roles across different domains.
The WISeR graph in Figure 1b above shows how WISeR resolves the issues arising from the use of numbered arguments in Figure 1a. Both _role_ and _board_ stand in the :start relation to their predicates in WISeR because they both describe an initial state. However, in AMR, the former is labeled :ARG1 and the latter :ARG2. Next, both _man_ and _board_ are labeled as :ARG2 in AMR whereas they take distinct thematic roles of :benefactive and :start in WISeR. Similarly, :ARG1 is overloaded in AMR for _role_, _boss_, and _man_, whereas WISeR disambiguates them by assigning the :start relation to _role_ and :theme to _boss_ and _man_.
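For concreteness, the embedded clause _'she dislodges the boss from the board'_ can be written in PENMAN notation under the WISeR scheme. The fragment below is a hedged reconstruction based only on the roles named in this paragraph (:actor, :theme, :start), not the authors' released annotation, and it assumes the `penman` Python package for parsing.

```python
# Hedged reconstruction of a WISeR fragment in PENMAN notation, based on the
# roles discussed above; not the authors' released annotation.
import penman

wiser_fragment = """
(d / dislodge
   :actor (s / she)
   :theme (b / boss)
   :start (b2 / board))
"""

g = penman.decode(wiser_fragment)
for source, role, target in g.triples:
    print(source, role, target)
# d :instance dislodge
# d :actor s   ... etc. Note the absence of a sense ID and numbered arguments.
```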
WISeR adopts non-core roles that exist in AMR, allowing annotation of most numbered arguments using these non-core roles. For example, WISeR incorporates the AMR :source role with numbered arguments corresponding to initial states into the role :start. It also combines the :beneficiary role in AMR into the thematic role :benefactive. This reduces redundancy
\begin{table}
\begin{tabular}{l|r r r r r r|r} & ARG0 & ARG1 & ARG2 & ARG3 & ARG4 & ARG5 & \(\mathbf{\Sigma}\) \\ \hline agent & 3,462 & 30 & 1 & 1 & 0 & 0 & 3,494 \\ theme & 208 & 1,661 & 371 & 13 & 0 & 0 & 2,253 \\ patient & 13 & 1,131 & 20 & 0 & 0 & 0 & 1,164 \\ experience & 187 & 264 & 5 & 2 & 0 & 0 & 458 \\ destination & 0 & 231 & 183 & 21 & 10 & 1 & 446 \\ stimulus & 247 & 172 & 14 & 0 & 0 & 0 & 433 \\ location & 7 & 145 & 142 & 30 & 23 & 1 & 348 \\ source & 17 & 109 & 194 & 7 & 2 & 0 & 329 \\ recipient & 0 & 56 & 251 & 10 & 0 & 0 & 317 \\ instrument & 0 & 2 & 243 & 51 & 0 & 3 & 299 \\ topic & 0 & 192 & 61 & 5 & 0 & 0 & 258 \\ co-patient & 0 & 6 & 151 & 4 & 1 & 0 & 162 \\ beneficiary & 0 & 40 & 47 & 44 & 7 & 0 & 138 \\ attribute & 0 & 9 & 101 & 7 & 2 & 6 & 125 \\ result & 0 & 30 & 81 & 5 & 7 & 0 & 123 \\ co-agent & 0 & 69 & 25 & 0 & 0 & 0 & 94 \\ material & 1 & 25 & 46 & 9 & 0 & 0 & 81 \\ goal & 0 & 8 & 58 & 6 & 1 & 0 & 73 \\ co-theme & 0 & 37 & 27 & 5 & 1 & 0 & 70 \\ product & 0 & 35 & 17 & 4 & 13 & 0 & 69 \\ initial\_location & 0 & 9 & 23 & 8 & 0 & 0 & 40 \\ cause & 30 & 3 & 3 & 0 & 0 & 0 & 36 \\ asset & 0 & 21 & 0 & 11 & 1 & 1 & 34 \\ predicate & 0 & 4 & 18 & 6 & 0 & 0 & 28 \\ pivot & 26 & 1 & 0 & 0 & 0 & 0 & 27 \\ extent & 0 & 0 & 26 & 6 & 0 & 0 & 26 \\ value & 0 & 5 & 13 & 7 & 0 & 0 & 25 \\ trajectory & 0 & 3 & 0 & 0 & 0 & 0 & 3 \\ actor & 1 & 0 & 0 & 0 & 0 & 0 & 1 \\ proposition & 0 & 0 & 0 & 1 & 0 & 0 & 1 \\ \hline \(\mathbf{\Sigma}\) & 4,199 & 4,298 & 2,121 & 257 & 68 & 12 & 10,955 \\ \hline \end{tabular}
\end{table}
Table 5: Distribution of VerbNet thematic roles over numbered arguments in PropBank.
in the annotation scheme since there are no longer two relations fulfilling the same semantic function. WISeR also features a small number of thematic roles based on the PropBank function tags and VerbNet roles. These include the :actor and :theme roles that broadly correspond to :ARG0 and :ARG1 in AMR, respectively. The :actor role encompasses thematic agents as well as certain non-agentive subjects (e.g., _the bus_ in _the bus hit the curb_). Finally, WISeR adopts reified relations from AMR such as have-rel-role and have-degree. The argument structure for each of these reified relations is still semi-arbitrary, and annotators will need to refer to the guidelines at first.
### Converting AMR to WISeR
To test the relative performance of parsing models on both AMR and WISeR, a mapping is defined to convert numbered arguments in the AMR 3.0 corpus into WISeR roles. AMR 3.0 is the largest AMR corpus comprising 59,255 sentences collected from various sources including discussion forums, broadcast conversations, weblogs, newswire, children's stories, and more (Knight et al., 2020). There are 556 predicate senses in AMR 3.0 created on an ad-hoc basis (Section 2.1) without reference to a PropBank frame. Sentences that include these ad-hoc senses are removed from this conversion. Furthermore, sentences featuring rare predicates with highly-specific, non-generalizable argument structures are also removed. For instance, ARG1-9 of publication-91 describe _author_, _title_, _abstract_, _text_, _venue_, _issue_, _pages_, _ID_, and _editors_. In total, there are 6 such predicates.6
Footnote 6: The 6 senses with non-generalizable argument structures are: byline-91, street-address-91, course-91, distribution-range-91, publication-91, statistical-test-91. We hope to accommodate these predicates in future versions of WISeR.
A total of 5,789 predicate senses are collected from PropBank frames that appear at least once in AMR 3.0. The mapping converts every numbered argument for each of these senses
\begin{table}
\begin{tabular}{c c c c} \hline \hline
**ARGx** & **F-Tag** & **VerbNet Role** & **Description** & **WISeR Role** \\ \hline +ARG0 & +PAG & & & Actor \\ +ARG1 & +CAU & & & Actor \\ +ARG1 & +PPT & & & Theme \\ +ARG1 & +PAG & & +(entity\(|\)thing) & Theme \\ & +MNR & +instrument & & Instrument \\ & +GOL & +destination & & End \\ & +GOL & & (end point\(|\)ending point\(|\) & \\ & & & + state\(|\)destination\(|\)attach\(|\) & End \\ & +GOL & + (beneficiary\(|\)recipient\(|\) & & Benefice \\ & & experiencer) & & \\ & & & (benefice\(|\)beneficiary\(|\)recipient\(|\) & \\ & & & listener\(|\)hearer\(|\)perceiver\(|\)to whom\(|\) & Benefice \\ & & & pay\(|\) paid\(\rangle\) & \\ & +LOC & +destination & & End \\ & +LOC & +initial\_location & & Start \\ & +LOC & +source & & Start \\ & +LOC & -destination & & Location \\ & +LOC & & +(end point\(|\)ending point\(|\)state\(|\) & End \\ & & & destination\(|\)attach\(|\)target\(|\) end & \\ & +LOC & & +(start\(|\)source\(|\)from\(|\)starting\(|\) & Start \\ & +DIR & +initial\_location & & Start \\ & +DIR & +source & & Start \\ & +DIR & & +(start\(|\)source\(|\)from\(|\)starting\(|\) & Start \\ & +COM & -recipient \& beneficiary & Accompanier \\ & +COM & +(recipient\(|\)beneficiary) & & Benefice \\ +ARG1 & +VSP & +asset & & Theme \\ & +VSP & & +(price\(|\)money\(|\)rent\(|\) & Asset \\ & & & amount\(|\)gravity\()\) & \\ & +PRP & & +(purpose\(|\)for & Purpose \\ & & & +(why)reason\(|\)source\(|\) & Cause \\ -ARG1 & +CAU & -recipient & cause\(|\)crime\(|\)because\() & \\ & +VSP & +(material\(|\)source) & Start \\ & +VSP & & +(start\(|\)material\(|\)source) & Start \\ & +VSP & & +(aspect\(|\)domain) \& -specific & Domain \\ \hline \hline \end{tabular}
\end{table}
Table 6: WISeR role mappings from ARGx, f-tag, VerbNet role, and description information.
to a WISeR role, totalling 15,120 unique arguments. The conversion rules are presented in Table 6. To define this mapping, several resources are used including the argument number, the function tag, the VerbNet role (if present), and certain keywords in the informal description of the argument written by PropBank annotators. For example, if an instance of an ARG1 is labeled with a PAG function tag in PropBank and has a description containing either "entity" or "thing", then it is mapped to the WISeR role theme (see row 4 of Table 6). Using these mappings, for each AMR graph, all numbered argument edge labels were identified and relabeled with their WISeR role. We also relabeled the AMR non-core roles of source to the WISeR role start, destination to end, beneficiary to benefactive, and medium to manner.
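Mechanically, the relabeling pass can be sketched with the `penman` library as below. The rule table here is a toy stand-in keyed only on the predicate and argument number; the actual conversion in Table 6 additionally consults PropBank function tags, VerbNet roles, and description keywords, and the example mappings shown are assumptions for illustration.

```python
# Sketch of the edge-relabeling pass: replace :ARGn edges of an AMR graph with
# WISeR roles, map non-core roles (:source -> :start, etc.), and drop sense IDs.
# The RULES dict is a toy stand-in for Table 6, not the full mapping.
import penman

RULES = {  # (predicate, AMR role) -> WISeR role (illustrative entries)
    ("tell-01", ":ARG0"): ":actor",
    ("tell-01", ":ARG1"): ":theme",
    ("tell-01", ":ARG2"): ":benefactive",
}
NON_CORE = {":source": ":start", ":destination": ":end",
            ":beneficiary": ":benefactive", ":medium": ":manner"}

def amr_to_wiser(amr_str):
    g = penman.decode(amr_str)
    concept = {v: c for v, r, c in g.triples if r == ":instance"}
    new_triples = []
    for src, role, tgt in g.triples:
        if role.startswith(":ARG"):
            role = RULES.get((concept.get(src, ""), role), role)
        elif role in NON_CORE:
            role = NON_CORE[role]
        if role == ":instance" and "-" in tgt and tgt.rsplit("-", 1)[-1].isdigit():
            tgt = tgt.rsplit("-", 1)[0]          # remove the PropBank sense ID
        new_triples.append((src, role, tgt))
    return penman.encode(penman.Graph(new_triples, top=g.top))

amr = "(t / tell-01 :ARG0 (w / woman) :ARG2 (m / man))"
print(amr_to_wiser(amr))
# tell-01 becomes tell; :ARG0 -> :actor and :ARG2 -> :benefactive.
```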
The AMR-to-WISeR conversion rules result in a total of 12,311 mappings, which leaves 2,809 numbered arguments in AMR 3.0 that are not automatically mapped to WISeR roles. These are manually mapped using the information in their PropBank frames as well as their specific usage in the corpus. Once all numbered arguments are converted into WISeR roles, sense IDs are removed so that the converted corpus becomes "frameless". Table 7 shows the distribution of numbered arguments over the 12 most frequently occurring roles in the converted WISeR corpus. Although mappings are created for 15,120 numbered arguments based on the PropBank frames, only 14,428 of them appear in the AMR 3.0 corpus, as shown in the \(\Sigma\) column of the \(\Sigma\) row in Table 7.
## 4 WISeR Dialogue Corpus
This section presents our new WISeR corpus of 1,000 English language sentences from a variety of dialogue datasets such as EmpatheticDialogues Rashkin et al. (2018), DailyDialog Li et al. (2017), Boston English Centre,7 and PersonaChat Gu
\begin{table}
\begin{tabular}{l|r r r r r r r|r} \hline & ARG & ARG1 & ARG2 & ARG3 & ARG4 & ARG5 & ARG6 & \(\Sigma\) \\ \hline theme & 57 & 5,076 & 256 & 15 & 1 & 0 & 0 & 5,405 \\ actor & 4,945 & 21 & 9 & 0 & 0 & 0 & 0 & 4,975 \\ beneficiative & 1 & 148 & 554 & 90 & 38 & 2 & 0 & 833 \\ end & 0 & 160 & 385 & 51 & 137 & 0 & 0 & 733 \\ start & 14 & 63 & 322 & 190 & 6 & 0 & 0 & 595 \\ instrument & 2 & 7 & 441 & 89 & 4 & 3 & 0 & 546 \\ attribute & 0 & 6 & 144 & 44 & 6 & 2 & 0 & 202 \\ location & 1 & 65 & 83 & 7 & 1 & 3 & 0 & 160 \\ cause & 2 & 16 & 115 & 25 & 1 & 0 & 0 & 159 \\ purpose & 0 & 11 & 122 & 19 & 5 & 1 & 0 & 158 \\ topic & 2 & 14 & 113 & 20 & 3 & 0 & 0 & 152 \\ accompannier & 0 & 53 & 69 & 7 & 3 & 0 & 0 & 132 \\ extent & 0 & 0 & 77 & 8 & 2 & 0 & 0 & 87 \\ comparison & 0 & 1 & 51 & 7 & 3 & 3 & 2 & 67 \\ asset & 0 & 1 & 11 & 53 & 1 & 0 & 0 & 66 \\ domain & 0 & 4 & 23 & 11 & 0 & 0 & 0 & 38 \\ mod & 0 & 2 & 15 & 4 & 1 & 0 & 0 & 22 \\ manner & 0 & 3 & 9 & 5 & 2 & 0 & 0 & 19 \\ direction & 0 & 0 & 7 & 0 & 2 & 5 & 0 & 14 \\ path & 0 & 7 & 4 & 1 & 0 & 0 & 0 & 12 \\ cause-of & 0 & 0 & 6 & 2 & 1 & 0 & 0 & 9 \\ degree & 0 & 0 & 3 & 5 & 1 & 0 & 0 & 9 \\ subevent & 0 & 0 & 3 & 2 & 1 & 0 & 0 & 6 \\ quantity & 0 & 1 & 4 & 0 & 0 & 0 & 0 & 5 \\ value & 0 & 0 & 3 & 2 & 0 & 0 & 0 & 5 \\ time & 0 & 1 & 2 & 1 & 0 & 0 & 0 & 4 \\ part-of & 0 & 1 & 1 & 2 & 0 & 0 & 0 & 4 \\ duration & 0 & 0 & 2 & 0 & 1 & 0 & 0 & 3 \\ theme-of & 0 & 0 & 2 & 0 & 0 & 0 & 0 & 2 \\ range & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 1 \\ poss & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 1 \\ example & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 1 \\ consist-of & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 1 \\ concession & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 1 \\ frequency & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 \\ \hline \(\Sigma\) & 5,024 & 5,661 & 2,840 & 662 & 220 & 19 & 2 & 14,428 \\ \hline \end{tabular}
\end{table}
Table 7: Distribution of PropBank numbered arguments to WISeR thematic roles.
Finally, we employ Mechanical Turk tasks to generate 300 sentences, in which subjects are provided with sentences from PersonaChat and asked to respond with emotionally driven reactions (100) or engaging follow-ups (200).8
Footnote 8: Crowd workers are compensated in-line with standard rates.
500 of these sentences are split into 10 batches with every batch similar in length and complexity. Six batches are split among beginner annotators and are double-annotated in both AMR and WISeR while the other four are divided evenly and double-annotated in either WISeR or AMR by experienced annotators.9 All annotators annotate in both AMR and WISeR. To control for familiarity, half of the annotators first annotate in AMR before switching to WISeR while the other half begin in WISeR before switching to AMR.
Footnote 9: The beginner annotators consist of 6 linguistics undergraduates who are rewarded for participation. The experienced annotators are the creators of the WISeR guidelines.
Beginner annotators are trained for a week and are given additional feedback on common errors, to minimize orthogonal differences in inter-annotator agreement. The remaining 500 sentences are single-annotated by experienced annotators. All annotation is performed using the StreamSide annotation tool (Choi and Williamson, 2021).10
Footnote 10: [https://github.com/emorynlp/StreamSide](https://github.com/emorynlp/StreamSide)
### Inter-Annotator Agreement
Inter-annotator agreement (IAA) is measured via Smatch scores (Cai and Knight, 2013) on doubly-annotated batches. Table 8 shows IAA scores of individual batches and the macro-average scores of six batches by beginner and four batches by experienced annotators. AMR and WISeR have similar IAA among experts; however, IAA for WISeR is noticeably higher among beginners, implying that AMR has a steeper learning curve, although both schemes produce high-quality annotation once annotators reach the expert-level. All double-annotated sentences are adjudicated with correction prior to inclusion in the corpus.
### Annotation Time
Every beginner annotator is assigned 3 batches and asked to report annotation times, allowing us to compare how quickly they become proficient in annotating in either scheme. These results are summarized in Table 9. For Batches 1 and 2 there is practically no difference in time between AMR and WISeR annotation. However, by Batch 3, annotating in WISeR is quicker. This is likely due to familiarization with the WISeR guidelines and experience choosing the appropriate WISeR roles, while the process of identifying the correct frames and numbered arguments in AMR remains the same regardless of experience.
### Corpus Analytics
Table 10 shows the statistics of our dialogue corpus annotated in AMR and WISeR, providing diverse utterances from six sources. DailyDialog, Boston English Center, and EmpatheticDialogues have longer utterances as they are commonly in narrative form. PersonaChat consists of slightly shorter utterances, but its structures are still relatively complex. Utterances in MTurk-Followup are mostly interrogatives and are shorter than ones from the other three. MTurk-Reaction utterances are the shortest since they are mainly emotional reactions (e.g., _that's impressive_). These six sources yield 8.3K+ tokens with 5.4K+ concepts and 5.2K+ relations, allowing researchers to make meaningful parsing evaluation for dialogue.11
Footnote 11: At present, our corpus does not include :wiki information. We intend to include this in a future release.
\begin{table}
\begin{tabular}{c|c c|c|c c} \hline \hline \multirow{2}{*}{**BID**} & \multicolumn{2}{c|}{**Beginner**} & \multirow{2}{*}{**BID**} & \multicolumn{2}{c}{**Experienced**} \\ & **AMR** & **WISeR** & & **AMR** & **WISeR** \\ \hline
01 & 0.72 & 0.74 & 07 & 0.87 & - \\
02 & 0.72 & 0.75 & 08 & 0.84 & - \\
03 & 0.68 & 0.70 & 09 & - & 0.89 \\
04 & 0.69 & 0.79 & 10 & - & 0.85 \\
05 & 0.77 & 0.79 & & & \\
06 & 0.72 & 0.76 & & & \\ \hline \(\mathbf{\mu_{b}}\) & 0.72 & **0.76** & \(\mathbf{\mu_{e}}\) & 0.86 & **0.87** \\ \hline \hline \end{tabular}
\end{table}
Table 8: IAA scores for batches annotated by beginner and expert annotators in AMR and WISeR. BID: batch ID, \(\mu_{b/e}\): macro-average scores of the beginner and experienced groups, respectively.
\begin{table}
\begin{tabular}{c|c c c|c c c} \hline \hline \multirow{2}{*}{**AID**} & \multicolumn{3}{c|}{**AMR**} & \multicolumn{3}{c}{**WISeR**} \\ & **1** & **2** & **3** & **1** & **2** & **3** \\ \hline
\end{table}
Table 9: Minutes per batch taken by the 6 annotators across 3 batches of 50 annotations. Annotator F completed only the first two batches. AID: annotator ID.
In comparison, the Dialogue-AMR corpus (Bonial et al., 2020) consists of 80 hours of commands and requests made by humans to robots in search and navigation tasks. It is mostly limited to these specific speech acts and mainly focuses on spatial words. Our dialogue corpus, on the other hand, contains personal interactions about the speakers' likes and dislikes, relationships, and day-to-day life. Finally, our corpus is publicly available whereas there is no public access currently available for the Dialogue-AMR corpus.
## 5 Experiments
To assess the interpretability of the WISeR scheme, a parser is trained and tested on trimmed AMR 3.0 (AMR\({}_{t}\))12 and the WISeR corpus converted from AMR\({}_{t}\) (WISeR\({}_{c}\)). The AMR\({}_{t}\) parsing model is also tested on our dialogue corpus annotated in AMR (ADC). Finally, the WISeR\({}_{c}\) model is evaluated on the ADC converted into WISeR (WDC\({}_{c}\)), as well as our manually annotated WISeR dialogue corpus (WDC\({}_{m}\)). The key differences between WDC\({}_{c}\) and WDC\({}_{m}\) are discussed in Section 5.5.
Footnote 12: The AMR 3.0 corpus is trimmed as described in Section 3.2.
### Datasets
Table 11 shows the number of sentences in each split for the datasets used in our experiments. ADC and WDC\({}_{c|m}\) are annotations of the same dialogue corpus and are used only for evaluation.
### Seq-to-Seq Parser
We adopt a seq-to-seq parser, SPRING (Bevilacqua et al., 2021), which holds the highest parsing accuracy on AMR 3.0 at the time of writing. The hyper-parameter settings for the seq-to-seq parser (Section 5.2) are described in Table 12.
SPRING linearizes every graph into a sequence of tokens in the depth-first search order and trains the sequence using a seq-to-seq model, BART (Lewis et al., 2020). In this sequence, special tokens are used to indicate variables and parentheses in the PENMAN notation. Given a sentence and its linearized graph, BART is finetuned to learn the transduction from the former to the latter. Once a linearized graph is generated, parenthesis parity is restored and any token that is not a possible continuation given the previous token is removed. In our experiments, we use the BART large model with greedy decoding.
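For illustration, a stripped-down version of such a depth-first linearization can be written as follows; the special-token format and the dictionary encoding of the graph are simplifying assumptions and do not reproduce SPRING's actual tokenizer or post-processing.

```python
# Simplified depth-first linearization of a PENMAN-style graph into a token
# sequence, in the spirit of SPRING; the special-token names and the dict
# encoding of the graph are assumptions, not SPRING's actual implementation.

def linearize(var, graph, visited=None):
    """Return a flat token list for the subgraph rooted at `var`."""
    visited = set() if visited is None else visited
    if var in visited:                 # re-entrant node: emit only its pointer
        return [f"<R{var}>"]
    visited.add(var)
    concept, edges = graph[var]
    tokens = ["(", f"<R{var}>", concept]
    for role, child in edges:
        tokens += [role] + linearize(child, graph, visited)
    tokens.append(")")
    return tokens


# (w / want  :actor (b / boy)  :theme (g / go  :actor b))
graph = {
    "w": ("want", [(":actor", "b"), (":theme", "g")]),
    "b": ("boy", []),
    "g": ("go",  [(":actor", "b")]),
}
print(" ".join(linearize("w", graph)))
# ( <Rw> want :actor ( <Rb> boy ) :theme ( <Rg> go :actor <Rb> ) )
```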
### Parsing Results
Table 13 shows the performance of the seq-to-seq parser on the five datasets, with Smatch scores (Cai and Knight, 2013), as well as more fine-grained metrics (Damonte et al., 2017). Comparing the
\begin{table}
\begin{tabular}{l|r r|r r|r r|r r|r r|r r} \hline \hline \multicolumn{1}{c|}{\multirow{2}{*}{**Source**}} & \multicolumn{1}{c}{\multirow{2}{*}{**Sent.**}} & \multicolumn{1}{c}{\multirow{2}{*}{**Tokens**}} & \multicolumn{2}{c}{**Concepts**} & \multicolumn{2}{c}{**Relations**} & \multicolumn{2}{c}{**Reent.**} & \multicolumn{2}{c}{**Negations**} & \multicolumn{2}{c}{**NE**} \\ & & & **A** & **W** & **A** & **W** & **A** & **W** & **A** & **W** & **A** & **W** \\ \hline DailyDialog & 200 & 2,177 & 1,297 & 1,298 & 1,315 & 1,318 & 211 & 229 & 27 & 26 & 21 & 22 \\ Boston English Center & 200 & 1,989 & 1,182 & 1,196 & 1,167 & 1,179 & 217 & 219 & 33 & 33 & 12 & 13 \\ PersonaChat & 200 & 1,431 & 962 & 961 & 921 & 911 & 147 & 153 & 18 & 17 & 32 & 30 \\ EmpatheticDialogues & 100 & 1,090 & 692 & 699 & 712 & 710 & 131 & 128 & 20 & 20 & 1 & 1 \\ MTurk-Followup & 200 & 1,368 & 1,037 & 1,040 & 935 & 928 & 134 & 137 & 7 & 7 & 10 & 8 \\ MTurk-Reaction & 100 & 298 & 260 & 256 & 191 & 180 & 14 & 15 & 7 & 6 & 0 & 0 \\ \hline \(\mathbf{\Sigma}\) & 1,000 & 8,353 & 5,433 & 5,447 & 5,240 & 5,226 & 854 & 881 & 112 & 109 & 76 & 74 \\ \hline \hline \end{tabular}
\end{table}
Table 10: Statistics of our dialogue corpus (in counts) by different categories annotated in AMR (A) and WISeR (W). Sent: sentences, Reent: Reentrancies, NE: named entities.
\begin{table}
\begin{tabular}{l r} \hline \hline
**BART** & \\ \hline version & large \\ \# parameters & 406M \\ layers & 24 \\ hidden size & 1024 \\ heads & 16 \\ \hline
**Adam Optimizer** & \\ \hline learning rate & 5e-5 \\ warm up steps & 0 \\ weight decay & 0.004 \\ batch \#tokens & 5000 \\ epochs & 30 \\ \hline \hline \end{tabular}
\end{table}
Table 12: Hyper-parameters for the seq-to-seq parser.
results on AMR\({}_{t}\) and WISeR\({}_{c}\), the WISeR parser outperforms the AMR parser on all categories, showing \(\approx\)1% higher Smatch scores, which implies that WISeR is easier to learn, enabling parsers to train more robust models. The _No WSD_ (no word sense disambiguation) scores for WISeR are equivalent to the Smatch scores because predicates in WISeR are not distinguished by senses. Unsurprisingly, the WISeR parser shows higher scores on this category, confirming that WSD introduces an extra burden on the AMR parser. For _Concepts_ and _Negations_, the WISeR parser also shows significant improvement over the AMR parser; \(\approx\)3% and 6%, respectively.
The _SRL_ (semantic role labeling) metric is only defined for numbered arguments and so is not applicable to WISeR. To assess core argument labeling in both schemes, we propose a new metric called _xSRL_ (extended SRL). The xSRL metric compares the WISeR roles in Table 7 against :ARG0-6 plus a few non-core roles in AMR, which correspond to the WISeR roles in Table 7.13 The WISeR parser again outperforms the AMR parser in this category. Comparing the results on the ADC and WDC\({}_{c}\), which are out-of-domain datasets, we find the same trend. The performance gain here is even larger as the WISeR parser produces a Smatch score higher by \(\approx\)2%. This indicates that the WISeR parser handles dialogue better. Surprisingly, scores on the dialogue corpus are higher for _xSRL_ and _Reentrancies_ for both models. This may be due to smaller graphs and possibly simpler argument structures in the dialogue corpus.
Footnote 13: The non-core roles are: :accompanier, :beneficiary, :destination, :instrument, :location, :purpose, :source, and :topic. The AMR role :cause is not used in the AMR 3.0 corpus.
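As a rough sketch of how such a sub-score can be computed, the snippet below restricts gold and predicted relation triples to a core-argument label set and scores the overlap; the real metric is computed on Smatch-aligned graphs, so treating triples as plain sets here is a simplifying assumption.

```python
# Sketch of an xSRL-style sub-score: keep only relation triples whose label is
# in the core-argument set and compute F1 on them.  Alignment between gold and
# predicted variables is assumed to be given, which is a simplification.

AMR_CORE = {f":ARG{i}" for i in range(7)} | {
    ":accompanier", ":beneficiary", ":destination", ":instrument",
    ":location", ":purpose", ":source", ":topic"}

def xsrl_f1(gold_triples, pred_triples, core_labels):
    gold = {t for t in gold_triples if t[1] in core_labels}
    pred = {t for t in pred_triples if t[1] in core_labels}
    if not gold or not pred:
        return 0.0
    overlap = len(gold & pred)
    if overlap == 0:
        return 0.0
    p, r = overlap / len(pred), overlap / len(gold)
    return 2 * p * r / (p + r)

gold = {("want", ":ARG0", "boy"), ("want", ":ARG1", "go"), ("go", ":time", "now")}
pred = {("want", ":ARG0", "boy"), ("want", ":ARG2", "go"), ("go", ":time", "now")}
print(round(xsrl_f1(gold, pred, AMR_CORE), 2))   # 0.5
```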
Comparing the results of WDC\({}_{c}\) and WDC\({}_{m}\), we expect that WDC\({}_{c}\) should score better than WDC\({}_{m}\) due to discrepancies between converted and manual annotation. However, the unlabeled scores are slightly higher on WDC\({}_{m}\) for both parsers, implying that the WISeR models still find the correct representations for out-of-domain data. The named entity results of the seq-to-seq model are 6.5% higher on WDC\({}_{m}\) than WDC\({}_{c}\) which is encouraging for areas such as Conversational AI that rely heavily on named entity recognition.
### Analysis
We hypothesize that the seq-to-seq parser benefits from the more natural relation names in WISeR that are learnt during the pre-training of BART. In addition, the WISeR parser has the freedom to coin novel concepts for predicate senses on which it lacks sufficient training. For example, the verb _premeditate_ is absent from the training data, but present in the test set of AMR\({}_{t}\) and WISeR\({}_{c}\). Out of 3 runs, the seq-to-seq AMR parser predicts the correct concept premeditate-01 only once, predicting the concept intend-01 once and deliberate-01 once. In comparison, the WISeR parser uses the novel concept premeditate every time. The set of frames that occur only in the test set is rather small, so to make a fair comparison when evaluating the performance on the AMR\({}_{t}\) corpus, we restrict the comparison to the subset of novel frames that do not correspond to concepts in the WISeR\({}_{c}\) training data after conversion.14 When comparing on the dialogue corpus, we restrict our comparison to those concepts that are annotated identically in WDC\({}_{m}\) and WDC\({}_{c}\), and the concepts in AMR that feed into WDC\({}_{c}\). We thus compare performance only on words that are translated into a novel predicate in every dataset. The recall of the seq-to-seq parser across the evaluation sets is shown in Table 14. We see that WISeR clearly outperforms AMR in recall of novel predicates.
Footnote 14: E.g., move-04 is absent in the AMR training set but present in the test set. It is excluded from the comparison as it is converted to move which occurs in the WISeR training.
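The recall reported in Table 14 can be sketched as follows, where concepts are compared as plain per-graph sets; the exact concept extraction and the restriction to comparable subsets described above are omitted for brevity.

```python
# Sketch of the novel-predicate recall in Table 14: of the gold concepts that
# never occur in the training data, what fraction does the parser recover?
# Representing each graph as a set of concept strings is a simplification.

def novel_concept_recall(gold_graphs, pred_graphs, train_concepts):
    hit = total = 0
    for gold, pred in zip(gold_graphs, pred_graphs):
        for concept in gold:
            if concept not in train_concepts:      # novel w.r.t. training set
                total += 1
                hit += concept in pred
    return hit / total if total else 0.0

train = {"intend-01", "deliberate-01", "crime"}
gold  = [{"premeditate-01", "crime"}, {"premeditate-01"}]
pred  = [{"intend-01", "crime"},      {"premeditate-01"}]
print(novel_concept_recall(gold, pred, train))     # 0.5
```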
Finally, we tested the seq-to-seq parser on the WSD
\begin{table}
\begin{tabular}{l|c c c c c c c c} \hline \hline
**Dataset** & **Smatch** & **Unlabeled** & **No WSD** & **Concepts** & **xSRL** & **Reentrancies** & **Negations** & **Named Entity** \\ \hline AMR\({}_{t}\) & 83.5 \(\pm\) 0.1 & 85.9 \(\pm\) 0.0 & 84.0 \(\pm\) 0.1 & 90.3 \(\pm\) 0.0 & 75.9 \(\pm\) 0.2 & 71.4 \(\pm\) 0.3 & 73.0 \(\pm\) 1.0 & 88.7 \(\pm\) 0.5 \\ WISeR\({}_{c}\) & **84.4 \(\pm\) 0.1** & **86.7 \(\pm\) 0.1** & **84.4 \(\pm\) 0.1** & **93.0 \(\pm\) 0.1** & **76.2 \(\pm\) 0.4** & **71.9 \(\pm\) 0.2** & **78.9 \(\pm\) 0.2** & **88.7 \(\pm\) 0.4** \\ \hline ADC & 80.3 \(\pm\) 0.2 & 83.8 \(\pm\) 0.1 & 81.4 \(\pm\) 0.2 & 86.8 \(\pm\) 0.0 & 78.8 \(\pm\) 0.3 & 71.8 \(\pm\) 0.8 & 70.3 \(\pm\) 0.5 & 65.5 \(\pm\) 1.4 \\ WDC\({}_{c}\) & **82.3 \(\pm\) 0.2** & 85.7 \(\pm\) 0.2 & **82.3 \(\pm\) 0.2** & 90.8 \(\pm\) 0.1 & **79.2 \(\pm\) 0.3** & **72.8 \(\pm\) 0.3** & 76.2 \(\pm\) 0.9 & 68.2 \(\pm\) 1.8 \\ WDC\({}_{m}\) & 81.5 \(\pm\) 0.2 & **85.9 \(\pm\) 0.2** & 81.5 \(\pm\) 0.2 & **91.1 \(\pm\) 0.1** & 75.9 \(\pm\) 0.2 & 70.6 \(\pm\) 0.4 & **78.2 \(\pm\) 0.1** & **74.9 \(\pm\) 1.0** \\ \hline \hline \end{tabular}
\end{table}
Table 13: Parsing performance achieved by the seq-to-seq model on the five evaluation sets over three runs.
\begin{table}
\begin{tabular}{l c|l c} \hline \hline
**Dataset** & **Recall** & **Dataset** & **Recall** \\ \hline AMR\({}_{t}\) & 0.57 & ADC & 0.28 \\ WISeR\({}_{c}\) & 0.80 & WDC\({}_{c}\) & 0.42 \\ & & WDC\({}_{m}\) & 0.60 \\ \hline \hline \end{tabular}
\end{table}
Table 14: Recall of the seq-to-seq parser on novel predicate concepts in the five evaluation sets.
and SRL tasks independently. The bottom left cell in Table 15 shows the results for the WISeR parser, and the top right those for the AMR parser. The top left is a parser trained with PropBank senses and automatically converted WISeR roles, while the bottom right uses numbered ARGs without predicate senses.15
Footnote 15: Since the choice of numbered argument depends on predicate sense IDs, WSD and SRL tasks are not sensibly separated with numbered arguments.
This shows a \(\approx\)0.3% increase when using WISeR roles over numbered arguments even with predicate senses, while removing predicate senses accounts for a larger \(\approx\)0.7% increase.
### Challenges
A potential challenge in these experiments is that the converted WISeR corpus, WISeR\({}_{c}\), is arguably only pseudo-WISeR. For instance, many predicate concepts corresponding to adjectives (e.g., _great_) do not have PropBank frames. Consequently, the sentence _that is great_ is annotated using the role :domain in AMR but :theme in WISeR. Such inconsistency introduces noise to parsing models that leads to suboptimal performance. We quantify the difference between the converted WISeR dialogue corpus (WDC\({}_{c}\)) and the manually annotated corpus (WDC\({}_{m}\)), by calculating their Smatch similarity, which returns a score of 0.88. Although relatively high, this does indicate a training-evaluation discrepancy. In future releases, we plan to enhance the automatic conversion to reduce this gap further.
### Discussions
Since WISeR uses slightly fewer relations than AMR, we should perhaps expect the SRL classification task to be strictly simpler for WISeR. However, this is not necessarily the case. Table 7 shows that the distribution of numbered arguments after :ARG0-2 drops off rapidly (only 6% of numbered arguments in AMR 3.0 are not :ARG0-2), whereas the distribution of WISeR roles shows a significantly shallower decline (22% of core arguments are not covered by the 3 most frequent roles). This larger number of reasonable candidate roles for a core argument in WISeR compared to AMR would ordinarily make classification harder. A potential explanation for why the WISeR parser nonetheless outperforms the AMR parser is that many WISeR roles are associated with surface-level syntax. For example, a :topic argument is often introduced with the preposition _about_ or _on_, an :end is typically introduced by _to_, etc. These cues are obscured when a single numbered argument encodes more than one thematic role, or when one thematic role is encoded by more than one numbered argument. In WISeR, there is a one-to-one correspondence between edge labels and their semantic function. As such, syntactic cues indicating the appropriate WISeR role can be identified, making classification easier. Moreover, assigning consistent, more meaningful labels can help with data sparsity, while also capitalizing on the understanding that pre-trained models have of the language.
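The two percentages quoted here follow directly from the column and row totals of Table 7:

```python
# Quick check of the two figures quoted above, using the totals from Table 7.

total = 14_428
arg0_2 = 5_024 + 5_661 + 2_840          # column sums for ARG0, ARG1, ARG2
top3_wiser = 5_405 + 4_975 + 833        # theme + actor + beneficiary row sums

print(f"numbered args outside ARG0-2:   {100 * (1 - arg0_2 / total):.0f} %")      # ~6 %
print(f"core args outside top-3 roles:  {100 * (1 - top3_wiser / total):.0f} %")  # ~22 %
```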
Finally, since automatically converted WISeR roles can be used with PropBank predicate senses, researchers can still make use of PropBank resources if they are required for downstream inference tasks, while nonetheless employing more transparent semantic role labels during parsing, albeit with more modest improvements.
## 6 Conclusion
To rectify a number of problems for AMR, this paper introduces a novel annotation scheme, WISeR, which allows for the spontaneous creation of predicates to extend to new domains and languages. Our findings show that WISeR supports improved parsing performance as well as annotation of equal quality in less time. We conclude that the removal of numbered arguments and sense disambiguation in favor of an open class of predicates and a modest inventory of thematic roles makes WISeR easier to learn for annotators and parsers alike.
|
2309.05169 | SYSPART: Automated Temporal System Call Filtering for Binaries | Restricting the system calls available to applications reduces the attack
surface of the kernel and limits the functionality available to compromised
applications. Recent approaches automatically identify the system calls
required by programs to block unneeded ones. For servers, they even consider
different phases of execution to tighten restrictions after initialization
completes. However, they require access to the source code for applications and
libraries, depend on users identifying when the server transitions from
initialization to serving clients, or do not account for dynamically-loaded
libraries. This paper introduces SYSPART, an automatic system-call filtering
system designed for binary-only server programs that addresses the above
limitations. Using a novel algorithm that combines static and dynamic analysis,
SYSPART identifies the serving phases of all working threads of a server.
Static analysis is used to compute the system calls required during the various
serving phases in a sound manner, and dynamic observations are only used to
complement static resolution of dynamically-loaded libraries when necessary. We
evaluated SYSPART using six popular servers on x86-64 Linux to demonstrate its
effectiveness in automatically identifying serving phases, generating accurate
system-call filters, and mitigating attacks. Our results show that SYSPART
outperforms prior binary-only approaches and performs comparably to source-code
approaches. | Vidya Lakshmi Rajagopalan, Konstantinos Kleftogiorgos, Enes Göktaş, Jun Xu, Georgios Portokalidis | 2023-09-10T23:57:07Z | http://arxiv.org/abs/2309.05169v2 | # SysPart: Automated Temporal System Call Filtering for Binaries
###### Abstract.
Restricting the system calls available to applications reduces the attack surface of the kernel and limits the functionality available to compromised applications. Recent approaches automatically identify the system calls required by programs to block unneeded ones. For servers, they even consider different phases of execution to tighten restrictions after initialization completes. However, they require access to the source code for applications and libraries, depend on users identifying when the server transitions from initialization to serving clients, or do not account for dynamically-loaded libraries. This paper introduces SysPart, an automatic system-call filtering system designed for binary-only server programs that addresses the above limitations. Using a novel algorithm that combines static and dynamic analysis, SysPart identifies the serving phases of all working threads of a server. Static analysis is used to compute the system calls required during the various serving phases in a sound manner, and dynamic observations are only used to complement static resolution of dynamically-loaded libraries when necessary. We evaluated SysPart using six popular servers on x86-64 Linux to demonstrate its effectiveness in automatically identifying serving phases, generating accurate system-call filters, and mitigating attacks. Our results show that SysPart outperforms prior binary-only approaches and performs comparably to source-code approaches.
System-call filtering, temporal, binary analysis, attack-surface reduction, exploit mitigation
[MISSING_PAGE_POST]
|
2309.14064 | Helium-Electrospray: an improved sample delivery system for
single-particle imaging with X-ray lasers | Imaging the structure and observing the dynamics of isolated proteins using
single-particle X-ray diffractive imaging (SPI) is one of the potential
applications of X-ray free-electron lasers (XFELs). Currently, SPI experiments
on isolated proteins are limited by three factors: low signal strength, limited
data and high background from gas scattering. The last two factors are largely
due to the shortcomings of the aerosol sample delivery methods in use. Here we
present our modified electrospray ionization (ESI) source, which we dubbed
Helium-ESI (He-ESI). With it, we increased particle delivery into the
interaction region by a factor of 10, for 26 nm-sized biological particles, and
decreased the gas load in the interaction chamber corresponding to an 80%
reduction in gas scattering when compared to the original ESI. These
improvements will lead to a significant increase in the quality and quantity of
SPI diffraction patterns in future experiments using He-ESI, resulting in
higher-resolution structures. | Tej Varma Yenupuri, Safi Rafie-Zinedine, Lena Worbs, Michael Heymann, Joachim Schulz, Johan Bielecki, Filipe R. N. C. Maia | 2023-09-25T11:56:07Z | http://arxiv.org/abs/2309.14064v1 | Helium-Electrospray: an improved sample delivery system for single-particle imaging with X-ray lasers
###### Abstract
Imaging the structure and observing the dynamics of isolated proteins using single-particle X-ray diffractive imaging (SPI) is one of the potential applications of X-ray free-electron lasers (XFELs). Currently, SPI experiments on isolated proteins are limited by three factors: low signal strength, limited data and high background from gas scattering. The last two factors are largely due to the shortcomings of the aerosol sample delivery methods in use. Here we present our modified electrospray ionization (ESI) source, which we dubbed Helium-ESI (He-ESI). With it, we increased particle delivery into the interaction region by a factor of 10, for 26 nm-sized biological particles, and decreased the gas load in the interaction chamber corresponding to an 80% reduction in gas scattering when compared to the original ESI. These improvements will lead to a significant increase in the quality and quantity of SPI diffraction patterns in future experiments using He-ESI, resulting in higher-resolution structures.
\({}^{\ddagger}\)These authors contributed equally to this work.
\({}^{*}\) Correspondence e-mail: [email protected], [email protected]
## 1 Introduction
Current generation X-ray free electron lasers (XFELs) with their ability to produce highly intense X-ray pulses with durations of only a few tens of femtoseconds offer a powerful tool to image a wide variety of aerosolized particles at room temperature. Such high intensities on femtosecond time scales suggested that useful data could be collected from weakly scattering single proteins or viruses by outrunning radiation damage using the idea of "diffraction before destruction" [1]. Taking full advantage of this new capability of coherent diffractive X-ray imaging using single particles in the gas phase promises to not only deliver
high-resolution structures but to extend the study towards ultrafast dynamics [2, 3], opening the door for pump-probe experiments on femto- and picosecond time scales. So far, single-particle imaging (SPI) experiments have been successfully performed by injecting the aerosolized sample into the X-ray interaction region using the "Uppsala"-injector [4], on large biological samples (70-2000 nm) aerosolized with gas dynamic virtual nozzles (GDVNs) for viruses [5, 6, 7, 8, 9, 10], cell organelles [11] and whole cells [12], and most recently on gold nanoparticles [13] using electrospray ionization (ESI).
Gas phase injection [14, 15] via an aerodynamic lens stack (ALS) has gained substantial attention for its high scattering contrast, low background scattering compared to liquid sample delivery, capacity for high-rate data collection and wide sample compatibility. The typical experimental SPI layout is shown in Figure 1. Particularly, ESI as a sample aerosolization method has proven effective due to its ability to produce small droplets, resulting in virtually contaminant-free sample delivery [16]. But even with the large pulse energies available at modern XFEL facilities, the diffraction patterns from small particles, such as single proteins or virus particles with sizes smaller than 50 nm have a very low signal-to-noise ratio preventing structure determination, despite computational efforts to reduce the noise [17]. A recent experiment on the GroEL complex from _E. coli_ delivered using ESI highlights the challenge for small bioparticles: a high amount of background scattering from the N\({}_{2}\) and CO\({}_{2}\) in the interaction region [18]. The large gas background hampers the identification of signal from the sample of interest.
To obtain a higher-resolution structure with current sample injection and XFEL facility parameters, a reduction of N\({}_{2}\) and CO\({}_{2}\) gas density in the interaction region and a higher particle throughput, i.e., higher hit rates to collect several hundred thousand hits [19] and signal averaging over many identical particles are needed. Collecting this amount of data on identical particles makes sample delivery techniques one of the crucial factors in achieving high-resolution atomic-scale images at high acquisition rates [11, 20].
In this paper, we address these sample delivery challenges and present a modified ESI source, which we refer to as the Helium electrospray (He-ESI). The main change is the addition of a 3D-printed nozzle, designed to reduce the N\({}_{2}\) and CO\({}_{2}\) consumption compared to the earlier setup (original ESI) [16] while still maintaining stable sample delivery conditions. Helium (He) is introduced around the 3D-printed nozzle and serves as the main gas for particle transport. Our modifications lead to a lower N\({}_{2}\) and CO\({}_{2}\) use and a decrease of heavy gasses in the interaction region by 83%. We also demonstrate the successful use of the He-ESI with the "Uppsala"-injector and compare the performance with the original ESI in the injector setup. We observe an increase in injection yield which can be as high as a factor of 10 for the small biological particles.
Our He-ESI system shows great potential for SPI of small particles. The reduction in heavy-gas background effectively increases the signal-to-noise ratio. Furthermore, the use of He as the transport gas improves particle focusing in the "Uppsala"-injector, and enhances the throughput of particles into the interaction region. The ESI-setup developed here makes it possible to acquire millions of diffraction patterns with sufficiently low background, an important milestone on the way to high-resolution time-resolved 3D structures of isolated proteins and viruses using SPI.
## 2 Methods and Results
The experimental setup in this study consists of a modified version of the ESI introduced in [21], the "Uppsala"-injector [4] with a two-skimmer box setup, an optical scattering setup to detect the nanoparticles in the main chamber [22] and a residual gas analyzer (RGA) (Extorr Inc., XT100M) to analyze the gas
composition inside the chamber.
### Modified ESI source: He-ESI design
The modified ESI setup is shown in Figure 2. It includes a 3D printed nozzle (Uppsala nozzle) measuring 4.45 x 1.56 x 1.56 mm\({}^{3}\) printed via two-photon polymerization in a liquid resin (UpPhoto) within 35 minutes using the NanoOne 3D printing system (UpNano). After printing, the nozzle was glued to a stainless-steel tube with an inner diameter (ID) of 1.15 mm using a standard two-component epoxy glue (Loctite power epoxy) and connected to the N\({}_{2}\) and CO\({}_{2}\) gas mixture line. To reduce the background scattering in SPI experiments, we replaced most of the N\({}_{2}\) and CO\({}_{2}\) used for particle transport with He. The gas inlet previously used for the N\({}_{2}\) and CO\({}_{2}\) gas mixture was used as the He inlet, as shown in Figure 2. The Uppsala nozzle is designed to hold the silica fused capillary of 360 um outer diameter (OD) in the center of the nozzle as shown in Figure 2. We reduced the consumption of N\({}_{2}\) and CO\({}_{2}\) by placing a 3D-printed structure around the capillary generating an N\({}_{2}\) and CO\({}_{2}\) atmosphere between the capillary and the nozzle and filling the rest of the ESI head with He.
An alternative 3D-printed nozzle design, referred to as the EuXFEL nozzle, follows the same principle of gas replacement but does not require the use of a fused silica capillary inside. Instead, it is entirely
Figure 1: The schematic diagram details a typical experimental setup of an electrospray-based aerosol injector used for single particle imaging experiments at free electron lasers.This setup includes the ESI process illustrated at the top, which aerosolizes the sample. Subsequently, the aerosol beam is transported through the skimmer stages and the aerodynamic lens and eventually reaches the interaction chamber. Here, it intersects with the XFEL beam. The XFEL pulses scatter off the particles within the aerosol beam, generating diffraction patterns captured on the detector. (i) The Taylor cone, during standard operation of He-ESI. (ii) The interaction between a particle beam and an X-ray beam. (iii) Scattering pattern produced by a particle.
printed using the Nanoscribe Photonic Professional GT with IP-S photoresist. This design incorporated two capillary inlets: one with an ID of \(40\,\mathrm{\SIUnitSymbolMicro m}\) for the sample and another with an ID of \(180\,\mathrm{\SIUnitSymbolMicro m}\) for protective gases (\(\mathrm{N_{2}}\) and \(\mathrm{CO_{2}}\)). The dimensions of the EuXFEL nozzle are \(1.4\times 0.5\times 1.2\,\mathrm{mm}\). Details on the EuXFEL nozzle can be found in the supplementary materials. Furthermore, the CAD models for both the Uppsala and EuXFEL nozzles are freely accessible and can be downloaded from our GitHub repository at ([https://github.com/ytejvarma/Helium-nozzle](https://github.com/ytejvarma/Helium-nozzle)) and ([https://github.com/safirafie/ESDesign](https://github.com/safirafie/ESDesign)) respectively.
### Simulations of Gas Flow Around the Taylor Cone
To protect the Taylor cone from Corona discharge and to minimize heavier gases, it is important to understand the behaviour of gases surrounding the Taylor cone within the He-ESI system. Therefore, we performed simulations using COMSOL Multiphysics, a finite element analysis software [23]. We used the laminar flow interface and the transport of concentrated species interface and coupled these interfaces together through a multi-physics interface. The laminar flow interface allowed us to model the gas flow dynamics by computing the velocity and pressure fields of the gases. Concurrently, the transport of concentrated species interface was used to study gaseous mixtures by solving for the mass fractions of all participating species.
To monitor the risk of corona discharge around the Taylor cone, we calculated the fractional concentration of each gas, denoted as \(x_{i}\) and defined as:
\[x_{i}=\frac{c_{i}}{c_{i}+c_{j}+c_{k}}\]
where \(c_{i}\) symbolizes the molar concentration of the gas for which we are determining its fractional concentration, \(x_{i}\). The \(c_{j}\) and \(c_{k}\) denote the molar concentrations of the remaining two gases in the mixture.
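As a back-of-the-envelope complement to the simulations (not a substitute for the COMSOL fields behind Figure 3), the bulk inlet fractions implied by the volumetric flow rates can be estimated under the assumption of ideal-gas mixing at equal temperature and pressure:

```python
# Back-of-envelope inlet gas fractions for the He-ESI from volumetric flow
# rates; ideal-gas mixing at equal temperature and pressure is assumed.  The
# x_i maps in Figure 3 come from the COMSOL fields, not from this estimate.

def inlet_fractions(flows_ml_min):
    total = sum(flows_ml_min.values())
    return {gas: q / total for gas, q in flows_ml_min.items()}

he_esi = {"He": 1200.0, "N2": 20.0, "CO2": 15.0}   # mL/min at the ES head
for gas, x in inlet_fractions(he_esi).items():
    print(f"{gas}: {100 * x:.1f} %")               # He ~97.2 %, N2 ~1.6 %, CO2 ~1.2 %
```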
We compared the gas distribution between the original ESI and the He-ESI system, as displayed in Figures 2(a), 2(b), and 2(c). In the original ESI system, a mixture of two gases was used. \(\mathrm{N_{2}}\) was utilized as the carrier gas at a flow rate of \(1\,\mathrm{L/min}\), and \(\mathrm{CO_{2}}\) was employed as a protective gas, to shield the Taylor cone from corona discharge, with a flow rate of \(150\,\mathrm{mL/min}\). The He-ESI system instead uses a mixture of three
Figure 2: Schematic of the He-ESI. The modification of the ESI to operate with He includes a Swagelok T-piece, a stainless steel tube and the 3D-printed Uppsala nozzle. Liquid sample flows through the capillary and a stable Taylor cone is formed by applying a high voltage. In between the capillary and the inside of the nozzle a \(\mathrm{N_{2}}\) and \(\mathrm{CO_{2}}\) environment is formed with a combined flow rate of around \(50\,\mathrm{mL/min}\). Helium is introduced through the original gas inlet, surrounding the nozzle within the electrospray head and facilitating the flow of particles. The highly charged droplets pass through a Po-210 neutralizer. Then, the neutralized aerosol is exiting the electrospray head.
gases. He serves the role of the carrier gas with a flow rate of 1.2 L/min, while N\({}_{2}\) and CO\({}_{2}\), with flow rates of 20 mL/min and 15 mL/min respectively, functioned as protective gases. The simulation results illustrate how in the He-ESI system, the Taylor cone is effectively enveloped by CO\({}_{2}\), preventing corona discharge.
To study the influence of various CO\({}_{2}\) flow rates on the gas distribution around the Taylor cone in the He-ESI setup, we performed simulations at CO\({}_{2}\) flow rates of 10, 15, 30, and 50 mL/min, as depicted in Figures 3d and 3e. The He and N\({}_{2}\) flow rates were kept constant at 1.2 L/min and 20 mL/min, respectively. These simulations help estimate the minimal fractional concentration of gases necessary to sustain a stable Taylor cone, thus minimizing the potential for corona discharge. This provides important insights into the interactions and flow dynamics of the gases around the Taylor cone, which can further help us optimize the design and operating conditions of the electrospray system.
### Operating Conditions for the He-ESI
The He-ESI with the Uppsala nozzle is stable under the following conditions: the tip of the angled capillary (conically ground at an angle of 30\({}^{\mathrm{o}}\)) must be kept at the edge or slightly inside the nozzle, which is placed at a distance of 1.5 - 1.8 mm away from the grounded orifice of 0.5 mm diameter. The liquid sample flow rates must be 100 - 200 nL/min, the He flow rate in the ESI head should be 1.2-1.4 L/min, the N\({}_{2}\) flow rate 0.03-0.035 L/min, the CO\({}_{2}\) flow rate 0.015-0.02 L/min and the voltage between 2.2 - 2.6 kV. The stability of the He-ESI was monitored by measuring the current (typically around 200 - 300 nA) and visually with a camera pointing at the Taylor cone. Under these conditions, the Taylor cone ejects charged droplets. These charged droplets pass through a Po-210 neutralizer, which neutralizes the charge on the particles, and the neutralized particles then pass through conductive tubing to the inlet of the experimental setup. Detailed
Figure 3: The fractional concentration of various gases in the vicinity of the Taylor cone, highlighting the effectiveness of gas shielding against corona discharge. a) The fractional concentration of CO\({}_{2}\) in the original ESI system. b) and c) The fractional concentration of CO\({}_{2}\) and He respectively in the He-ESI system. d) and e) The fractional concentration of CO\({}_{2}\) and He respectively in the He-ESI system under varying CO\({}_{2}\) flow rates while maintaining a constant He flow rate of 1.2 L/min and N\({}_{2}\) flow rate of 20 mL/min.
information regarding the operating conditions using the EuXFEL nozzle can be found in the supplementary materials.
### Injector Setup: Operation using He-ESI
The He-ESI is coupled to the injector setup [16]. An extra helium inlet was added to the injector setup before the first skimmer stage to avoid the suction of the gas in the aerosolization and neutralization chamber of the ES due to the pumping in the skimmer stages and to protect the Taylor cone. Typically, 2.5-3 L/min He is added at the aerosol inlet. In total, 4.2 L/min He is required in the setup. The excess gas is skimmed away using scroll pumps at the two nozzle-skimmer stages. The particles enter the aerodynamic lens with a pressure of 1-1.2 mbar and exit the lens through a 1.5 mm aperture into the interaction region in the experimental chamber, which is kept at \(10^{-5}\) mbar.
### Gas Reduction in the Interaction Chamber
We used an RGA, mounted 25 cm away from the interaction region, to determine the composition of the gas in the interaction chamber. RGA spectra while using both types of ESI are shown in Figure 4. For the He-ESI (dashed red line), the largest contribution is He at 4 atomic mass units (amu) with a partial pressure, measured from the peak area, of \(1.9\times 10^{-5}\) Torr, while N\({}_{2}\) and CO\({}_{2}\), shown in the spectrum at 28 and 44 amu, have partial pressures of \(1.6\times 10^{-6}\) Torr and \(3.1\times 10^{-7}\) Torr respectively. There's a further peak at 18 amu due to water contamination.
The relative composition of the input gases to the He-ESI is 1.22 % N\({}_{2}\) and 0.97 % CO\({}_{2}\), compared to 8 % N\({}_{2}\) and 1.5 % CO\({}_{2}\) measured in the interaction chamber. This discrepancy may be explained by the different pumping efficiency for He, N\({}_{2}\) and CO\({}_{2}\) based on Graham's law, which states that the rate of diffusion or effusion of a gas is inversely proportional to the square root of its molecular weight. This implies that N\({}_{2}\) and CO\({}_{2}\) diffuse much more slowly than He when passing through the nozzle in the two skimmer stages, leading to He being skimmed away first and more efficiently.
The RGA spectrum of the original ESI, shown in black, shows much larger N\({}_{2}\) and CO\({}_{2}\) peaks, with partial pressures of \(8.7\times 10^{-6}\) and \(2.4\times 10^{-6}\) Torr respectively.
While the gases in the interaction chamber will scatter both elastically and inelastically, the inelastically scattered photons can be filtered due to their different energy. But the elastically scattered ones are indistinguishable from those scattered by the sample and are the main contributors to background noise in SPI [18]. For the resolutions relevant to SPI, each gas molecule is well approximated as a point scatterer and the total scattering is then proportional to the square of the number of electrons. We can then estimate the elastic scattering from the gas as the weighted sum of the contributions of the different gas species, and with it calculate the expected elastic scattering by the gas when using the He-ESI relative to the original ESI (\(I_{rel}\)),
\[I_{rel}=\frac{p_{N_{2}}^{\rm new}Z_{N_{2}}^{2}+p_{CO_{2}}^{\rm new}Z_{CO_{2}} ^{2}+p_{He}^{\rm new}Z_{He}^{2}}{p_{N_{2}}^{\rm old}Z_{N_{2}}^{2}+p_{CO_{2}}^{ \rm old}Z_{CO_{2}}^{2}},\]
where \(p^{\rm new}\) are the partial pressures of the He-ESI setup, \(p^{\rm old}\) of the original ESI and \(Z\) is the total number of electrons of each gas molecule. Using this equation with the partial pressures measured above we obtain an \(I_{rel}\) of 0.188 or an expected reduction of scattering intensity by \(\approx\) 81 %.
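This estimate can be reproduced directly from the quoted partial pressures and the electron counts per molecule (2 for He, 14 for N\({}_{2}\), 22 for CO\({}_{2}\)):

```python
# Reproduces the I_rel estimate from the quoted RGA partial pressures (Torr)
# and electrons per molecule (He: 2, N2: 14, CO2: 22).

Z2 = {"He": 2**2, "N2": 14**2, "CO2": 22**2}

p_new = {"He": 1.9e-5, "N2": 1.6e-6, "CO2": 3.1e-7}   # He-ESI
p_old = {"N2": 8.7e-6, "CO2": 2.4e-6}                  # original ESI

num = sum(p * Z2[g] for g, p in p_new.items())
den = sum(p * Z2[g] for g, p in p_old.items())
i_rel = num / den
print(f"I_rel = {i_rel:.3f}  ->  {100 * (1 - i_rel):.0f} % less gas scattering")
# I_rel = 0.188  ->  81 % less gas scattering
```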
### Sample delivery performance with the He-ESI
To show that the He-ESI works stably and generates aerosolized particles suitable for SPI experiments, we coupled the He-ESI to the "Uppsala"-injector. To detect the flow of particles into the interaction region we used a Rayleigh-scattering microscopy setup [22] and recorded particle intensities and beam evolution curves for 20-80 nm polystyrene spheres (PS) in a 20 mM ammonium acetate (AmAc) buffer solution. The beam-evolution curve is shown in the supplementary information Figure S2. At a given injector pressure, the particle-beam focus position moves away from the injector exit with increasing particle size. A similar behaviour has been observed in a previous study using the same injector and GDVN aerosolization, i.e. focusing with He, for PS with diameters larger than 40 nm [4]. For larger particles, the gas density in the focus is lower. In addition, the opening angle of the particle beam decreases with increasing particle size. The particle-beam parameters are summarized in Tab. S1. The data collection and analysis are discussed in [22]. Next, we characterized the particle-beam density at the particle beam focus for different sizes of PS and Bacteriophage MS2 (MS2) and compared it to particle-beam density measurements using the original ESI for aerosolization, i.e. using N\({}_{2}\) as the main carrier gas. As a proxy for the particle-beam number density, we show the number of particles collected in 1000 frames. Each frame contains one laser pulse for particle detection. Table 1 shows the measured mean number of particle hits per 1000 frames in the particle-beam focus. For all particle sizes, the measured number of particles is higher using the He-ESI compared to the original ESI. While the improvement in the measured particle numbers differs between the particle sizes used, the highest improvement in particle throughput, by a factor of \(\approx\) 11, is observed for the bioparticle MS2.
#### 2.6.1 Exploration of Various Ionization Techniques in He-ESI
A comparative study was conducted to analyze the transmission efficiency between two different ionization techniques used in He-ESI: a Polonium (Po-210) source and an ultraviolet (UV) ionizer. The target sample utilized for this experiment was a silver cube suspended in ethanol. In both techniques, the gas flow rates were maintained at 1 L/min for He and 30 mL/min for CO\({}_{2}\). Particle detection was carried out in the interaction chamber using Rayleigh scattering [22]. The results from the number of particles detected showed that the Polonium source delivered approximately 5% more particles than the UV ionizer to the interaction chamber.
Nonetheless, given that UV light is more efficient at ionizing N\({}_{2}\) gas [24], we extended our experiment by
Figure 4: Residual gas analysis spectrum inside the interaction chamber. Measured for the He-ESI (dashed red) with flow rates of 4.2 L/min He, 0.03 L/min N2, 0.015 L/min CO2 and the original ESI (solid black) with flow rates of 1 L/min N2 and 0.2 L/min CO2.
adding 30 mL/min of N\({}_{2}\) through the He inlet. This introduction of N\({}_{2}\) enhanced the transmission efficiency of the UV ionizer setup and outperformed the Polonium source setup by delivering approximately 30% more particles. This enhancement can be attributed to the improved neutralization of the particles, facilitated by the UV ionizer being more effective at ionizing N\({}_{2}\). These findings suggest that the inclusion of N\({}_{2}\) gas in the UV ionizer setup could be a potential strategy to enhance transmission efficiency in He-ESI. It is important to highlight, though, that the advantages gained from incorporating N\({}_{2}\) need to be balanced against its potential contribution to background noise.
## 3 Discussion and Outlook
Within this paper, we presented improvements in the sample aerosolization process by developing a He-ESI to reduce the background scattering due to gases in SPI experiments. We used 3D-printed nozzles to reduce the amount of N\({}_{2}\) and CO\({}_{2}\) and kept modifications of the previously used ESI setup to a minimum. With the He-ESI, the main particle transport gas into the interaction chamber is He. In the interaction chamber and based on RGA measurements using the He-ESI with the Uppsala-nozzle, the amount of N\({}_{2}\) was reduced by 82 % and for CO\({}_{2}\) by 87.7 %. While the large reduction of the heavy gases in the initial gas mixture could not be observed to the same extent in the interaction region, presumably due to different pumping efficiencies, an optimization of the skimmer assembly may improve the ratio in the interaction region further. Nonetheless, assuming the ratio of the gases measured in the RGA translates into the ratio of contribution to background scattering, we reduced the background scattering from the gas by 81 %.
Additionally, through simulations conducted using COMSOL Multiphysics, our study has deepened the understanding of gas flow dynamics around the Taylor cone in a He-ESI system. This allowed us to model the behaviour of different gas mixtures, examining their respective impacts on protecting the Taylor cone from corona discharge. Given our optimal operational conditions with a water-based buffer our simulations suggest that to maintain a stable Taylor cone, the He percentage should not exceed 20% at the cone's tip. Further computational analysis of the gas distribution and breakdown voltage can aid in determining the minimum fractional concentration necessary to maintain a stable cone before corona discharge occurs.
\begin{table}
\begin{tabular}{|l l l|} \hline \hline & \multicolumn{2}{c|}{Particles per 1000 frames} \\ Sample/DMA & He-ESI & Original-ESI \\ size (nm) & & \\ \hline Bacteriophage MS2/ 25.9 & 460 & 42 \\
20 nm PS/ 18.9 & 1010 & 271 \\
30 nm PS/ 28.9 & 2546 & 517 \\
40 nm PS/ 42.9 & 2264 & 874 \\
50 nm PS/ 59.4 & 1553 & 939 \\
70 nm PS/ 76.4 & 1118 & 527 \\
80 nm PS/ 88.2 & 1150 & 300 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison between He-ESI and original ESI of the mean number of particles per 1000 frames as a function of the sample diameter.
Our modification of the ESI not only demonstrates a decreased use of heavy gases for sample injection but also an increased throughput of particles into the interaction region. The highest increase in transmission of particles was observed while injecting small bioparticles: approximately a factor of 11 for MS2 particles. In comparison, while delivering PS into the interaction region, we measure an increase in the transmission of particles by a factor of 2 to 5 depending on the particle size.
To further enhance particle transmission, we conducted a comparative analysis of the transmission efficiency between Po-210 sources and UV ionizer techniques within He-ESI systems. Our results demonstrated that by adding 30 mL/min of N\({}_{2}\) gas along with He at the He inlet, the UV ionizer's performance was enhanced, surpassing the Po-210 source by approximately 30%. For a more comprehensive understanding of their impact on transmission efficiency, future studies could investigate the simultaneous utilization of both the Po-210 source and the UV ionizer in He-ESI systems.
Although this is not the first adaptation of ES injection for X-ray diffractive imaging, the presented modification is a much-required leap towards single protein imaging by aiming at lower background scattering from the injection gases, allowing us to recognize lower scattering signals from the sample in the diffraction data. We expect the He-ESI to improve the quality of collected data and provide better experimental conditions for X-ray imaging of small nanoparticles not only due to the lowered background but also because of a higher particle transmission through the injector. Together, higher quality and quantity of diffraction patterns can be collected in the future using a He-ESI for sample aerosolization.
|
2309.03518 | Learning Compact Compositional Embeddings via Regularized Pruning for
Recommendation | Latent factor models are the dominant backbones of contemporary recommender
systems (RSs) given their performance advantages, where a unique vector
embedding with a fixed dimensionality (e.g., 128) is required to represent each
entity (commonly a user/item). Due to the large number of users and items on
e-commerce sites, the embedding table is arguably the least memory-efficient
component of RSs. For any lightweight recommender that aims to efficiently
scale with the growing size of users/items or to remain applicable in
resource-constrained settings, existing solutions either reduce the number of
embeddings needed via hashing, or sparsify the full embedding table to switch
off selected embedding dimensions. However, as hash collision arises or
embeddings become overly sparse, especially when adapting to a tighter memory
budget, those lightweight recommenders inevitably have to compromise their
accuracy. To this end, we propose a novel compact embedding framework for RSs,
namely Compositional Embedding with Regularized Pruning (CERP). Specifically,
CERP represents each entity by combining a pair of embeddings from two
independent, substantially smaller meta-embedding tables, which are then
jointly pruned via a learnable element-wise threshold. In addition, we
innovatively design a regularized pruning mechanism in CERP, such that the two
sparsified meta-embedding tables are encouraged to encode information that is
mutually complementary. Given the compatibility with agnostic latent factor
models, we pair CERP with two popular recommendation models for extensive
experiments, where results on two real-world datasets under different memory
budgets demonstrate its superiority against state-of-the-art baselines. The
codebase of CERP is available in https://github.com/xurong-liang/CERP. | Xurong Liang, Tong Chen, Quoc Viet Hung Nguyen, Jianxin Li, Hongzhi Yin | 2023-09-07T06:58:34Z | http://arxiv.org/abs/2309.03518v2 | # Learning Compact Compositional Embeddings via Regularized Pruning for Recommendation
###### Abstract
Latent factor models are the dominant backbones of contemporary recommender systems (RSs) given their performance advantages, where a unique vector embedding with a fixed dimensionality (_e.g._, 128) is required to represent each entity (commonly a user/item). Due to the large number of users and items on e-commerce sites, the embedding table is arguably the least memory-efficient component of RSs. For any lightweight recommender that aims to efficiently scale with the growing size of users/items or to remain applicable in resource-constrained settings, existing solutions either reduce the number of embeddings needed via hashing, or sparsify the full embedding table to switch off selected embedding dimensions. However, as hash collision arises or embeddings become overly sparse, especially when adapting to a tighter memory budget, those lightweight recommenders inevitably have to compromise their accuracy. To this end, we propose a novel compact embedding framework for RSs, namely Compositional Embedding with Regularized Pruning (CERP). Specifically, CERP represents each entity by combining a pair of embeddings from two independent, substantially smaller meta-embedding tables, which are then jointly pruned via a learnable element-wise threshold. In addition, we innovatively design a regularized pruning mechanism in CERP, such that the two sparsified meta-embedding tables are encouraged to encode information that is mutually complementary. Given the compatibility with agnostic latent factor models, we pair CERP with two popular recommendation models for extensive experiments, where results on two real-world datasets under different memory budgets demonstrate its superiority against state-of-the-art baselines. The codebase of CERP is available in [https://github.com/xurong-liang/CERP](https://github.com/xurong-liang/CERP).
lightweight recommender systems, compositional embeddings, regularized pruning
## I Introduction
The invention of recommender systems (RSs) greatly eases the difficulty of identifying and suggesting useful information or products from the sheer volume of data based on users' preferences. Most RSs leverage collaborative filtering through latent factor models, in which all entities (_i.e._, users and items in most RSs) are mapped to distinct, real-valued dense vectors of a unified dimension. Then, based on these vector representations, _i.e._, embeddings, a pairwise similarity function (_e.g._, dot product [1], multi-layer perceptrons [2], graph neural networks [3], _etc._) can be learned to rank each item's relevance to a user. In latent factor-based collaborative filtering, all entities' embeddings are hosted in an embedding table and can be efficiently drawn via a look-up operation.
Given the large number of possible users and items in recommendation services, the embedding table is commonly the heaviest component in an RS in terms of parameter sizes [4, 5, 6, 7, 8, 9]. Recently, with the frequently intersected needs for handling large-scale e-commerce data and deploying RSs on resource-constrained devices [10], the memory consumption of embedding tables has become the major bottleneck that prevents RSs from scaling up. Take an example of the _Amazon Product Reviews_ dataset [11] which includes \(20.98\) million users and \(9.35\) million items. If the embedding dimension is \(128\), representing all these entities in a full embedding table incurs approximately \(3.9\) billion parameters, translating into \(31.2\) GB memory consumption for a double floating-point system. In comparison, the number of parameters used in the recommendation layer is almost negligible even for state-of-the-art RSs built upon deep neural networks (DNNs). Clearly, storing embedding vectors with a fixed dimension for all entities drastically escalates memory usage, making it intractable for RSs to scale to large datasets or support on-device applications. To this end, the urge for a lightweight recommender, created by utilizing a memory-efficient embedding structure, is raised. One naive solution is to choose a small dimension size for all entities so that a low memory budget can be met. However, as the dimension size determines the ability to encode each entity's information [12], this approach heavily impedes the expressiveness of embedding vectors and thus, the recommendation accuracy.
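To make the scale of this argument concrete, and to sketch the general compositional idea referred to in the abstract (two much smaller meta-embedding tables combined per entity), consider the following; the quotient-remainder index assignment and the element-wise sum are common illustrative choices and are assumptions here, not CERP's exact construction, and the regularized pruning step is omitted.

```python
# Illustrates the memory estimate above and a generic compositional-embedding
# lookup from two small meta-tables.  The quotient-remainder indexing and the
# element-wise sum are illustrative assumptions, not CERP's exact method.
import numpy as np

n_entities, d = 20_980_000 + 9_350_000, 128
full_params = n_entities * d
print(f"full table: {full_params / 1e9:.1f} B params, "
      f"{full_params * 8 / 1e9:.1f} GB at float64")     # ~3.9 B params, ~31 GB

b = 5_600                                   # buckets per meta-table (b*b >= n_entities)
P = np.random.randn(b, d).astype(np.float32)    # meta-embedding table 1
Q = np.random.randn(b, d).astype(np.float32)    # meta-embedding table 2

def embed(entity_id):
    return P[entity_id // b] + Q[entity_id % b]     # compose one entity embedding

print(f"meta-tables: {2 * b * d / 1e6:.1f} M params")   # ~1.4 M
print(embed(12_345_678).shape)                          # (128,)
```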
To counter the inflexibility of fixed-size embeddings, one mainstream of recent memory-efficient RSs is to dynamically allocate different embedding sizes to entities. This is done by either constructing an automated search procedure to find the best embedding size for each entity from a set of predefined options [12, 13, 14, 15], or applying sparsification (_i.e._, pruning) on the full embedding table to zero-out less important dimensions in every entity embedding [16, 17, 18, 10]. Though introducing varying dimensions helps selectively preserve embedding expressiveness for important entities (_e.g._, popular items) when working toward a tight memory budget, the usable embedding dimensions in both search- and pruning-based approaches will decrease dramatically. Consequently, a substantial amount of embedded information is lost, sacrificing the accuracy when calculating user-item similarity. The key reason is that, these methods follow the conventional embedding table scheme, where every entity is still explicitly mapped to a unique |
2310.00166 | Motif: Intrinsic Motivation from Artificial Intelligence Feedback | Exploring rich environments and evaluating one's actions without prior
knowledge is immensely challenging. In this paper, we propose Motif, a general
method to interface such prior knowledge from a Large Language Model (LLM) with
an agent. Motif is based on the idea of grounding LLMs for decision-making
without requiring them to interact with the environment: it elicits preferences
from an LLM over pairs of captions to construct an intrinsic reward, which is
then used to train agents with reinforcement learning. We evaluate Motif's
performance and behavior on the challenging, open-ended and
procedurally-generated NetHack game. Surprisingly, by only learning to maximize
its intrinsic reward, Motif achieves a higher game score than an algorithm
directly trained to maximize the score itself. When combining Motif's intrinsic
reward with the environment reward, our method significantly outperforms
existing approaches and makes progress on tasks where no advancements have ever
been made without demonstrations. Finally, we show that Motif mostly generates
intuitive human-aligned behaviors which can be steered easily through prompt
modifications, while scaling well with the LLM size and the amount of
information given in the prompt. | Martin Klissarov, Pierluca D'Oro, Shagun Sodhani, Roberta Raileanu, Pierre-Luc Bacon, Pascal Vincent, Amy Zhang, Mikael Henaff | 2023-09-29T22:10:01Z | http://arxiv.org/abs/2310.00166v1 | # Motif: Intrinsic Motivation from Artificial Intelligence Feedback
###### Abstract
Exploring rich environments and evaluating one's actions without prior knowledge is immensely challenging. In this paper, we propose Motif, a general method to interface such prior knowledge from a Large Language Model (LLM) with an agent. Motif is based on the idea of grounding LLMs for decision-making without requiring them to interact with the environment: it elicits preferences from an LLM over pairs of captions to construct an intrinsic reward, which is then used to train agents with reinforcement learning. We evaluate Motif's performance and behavior on the challenging, open-ended and procedurally-generated NetHack game. Surprisingly, by only learning to maximize its intrinsic reward, Motif achieves a higher game score than an algorithm directly trained to maximize the score itself. When combining Motif's intrinsic reward with the environment reward, our method significantly outperforms existing approaches and makes progress on tasks where no advancements have ever been made without demonstrations. Finally, we show that Motif mostly generates intuitive human-aligned behaviors which can be steered easily through prompt modifications, while scaling well with the LLM size and the amount of information given in the prompt.
## 1 Introduction
_Where do rewards come from?_ An artificial intelligence agent introduced into a new environment without prior knowledge has to start from a blank slate. What is good and what is bad in this environment? Which actions will lead to better outcomes or yield new information? Imagine tasking an agent with the goal of opening a locked door. The first time the agent finds a key, it will have no idea whether this could be useful for achieving the goal of opening a door: it has to learn this fact by interaction. A human, instead, would know by mere common sense that picking up a key is generally desirable for opening doors. Since the idea of manually providing this knowledge on a per-task basis does not scale, we ask: what if we could harness the collective high-level knowledge humanity has recorded on the Internet to endow agents with similar common sense?
Although this knowledge may not provide a direct solution to how an agent should manage its sensors or actuators, it bears answers to the fundamental questions mentioned above. This holds true for many of the environments where we would want to deploy an agent. However, the knowledge on the Internet is highly unstructured and amorphous, making it difficult to find and reuse information. Fortunately, by learning on Internet-scale datasets, Large Language Models (LLMs) absorb this information and make it accessible (Brown et al., 2020). Nonetheless, empowering a sequential decision-making agent with this source of common sense is far from trivial.
While an LLM's knowledge typically exists at a high level of abstraction, a decision-making agent often operates at a lower level of abstraction, where it must process rich observations and output
fine-grained actions in order to achieve a desired outcome. For an agent to harness this prior knowledge and know what to look for in an environment, it is necessary to build a bridge between an LLM's high-level knowledge and common sense, and the low-level sensorimotor reality in which the agent operates. We propose to bridge this gap by deriving an intrinsic reward function from a pretrained LLM, and using it to train agents via reinforcement learning (RL) (Sutton and Barto, 2018). Our method, named Motif, uses an LLM to express preferences over pairs of event captions extracted from a dataset of observations and then distill them into an intrinsic reward. The resulting reward is then maximized directly or in combination with an extrinsic reward coming from the environment. A guiding principle in the design of Motif is the observation that _it is often easier to evaluate than to generate_ (Sutton, 2001; Schulman, 2023). Motif's LLM expresses preferences over textual event captions; these are only required to be coarse descriptions of events happening in the environment rather than fine-grained step-by-step portrayals of the current observations. The LLM is not even asked to understand the low-level action space, which may be composite or continuous. In comparison, an approach using an LLM as a policy typically requires a complete text interface with the environment (Wang et al., 2023; Yao et al., 2022). When using Motif, the LLM remains in the space of high-level knowledge it was trained on, but leverages the capabilities of deep RL algorithms to deal with decision-making under rich observation and action spaces.
Figure 1: NetHack score for Motif and baselines. Agents trained exclusively with Motif’s intrinsic reward _surprisingly outperform agents trained using the score itself_, and perform even better when trained with a combination of the two reward functions.
We apply Motif to the challenging NetHack Learning Environment (NLE) (Kuttler et al., 2020), and learn intrinsic rewards from Llama 2's preferences (Touvron et al., 2023) on a dataset of gameplays. This dataset, collected by policies of different levels of proficiency, only contains observations from the environment, without any action or reward information. Using this framework, we show that the resulting intrinsic reward drastically improves subsequent learning of a policy by RL. Motif excels in both relatively dense reward tasks, such as maximizing the game score, and extremely sparse reward tasks, such as the oracle task. To our knowledge, our paper is the first to make progress on this task without leveraging expert demonstrations. Notably, _an agent trained only through Motif's intrinsic reward obtains a better game score than an agent trained directly with the score itself._
In addition to quantifying Motif's strong game performance, we also delve into the qualitative properties of its produced behaviors. First, we show that Motif's intrinsic reward typically yields behaviors that are more aligned with human gameplay on NetHack. Second, we find tendencies of Motif to create _anticipatory rewards_(Thomaz et al., 2006; Pezzulo, 2008) which ease credit assignment while being consistent with human common sense. Third, we uncover a phenomenon that we name _misalignment by composition_, due to which the joint optimization of an aligned intrinsic reward and a task reward yields a misaligned agent with respect to the latter. Fourth, we demonstrate that the performance of the agent scales favorably in relation to both the size of the LLM and the amount of information contained in the prompt. Fifth, we investigate how sensitive the performance is to slight variations in the prompt. Sixth, we demonstrate it is possible to steer the agent's behavior by prompt modifications, naturally generating a set of semantically diverse policies.
## 2 Background
A Partially Observable Markov Decision Process (POMDP) (Astrom, Karl Johan, 1965) is a tuple \(\mathcal{M}=(\mathcal{S},\mathcal{A},\mathcal{O},\mu,p,O,R,\gamma)\), where \(\mathcal{S}\) is the state space, \(\mathcal{A}\) is the action space, \(\mathcal{O}\) the observation space and \(\gamma\) is a discount factor. First, an initial state \(s_{0}\) is sampled from the initial state distribution \(\mu\). At each time step \(t\geq 0\), an observation \(o_{t}\) is sampled from the emission function, \(o_{t}\sim O(s_{t})\). This observation is given to the agent, which then produces an action \(a_{t}\) leading to an environment transition \(s_{t+1}\sim p(\cdot|s_{t},a_{t})\) and, upon arrival to the next state and sampling from the emission function, a reward \(r_{t+1}=R(o_{t+1})\). The goal of the agent is to learn a policy \(\pi:\mathcal{O}^{t}\rightarrow\Delta(\mathcal{A})\) which maximizes the expected discounted cumulative reward \(\mathbb{E}_{\pi}[\sum_{t=0}^{\infty}\gamma^{t}r_{t}]\). Each observation \(o_{t}\) has a (potentially empty) textual _caption_\(c(o_{t})\in\mathcal{C}\) as a component.
We assume access to a dataset of observations \(\mathcal{D}=\{o^{(i)}\}_{i=1}^{N}\). This type of dataset departs from the more typical ones, employed for instance in offline RL, which normally contain information about actions and possibly rewards (Levine et al., 2020). It is often much easier in practice to obtain a dataset of observations, for example videos of humans playing videogames (Hambro et al., 2022), than to record actions or to rely on a possibly non-existing reward function. We do not assume any level of proficiency in the policies that generated the dataset, but we assume sufficient coverage.
## 3 Method
The basic idea behind our method is to leverage the dataset \(\mathcal{D}\) together with an LLM to construct a dataset \(\mathcal{D}_{\text{pref}}\) of preferences, and then use \(\mathcal{D}_{\text{pref}}\) for training an intrinsic reward function. This intrinsic reward is then incorporated into an RL algorithm interacting with the environment. We next describe in detail the three phases characterizing our method, which are also depicted in Figure 2.
**Dataset annotation.** In the first phase, we use a pretrained language model, conditioned with a prompt possibly describing a desired behavior, as an annotator over pairs of captions. Specifically, the annotation function is given by \(\texttt{LLM}:\mathcal{C}\times\mathcal{C}\rightarrow\mathcal{Y}\), where \(\mathcal{C}\) is the space of captions, and \(\mathcal{Y}=\{1,2,\varnothing\}\) is a space of choices for either the first, the second, or none of the captions. Allowing a refusal to answer when uncertain reduces the noise coming from mistaken annotations and helps in normalizing the reward function (Lee et al., 2021). Concretely, we construct a dataset of preferences over pairs \(\mathcal{D}_{\text{pref}}=\{(o_{1}^{(j)},o_{2}^{(j)},y^{(j)})\}_{j=1}^{M}\) where observations \(o_{1}^{(j)},o_{2}^{(j)}\sim\mathcal{D}\) are sampled from the base dataset and annotations \(y^{(j)}=\texttt{LLM}(c(o_{1}^{(j)}),c(o_{2}^{(j)}))\) are queried from the LLM.
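A minimal sketch of this annotation loop is given below; the LLM call is stubbed out with a random answer so the snippet runs, and the example captions, labels and function names are illustrative assumptions rather than the authors' implementation.

```python
import random

LABELS = ("1", "2", "none")   # "none" stands in for the refusal answer (∅)

def query_llm(caption_a: str, caption_b: str) -> str:
    """Stub annotator: a real system would prompt an LLM with both captions.
    Here it answers at random so the sketch runs end-to-end."""
    return random.choice(LABELS)

def build_preference_dataset(captions, num_pairs, seed=0):
    """Sample caption pairs and record the annotator's choice for each."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(num_pairs):
        c1, c2 = rng.choice(captions), rng.choice(captions)
        pairs.append((c1, c2, query_llm(c1, c2)))
    return pairs

example_captions = ["The door opens.", "You can see again.", ""]
print(build_preference_dataset(example_captions, num_pairs=3))
```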
**Reward training.** For deriving a reward function from the LLM's preferences, we use standard techniques from preference-based RL (Wirth et al., 2017), minimizing a cross-entropy loss function on the dataset of pairs of preferences \(\mathcal{D}_{\text{pref}}\) to learn a parameterized reward model \(r_{\mathbf{\phi}}:\mathcal{O}\rightarrow\mathbb{R}\):
\[\begin{split}\mathcal{L}(\mathbf{\phi})=-\mathbb{E}_{(o_{1},o_{2},y) \sim\mathcal{D}_{\text{pref}}}\Bigg{[}&\,1[y=1]\log P_{\mathbf{\phi}} [o_{1}\succ o_{2}]+1[y=2]\log P_{\mathbf{\phi}}[o_{2}\succ o_{1}]\\ &+1[y=\varnothing]\log\left(\sqrt{P_{\mathbf{\phi}}[o_{1}\succ o_{2} ]\cdot P_{\mathbf{\phi}}[o_{2}\succ o_{1}]}\right)\Bigg{]},\end{split} \tag{1}\]
where \(P_{\mathbf{\phi}}[o_{a}\succ o_{b}]=\frac{e^{r_{\mathbf{\phi}}(o_{a})}}{e^{r_{\mathbf{\phi}}(o_{a})}+e^{r_{\mathbf{\phi}}(o_{b})}}\) is the probability of preferring one observation to the other.
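A possible PyTorch rendering of this loss is sketched below, assuming the reward model has already been evaluated on both observations of each pair and encoding the labels as 0 (first preferred), 1 (second preferred) and 2 (no preference, the \(\varnothing\) case); it is a sketch under those assumptions, not the authors' code.

```python
import torch
import torch.nn.functional as F

def preference_loss(r1: torch.Tensor, r2: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """r1, r2: reward-model outputs for the two observations, shape (B,).
    y: labels, 0 = first preferred, 1 = second preferred, 2 = no preference."""
    logp = F.log_softmax(torch.stack([r1, r2], dim=-1), dim=-1)  # (B, 2)
    loss_first = -logp[:, 0]                       # -log P[o1 > o2]
    loss_second = -logp[:, 1]                      # -log P[o2 > o1]
    loss_tie = -0.5 * (logp[:, 0] + logp[:, 1])    # -log sqrt(P1 * P2)
    losses = torch.where(y == 0, loss_first,
                         torch.where(y == 1, loss_second, loss_tie))
    return losses.mean()

# Tiny smoke test with made-up reward values and labels.
r1, r2 = torch.tensor([1.0, 0.2, 0.0]), torch.tensor([0.0, 1.5, 0.0])
print(preference_loss(r1, r2, torch.tensor([0, 1, 2])))
```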
**Reinforcement learning training.** Once we have trained the intrinsic reward model \(r_{\mathbf{\phi}}\), we use it to define an intrinsic reward \(r_{\text{int}}\), and provide it to an RL agent, which will optimize a combination of the intrinsic and extrinsic rewards: \(r_{\text{effective}}(o)=\alpha_{1}r_{\text{int}}(o)+\alpha_{2}r(o)\). In some of our experiments, we set \(\alpha_{2}=0\), to have agents interact with the environment guided only by the intrinsic reward. For simplicity, we do not further fine-tune the reward function on data collected online by the agent, and instead fully rely on the knowledge acquired offline.
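A minimal sketch of how such a mixed reward could be injected into an environment loop is shown below, assuming an old-style gym `step()` protocol returning `(obs, reward, done, info)`; the class name and default weights are placeholders, not the values used in the experiments.

```python
class MixedRewardEnv:
    """Wraps an environment so the agent optimizes the mixed reward above
    (alpha_ext = 0 recovers the intrinsic-only setting)."""

    def __init__(self, env, intrinsic_fn, alpha_int=0.1, alpha_ext=1.0):
        self.env, self.intrinsic_fn = env, intrinsic_fn
        self.alpha_int, self.alpha_ext = alpha_int, alpha_ext

    def reset(self):
        return self.env.reset()

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        mixed = self.alpha_int * self.intrinsic_fn(obs) + self.alpha_ext * reward
        return obs, mixed, done, info
```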
Figure 2: A schematic representation of the three phases of Motif. In the first phase, _dataset annotation_, we extract preferences from an LLM over pairs of captions, and save the corresponding pairs of observations in a dataset alongside their annotations. In the second phase, _reward training_, we distill the preferences into an observation-based scalar reward function. In the third phase, _RL training_, we train an agent interactively with RL using the reward function extracted from the preferences, possibly together with a reward signal coming from the environment.
### Learning from Artificial Intelligence Feedback on NetHack
We apply our method to the game of NetHack (Kuttler et al., 2020), an extremely challenging rogue-like video game in which the player has to go through multiple levels of a procedurally generated dungeon, exploring, collecting and using items, and fighting monsters. NetHack is an interesting domain for testing intrinsic motivation approaches: it is rich and complex, and the reward signal coming from the environment (e.g., the game score, or the dungeon level) can be sparse and not necessarily aligned with what a human would evaluate as good gameplaying.
To instantiate Motif on the NLE, which provides access to NetHack-based tasks, we follow the general strategy described above, integrating it with domain-specific choices for the dataset of observations, the LLM model and prompting strategy, the reward model architecture and post-processing protocol, and the agent architecture and RL algorithm. We now provide the main information regarding these different choices. Further details can be found in the Appendix B and Appendix G.
**Dataset generation.** A convenient feature of NetHack is that the game displays a text message in about 10% to 20% of game screens, typically describing events happening in the game. These include positive events, such as killing a monster, negative events, such as starving, or neutral events, like bumping into a wall. Every message is part of the observation, which also includes a visual representation of the game screen and numerical features such as the position, life total and statistics of the agent. Thus, messages can be interpreted as captions, and we use them as the input that we feed to the LLM to query its preference. To construct a reasonably diverse dataset \(\mathcal{D}\), we collect a set of 100 episodes at every 100 million steps of learning with the standard NLE RL baseline CDGPT5 (Mifyli, 2022) and repeat the process for 10 seeds. The CDGPT5 baseline is trained for 1 billion steps to maximize the in-game score. We analyze these choices in Appendix H.3.
**LLM choice and prompting.** We employ the 70-billion parameter chat version of Llama 2 (Touvron et al., 2023) as annotator to generate \(\mathcal{D}_{\text{pref}}\) from \(\mathcal{D}\). We determined via a preliminary analysis (shown in Appendix C) that this model has sufficient knowledge of NetHack and common-sense understanding to be useful as an annotator, even with no domain-specific fine-tuning. We modify the model's system prompt from its default, and write a prompt that tasks the model with evaluating pairs of messages extracted from \(\mathcal{D}\). We use a form of chain of thought prompting (Wei et al., 2022), asking the model to provide a brief summary of its knowledge of NetHack, and an analysis of its understanding of the messages presented, before expressing a preference.
**Annotation process.** We use a regular expression to identify one of the labels in \(\mathcal{Y}\) in the LLM's output text. In case of a failure in finding one of them, we ask the model again by continuing the conversation, and remove the pair from the dataset if the second attempt also fails. When two messages are exactly the same, as can happen in roughly 5% to 10% of the cases (e.g., due to empty messages), we automatically assign the label \(y=\varnothing\) without any further processing.
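The sketch below illustrates this kind of label extraction with a single retry; the particular regular expression, keywords and helper names are assumptions made for illustration, not the pattern actually used by the authors.

```python
import re

# This pattern simply looks for an explicit "1" or "2" choice following a
# keyword in the model's reply; the authors' exact pattern is not shown here.
CHOICE_RE = re.compile(r"\b(?:message|answer|choice)\s*[:=]?\s*\(?\s*([12])\s*\)?",
                       re.IGNORECASE)

def parse_label(llm_output: str):
    """Return 1, 2, or None when no clear choice is found."""
    match = CHOICE_RE.search(llm_output)
    return int(match.group(1)) if match else None

def annotate_with_retry(ask_llm, caption_pair):
    """Query the annotator, retrying once before dropping the pair."""
    for _ in range(2):
        label = parse_label(ask_llm(*caption_pair))
        if label is not None:
            return label
    return None   # the pair is removed from the dataset

print(parse_label("Answer: (1)"))   # -> 1
```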
**Intrinsic reward architecture and post-processing.** We train the intrinsic reward \(r_{\phi}\) by optimizing Equation 1 by gradient descent. For simplicity, we only use the message as the part of the observation given to this reward function, and process it through the default character-level one-dimensional convolutional network used in previous work (Henaff et al., 2022). To make it more amenable to RL optimization, we transform the reward function \(r_{\phi}\) produced by the training on \(\mathcal{D}_{\text{pref}}\) into:
\[r_{\text{int}}(\texttt{message})=\mathbb{I}\left[r_{\phi}(\texttt{message})\geq\epsilon\right]\cdot r_{\phi}(\texttt{message})/N(\texttt{message})^{\beta}, \tag{2}\]
where \(N(\texttt{message})\) is the count of how many times a particular message has been previously found during the course of an episode. The transformation serves two purposes. First, it employs episodic count-based normalization, as previously utilized in Raileanu & Rocktaschel (2020); Mu et al. (2022); Zhang et al. (2021). This transformation helps in overcoming some of the major limitations of a Markovian reward function (Abel et al., 2021), encouraging the agent to diversify the observed outcomes and preventing it from getting fixated on objects with which it cannot interact due to its limited action space or skills. Second, zeroing rewards below the threshold \(\epsilon\) reduces the noise coming from training based on preferences from an imperfect LLM. We ablate these choices in Appendix H.2.
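A small wrapper implementing this post-processing could look as follows; the default \(\epsilon\) and \(\beta\) values are placeholders rather than the ones used in the experiments, and the training loop is responsible for calling `reset()` at episode boundaries.

```python
from collections import Counter

class IntrinsicReward:
    """Threshold the reward-model output and normalize it by an episodic
    count of the message, as in Eq. 2."""

    def __init__(self, reward_model, epsilon: float = 0.5, beta: float = 3.0):
        self.reward_model = reward_model      # callable: message -> float
        self.epsilon, self.beta = epsilon, beta
        self.counts = Counter()

    def reset(self):                          # call at every episode start
        self.counts.clear()

    def __call__(self, message: str) -> float:
        self.counts[message] += 1
        r = self.reward_model(message)
        if r < self.epsilon:                  # indicator term in Eq. 2
            return 0.0
        return r / self.counts[message] ** self.beta
```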
**Reinforcement learning algorithm.** We train agents using the CDGPT5 baseline, which separately encodes messages, bottom-line features, and a cropped-field-of-view version of the screen. The algorithm is based on PPO (Schulman et al., 2017) using the asynchronous implementation of _Sample Factory_ (Petrenko et al., 2020). We additively combine intrinsic and extrinsic rewards. We will specify what weight is given to each reward function, depending on the experiment.
## 4 Experiments
We perform an extensive experimental evaluation of Motif on the NLE. We compare agents trained with Motif to baselines trained with the extrinsic reward only, as well as a combination between extrinsic and intrinsic reward provided by Random Network Distillation (RND) (Burda et al., 2019). RND is an established intrinsic motivation baseline and it has previously been shown to provide performance improvements on certain tasks from the NLE (Kuttler et al., 2020). We evaluate additional baselines in Appendix F, showing that none of them is competitive with Motif. We report all experimental details in Appendix G, and additional experiments and ablations in Appendix H.
### Performance on the NetHack Learning Environment
To analyze the performance of Motif, we use five tasks from the NLE. The first one is the score task, in which the agent is asked to maximize the game score proper to NetHack. This task is generally considered the most important one in the NLE, the score being also the metric of agent evaluation used in previous NetHack AI competitions (Hambro et al., 2022). The other four are sparse-reward tasks. We employ three variations of a dungeon descent task (staircase, staircase (level 3), staircase (level 4)), in which the agent only receives a reward of \(50\) when it enters the second, third and fourth dungeon level respectively. We additionally use the extremely sparse reward oracle task, in which the agent gets a reward of \(50\) when it finds _the oracle_, an in-game character that resides in a specific branch of the dungeon, at a depth level greater than five.
Figure 1 reports the performance of Motif and related baselines on the score task. While, as shown in previous work (Kuttler et al., 2020; Zhang et al., 2021), existing intrinsic motivation approaches offer minimal benefits on this task, Motif significantly enhances the agent's performance. In particular, training an agent only through Motif's intrinsic reward function and no extrinsic reward already generates policies collecting more score than the baselines that directly maximize it. To the best of our knowledge, this is the first time an agent trained with deep RL using only an intrinsic reward is shown to outperform one trained with the environment's reward on a relatively dense-reward complex task. When combining intrinsic and extrinsic rewards, the score improves even further: Motif largely surpasses the baselines in terms of both final performance and sample efficiency. We provide more insights into the behavior induced by the intrinsic reward in Section 4.2.
We show the success rate of Motif and the baselines on the sparse reward tasks in Figure 3. On the staircase task, in which the agent has to reach the second dungeon level, Motif has better sample efficiency than the baselines, albeit featuring worse asymptotic performance than RND. On the other more complex staircase tasks, the agent only receives a reward from the environment when it reaches dungeon level 3 or 4. Since the LLM prefers situations which will likely lead to new discoveries and progress in the game, the intrinsic reward naturally encourages the agent to go deep into the dungeon. Thus, Motif is able to make significant progress in solving the tasks, with just its intrinsic reward and even more when combining it with the extrinsic reward, while an agent trained with either the extrinsic reward or RND has a zero success rate. On the oracle task, the hardest task in the set, no approach ever reported any meaningful progress without using human demonstrations (Bruce et al., 2023), due to the extremely sparse reward. In Figure 3, we show that, when combining intrinsic and extrinsic reward, Motif can achieve a success rate of about 30%.
Figure 3: Success rate of Motif and baselines on sparse-reward tasks. Motif is sample-efficient and makes progress where no baseline learns useful behaviors. In Appendix F, we additionally compare to E3B (Henaff et al., 2022) and NovelD (Zhang et al., 2021), finding no benefits over RND.
### Behavior and Alignment Analysis
From which type of behavior do the large performance gains provided by Motif come? We now analyze in-depth the policies obtained using Motif's intrinsic reward and the environment's reward, showing that Motif's better performance can be attributed to the emergence of complex strategies. We then characterize these behaviors and discuss their alignment with human intuition.
**Characterizing behaviors.** It is customary to measure the gameplay quality of an agent on NetHack using the game score (Kuttler et al., 2020; Hambro et al., 2022). While the score is indeed a reasonable quality measure, it is a one-dimensional representation of a behavior that can be fairly rich in a complex and open-ended environment such as NetHack. To more deeply understand the relationship among the kind of behaviors discovered via the intrinsic reward, the extrinsic reward and their combination, we characterize policies using metrics similar to those proposed in Bruce et al. (2023) and Piterbarg et al. (2023). Figure 4 shows that the three agents exhibit qualitatively different behaviors. The agent trained only with the extrinsic reward greedily goes down into the dungeon, guided by the reward it gets when transitioning between dungeon levels or collecting the gold it can find in a new level. Disregarding the perils from adventuring down into the dungeon too fast and without a sufficiently high experience level is very risky, as each new dungeon level will generate more challenging monsters and present dangerous situations. The agent trained only with Motif's intrinsic reward has a behavior tailored for survival, more aligned to a player's behavior, killing more monsters, gaining more experience levels and staying alive for longer. The agent trained with a combination of Motif's intrinsic reward and the environment reward leverages their interplay and achieves the best of both worlds, acquiring the survival-oriented skills implied by the intrinsic reward but leveraging them at the cost of a shorter lifespan to go down into the dungeon with better combat skills, collecting more gold and score.
**Alignment with human intuition.** Motif's intrinsic reward comes from an LLM trained on human-generated data and then fine-tuned on human preferences. It is natural to wonder whether the alignment of the LLM with human intentions will be converted into a behavior that follows human intuition. In Appendix H, we provide evidence of human-aligned behavior, in addition to Figure 4, with agents trained with Motif being less likely to kill their pet. The agent also exhibits a natural tendency to explore the environment. Indeed, many of the messages most preferred by Motif are related to the exploration of the environment (e.g., "The door opens."), which would also be intuitively preferred by humans (see Appendix D). When compared to traditional intrinsic motivation approaches, this has profound consequences. Typical approaches define the _novelty_ as a feature of a state and let the RL algorithm solve the credit assignment problem to find special states that might lead to novel states. Motif goes a step beyond that: it directly rewards states that, under some intuitive assumption about a policy, will likely lead to new discoveries (such as opening a door), an anticipatory reward-crafting behavior that has been observed in humans (Thomaz et al., 2006). This brings Motif's intrinsic reward conceptually closer to a value function (Ng et al., 1999), and drastically eases credit assignment for the RL algorithm. In other words, via its LLM, Motif effectively addresses both _exploration_ (by leveraging prior knowledge) and _credit assignment_ (by anticipating future developments in a reward function), which may explain Motif's strong performance.
**Misalignment by composition in the oracle task.** We now show how the alignment with human intuition can break when combining Motif's intrinsic reward with the environment reward. We have seen in Figure 3 that Motif reaches a good level of performance on the challenging oracle task, in which the agent has to find the oracle @ by going deep into the dungeon, after facing significant challenges. However, if we inspect the behavior learned by Motif, we observe something surprising: it almost never goes past the first level. Instead, as shown in Figure 5, the agent learns a complex behavior to hack the reward function (Skalse et al., 2022), by finding a particular hallucinogen. To do so, the agent first has to find a specific monster, a yellow mold F, and defeat it. As NetHack is a procedurally generated game with hundreds of different monsters, this does not happen trivially, and
the agent must survive thousands of turns to get this opportunity. Once the yellow mold is killed, the agent has to eat its corpse % to start hallucinating. In this state, the agent will perceive monsters as characters typically found in other parts of the game, such as a Yeti Y. Normally, the agent would attack this entity, but to hack the reward, it must completely avoid being aggressive and hope to survive the encounter. If it does so and the hallucination state is not over, it will hallucinate the monster as an oracle @. As the NLE detects that a nearby character appears to be the oracle, the task will be declared as completed.2 To summarize, _the agent learns to find hallucinogens to dream of the goal state, instead of actually going there_. This unexpected behavior is not found by the agent that optimizes the extrinsic reward only. At the same time, the intrinsic reward, despite being generally aligned with human intuition, creates in the agent new capabilities, which can be used to exploit the environment's reward function. We name the underlying general phenomenon _misalignment by composition_, the emergence of misaligned behaviors from optimizing the composition of rewards that otherwise lead to aligned behaviors when optimized individually. We believe this phenomenon may appear in other circumstances (e.g., for chat agents) and is worthy of future investigations.
Figure 4: Comparison along different axes of policy quality of agents trained with Motif’s and environment’s reward functions.
Footnote 2: For completeness, we report in Appendix H that Motif performs well, albeit with lower success rate, also on a modified version of the oracle task, in which success is only valid when the agent is not hallucinating.
### Sensitivity to LLM Size and Prompt
So far, we trained agents with Motif using a fixed LLM and a fixed prompt. In this section, we seek to understand how interventions along these variables influence the agent's behavior.
**Scaling behavior.** We first investigate how scaling the LLM annotator impacts the downstream performance of the RL algorithm. If the LLM has more domain or common-sense knowledge, we can expect the reward function to more accurately judge favorable events in NetHack. We train a Motif agent on staircase (level 3) with a combination of extrinsic reward and intrinsic rewards obtained from Llama 2 7b, 13b, and 70b. In Figure 6(a), we show that larger LLMs lead to higher success rates when used to train agents via RL. This result hints at the scalability of Motif, which could potentially take advantage of more capable LLMs or domain-specific fine-tuned ones.
Figure 5: Illustration of the behavior of Motif on the oracle task. The agent @ first has to survive thousands of steps, waiting to encounter F (a yellow mold), a special kind of monster that contains a hallucinogen in its body (1). Agent @ kills F (2) and then immediately eats its corpse % (3). Eating the corpse of F brings the agent to the special _hallucinating_ status, as denoted by the Hallu shown at the bottom of the screen (4). The behavior then changes, and the agent seeks to find a monster and remain non-aggressive, even if the monster may attack (5). If the agent survives this encounter and the hallucination period is not over, agent @ will see the monster under different appearances, for example here as a Yeti Y. Eventually, it will hallucinate the oracle @ and complete the task (6).
**Effect of task-relevant information in the prompt.** The regular prompt we provide to the agent includes a few keywords that act as hints for constructive NetHack gameplay (i.e., maximizing the score, killing monsters, collecting gold, and going down the dungeon). What if we let the model entirely rely on its own knowledge of NetHack, without providing any type of information about the game? With a _zero-knowledge_ prompt, the only way for the intrinsic reward to be effective is for an LLM to not only be able to discern which messages are more or less aligned to a given goal or gameplaying attitude but also to infer the goal and the attitude by itself. We report the prompts in the Appendix. Figure 6(b) shows the performance of Motif trained using only the intrinsic reward on the score task. The results show two points: first, Motif exhibits good performance also when the annotations come from a zero-knowledge prompt, denoting the capability of the LLM to naturally infer goals and desirable behaviors on NetHack; second, adding knowledge in this user-friendly way (through a single sentence in the prompt) significantly boosts the performance, demonstrating that Motif unlocks an intuitive and fast way to integrate prior information into a decision-making agent.
**Sensitivity to prompt rewording.** LLMs are known to be particularly sensitive to their prompt, and different wordings for semantically similar prompts are known to cause differences in task performance (Lu et al., 2022). To probe whether this is the case also in the context of Motif, we design a _reworded_, but semantically very similar, prompt and compare the performance of Motif trained with the default and the reworded prompts, with the combination of intrinsic and extrinsic rewards. While we do not observe significant performance differences in the score task (see Appendix H), we show in Figure 6(c) that the intrinsic reward implied by the reworded prompt, in interaction with the extrinsic reward, induces a significantly different behavior compared to the one derived from the default prompt. In particular, while Motif equipped with the default prompt finds the "hallucination technique" to hack the reward function, and does not need to go down the dungeon, the version of Motif that uses the reworded prompt optimizes the expected solution, learning to go down the dungeon to find the oracle. This is due to emergent phenomena resulting from the combination of the LLM's prompt sensitivity and RL training: a change in the prompt affects preferences, which are then distilled into the intrinsic reward, which in turn leads to a behavior. We believe studying this chain of interactions is an important avenue for future safety research.
### Steering towards Diverse Behaviors via Prompting
A major appeal of Motif is that its intrinsic reward can be modulated by prompts provided in natural language to an LLM. This begs the question of whether a human can leverage this feature not only to provide prior information to the agent but also to steer the agent towards particular behaviors, aligned with their intentions. To demonstrate this, we add different modifiers to a base prompt, generating three agents encouraged to have semantically diverse behaviors. The first agent, _The Gold Collector_, is incentivized to collect gold and avoid combat. The second agent, _The Descender_, is encouraged to descend the stairs but avoid confrontation. The third agent, _The Monster Slayer_, is encouraged to combat monsters. For each prompt, we show in Table 1 the messages most preferred by the corresponding reward function and the ratio of improvement over the _zero-knowledge_ prompt. For each agent, we calculate this ratio on the most relevant metric: gold collected for _The Gold Collector_, dungeon level reached for _The Descender_, and number of monsters killed for _The Monster Slayer_.
Figure 6: Changes in downstream performance of the RL agent due to changes in LLM or prompt. (a) Downstream performance scales with the LLM size. (b) Adding more information to the prompt improves the already noticeable performance of a zero-knowledge prompt. (c) The wording of the prompt can lead to very different behaviors in complex tasks.
The results show that the agent's behavior can indeed be steered, with noticeable improvements across all prompts, being more than twice as effective as the baseline at collecting gold or engaging in combat. Inspecting the most preferred messages, _The Gold Collector_ gets higher rewards for collecting gold, but also for discovering new rooms; _The Descender_ is encouraged to explore each level of the dungeon better; _The Monster Slayer_ is led to engage in any kind of combat.
## 5 Related Work
Learning from preferences in sequential decision-making has a long history (Thomaz et al., 2006; Knox and Stone, 2009). In the field of natural language processing, learning from human (Ouyang et al., 2022) or artificial intelligence (Bai et al., 2022) feedback has created a paradigm shift driving the latest innovations in alignment of LLMs. More closely related to our work is Kwon et al. (2022), that also proposes to use LLMs to design reward functions, albeit working with complete trajectories that include the state of the game and the actions at each time step. In comparison, Motif studies the role of artificial intelligence feedback in a challenging long horizon and open-ended domain where the resulting rewards are used as intrinsic motivation (Schmidhuber, 1991). Another closely related work is Du et al. (2023), which leverages LLMs to generate goals for an agent and defines rewards by measuring the cosine similarity between the goal description and the observation's caption. Motif instead builds on the capabilities of LLMs to anticipate future developments when providing preferences on current events. A separate line of work considers leveraging LLMs as agents interacting directly in the environment (Yao et al., 2022; Wang et al., 2023). However, this introduces the necessity to ground the LLM in both the observation and action space (Carta et al., 2023). We further contextualize our approach by discussing more related work in Appendix A.
## 6 Conclusions
We presented Motif, a method for intrinsic motivation from artificial intelligence feedback. Motif learns a reward function from the preferences of an LLM on a dataset of event captions and uses it to train agents with RL for sequential decision-making tasks. We evaluated Motif on the complex and open-ended NetHack Learning Environment, showing that it exhibits remarkable performance both in the absence and in the presence of an external environment reward. We empirically analyzed the behaviors discovered by Motif and its alignment properties, probing the scalability, sensitivity and steerability of agents via LLM and prompt modifications.
We believe Motif to be a first step to harness, in a general and intuitive manner, the common sense and domain knowledge of LLMs to create competent artificial intelligence agents. Motif builds a bridge between an LLM's capabilities and the environment to distill knowledge without the need for complicated textual interfaces. It only relies on event captions, and can be generalized to any environment in which such a captioning mechanism is available. A system like Motif is well-positioned for directly converting progress in large models to progress in decision-making: more capable LLMs or prompting techniques may easily imply increased control competence, and better multimodal LLMs (Alayrac et al., 2022; Maas et al., 2023) could remove the need for captions altogether. Throughout a large part of this paper, we analyzed the behavior and alignment properties of Motif. We encourage future work on similar systems to not only aim at increasing their capabilities but to accordingly deepen this type of analysis, developing conceptual, theoretical and methodological tools to align an agent's behavior in the presence of rewards derived from an LLM's feedback.
\begin{table}
\begin{tabular}{c c c c} \hline \hline
Agent & _The Gold Collector_ & _The Descender_ & _The Monster Slayer_ \\ \hline
Prompt Modifier & Prefer agents that maximize their gold & Prefer agents that go down the dungeon & Prefer agents that engage in combat \\ \hline
Improvement & +106\% more gold (64\%, 157\%) & +17\% more descents (9\%, 26\%) & +150\% more kills (140\%, 161\%) \\ \hline
\multirow{3}{*}{Most preferred messages} & “In what direction?” & “In what direction?” & “You hit the newt.” \\
 & “\$ - 2 gold pieces.” & “The door resists” & “You miss the newt.” \\
 & “\$ - 4 gold pieces.” & “You can see again.” & “You see here a jackal corpse.” \\ \hline \hline
\end{tabular}
\end{table}
Table 1: Performance improvement from a particular prompt on the corresponding metric (collected gold, dungeon level, and number of killed monsters) compared to the unaltered prompt, prompt modifiers, and set of most preferred messages from the different reward functions. |
2309.14070 | Scalar and tensor charmonium resonances in coupled-channel scattering
from QCD | We determine $J^{PC}=0^{++}$ and $2^{++}$ hadron-hadron scattering amplitudes
in the charmonium energy region up to 4100 MeV using lattice QCD, a
first-principles approach to QCD. Working at $m_\pi\approx 391$ MeV, more than
200 finite-volume energy levels are computed and these are used in extensions
of the L\"uscher formalism to determine infinite-volume coupled-channel
scattering amplitudes. We find that this energy region contains a single
$\chi_{c0}$ and a single $\chi_{c2}$ resonance. Both are found as pole
singularities on the closest unphysical Riemann sheet, just below 4000 MeV with
widths around 70 MeV. The largest couplings are to kinematically-closed $D^*
\bar{D}^*$ channels in $S$-wave, and couplings to several decay channels
consisting of pairs of open-charm mesons are found to be large and significant
in both cases. Above the ground state $\chi_{c0}$, no other scalar bound-states
or near-$D\bar{D}$ threshold resonances are found, in contrast to several
theoretical and experimental studies. | David J. Wilson, Christopher E. Thomas, Jozef J. Dudek, Robert G. Edwards | 2023-09-25T12:04:37Z | http://arxiv.org/abs/2309.14070v1 | # Scalar and tensor charmonium resonances in coupled-channel scattering from QCD
###### Abstract
We determine \(J^{PC}=0^{++}\) and \(2^{++}\) hadron-hadron scattering amplitudes in the charmonium energy region up to 4100 MeV using lattice QCD, a first-principles approach to QCD. Working at \(m_{\pi}\approx 391\) MeV, more than 200 finite-volume energy levels are computed and these are used in extensions of the Luscher formalism to determine infinite-volume coupled-channel scattering amplitudes. We find that this energy region contains a single \(\chi_{c0}\) and a single \(\chi_{c2}\) resonance. Both are found as pole singularities on the closest unphysical Riemann sheet, just below 4000 MeV with widths around 70 MeV. The largest couplings are to kinematically-closed \(D^{*}\bar{D}^{*}\) channels in \(S\)-wave, and couplings to several decay channels consisting of pairs of open-charm mesons are found to be large and significant in both cases. Above the ground state \(\chi_{c0}\), no other scalar bound-states or near-\(D\bar{D}\) threshold resonances are found, in contrast to several theoretical and experimental studies.
_Introduction_ -- The experimental mapping of the spectrum of excited hadrons containing a charm-anticharm pair has seen rapid progress in recent years. Driven initially by the discovery of the \(X(3872)\)[1], more novel observations quickly followed, including states with an apparent four-quark nature, such as the \(Z_{c}(3900)\)[2; 3]. Work to decipher this new hadron spectroscopy, going beyond the simple \(c\bar{c}\) quark model, is underway [4; 5; 6; 7; 8]. Within the standard model of particle physics lies Quantum Chromodynamics (QCD), the theory of interacting quarks and gluons, which describes hadrons and their interactions. While the theory is well-defined, it remains challenging to perform calculations of its spectrum owing to its strongly-coupled nature.
Charm quarks are heavy enough that relativistic effects are typically sub-leading, and models built using potentials have proven successful in describing the low-lying spectrum [9; 10; 11; 12; 13; 14]. These approaches work well for states below \(D\bar{D}\) threshold whose lifetimes are relatively long, with charm-anticharm annihilation and radiative transitions being the dominant modes of decay. However, above this point states can decay more rapidly to systems of open-charm mesons, and the physics of coupling to decay modes, where we treat excited states as _resonances_, becomes important.
This article aims to address a key weakness in our present understanding: knowledge of resonance decays from first-principles in QCD. We compute using _lattice QCD_ to determine resonance masses, widths, and decay modes. We begin by considering what might naively be expected to be relatively simple systems: isoscalar scalar and tensor resonances in the approximation where charm-anticharm annihilation is forbidden.1 A summary of the general approach, which takes advantage of the finite spatial volume of the lattice to determine scattering amplitudes in which resonances appear, is given in a recent review [15]. The discrete spectra extracted from correlation functions computed using lattice QCD can be translated into infinite-volume coupled-channel scattering amplitudes using the Luscher formalism [16] and extensions. The scattering amplitudes so obtained contain resonances as pole singularities in much the same way as experimental analyses. The pole positions yield the masses and widths, and the pole residues factorize into the channel couplings, enabling partial widths to be estimated.
Footnote 1: This is well-defined theoretically, and well-justified empirically given the modest hadronic widths observed for states below \(D\bar{D}\) threshold.
In this short report, we present results for resonances found in \(J^{PC}=0^{++}\) and \(2^{++}\). In an accompanying longer article [17], we give more details of our approach, and provide other amplitudes extracted in this work including \(J^{PC}=3^{++}\), which is found to contain a \(\chi_{c3}\) resonance, and negative parity \(J^{PC}=\{1,2,3\}^{-+}\) waves which lack strong scattering, although \(2^{-+}\) contains a near-threshold bound-state.2
Footnote 2: The closest previous work considering some of these channels in Lattice QCD is Ref. [18], and a comparison with the calculation reported on here can be found in the longer article, Ref. [17].
_Computing finite-volume spectra_ -- We perform calculations using lattices with two degenerate dynamical light quark flavors and a heavier dynamical strange quark [19; 20], with a light quark mass value such that \(m_{\pi}\approx 391\) MeV. The valence charm quarks have the same action as the light and strange quarks and are tuned to approximately reproduce the physical \(\eta_{c}\) mass [21]. Three volumes are employed corresponding to
\(L/a_{s}=\{16,20,24\}\), where \(L\) is the spatial extent and \(a_{s}\) is the spatial lattice spacing. Anisotropic lattices with anisotropy \(\xi=a_{s}/a_{t}\approx 3.5\) are used to obtain a finer energy resolution, where \(a_{t}\) is the temporal lattice spacing. In the computation of two-point correlation functions, all relevant Wick contractions, including those featuring light or strange quark annihilation, are performed efficiently using distillation [22].
No lattice QCD study to-date has considered _all_ hadron-hadron channels present in this energy region, even in the simplifying limit where charm-anticharm annihilation is forbidden. In this work, we compute the complete discrete energy spectrum up to around the \(\psi\phi\) threshold by using a large number of interpolating operators with fermion-bilinear (\(c\bar{c}\)-like) and meson-meson-like structures [23]. In particular, we construct operators resembling every relevant hadron-hadron pair with the correct quantum numbers.
While our aim is to determine the \(J^{PC}=\{0,2\}^{++}\) amplitudes, the reduced symmetry of the finite cubic lattice volume means that scattering in multiple \(J^{P}\) partial-waves contributes to the same finite-volume spectra, obtained in the irreducible representations (irreps) of the cubic group [24; 25]. Parity is a good quantum number for systems overall at rest, but it is not when the system has net momentum. The finite-volume of the lattice imposes quantization of momentum, \(\vec{p}=\frac{2\pi}{L}(i,j,k)=[ijk]\) where \(i,j,k\) are integers, and we will compute spectra for several values of total scattering system momentum. Relevant hadron-hadron scattering combinations with \(J^{PC}=\{0,2,3\}^{++}\) are shown in Table 1, with partial-waves labelled by spectroscopic notation.3
Footnote 3: We do not aim to determine any three-hadron amplitudes, but when computing the finite-volume spectra we include operators with \(\eta_{c}\sigma\)-like and \(\chi_{c}\sigma\)-like structures. The corresponding energy levels are found to be decoupled from the other levels. A complete description is given in the accompanying longer article [17].
In Fig. 1 we present a selection of computed finite-volume spectra for zero overall momentum for irreps having \(J^{PC}=0^{++}\) and \(2^{++}\) as lowest partial-waves. Additional spectra with overall non-zero momentum and with leading \(J^{PC}\)=\(\{1,2,3\}^{-+}\) or \(3^{++}\) at zero momentum are presented in Ref. [17]. In each of the three panels in the figure a level is observed below \(\eta_{c}\eta\) threshold with very little volume dependence, and these levels correspond to the stable \(\chi_{c0}(1P)\) bound state (left), and the stable \(\chi_{c2}(1P)\) bound state (middle, right). From \(\eta_{c}\eta\) threshold up to around 3900 MeV there is a one-to-one correspondence between the computed energies and the levels expected in the absence of interactions, and energy shifts from these non-interacting levels are typically small, suggesting only mild interaction strength. Higher up in energy there appear to be extra levels, and more significant departures from the non-interacting spectrum, which may be due to the presence of one or more resonances. To draw more definite conclusions we must determine infinite-volume scattering amplitudes, constrained by these spectra.
_Scattering amplitudes --_ The coupled-channel scattering \(t\)-matrix is obtained from finite-volume energies using Luscher's finite-volume quantization condition [16], generalized for hadron-hadron scattering for hadrons with arbitrary spin [26],
\[\det\Bigl{[}\mathbf{1}+i\mathbf{\rho}(E)\cdot\mathbf{t}(E)\cdot\bigl{(}\mathbf{1}+i \mathbf{\mathcal{M}}(E,L)\bigr{)}\Bigr{]}=0, \tag{1}\]
where \(\mathbf{t}(E)\) is the scattering \(t\)-matrix, \(\mathbf{\mathcal{M}}(E,L)\) is a matrix of known functions dependent on the volume and irrep, and \(\mathbf{\rho}\) is a diagonal matrix of phase-space factors, \(\rho_{i}=2k_{i}/E\). \(E\) is the centre-of-momentum frame energy and \(k_{i}\) the momentum of each hadron in that frame for hadron-hadron channel \(i\).
Figure 1: Spectra in irreps \(\Lambda^{P}=A_{1}^{+},E^{+}\) and \(T_{2}^{+}\) with zero overall momentum, having leading \(J^{PC}=0^{++}\), \(2^{++}\) and \(2^{++}\) partial waves respectively. Points are the computed finite-volume energies colored according to their dominant operator-overlap, with colors given in the key on the right. Black points have large overlap with both \(c\bar{c}\)-like and \(D\bar{D}\)-like operators. Solid curves indicate non-interacting meson-meson energies and dashed lines indicate kinematic thresholds. Degenerate non-interacting levels are indicated by multiple parallel curves, slightly displaced in energy for visual clarity.
\begin{table}
\begin{tabular}{c|l}
\(0^{++}\) & \(\eta_{c}\eta,\,D\bar{D},\,\eta_{c}\eta^{\prime},\,D_{s}\bar{D}_{s},\,\psi\omega,\,D^{*}\bar{D}^{*},\,\psi\phi\;\{^{1}\!S_{0}\}\) \\ \hline
\(2^{++}\) & \(\begin{array}{l}\eta_{c}\eta,\,D\bar{D},\,\eta_{c}\eta^{\prime},\,D_{s}\bar{D}_{s}\;\{^{1}\!D_{2}\};\quad D\bar{D}^{*},\,D_{s}\bar{D}^{*}_{s}\;\{^{3}\!D_{2}\}\\ \psi\omega,\,D^{*}\bar{D}^{*},\,\psi\phi\;\{^{5}\!S_{2}\}\end{array}\) \\ \hline
\(3^{++}\) & \(\begin{array}{l}D\bar{D}^{*},\,\psi\omega,\,D_{s}\bar{D}_{s}^{*},\,\psi\phi\;\{^{3}\!D_{3}\};\quad\eta_{c}\sigma\;\{^{1}\!F_{3}\}\\ \psi\omega,\,D^{*}\bar{D}^{*},\,\psi\phi,\,D_{s}^{*}\bar{D}_{s}^{*}\;\{^{5}\!D_{3}\}\end{array}\) \\
\end{tabular}
\end{table}
Table 1: Hadron-hadron \({}^{2S+1}\ell_{J}\) total spin (\(S\)), orbital angular momentum (\(\ell\)) and total angular momentum combinations (\(J\)) present for \(J^{PC}=\{0,2,3\}^{++}\) scattering. Those given in grey indicate that the corresponding operator constructions were included, but that the scattering channel was found to be decoupled or otherwise not relevant at these energies.
The matrices in Eq. 1 are in the space of relevant hadron-hadron channels and partial waves, as shown in Table 1. Since many channels contribute, \(\mathbf{t}(E)\) is under-constrained at any given value of \(E\), and it is necessary to _parameterize_ the energy dependence. We make use of amplitudes of the form,
\[\left[t^{-1}\right]_{ij}=\left(2k_{i}\right)^{-\ell_{i}}\left[K^{-1}\right]_{ij }\left(2k_{j}\right)^{-\ell_{j}}+I_{ij}\;, \tag{2}\]
where \(K_{ij}\) are the elements of a symmetric matrix that is real for real \(s=E^{2}\). \(S\)-matrix (\(s\)-channel) unitarity mandates that \(\operatorname{Im}I_{ij}=-\rho_{i}\), while a real part can optionally be generated through a dispersion relation as described in App. B of Ref. [27].
The \(0^{++}\) and \(2^{++}\) amplitudes are determined using the constraint provided by 90 and 86 energy levels respectively, taken from spectra both at-rest and for nonzero total momentum. Additional levels from other irreps are used to fix the \(3^{++}\) and negative parity waves which also contribute to Eq. 1, leading to constraint from more than 200 levels in total. Finite-volume energy levels corresponding to the low-lying \(\chi_{c0}(1P)\) and \(\chi_{c2}(1P)\) bound states are observed below \(\eta_{c}\eta\) threshold in Fig. 1, but because they do not constrain the amplitudes in the physical scattering region we choose not to include them when determining the scattering amplitudes.
In \(J^{PC}=\{0,2,3\}^{++}\), amplitudes that prove to be capable of describing the finite-volume spectra are found to house resonance poles coupled to channels consisting of pairs of open-charm mesons. Such poles can be efficiently parameterized by including terms of form \(K_{ij}=g_{i}g_{j}/(m^{2}-s)\), with parameters \(m\) and \(\{g_{i}\}\), and increased flexibility in the amplitude comes from adding a low-order polynomial in \(s\) to this pole term. The free parameters in the amplitudes are determined by comparing the spectrum predicted by Eq. 1 for a given parameterization to the lattice QCD spectra, via a \(\chi^{2}\) minimization [27; 28; 29].
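To make the structure of such parameterizations concrete, the sketch below builds a toy two-channel \(S\)-wave amplitude of the form of Eq. 2, using a pole-plus-constant \(K\)-matrix and the minimal choice \(I_{ij}=-i\rho_{i}\delta_{ij}\); the dispersive real part, higher partial waves, and the finite-volume functions of Eq. 1 that enter the actual \(\chi^{2}\) fits are all omitted, and the masses and couplings are illustrative, not fit results.

```python
import numpy as np

def cm_momentum(s, m1, m2):
    """Centre-of-momentum frame momentum for a two-body channel."""
    return np.sqrt((s - (m1 + m2) ** 2) * (s - (m1 - m2) ** 2) + 0j) / (2 * np.sqrt(s + 0j))

def t_matrix(s, channel_masses, g, m_pole, gamma):
    """Coupled-channel t-matrix with K_ij = g_i g_j / (m^2 - s) + gamma_ij,
    S-wave (ell = 0) and the minimal choice I_ij = -i rho_i delta_ij."""
    K = np.outer(g, g) / (m_pole ** 2 - s) + gamma
    rho = np.array([2 * cm_momentum(s, m1, m2) / np.sqrt(s + 0j)
                    for m1, m2 in channel_masses])
    t_inv = np.linalg.inv(K) - 1j * np.diag(rho)
    return np.linalg.inv(t_inv)

# Toy two-channel example (energies in GeV); parameters are illustrative only.
channel_masses = [(1.87, 1.87), (1.97, 1.97)]
g, gamma = np.array([0.8, 0.9]), 0.1 * np.eye(2)
E = 4.00
print(t_matrix(E ** 2, channel_masses, g, m_pole=3.99, gamma=gamma))
```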
To reduce bias from selection of a specific choice of form for \(K_{ij}\), we consider a range of parameterizations, and when quoting properties of the scattering amplitudes such as pole positions and couplings, we take an envelope over the range of values coming from all parameterization choices that describe the spectra with reasonable \(\chi^{2}/N_{\text{dof}}\). Representative examples resulting from this procedure are shown in Fig. 2.
In both scalar and tensor cases, clear narrow peaks are visible near 4000 MeV, likely indicating resonant behavior. In the scalar case, the peaks in elastic \(D\bar{D}\) and \(D_{s}\bar{D}_{s}\) appear at the same location, and both are distorted in their high-energy tail by the opening of the \(D^{*}\bar{D}^{*}\) channel. In the tensor case, the elastic \(D\bar{D}^{*}\) energy-dependence is sculpted by the \(D\)-wave threshold opening only slightly below the resonance leading to a peaking behavior at a slightly larger energy than the peak in \(D\bar{D}\). No peak is seen in tensor \(D_{s}\bar{D}_{s}\).
_Poles & Interpretation_ -- The partial-wave \(t\)-matrices we use are analytic functions of \(s=E^{2}\) apart from branch cuts opening at thresholds, and poles corresponding to bound-states and resonances.4 Passing through the cuts from the real energy axis where scattering occurs, we enter "unphysical" Riemann sheets on which the resonance poles live. Close to a pole, \(t_{ij}\sim c_{i}c_{j}/(s_{\text{pole}}-s)\), where \(s_{\text{pole}}=\left(m-\frac{i}{2}\Gamma\right)^{2}\) is the location of the pole, and \(c_{i}\) is the coupling of the pole to channel \(i\), which can be related to the partial width \(\Gamma_{i}\) for a kinematically-open channel. For the amplitudes in the current study which describe the computed finite-volume spectra, we find resonance poles on the "proximal sheet" which has \(\operatorname{Im}k_{i}<0\) for kinematically open channels and \(\operatorname{Im}k_{i}>0\) for closed channels, and which is closest to physical scattering. 5
Footnote 4: Neither the \(t\)-matrices we utilize, nor the finite-volume quantization condition [16; 26], explicitly include singularities due to hadron exchange processes in the \(t\) and \(u\)-channels, such as pion exchange in \(D\bar{D}^{*}\) and \(D^{*}\bar{D}^{*}\), which has been highlighted recently for the related \(T_{cc}(3875)^{+}\)[30; 31], and \(NN\)[32] scattering. Work is underway to extend the finite-volume formalism [33] to explicitly account for such physics.
Footnote 5: The amplitudes considered also feature poles on other, more distant, sheets that are not as relevant for scattering at real energies.
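For orientation, the conversion from a pole location in \(s\) to the quoted mass and width follows directly from \(\sqrt{s_{\text{pole}}}=m-\tfrac{i}{2}\Gamma\); the numbers in the short sketch below are placeholders, not fitted values.

```python
import numpy as np

def pole_mass_width(s_pole):
    """sqrt(s_pole) = m - i*Gamma/2  ->  (m, Gamma)."""
    root = np.sqrt(complex(s_pole))
    return root.real, -2.0 * root.imag

m, gamma = pole_mass_width((3995.0 - 0.5j * 67.0)**2)   # placeholder numbers in MeV
print(round(m), round(gamma))                            # -> 3995 67
```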
Investigating \(0^{++}\), a single resonance pole with large couplings to \(D\bar{D}\), \(D_{s}\bar{D}_{s}\) and \(D^{*}\bar{D}^{*}\) is found on the proximal sheet in every amplitude. Its location is \(m\approx 3995(14)\) MeV, \(\Gamma\approx 67(38)\) MeV, and at this energy, \(D^{*}\bar{D}^{*}\) is a closed channel, but decays are possible to \(D\bar{D}\) and \(D_{s}\bar{D}_{s}\) with branching fractions of approximately 40% and 60% respectively. In most amplitudes which describe the finite-volume spectra, very small couplings are found to \(\psi\omega\), although in a few cases a larger value is not ruled out, with a branching fraction no larger than 40%.
Similarly in \(J^{PC}=2^{++}\), only a single resonance pole appears on the proximal sheet, with large couplings to open \(D\bar{D}\), \(D\bar{D}^{*}\) (both in \(D\)-wave) and closed \(D^{*}\bar{D}^{*}\) (in \(S\)-wave). The pole has only very small couplings to \(D_{s}\bar{D}_{s}\), \(\eta_{c}\eta\), \(\psi\omega\) and \(\psi\phi\), and is located at \(m\approx 3961(15)\) MeV, \(\Gamma\approx 65(15)\) MeV. Poles and couplings are shown in Fig. 2.
The use of a light quark mass heavier than the physical value, and expectations of discretization effects, precludes direct comparison of our results with experiment. Nevertheless, we expect resonance properties in the current system to have even milder dependence on the light quark mass than for lighter hadrons [34; 35; 36; 37; 38; 39; 40; 41], so we can view previous experimental results and theoretical predictions in the context of our results.
In the energy region below about 4000 MeV, our calculation results in a state-counting consistent with \(c\bar{c}\) quark-models [10; 12], in which the lightest scalar and tensor states are \(1P\) configurations, and the excited states we have observed would correspond to the \(2P\) radial excitations. The resonances found in this study favor decays to open-charm \(D\)-meson pairs over closed-charm final states, supporting the long-standing OZI phenomenology.
The experimental \(X(3872)\) observed close to \(D\bar{D}^{*}\) threshold has motivated models with attraction between
the open-charm mesons mediated by pion exchange, with enough strength to provide binding. Heavy-quark spin symmetry then suggests similar effects may occur in \(D^{*}\bar{D}^{*}\) in \(S\)-wave [42; 43]. The scalar and tensor resonance poles found in the current calculation do have large couplings to the kinematically-closed \(S\)-wave \(D^{*}\bar{D}^{*}\) channel, but in both cases the attraction is apparently not large enough to produce an additional state beyond the expectations of \(c\bar{c}\) excitations.
Our results suggest a single \(0^{++}\) resonance that might explain both the \(\chi_{c0}(3930)\)[44] and \(\chi_{c0}(3960)\)[45] peak structures seen in \(D\bar{D}\) and \(D_{s}\bar{D}_{s}\) final states respectively. Claims for an additional \(\chi_{c0}\) state between 3700 and 3860 MeV appear in experiment [46], lattice [18], bound hadron-molecule models [47], \(c\bar{c}+D\bar{D}\) hadron-loop dressing models [48; 49; 50; 51; 52], and reanalyses [53; 54; 55; 56] of the experimental data, although no such state is reported in recent LHCb data [44; 57]. Our calculation shows no indication of any such additional state.
The single \(2^{++}\) resonance found in this calculation decays to \(D\bar{D}\) and \(D\bar{D}^{*}\), but has at most weak coupling to \(D_{s}\bar{D}_{s}\) and closed-charm final states. This result is not in tension with the current experimental situation, where a \(\chi_{c2}(3930)\) has been identified in \(D\bar{D}\)[58; 44; 59]. The \(X(3915)\) seen in vector-vector \(\psi\omega\) scattering [60] could be attributed to either \(0^{++}\) or \(2^{++}\), but our findings indicate that interactions in this channel are rather weak.
_Outlook_ -- These results at \(m_{\pi}\approx 391\) MeV suggest a state-counting in \(0^{++}\) and \(2^{++}\) that is not obviously different from expectations in \(c\bar{c}\) pictures. To reconcile these findings with works that find additional states at physical pion masses, distant pole singularities that do not impact the current analysis would be required to move rapidly through the complex energy plane as the light quark mass is reduced. Eliminating this possibility motivates further calculations at lighter quark masses using the current techniques.
Figure 2: Scattering amplitudes (top) for \(J^{PC}=0^{++}\) (left) and \(2^{++}\) (right). A single representative amplitude is plotted for each \(J^{PC}\) as \(\rho_{i}\rho_{j}|t_{ij}|^{2}\), which is similar to the scattering cross section. Errorbands are determined by sampling the parameter uncertainties determined from the \(\chi^{2}\) minimum. Small circles on the horizontal axes mark the locations of key hadron-hadron thresholds. Energies used to constrain the amplitudes (middle, open circles) and resonance poles (bottom) are also shown, with the pole parameters reflecting the full uncertainty over parameterization variation, as presented in Ref. [17].
Unifying enhancements observed in different final states by identifying pole singularities in unitarity-respecting scattering amplitudes has proven essential, as clearly observed in Fig. 2 in the \(D\bar{D}\) and \(D\bar{D}^{*}\) tensor amplitudes that have different peak locations and amplitude shapes but arise due to a common state. Experimental candidate states appear in _production_ processes rather than scattering, but such processes can also be described in terms of the coupled-channel scattering \(t\)-matrix, and are constrained by unitarity. Future lattice calculations of electroweak production processes appear to be feasible [61; 62; 63].
Further applications of the lattice QCD approach presented in this paper will consider other near-threshold charmonia, the \(X(3872)\) channel being a particularly interesting prospect.
###### Acknowledgements.
We thank our colleagues within the Hadron Spectrum Collaboration (www.hadspec.org), in particular Raul Briceno, Andrew Jackura and Arkaitz Rodas, and also acknowledge useful discussions with Igor Danilkin, Feng-Kun Guo, Christoph Hanhart, Sasa Prelovsek, Steve Sharpe and Adam Szczepaniak. DJW acknowledges support from a Royal Society University Research Fellowship. DJW & CET acknowledge support from the U.K. Science and Technology Facilities Council (STFC) [grant number ST/T000694/1]. JJD acknowledges support from the U.S. Department of Energy contract DE-SC0018416 at William & Mary, and JJD & RGE from contract DE-AC05-06OR23177, under which Jefferson Science Associates, LLC, manages and operates Jefferson Lab. The software codes Chroma [64], QUDA [65; 66], QUDA-MG [67], QPhIX [68], MG_PROTO [69], QQPQP [70; 71], and Redstar [72] were used. Some software codes used in this project were developed with support from the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research and Office of Nuclear Physics, Scientific Discovery through Advanced Computing (SciDAC) program; also acknowledged is support from the Exascale Computing Project (17-SC-20-SC), a collaborative effort of the U.S. Department of Energy Office of Science and the National Nuclear Security Administration. This work used the Cambridge Service for Data Driven Discovery (CSD3), part of which is operated by the University of Cambridge Research Computing Service (www.csd3.cam.ac.uk) on behalf of the STFC DiRAC HPC Facility (www.dirac.ac.uk). The DiRAC component of CSD3 was funded by BEIS capital funding via STFC capital grants ST/P002307/1 and ST/R002452/1 and STFC operations grant ST/R00689X/1. Other components were provided by Dell EMC and Intel using Tier-2 funding from the Engineering and Physical Sciences Research Council (capital grant EP/P020259/1). This work also used the earlier DiRAC Data Analytic system at the University of Cambridge. This equipment was funded by BIS National E-infrastructure capital grant (ST/K001590/1), STFC capital grants ST/H008861/1 and ST/H00887X/1, and STFC DiRAC Operations grant ST/K00333X/1. DiRAC is part of the National E-Infrastructure. This work also used clusters at Jefferson Laboratory under the USQCD Initiative and the LQCD ARRA project. Propagators and gauge configurations used in this project were generated using DiRAC facilities, at Jefferson Lab, and on the Wilkes GPU cluster at the University of Cambridge High Performance Computing Service, provided by Dell Inc., NVIDIA and Mellanox, and part funded by STFC with industrial sponsorship from Rolls Royce and Mitsubishi Heavy Industries. Also used was an award of computer time provided by the U.S. Department of Energy INCITE program and supported in part under an ALCC award, and resources at: the Oak Ridge Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract DE-AC05-00OR22725; the National Energy Research Scientific Computing Center (NERSC), a U.S. Department of Energy Office of Science User Facility located at Lawrence Berkeley National Laboratory, operated under Contract No. DE-AC02-05CH11231; the Texas Advanced Computing Center (TACC) at The University of Texas at Austin; the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation Grant No. 
ACI-1548562; and part of the Blue Waters sustained-petascale computing project, which is supported by the National Science Foundation (awards OCI-0725070 and ACI-1238993) and the state of Illinois. Blue Waters is a joint effort of the University of Illinois at Urbana-Champaign and its National Center for Supercomputing Applications.
|
2309.06474 | Spectroscopic analysis of hot, massive stars in large spectroscopic
surveys with de-idealised models | Upcoming large-scale spectroscopic surveys with e.g. WEAVE and 4MOST will
provide thousands of spectra of massive stars, which need to be analysed in an
efficient and homogeneous way. Usually, studies of massive stars are limited to
samples of a few hundred objects which pushes current spectroscopic analysis
tools to their limits because visual inspection is necessary to verify the
spectroscopic fit. Often uncertainties are only estimated rather than derived
and prior information cannot be incorporated without a Bayesian approach. In
addition, uncertainties of stellar atmospheres and radiative transfer codes are
not considered as a result of simplified, inaccurate or incomplete/missing
physics or, in short, idealised physical models.
Here, we address the question of "How to compare an idealised model of
complex objects to real data?" with an empirical Bayesian approach and maximum
a {\it posteriori} approximations. We focus on application to large-scale
optical spectroscopic studies of complex astrophysical objects like stars. More
specifically, we test and verify our methodology on samples of OB stars in the 30
Doradus region of the Large Magellanic Cloud using a grid of FASTWIND model
atmospheres.
Our spectroscopic model de-idealisation analysis pipeline takes advantage of
the statistics that large samples provide by determining the model error to
account for the idealised stellar atmosphere models, which are included into
the error budget. The pipeline performs well over a wide parameter space and
derives robust stellar parameters with representative uncertainties. | J. M. Bestenlehner, T. Enßlin, M. Bergemann, P. A. Crowther, M. Greiner, M. Selig | 2023-09-12T18:00:01Z | http://arxiv.org/abs/2309.06474v1 | Spectroscopic analysis of hot, massive stars in large spectroscopic surveys with de-idealised models
###### Abstract
Upcoming large-scale spectroscopic surveys with e.g. WEAVE and 4MOST will provide thousands of spectra of massive stars, which need to be analysed in an efficient and homogeneous way. Usually, studies of massive stars are limited to samples of a few hundred objects which pushes current spectroscopic analysis tools to their limits because visual inspection is necessary to verify the spectroscopic fit. Often uncertainties are only estimated rather than derived and prior information cannot be incorporated without a Bayesian approach. In addition, uncertainties of stellar atmospheres and radiative transfer codes are not considered as a result of simplified, inaccurate or incomplete/missing physics or, in short, idealised physical models.
Here, we address the question of "How to compare an idealised model of complex objects to real data?" with an empirical Bayesian approach and maximum a _posteriori_ approximations. We focus on application to large-scale optical spectroscopic studies of complex astrophysical objects like stars. More specifically, we test and verify our methodology on samples of OB stars in the 30 Doradus region of the Large Magellanic Cloud using a grid of FASTWIND model atmospheres.
Our spectroscopic model de-idealisation analysis pipeline takes advantage of the statistics that large samples provide by determining the model error to account for the idealised stellar atmosphere models, which are included into the error budget. The pipeline performs well over a wide parameter space and derives robust stellar parameters with representative uncertainties.
keywords: methods: data analysis; methods: statistical; techniques: spectroscopic; atomic data; stars: massive; stars: fundamental parameters
## 1 Introduction
With the advent of large spectroscopic surveys using instruments such as WEAVE (Jin et al., 2022) and 4MOST (de Jong et al., 2019) tens of thousands of spectra of massive stars (\(\gtrsim 10M_{\odot}\)) will be obtained, which will need to be analysed in a homogeneous and efficient way (e.g. Cioni et al., 2019; Bensby et al., 2019; Chiappini et al., 2019). Current pipelines of large spectroscopic surveys are largely designed for FGK stars, which are either data driven (e.g. Ness et al., 2015; Guiglion et al., 2020) or model driven (e.g. Allende-Prieto & Apogee Team, 2015; Ting et al., 2019). Traditionally, and still widely performed today, massive stars have been analysed by "eye", which limits the sample size to \(<100\) massive stars. In addition, stellar parameters as well as uncertainties are estimated rather than determined. Larger samples of a couple of hundred stars are usually analysed with a \(\chi^{2}\)-minimisation algorithm, where the final fit often needs to be visually verified depending on the goodness-of-fit.
Multi-dimensional probability distribution functions are obtained depending on the number of free parameters and uncertainties are then defined on confidence intervals rather than Gaussian standard deviations. Those uncertainties can be highly asymmetric and very large in the case of degenerated parameters. In the massive star community there are 2 main flavours of \(\chi^{2}\)-minimisation algorithms, grid based (e.g. Simon-Diaz et al., 2011; Castro et al., 2012; Bestenlehner et al., 2014) and Genetic Algorithms on the basis of natural selection (e.g. Mokiem et al., 2007; Brands et al., 2022). All those algorithms use a pre-defined selection of spectral line regions for their analysis.
Theoretical models of complex physical systems are necessarily idealisations. The implied simplifications allow us to focus the view on the essential physics, to keep the model computationally feasible, and to investigate systems for which not all of their components are perfectly known. In contrast to solar-like stars, massive stars have strong stellar winds, which influence the structure of the stellar atmospheres (line blanketing) and create a pseudo-photosphere at optical depth 2/3 that is located in the stellar wind. The inclusion of line-driven winds into stellar atmosphere models requires the assumption of spherical geometry in the co-moving frame of the star (expanding atmosphere). In addition to effective temperature and surface gravity, the mass-loss rate, velocity law, terminal velocity, wind inhomogeneity and line blanketing need to be included in the stellar atmosphere code. The stellar atmospheres of massive stars significantly depart from local thermal equilibrium (LTE) and therefore must be computed in full non-LTE, which is computationally expensive (e.g. Santolaya-Rey et al., 1997; Hillier & Miller, 1998; Gräfener et al., 2002). This limits state-of-the-art stellar atmosphere codes for hot, massive stars to 1D.
When faced with real data of the actual system, these models often perform insufficiently when judged on a goodness-of-fit basis. The difference between real and modelled data can easily exceed the error budget of the measurement. The reason is that the idealised models do not capture all aspects of the real systems, which are nevertheless still present in the data. To discriminate these missing aspects from measurement errors or noise, we use the term _model errors_ to describe these imperfections of our theoretical description (Oberpriller and Enßlin 2018).
Often one tries to determine the model parameters from the data via a likelihood-based methodology such as \(\chi^{2}\)-minimisation, maximum likelihood or Bayesian parameter estimation that uses the measurement uncertainties as a metric in data space. The resulting parameter estimates can be strongly distorted by the desire of the method to minimise all apparent differences between predicted and real data, regardless of whether these are due to measurement or model errors.
Thus the model errors should be included into the error budget of the parameter estimation. This would require that we have a model for the not yet captured aspects of the system or at least for the model errors these produce. To do this thoroughly we would need to undertake a case by case analysis of the missing physics.
However, this would be quite impractical in cases of the spectroscopic analysis of complex astrophysical objects. Instead, we want to construct a plausible, but by no means perfect, effective description of the model errors. In the construction of the de-idealisation model, we will follow a pragmatic route, but try to indicate the assumptions, approximations and simplifications made.
In Section 2 we introduce the methodology, which is used in our spectroscopic analysis pipeline (Section 3). Using grids of stellar atmospheres (Section 3.3) the pipeline is applied to observational data (Section 3.4). The results are discussed in Section 4. We close with a brief conclusion and outlook (Section 5).
## 2 Method
### Data model
We assume we have a set of objects (e.g. stars, galaxies,...: labelled by \(i\in\{1,\,2,\,\ldots\,n\}\)) with observable signals \(s^{(i)}=(s_{x}^{(i)})_{x}\) over some coordinate \(x\), e.g. the emitted spectral energy distribution \(s^{(i)}=(s_{\lambda}^{(i)})_{\lambda}\) as a function of the wavelength \(\lambda\). These signals are measured with a linearly responding instrument (response matrix \(R\)) with additive Gaussian noise (\(n\)) according to the measurement equation
\[d^{(i)}=R^{(i)}s^{(i)}+n^{(i)}. \tag{1}\]
The individual elements of the data vector (\(d\)) for the \(i\)-th object are then given by
\[d_{j}^{(i)}=\int dx\,R_{jx}^{(i)}s_{x}^{(i)}+n_{j}^{(i)}, \tag{2}\]
where in our spectroscopic cases \(R_{j}^{(i)}\) is the \(j\)-th bandpass of our \(i\)-th observation as a function of wavelength \(x=\lambda\). Spectroscopic, colour filter, and bolometric measurements can thereby be treated with the same formalism and even combined into a single data vector and response matrix. In addition, we do not require that all objects are observed in the same way by keeping the response matrix dependent on the object index \(i\). In this way, the formalism permits to combine heterogeneous observations.
For the measurement noise \(n^{(i)}\) of the \(i\)-th observation we use the error-spectrum from the data reduction which is assumed for simplicity to be Gaussian with zero mean and signal independent,
\[\mathcal{P}(n^{(i)}|s^{(i)})=\mathcal{G}(n^{(i)},N^{(i)})=\frac{1}{\sqrt{|2\pi N^{(i)}|}}\exp\left[-\frac{1}{2}n^{(i)\dagger}\left(N^{(i)}\right)^{-1}n^{(i)}\right], \tag{3}\]
with assumed noise covariance \(N^{(i)}=\langle n^{(i)}n^{(i)\dagger}\rangle_{(n^{(i)}|s^{(i)})}\). The dagger denotes the transposed and complex conjugated vector. The noise of the different observations is assumed to be independent as well, \(\mathcal{P}(n|s)=\mathcal{G}(n,N)=\prod_{i}\mathcal{G}(n^{(i)},N^{(i)})\) with \(n=(n^{(i)})_{i}\) and \(s=(s^{(i)})_{i}\) being the combined noise and signal vectors.
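To make the notation concrete, a toy numerical realisation of this measurement equation might look as follows; the boxcar bandpasses, the single Gaussian absorption line and all numbers are placeholders chosen purely for illustration, not quantities used by the pipeline described later.

```python
import numpy as np

rng = np.random.default_rng(1)

# signal: a synthetic spectrum s_x on a fine wavelength grid (placeholder values)
wav = np.linspace(4000.0, 5000.0, 2000)                                  # Angstrom
signal = 1.0 - 0.4 * np.exp(-0.5 * ((wav - 4471.0) / 1.5)**2)            # toy He I 4471 line

# response: each data pixel j integrates the signal over a boxcar bandpass R_jx
n_pix = 400
edges = np.linspace(wav[0], wav[-1], n_pix + 1)
R = np.array([(wav >= lo) & (wav < hi) for lo, hi in zip(edges[:-1], edges[1:])], dtype=float)
R /= R.sum(axis=1, keepdims=True)                                        # normalise each bandpass

# noise: Gaussian with known diagonal covariance N
sigma = 0.02
N = np.diag(np.full(n_pix, sigma**2))
noise = rng.normal(0.0, sigma, n_pix)

data = R @ signal + noise                                                # d = R s + n
```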
### Model errors
Now we assume that some idealised model for our objects exists that predicts a specific theoretical signal \(t^{[p]}\) given a set of unknown model parameters \(p\). These parameters should be physical quantities like surface gravity, radius, effective temperature, stellar wind properties or chemical composition of the object, so that well defined values \(p^{(i)}\) exist for each object1. Those idealised models can be generated with stellar atmosphere and radiative transfer codes. Knowing these parameters for each object is the primary goal of the inference. In principle, the relation between parameters and signal could be stochastic, but for simplicity we concentrate on deterministic models. The de-idealisation model we develop should serve as an effective description for the remaining stochasticity.
Footnote 1: Counter example would be purely phenomenological parameters, describing aspects of the data that contain observation dependent properties, or such that only make sense within a specific object description methodology. Although the here proposed approach might be applicable to such phenomenological descriptions as well, we currently demand the physical existence of the used parameters in order to be on epistemologically firm grounds.
The idealized model captures hopefully the dominant properties of the system but certainly not all aspects. Therefore the real signal \(s^{(i)}\) of an object will deviate by an unknown stochastic component \(u^{(i)}\), the model error, so that
\[s^{(i)}=t^{[p]}+u^{(i)}. \tag{4}\]
The aim of model de-idealization is to find an appropriate stochastic model \(\mathcal{P}(u|p)=\prod_{i}\mathcal{P}(u^{(i)}|p^{(i)})\) for the model errors. With such, the likelihood becomes
\[\mathcal{P}(d|p)=\int\mathcal{D}u\,\mathcal{P}(d|s=t^{[p]}+u)\,\mathcal{P}(u| p), \tag{5}\]
where \(\int\mathcal{D}u\) denotes a phase space integral for the model errors.
In our case, we want to restrict ourselves to using the simplest possible representation of the model uncertainties. This means that we take only the first and second moments of \(u^{(i)}\) into account, \(v^{(i)}=\langle u^{(i)}\rangle_{(u|p)}\) and \(U^{(i)}=\langle(u-v)^{(i)}(u-v)^{(i)\dagger}\rangle_{(u|p)}\), and assume the fluctuations of different objects to be independent, \(\langle u^{(i)}u^{(j)\dagger}\rangle_{(u|p)}=v^{(i)}v^{(j)\dagger}+\delta_{ij }U^{(i)}\). The probability distribution that represents mean and variance without further information on higher order correlations naturally is a Gaussian with this mean and variance. Among all possible probability distributions with given mean and variance it has a maximal entropy (e.g. Jaynes and Bretthorst 2003; Caticha 2008). By adopting a Gaussian for the model
errors,
\[\mathcal{P}(u|p)=\mathcal{G}(u-v,\,U), \tag{6}\]
the least amount of spurious information is entered into the inference system in case only \(v\) and \(U=\langle(u-v)\,\,(u-v)^{\dagger}\rangle_{(u|p)}\) are considered.
This does not mean that the model error statistics is a Gaussian in reality. It just means that higher order correlations are ignored for the time being. Taking such higher order correlations into account would most certainly improve the method, but is left for future work.
The Gaussianity of measurement noise and modelling error description permits us to integrate Equation (5) analytically leading to
\[\mathcal{P}(d|p,\,v,\,U)=\mathcal{G}(d-R(v+t^{[p]}),\,M), \tag{7}\]
with \(M=N+R\,U\,R^{\dagger}\) being the combined error covariance.
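Continuing the toy quantities from the previous sketch, the marginalised likelihood of Equation (7) can be evaluated directly once \(v\), \(U\) and a model prediction \(t^{[p]}\) are supplied; this is only a sketch of the algebra, not the pipeline implementation.

```python
import numpy as np

def log_likelihood(d, R, t_p, v, N, U):
    """Gaussian log-likelihood of Eq. (7): residual d - R(v + t^[p]), covariance M = N + R U R^T."""
    M = N + R @ U @ R.T
    r = d - R @ (v + t_p)
    sign, logdet = np.linalg.slogdet(2.0 * np.pi * M)
    return -0.5 * (r @ np.linalg.solve(M, r) + logdet)

# example call reusing the toy arrays above, with a small diagonal model-error covariance
ll = log_likelihood(data, R, t_p=signal, v=np.zeros(signal.size), N=N,
                    U=(0.01**2) * np.eye(signal.size))
```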
### Implicit hyperprior
The de-idealisation model requires that the auxiliary parameters \(v\) and \(U\) are determined as well, or better marginalised over. This requires that we specify our prior knowledge on these parameters, \(\mathcal{P}(v,U|p)\), which is a very problem specific task. Using such a hyperprior we could then derive an auxiliary parameter marginalised estimator for our desired model parameters \(p\). A good, but numerically very expensive approach would be to sample over the joint space of model and auxiliary parameters, \(p\), \(v\), and \(U\), for example using the Gibbs sampling method (e.g. Wandelt et al. 2004; Jasche et al. 2010).
In order to have a generic, affordable and pragmatic method we introduce a number of approximations and simplifications. The first is that we replace the auxiliary parameter marginalisation by an estimation using the following approximation
\[\mathcal{P}(d,p) = \int\mathcal{D}v\int\mathcal{D}U\,\mathcal{P}(d|p,\,v,\,U) \,\mathcal{P}(v,U|p)\,\mathcal{P}(p) \tag{8}\] \[\approx \mathcal{P}(d|p,\,v^{\star},\,U^{\star})\,\mathcal{P}(p)\]
with \(v^{\star}\) and \(U^{\star}\) being suitable estimates of the auxiliary parameters and \(\mathcal{P}(p)\) the parameter prior. Instead of constructing these point estimators using the so far unspecified and problem specific priors we propose to pragmatically specify them with an educated ad-hoc construction.
The idea is to assume for a moment that a correct model parameter classification \(p^{(i)}\) for any object exists, which later on has to be estimated self-consistently with all the other estimates via iteration. The difference of the signals reconstructed from the data \(m^{(i)}=\langle s^{(i)}\rangle_{(s^{(i)}|d^{(i)},p^{(i)})}\) and the one predicted from the model \(t^{[p^{(i)}]}\) plus the current guesses for \(v^{\star}\),
\[\delta^{(i)}=m^{(i)}-t^{[p^{(i)}]}-v^{(i)\star} \tag{9}\]
can be analysed to provide information on \(v\) and \(U\). The signal reconstruction can be done via a Wiener filter, since this is optimal in case of a linear measurement and Gaussian noise model Equation (7). For the signal difference this is
\[\delta^{(i)}=D^{(i)}R^{(i)\dagger}\left(N^{(i)}\right)^{-1}\left[d^{(i)}-R^{(i)}\left(t^{[p^{(i)}]}+v^{(i)\star}\right)\right],\qquad D^{(i)}=\left[\left(U^{(i)\star}\right)^{-1}+R^{(i)\dagger}\left(N^{(i)}\right)^{-1}R^{(i)}\right]^{-1}, \tag{10}\]
where \(D^{(i)}\) is the Wiener variance or uncertainty of the reconstruction. For the current case we have used some guesses for \(v^{\star}\) and \(U^{\star}\) that need to be updated accordingly to the information contained in the statistics of the signal differences \(\delta^{(i)}\). To do so, we introduce a suitable proximity measure in parameter space, \(\omega_{ii^{\prime}}=\text{prox}(p^{(i)},p^{(i^{\prime})})\), that indicates for an object \(i\) how much another objects \(i^{\prime}\) can be used to learn about the model error statistics of \(i\). A naive choice would be \(\omega_{ii^{\prime}}=1\) always, assuming that the model error statistics is everywhere the same in the model parameter space. A more sophisticated method would partition the parameter space into characteristic regions (e.g. corresponding to the different known star and galaxy classes in our spectroscopic example) and to set \(\omega_{ii^{\prime}}=1\) or \(\omega_{ii^{\prime}}=0\) in case \(i\) and \(i^{\prime}\) belong to the same or different classes. Even more sophisticated proximity weighting schemes can be imagined with \(\omega_{ii^{\prime}}=1/(1+\text{dist}(p^{(i)},p^{(i^{\prime})}))\) using some distance measure in parameter space. However, in our case we group objects together with respect to their main line diagnostics and analyse them in the same batch.
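In code, the Wiener-filter step of Equation (10) amounts to a pair of linear solves; the sketch below assumes dense covariance matrices, which is only affordable after the dimensionality reduction described in Section 3.

```python
import numpy as np

def wiener_step(d, R, t_p, v_star, N, U_star):
    """Signal-space residual delta^(i) and its Wiener covariance D^(i), cf. Eq. (10)."""
    D = np.linalg.inv(np.linalg.inv(U_star) + R.T @ np.linalg.solve(N, R))
    resid = d - R @ (t_p + v_star)
    delta = D @ (R.T @ np.linalg.solve(N, resid))
    return delta, D
```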
Given such a scheme, the update operation for \(v^{\star}\) and \(U^{\star}\) are
\[v^{(i)}{}^{\star} \rightarrow v^{(i)}{}^{\star}+\sum_{i^{\prime}}\frac{\omega_{ii^{\prime}}} {\Omega_{i}}\delta^{(i^{\prime})},\] \[U^{(i)}{}^{\star} = \sum_{i^{\prime}}\frac{\omega_{ii^{\prime}}}{\Omega_{i}}\left( \delta^{(i^{\prime})}\delta^{(i^{\prime})^{\dagger}}+D^{(i^{\prime})}\right)\text { with} \tag{11}\] \[\Omega_{i} = \sum_{i^{\prime}}\omega_{ii^{\prime}}. \tag{12}\]
The \(v^{\star}\) update operation is a simple absorption of any systematic difference into the mean component of the model error \(v^{\star}\). For the initial iteration it is better to set \(v^{(i)\star}=0\), as \(v\) would otherwise absorb any differences between data and model even where the model is not representative of the objects. However, we find that the best choice for \(v\) is to represent non-stellar features, which are not part of the model, like nebular lines, interstellar bands or telluric lines.
The \(U^{\star}\) update incorporates the variance in the signal difference reconstructions as well as their Wiener variances. The latter express the level of missing fluctuations of the Wiener filter reconstruction. The correction with the Wiener variance is done in analogy to the critical filter methodology developed in (Ensslin & Frommert, 2011; Ensslin & Weig, 2010), where a similar variance estimation was derived under a non-informative prior on the power spectrum of a statistical homogeneous random process. For this study we intuitively extended this to non-diagonal covariance structures. This means we adopt implicitly a non-informative hyperprior for the model error statistics as summarised by \(v\) and \(U\).
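The proximity-weighted updates of Equations (11)-(12) could then be written as follows; again this is an illustrative sketch rather than the production code.

```python
import numpy as np

def update_model_error(deltas, Ds, omega):
    """Update (v*, U*) for every object from the residuals delta^(i') and Wiener covariances D^(i'),
    weighted by the proximity matrix omega_ii' (Eqs. 11-12)."""
    n_obj = len(deltas)
    dv, U_new = [], []
    for i in range(n_obj):
        w = omega[i] / omega[i].sum()                       # omega_ii' / Omega_i
        dv.append(sum(w[j] * deltas[j] for j in range(n_obj)))
        U_new.append(sum(w[j] * (np.outer(deltas[j], deltas[j]) + Ds[j]) for j in range(n_obj)))
    return dv, U_new      # v*^(i) is incremented by dv[i]; U*^(i) is replaced by U_new[i]
```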
The logic behind this implicit prior is as follows. Assuming we would have managed to specify an appropriate non-informative hyperprior for \(v\) and \(U\). From this we would derive some recipe for our point estimates \(v^{\star}\) and \(U^{\star}\) using some approximations. The resulting recipe should not incorporate hidden spurious assumptions. The \(v^{\star}\) and \(U^{\star}\) estimates therefore can only be build from elements like \(\delta^{(i^{\prime})}\), \(\delta^{(i^{\prime})}\delta^{(i^{\prime})^{\dagger}}\), and \(D^{(i^{\prime})}\), the latter being a summary of the former elements. Requiring the estimators to be unbiased and to enclose the mentioned critical filter as a limiting case, which is an unbiased power spectrum estimator, then fixes the numerical coefficients in front of \(\delta^{(i^{\prime})}\), \(\delta^{(i^{\prime})}\delta^{(i^{\prime})^{\dagger}}\), and \(D^{(i^{\prime})}\) in Equation (11) to unity.
We admit that there are some frequentist elements in this derivation, since it postulates an estimator and argues for its appropriateness using bias arguments. We hope that it will be replaced by a more stringent calculation once a suitable non-informative hyperprior has been specified. For the time being, we use it in iteration with model parameter estimation.
### Method summary
The combined de-idealised model parameter estimation method comprises the following steps (a schematic code sketch follows the list):
1. Specify the weighting scheme \(\omega_{ii^{\prime}}=\text{prox}(p^{(i)},p^{(i^{\prime})})\) that determines how similar the model parameters of two objects have to be so that their model error statistics can be assumed to be similar.
2. Adopt some initial guess or default values for the model parameters \(p^{(i)}\) and model error parameters \(v^{(i)\star}\) and \(U^{(i)\star}\). A naive choice could be \(p^{(i)}=p\) for some plausible central \(p\) within the physically permitted parameter space, \(v^{(i)\star}=0\), and \(U^{(i)\star}=\sum_{i^{\prime}}\left(R^{(i^{\prime})\dagger}d^{(i^{\prime})}-t^{[p]}\right)\left(R^{(i^{\prime})\dagger}d^{(i^{\prime})}-t^{[p]}\right)^{\dagger}\).
3. Calculate \(\delta^{(i)}\) and \(D^{(i)}\) according to Equation (10) for all objects.
4. Update \(v^{(i)\star}\) and \(U^{(i)\star}\) according to Equation (11) for all objects.
5. Update the model parameters \(p^{(i)}\) of all objects using the combined likelihoods Equation (7) that incorporates measurement and model errors.
6. Repeat steps 3-5 until convergence.
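Put together, and reusing the `wiener_step` and `update_model_error` sketches above, the iteration could look schematically as below; the grid, the starting parameters and the fixed number of iterations are placeholders standing in for the actual convergence criterion.

```python
import numpy as np

def chi2_combined(d, R, t_p, v, N, U):
    """Chi-square with the combined covariance M = N + R U R^T (cf. Eq. 7)."""
    M = N + R @ U @ R.T
    r = d - R @ (v + t_p)
    return r @ np.linalg.solve(M, r)

def deidealised_fit(data, responses, noise_covs, grid, omega, n_iter=5):
    """Schematic loop over steps 2-6; `grid` is a dict mapping parameter tuples to model spectra t^[p]."""
    n_obj = len(data)
    p0 = sorted(grid)[len(grid) // 2]                       # placeholder "central" starting parameters
    sig_dim = grid[p0].size
    p = [p0] * n_obj                                        # step 2
    v_star = [np.zeros(sig_dim) for _ in range(n_obj)]
    U_star = [1e-4 * np.eye(sig_dim) for _ in range(n_obj)]
    for _ in range(n_iter):
        step = [wiener_step(data[i], responses[i], grid[p[i]], v_star[i],
                            noise_covs[i], U_star[i]) for i in range(n_obj)]       # step 3
        deltas = [s[0] for s in step]
        Ds = [s[1] for s in step]
        dv, U_star = update_model_error(deltas, Ds, omega)                         # step 4
        v_star = [v_star[i] + dv[i] for i in range(n_obj)]
        p = [min(grid, key=lambda q, i=i: chi2_combined(data[i], responses[i], grid[q],
                                                        v_star[i], noise_covs[i], U_star[i]))
             for i in range(n_obj)]                                                # step 5
    return p, v_star, U_star
```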
The resulting estimate will be similar to a joint maximum a posteriori estimation of the model and model error parameters, which is known to perform worse than a posterior mean estimator that is properly marginalised over the model error parameters. However, given the large number of degrees of freedom in the signal space (e.g. a highly resolved emission spectrum of a star or galaxy), such optimal estimators can be extremely expensive computationally. The proposed method might therefore be favourable in many circumstances, despite its approximative and partly ad hoc nature.
In order to be explicit about the assumptions and approximations adopted we provide an overview below. This list should help to judge the range of applicability and to find possible improvements of the proposed method. In particular we assume
1. that the measurement noise is independent for the different objects and independent of their signals, and that it has Gaussian statistics with known covariance.
2. a linear and known measurement response.
3. that the model error knowledge can be approximated by a multivariate Gaussian in signal space.
4. that regions in parameter space exist and are known which have similar model error statistics as parametrized by a mean and variance.
5. no prior knowledge on the values of this model error mean and variances.
6. that an iterated point estimate of all involved parameters leads to a reasonable approximative solution of the joint inference problem.
## 3 Spectroscopic analysis pipeline
The pipeline has been developed using _python3_ with commonly used libraries such as _numpy_, _scipy_, _pandas_, _multiprocessing_ and _ctypes_ plus _astropy.io_ to read fits files. Using common and well-maintained libraries will ensure that the pipeline is easy to maintain and should be usable over a long period of time. The following section outlines the required pre-processing steps (§ 3.1), a brief overview of the pipeline implementation (§ 3.2), and a description of the grid of synthetic model spectra (§ 3.3) and the observational data used to verify the pipeline (§ 3.4).
### Pre-processing
The pipeline requires that all observational data are read in at the beginning as the spectra are required to be analysed all at once to determine iteratively the stellar parameters and model uncertainties (§ 2.4). After a spectrum is loaded it is corrected for the potential radial velocity shift, transformed to the wavelength sampling of the synthetic spectra grid (§ 3.3) and decomposed into principal components using the decomposition matrix calculated from the synthetic spectra to reduce the memory usage and speed up the analysis. This is essential when analysing samples of more than a few hundred sources.
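The principal-component compression mentioned here could be realised with a singular value decomposition of the synthetic grid, along the following lines; the number of retained components is a placeholder, not the value used by the pipeline.

```python
import numpy as np

def build_decomposition(grid_spectra, n_comp=50):
    """Decomposition matrix from the synthetic grid (rows = spectra, columns = wavelength bins)."""
    mean = grid_spectra.mean(axis=0)
    _, _, vt = np.linalg.svd(grid_spectra - mean, full_matrices=False)
    return mean, vt[:n_comp]                     # (n_comp, n_wavelength) principal components

def decompose(spectrum, mean, components):
    """Project an (RV-corrected, resampled) spectrum onto the components -> compact coefficients."""
    return components @ (spectrum - mean)

def reconstruct(coefficients, mean, components):
    return mean + components.T @ coefficients
```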
The decomposed grid of synthetic spectra is loaded into shared memory for parallelisation purposes. The spectra are pre-convolved with combinations of varying broadening parameters of projected rotational velocity (\(v_{\text{eq}}\sin i\)) and macro-turbulent velocity (\(v_{\text{mac}}\)). The synthetic grid preparation is also a pre-processing step, which is laid out in § 3.3. If the sample is small and/or sufficient random access memory of the computing system is available, the grid can be convolved with the star-specific broadening parameters. Even though the convolution is applied utilising the fast Fourier transformation library of _scipy_, this could increase the pre-processing timescale up to a few hours per star depending on the size of the synthetic spectra grid (\(>\) 100 000).
On a standard desktop computer 1000 spectra can be analysed in less than half a day. For larger samples we would advise sorting the spectra into groups of similar objects, which will also lead to a more representative model error in a specific region of parameter space when testing implemented physics or verifying assumptions in the theoretical model.
### Analysis pipeline
After pre-processing the observational data, including the observational error spectra, and loading the synthetic grid of model spectra, the pipeline is set up according to Equation (1) with the model de-idealisation of Equation (4), assuming a Gaussian noise model for the observational error (Equation 3). We are interested in the posterior distribution of the signal given the data (\(\mathcal{P}(s|d)\)) and apply Bayes' theorem
\[\mathcal{P}(s|d)=\frac{\mathcal{P}(d|s)\mathcal{P}(s)}{\mathcal{P}(d)} \tag{13}\]
with likelihood \(\mathcal{P}(d|s)\), prior \(\mathcal{P}(s)\) and evidence \(\mathcal{P}(d)\) to use, first, the likelihood \(\mathcal{P}(d|s)\) and, second, apply the Wiener filter (§ 2.3) to reconstruct the signal (\(P(d|s)\to P(s|d)\)). We modified the likelihood as described in § 2.3 (\(\mathcal{P}(d|s)\to\mathcal{P}(d|p)\)), the probability of the data \(d\) given the stellar parameters \(p\).
The best model is determined from a \(\chi^{2}\)-minimisation Ansatz with model error variance matrix \(U\) (Equation 7) to maximise the modified likelihood \(\mathcal{P}(d|p)\):
\[\chi^{2}=\left(d^{(i)}-R^{(i)}\left(t^{[p]}+v^{(i)}\right)\right)^{\mathrm{T}}\left(M^{(i)}\right)^{-1}\left(d^{(i)}-R^{(i)}\left(t^{[p]}+v^{(i)}\right)\right),\quad M^{(i)}=N^{(i)}+R^{(i)}U^{(i)}R^{(i)\dagger}. \tag{14}\]
For the mean and model error variance matrix \(v\) and \(U\), we find that it is better to set \(v=0\) and multiply \(U\) by a factor \(\alpha=[10^{-5},0.35,0.7,1.0,1.0]\), which increases after each iteration, to avoid bad spectroscopic fits having a significant impact on \(v\) and \(U\) and therefore on the determination of the best model. \(v\) could also be set equal to non-stellar features such as telluric bands, diffuse interstellar bands and prominent interstellar lines like Ca H and K. However, the interstellar contribution can vary significantly with the line of sight while telluric bands change with atmospheric conditions and airmass. In cases where the non-stellar features vary on a star-by-star basis, those contributions can be combined with the observational error to give less weight to those spectral regions.
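As a small illustration of this iteration-dependent damping, the \(\chi^{2}\) of Equation (14) can be evaluated with \(v=0\) and \(U\) scaled by the quoted \(\alpha\) schedule; a sketch, not the pipeline code:

```python
import numpy as np

alpha_schedule = [1e-5, 0.35, 0.7, 1.0, 1.0]     # grows towards 1 over the iterations

def chi2_damped(d, R, t_p, N, U, alpha):
    """Chi-square of Eq. (14) with v = 0 and the model error damped by alpha."""
    M = N + R @ (alpha * U) @ R.T
    r = d - R @ t_p
    return r @ np.linalg.solve(M, r)
```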
\(\omega_{ii^{\prime}}\) (Equation 11) can contain any prior information \(\mathcal{P}(p)\), e.g. parameter space of similar objects, stellar structure models or population synthesis predictions. In the current implementation we use a flat prior (\(\omega_{ii^{\prime}}=1\)).
The pipeline returns a multi-dimensional posterior distribution function for each star while \(U\) is the same for all sources analysed in one batch. Parameters and their uncertainties are determined by defining confidence intervals (Fig. 1). To increase the accuracy the posterior distribution function can be multiplied by an appropriate prior. More details on the implementation can be found in the source code of the pipeline.
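A minimal sketch of how a gridded \(\chi^{2}\) can be turned into a normalised posterior and a marginal confidence interval (assuming a flat prior) is shown below; the highest-density construction is one possible convention for the intervals.

```python
import numpy as np

def posterior_from_chi2(chi2_grid):
    """Normalised posterior probabilities on the model grid from chi-square values (flat prior)."""
    p = np.exp(-0.5 * (chi2_grid - chi2_grid.min()))
    return p / p.sum()

def marginal_interval(p_grid, axis_values, other_axes, level=0.683):
    """Marginalise the gridded posterior onto one parameter axis (array axis_values)
    and return a highest-density credible interval at the given level."""
    marg = p_grid.sum(axis=other_axes)
    order = np.argsort(marg)[::-1]
    keep = order[np.cumsum(marg[order]) <= level]
    if keep.size == 0:
        keep = order[:1]
    selected = np.asarray(axis_values)[np.sort(keep)]
    return selected.min(), selected.max()
```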
### Stellar atmosphere grid
The grid of synthetic model spectra was computed with the non-LTE stellar atmosphere and radiative transfer code FASTWIND v10.6 (Santolaya-Rey et al., 1997; Puls et al., 2005; Rivero Gonzalez et al., 2012) including H, He, C, N, O and Si as explicit elements. The FASTWIND LINES-list and FORMAL_INPUT file is well tested and verified in the wavelength range from 400 to 7000 A. In the FORMAL_INPUT file we included H, He i-ii, C ii-iv, N ii-v, O ii-v and Si ii-iv in the wavelength range from \(\lambda\)3500 to 10000. On the basis of the Vienna Atomic Line Database database2 (VALD) version 3 and NIST database3 we added the following lines to the FASTWIND LINES-list: C ii\(\lambda\)6784, C iii\(\lambda\)7703 and \(\lambda\)9701-05-16, C iv\(\lambda\)4647 and \(\lambda\)6592-93, N iii\(\lambda\)5321-27-52, \(\lambda\)3935-39 and \(\lambda\)4902-24, N iv\(\lambda\)3748, \(\lambda\)5737, \(\lambda\)5776-85, \(\lambda\)61212-15-29, \(\lambda\)7103-09-11-23-27-29, \(\lambda\)7425 and \(\lambda\)9182-223, O iii\(\lambda\)3703, \(\lambda\)3707-15, \(\lambda\)3755-57-60-74-91, O iv\(\lambda\)3560-63, \(\lambda\)3729-37, \(\lambda\)7032-54, \(\lambda\)5769 and \(\lambda\)9454-88-92, O v\(\lambda\)5114 and \(\lambda\)4500, and Si ii\(\lambda\)9413. A full list of included spectral lines can be found in the Appendix A.
Footnote 2: e.g. [http://vald.astro.um.se/~vald/php/vald.php](http://vald.astro.um.se/~vald/php/vald.php)
Footnote 3: [https://www.nist.gov/pml/atomic-spectra-database](https://www.nist.gov/pml/atomic-spectra-database)
Some lines are located in the region of telluric bands, but could be of great value, if a careful telluric correction has been performed. With data from 4MOST (de Jong et al., 2019) and in particular 4MOST/1001MC (Cioni et al., 2019) we are going to verify the FASTWIND LINES-list beyond the well tested wavelength range utilising the pipeline of this study.
The grid covers the parameter space for the effective temperature from \(T_{\rm eff}=17\,800\) K (\(\log T_{\rm eff}=4.25\)) to 56 200 K (\(\log T_{\rm eff}=4.75\)), surface gravity \(\log g/(\rm cm\,s^{-2})=2.5\) to 4.5, transformed mass-loss rate (e.g. Bestenlehner et al., 2014) from \(\log\dot{M}_{\rm t}/(\rm M_{\odot}\ yr^{-1})=-6.5\) to \(-5.0\) assuming a constant radius, and helium abundances by number from \(Y=0.07\) to 0.15. Three combinations of CNO abundances are included, representing LMC baseline abundances plus semi- and fully-processed CNO compositions due to the CNO-cycle according to a 60 \(M_{\odot}\) evolutionary track by Brott et al. (2011). Figure 2 shows the parameter space of the grid with respect to \(\log g\) and \(T_{\rm eff}\).
Figure 1: Probability heat map of surface gravity vs. effective temperature for VFTS-072 (left) and VFTS-076 (right). Contours indicate 2D standard-deviational ellipse confidence-intervals of 39.4%, 86.5% and 98.9% (e.g. Wang et al., 2015). Spectroscopic fits of those stars are shown in Fig. 3.
Figure 2: \(T_{\rm eff}-\log g\) plane of the computed grid of converged stellar atmospheres. The high temperature low surface gravity regime (upper left area) is empty as those models are unstable due to the Eddington limit (\(\Gamma_{\rm e}\approx 0.7\)). Low temperature and high surface gravity models would be better calculated with a plane-parallel code without stellar winds.
The high temperature, low surface gravity regime (upper left area) is unpopulated as those models are unstable: they exceed the Eddington limit at an Eddington parameter \(\Gamma_{\rm e}\approx 0.7\) considering only the electron opacity \(\chi_{\rm e}\). Low temperature and high surface gravity models can be computed with FASTWIND, \(T_{\rm eff}\) between 17 800 K (\(\log T_{\rm eff}=4.25\)) and 21 400 K (\(\log T_{\rm eff}=4.33\)) and \(\log g>4.0\), but a significantly larger number of depth points or high mass-loss rates would be required to make them converge. The computational timescale then exceeds 1 day in contrast to less than an hour. However, enhanced mass-loss rates are only observed if the star is in the proximity of the Eddington limit (Bestenlehner et al., 2014). Therefore, those low temperature and high surface gravity stellar atmosphere models are better calculated with a plane-parallel geometry code without stellar winds (e.g. TLUSTY; Hubeny & Lanz, 1995; Lanz & Hubeny, 2007).
The clumping factor was set to \(f_{\rm cl}=1\), i.e. a homogeneous stellar wind is adopted. We assumed a wind acceleration parameter of \(\beta=1.0\) and a fixed micro-turbulence velocity of 10 km/s. The terminal velocity was calculated based on the \(\log g\) and stellar radius of the model using the terminal-to-escape velocity relation \(v_{\infty}/v_{\rm esc}=2.6\) for models hotter than 21 000 K and \(v_{\infty}/v_{\rm esc}=1.3\) for cooler models (Lamers et al., 1995). In total we computed \(\lesssim 150\,000\) stellar atmospheres. For around \(\sim 20\%\) of those models the atmospheric structure, ionisation balance and/or radiation field failed to converge properly, leading to negative fluxes or discontinuities, or the synthesis of the spectral lines failed.
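For orientation, the escape and terminal velocities entering this set-up can be estimated as in the sketch below; the Eddington-factor correction to the escape velocity is neglected here and the input numbers are placeholders rather than actual grid models.

```python
import numpy as np

R_SUN = 6.957e8                                   # m

def terminal_velocity(logg_cgs, radius_rsun, teff):
    """v_inf from the (Newtonian) escape velocity and the Lamers et al. (1995) ratios."""
    g = 10**logg_cgs * 1e-2                       # cgs -> m s^-2
    v_esc = np.sqrt(2.0 * g * radius_rsun * R_SUN)
    ratio = 2.6 if teff > 21000.0 else 1.3
    return ratio * v_esc / 1e3                    # km/s

print(terminal_velocity(4.0, 10.0, 40000.0))      # placeholder O-dwarf-like values, ~3000 km/s
```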
The grid was then convolved with a macro-turbulent velocity of \(v_{\rm mac}=20\) km/s and varying projected rotational velocities \(v\sin i=[0,20,50,100,150,200,250,300,400]\) km/s, assuming \(v_{\rm eq}\sin i\) is the dominant broadening mechanism, which is a reasonable assumption given the spectral resolution of the observational data (§ 3.4) and that typical \(v_{\rm mac}\) are of the order of a few 10s of km/s. This results in a grid of \(\lesssim\)1 100 000 synthetic spectra, which has been used to compute the decomposition matrix to decompose the grid into its principal components, reducing the size by a factor of \(\sim 200\). The decomposition matrix is also used to decompose the observational data (§ 3.1).
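The broadening convolution could be implemented along the following lines; a Gaussian velocity kernel (appropriate for macroturbulence) is used for simplicity (a full \(v\sin i\) treatment would use the classical rotational profile instead), and a roughly uniform wavelength step is assumed.

```python
import numpy as np
from scipy.signal import fftconvolve

C_KMS = 299792.458

def broaden_gaussian(wav, flux, v_kms):
    """Convolve a spectrum with a Gaussian velocity kernel of dispersion v_kms."""
    dlam = np.median(np.diff(wav))
    sigma_lam = np.median(wav) * v_kms / C_KMS
    x = np.arange(-5 * sigma_lam, 5 * sigma_lam + dlam, dlam)
    kernel = np.exp(-0.5 * (x / sigma_lam)**2)
    kernel /= kernel.sum()
    return fftconvolve(flux, kernel, mode="same")
```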
### Observational data
To validate the methodology of the pipeline we used the VLT/FLAMES data of the VLT/FLAMES Tarantula Survey (VFTS; Evans et al., 2011) in the traditional blue-optical wavelength regime with the LR02 (\(\lambda 3964-4567\), \(\lambda/\Delta\lambda=6000\)), LR03 (\(\lambda 4501-5078\), \(\lambda/\Delta\lambda=7500\)) and HR15N (\(\lambda 6470-6790\), \(\lambda/\Delta\lambda=19\,000\)) gratings. We selected 240 O-type stars employing the same data and normalisation as used in Bestenlehner et al. (2014); Sabin-Sanjulian et al. (2014, 2017); Ramirez-Agudelo et al. (2017) to avoid the introduction of biases.
The second data set we used is the VLT/MUSE observations of \(\sim 250\) OB stars from Castro et al. (2018). The VLT/MUSE data cover the wavelength range between 4600 and 9300 A at a spectral resolution of 2000 to 4000. The normalisation of the spectra was fully automated, simulating the work-flow of the spectroscopic analysis of large datasets. 35 stars are in common with VFTS, which will allow us to test the reliability of line diagnostics towards the red of H\({}_{\alpha}\) (§ 3.3) and the automated normalisation routine.
## 4 Results and discussion
### VLT/FLAMES: Evans et al. (2011)
We analysed all 240 O-type stars including stars with low SNR and/or strong nebular contamination (e.g. Fig. A1) while Bestenlehner et al. (2014); Sabin-Sanjulian et al. (2014, 2017); Ramirez-Agudelo et al. (2017) provided reliable results for 173 out of 240 sources. In Fig. 3 we show the spectroscopic fits of a representative early and late O star (VFTS-072 and 076). The shaded error area in Fig. 3 reveals where a general mismatch between model and observations occurs; it is the square-root of the diagonal elements of the model-error uncertainty-matrix. In particular, the line centres of the Balmer and He i-ii lines seem to be poorly fitted as a result of nebular lines, inaccurately determined line broadening, insufficient grid resolution and range for helium abundances, the fixed micro-turbulent velocity or the shape of the line profiles. We are also able to locate spectral lines which are potentially not included in the FASTWIND LINES-list or require improved atomic data (e.g. Si iii\(\lambda 4552.6\), 4567.8 and 4574.7). Overall, the fit of the synthetic spectra (red solid line) to the observations (blue solid line) is good.
Figure 1 visualises the probabilities in the \(\log g-T_{\rm eff}\) plane. VFTS-072 (left panel) shows within 2-sigma a dual solution (\(\sim\)49 000 and \(\sim\)46 000 K). At \(\sim 45\,000\) K the He i lines disappear, but the N iv and v lines contribute sufficiently to the \(\chi^{2}\) so that the correct solution around 49 000 K also has the highest probability. By looking at the 3-sigma contour we notice a degeneracy between \(\log g\) and \(T_{\rm eff}\) due to the proximity of VFTS-072 to the Eddington limit. In contrast, the heat map of VFTS-076 (right panel) is well centred on a specific \(\log g-T_{\rm eff}\) region. However, a slightly higher surface gravity could be probable within 2-sigma, which is the result of a degeneracy between surface gravity and mass-loss rate. High mass-loss rates fill in the wings of the Balmer lines mimicking a lower surface gravity.
#### 4.1.1 Model-error uncertainty-matrix
The model-error uncertainty-matrix is symmetric (\(U^{\rm T}=U\)) and shows correlation between wavelength or pixel regions. An example is given in the Appendix (Table 10), where we reduced the rank of the matrix from 11840\(\times\)11840 to 37\(\times\)37 for visualisation purposes. The strongest correlations are between the Balmer lines, which are the most prominent lines in O-type stars. On the other hand He i is present in mid to late O stars, while He ii lines are only strong in early O stars. Therefore, to visualise the model-error matrix and its correlations we plot in Fig. 4 the model uncertainties for wavelengths of H\({}_{\alpha}\), He i\(\,\lambda 4471\) and He ii\(\,\lambda 4686\).
H\({}_{\alpha}\) and He ii\(\,\lambda 4686\) are anti-correlated with each other, although we amplified the uncertainties for He ii\(\,\lambda 4686\) by a factor of 10. Wavelength regions of Balmer lines are a blend of hydrogen and He ii lines. With increasing temperature the He ii lines are stronger while the Balmer lines become weaker. In addition, the line strength between hydrogen and helium also determines their abundances, e.g. overall stronger helium lines with respect to hydrogen lines mean lower hydrogen abundances.
He i\(\,\lambda 4471\) (amplified by a factor of 25) is correlated with He ii\(\,\lambda 4686\) for helium lines, but anti-correlated for H\({}_{\delta}\) and higher order Balmer lines following the trend of H\({}_{\alpha}\). He i\(\,\lambda 4471\) show stronger correlations with Si iii and C iii, which are only present in late O stars, where He i lines are strongest as well. Under this supposition we would expect that we observe a strong correlation between He ii and the higher ionised N iv and v, which seems not to be the case. A reason might be that the number of early O stars is too small due
to the stellar mass-function to significantly contribute to the model-error (\(<5\%\)). Grouping similar objects together is advisable when testing model assumptions in stellar atmosphere codes.
#### 4.1.2 Challenges
The examples shown in Fig. 3 show low and modest nebular contamination and the pipeline derives results in good agreement with VFTS. However, the pipeline has difficulties when spectra show strong nebular lines. In the case of VFTS-142 (Fig. A1) the temperature is still reasonably well reproduced, but the surface gravity is \(\sim\)0.3 dex too low. If the spectrum is dominated by nebular lines, the pipeline will fail, e.g. VFTS 410 (Fig. A2). Often nebular lines are clipped, but in the case of VFTS-410 only a few diagnostic lines would remain. Even though we perform a single-star analysis, for double-lined spectroscopic binaries (SB2s) the pipeline is able to fit the primary component fairly well, but struggles with the mass-loss rate and helium abundances due to the contribution of the colliding wind region of VFTS-527 (Fig. A3, Taylor et al., 2011).
The goodness-of-fit is usually evaluated by calculating the reduced-chi-square (RCS), which uses in our case the diagonal of the error co-variance matrix. Due to the nebular contamination and diffuse interstellar bands none of our fits had an RCS close to 1. To visualise how well the pipeline performs we compare our results to the tailored analyses of VFTS targets in Fig. 5. Our results agree well with Bestenlehner et al. (2014); Sabin-Sanjulian et al. (2014, 2017); Ramirez-Agudelo et al. (2017) for fits with RCS \(<100\). Effective temperatures show a tighter relation than the surface gravity. The determination of surface gravity is based on the wings of the Balmer lines, which is influenced by the line broadening and therefore by how well \(v_{\rm mac}\) and \(v\sin i\) are determined. If the spectroscopic fit is poor, we derive systematically lower temperatures and surface gravities. Low temperature and gravity models have H\({}_{\alpha}\) in emission to somehow fit the non-stellar H\({}_{\alpha}\) nebular line while higher order Balmer lines remain in absorption. Overall there is good agreement considering that our analysis took less than 30 min while the VFTS analysis involved 3 PhD theses.
Looking at the error bars, the pipeline obtains systematically larger uncertainties, in part due to the inclusion of the model error but mostly as a result of the interpretation of the 4D posterior distribution function (PDF), which includes the degeneracies between \(T_{\rm eff}\), \(\log g\), \(\dot{M}_{\rm t}\) and \(Y\). The derived errors might be larger, but they are a complete representation of the true uncertainties. A representative prior could increase the accuracy, but might introduce additional biases (Bestenlehner et al., 2020). Temperature uncertainties are systematically larger in the region around 45,000 K as a result of the weakness of the He i lines; the ionisation balance is then not based on He i-ii, but on the metal ions N iii-iv-v. Nitrogen lines in early O stars are relatively weak compared to He lines and therefore contribute little to the global \(\chi^{2}\) without specific weighting of spectral lines. This can lead to an overall lower RCS, but inaccurate temperature determination (red outliers in Fig. 5). A similar behaviour occurs in the transition from late O to early B stars due to the weakness of He ii lines, where the main temperature diagnostics are Si iii and iv lines. So a careful weighting scheme should be a very promising way for optimising the pipeline by increasing the accuracy while at the same time reducing the degeneracies between parameters.
### VLT/MUSE: Castro et al. (2018)
Figure 6 shows the spectroscopic fit of an Of supergiant and a B supergiant with \(\Delta T_{\rm eff}\approx 25\,000\) K and \(\Delta\log\dot{M}\gtrsim 2.5\) dex. This highlights that stars covering a large spectral type range can be successfully and reliably analysed with a single pipeline set-up at the same time. However, both stars would not be considered as similar, which has implications for the model error \(U\). Such a model error is averaged over a wide parameter space and is probably not very helpful when testing specific physics (e.g. stellar wind physics or atomic data) in the model. Similar to the VFTS data, the pipeline does not perform well for low signal-to-noise spectra (S/N \(\lesssim\) 10 to 15), spectra with strong nebular lines and spectroscopic binaries/multiples.
Figure 3: Left, spectroscopic fit of a fast-rotating early O2 V-III(n)((f\({}^{*}\))) star, VFTS-072, and, right, a late O9.2 III star, VFTS-076. Blue solid line is the observation, red solid line the synthetic spectrum and the grey shaded area is the square-root of the diagonal elements of the model-error uncertainty-matrix calculated by the pipeline.
In Fig. 7 we compare our results with those from Castro et al. (2021), which are based on the ionisation balance of selected He i and He ii lines and the wings of \(H_{\beta}\). In contrast, we used all H and He plus CNO and Si metal lines available in the VLT/MUSE wavelength range. The left panel compares effective temperatures, which show a large scatter but mostly agree within their large uncertainties. Above 45 000 K, when He i disappears or is only weakly present in the spectra, the temperature needs to be derived based on the ionisation balance of metal lines. In the wavelength range of MUSE we have C iii-iv and N iii-iv. While C iv and N iv are located in relatively clean areas of the spectra, the C iii and N iii lines are often found in the range of telluric bands or near the Paschen jump, where we have issues with the normalisation (Fig. 6). B-type stars have by definition no He ii present in their spectra and the temperature is based, in the case of early B stars, on the ionisation balance of Si iii-iv lines. There is a reasonable number of lines in the MUSE range, but the temperature determination suffers in the presence of nebular lines or for low-SNR spectra.
The right panel of Fig. 7 compares surface gravities, which show an even larger scatter and larger uncertainties. Even though we utilised the Paschen lines as well, there are two potential caveats: first, the normalisation near the Paschen jump is not straightforward as the lines overlap and there is therefore no continuum; second, the degree of overlap depends not only on log \(g\) but also on the line broadening due to the narrowness of the higher-order Paschen lines, and the broadening is only approximately determined during the fitting process. This results in a degeneracy between log \(g\) and \(v_{\rm eq}\) sin \(i\). Surface gravities cluster in the range of log \(g\) = 3.5 to 4.5, which is expected for a young stellar population of 1 to 2 Myr largely consisting of dwarfs and giants in the proximity of R136 (Bestenlehner et al., 2020).
To better quantify how reliable the analysis based on the VLT/MUSE data is, we compare our results to VFTS. 35 VLT/MUSE targets are in common with VFTS (Bestenlehner et al., 2014; Sabin-Sanjulian et al., 2014, 2017; Ramirez-Agudelo et al., 2017) and the comparison is shown in Fig. 8. Uncertainties are systematically larger. Effective temperatures agree within their 1\(\sigma\) uncertainties for 26 out of 35 stars (left panel), with differences largely a result of the cleanliness and quality of the spectra. Surface gravities agree for only half of the sample, which is expected due to the challenges of the Paschen lines. Overall the agreement is reasonable considering the difficulties in analysing the MUSE data.
To place the stars into the Hertzsprung-Russell diagram (HRD) we derived bolometric luminosities on the basis of optical \(UBV\) photometry from Selman et al. (1999) and near-infrared \(JK_{s}\) photometry from 2MASS (2 Micron All Sky Survey) and the Vista Magellanic Clouds survey (VMC; Cioni et al., 2011), using the methodology of Bestenlehner et al. (2020, 2022). The HRD (Fig. 9) shows that most stars lie near, and to the cool side of, the zero-age main-sequence (ZAMS). There are a couple of exceptions, but their uncertainties do not exclude a cooler location in agreement with the majority of the sources. This could be improved by including a meaningful prior in the analysis, e.g. based on evolutionary tracks, which could increase the accuracy of the results, as we used only a flat prior (\(\omega_{ii^{\prime}}=1\)). For example, only hydrogen-deficient stars are expected to lie on the hot side of the ZAMS, e.g. as a result of self-stripping through stellar winds or binary evolution. A prior would give a star a higher probability of being found either on the hot or the cool side of the ZAMS depending on its helium composition. Stellar parameters can be found in Table A, while individual spectroscopic fits for visual inspection are provided in the supplementary online material. Mass-loss rates and He, C, N, and O abundances are not included: the optical data can only provide an upper limit for most stars in our sample (flat PDF towards lower mass-loss rates), the PDF for helium is cut off at the primordial abundance of 25% by mass, and the CNO abundances are too coarsely sampled and tied to the predicted chemical evolution of a 60 \(M_{\odot}\) star (\(\S\) 3.3).
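As a rough illustration of this kind of luminosity estimate, the sketch below converts an apparent magnitude into a bolometric luminosity using a distance modulus, an extinction value and a temperature-dependent bolometric correction. It is only a schematic example: the input values and the BC coefficients are assumptions for illustration and do not reproduce the calibration of Bestenlehner et al. (2020, 2022).

```python
import numpy as np

# Hypothetical inputs for a single star (illustrative, not values from this work)
V_mag  = 13.5       # apparent V magnitude
A_V    = 1.2        # assumed visual extinction
DM     = 18.48      # assumed distance modulus of the LMC
T_eff  = 42000.0    # effective temperature in K from the spectroscopic fit

def bolometric_correction(teff, a=21.0, b=-5.5):
    """Placeholder linear BC(log Teff) relation; coefficients are illustrative only."""
    return a + b * np.log10(teff)

M_V   = V_mag - A_V - DM                      # absolute V magnitude
M_bol = M_V + bolometric_correction(T_eff)    # absolute bolometric magnitude
log_L = -0.4 * (M_bol - 4.74)                 # luminosity in solar units (M_bol,sun = 4.74)

print(f"log10(L/Lsun) ~ {log_L:.2f}")
```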
## 5 Conclusions and outlook
Large spectroscopic surveys with WEAVE and 4MOST will observe tens of thousands of massive stars, which need to be analysed in a homogeneous and efficient way. The pipeline presented in this work takes advantage of the information that large data sets provide by determining the model uncertainties, which are included in the error budget. This methodology could also be applied to galaxies, or to other domains such as biology and geophysics, for which approximate and incomplete theoretical models exist as well.
The runtime of the pipeline scales exponentially with the number of spectra, because all stars are analysed at the same time and the model-error uncertainty matrix is iteratively updated (\(\S\) 2.4). However, once a converged model-error uncertainty matrix is obtained, we can limit the matrix operations to the \(\chi^{2}\)-minimisation and switch to
Figure 4: Model uncertainties extracted from the model-error matrix as a function of wavelength for H\({}_{\alpha}\) (solid red), He i \(\lambda\)4471 (black dashed) and He ii \(\lambda\)4686 (solid blue). For better visualisation the uncertainties for He i \(\lambda\)4471 and He ii \(\lambda\)4686 are multiplied by factors of 25 and 10, respectively.
a star-by-star analysis. In this case we are able to analyse one star in less than a second.
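As an illustration of the kind of \(\chi^{2}\)-minimisation referred to here, the following sketch adds a (diagonal) model-error term in quadrature to the observational uncertainties before searching a small grid of synthetic spectra. The array names, the purely diagonal treatment and the toy spectra are assumptions for illustration; this is not the pipeline's actual implementation.

```python
import numpy as np

def chi2_with_model_error(obs_flux, obs_sigma, model_flux, model_err):
    """Chi-squared of one synthetic spectrum against one observation,
    with a diagonal model-error term added in quadrature."""
    var = obs_sigma**2 + model_err**2
    return np.sum((obs_flux - model_flux)**2 / var)

def best_fit(obs_flux, obs_sigma, model_grid, model_err):
    """Index of the grid model with the lowest chi-squared, plus all chi-squared values."""
    chi2 = np.array([chi2_with_model_error(obs_flux, obs_sigma, m, model_err)
                     for m in model_grid])
    return int(np.argmin(chi2)), chi2

# Toy example: three grid models, 100 wavelength pixels
rng        = np.random.default_rng(0)
wave       = np.linspace(4000.0, 9000.0, 100)
model_grid = np.vstack([1.0 - a * np.exp(-0.5 * ((wave - 6563.0) / 50.0)**2)
                        for a in (0.2, 0.4, 0.6)])        # toy H-alpha profiles
truth      = model_grid[1]
obs_sigma  = np.full_like(wave, 0.02)
obs_flux   = truth + rng.normal(0.0, obs_sigma)
model_err  = np.full_like(wave, 0.01)                      # placeholder model error

idx, chi2 = best_fit(obs_flux, obs_sigma, model_grid, model_err)
print("best-fitting grid model:", idx, "chi2:", np.round(chi2, 1))
```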
Xiang et al. (2022, The HotPayne) applied the FGK methodology of The Payne (Ting et al., 2019) to OBA stars to derive two stellar labels/parameters (\(T_{\rm eff}\) and log \(g\)) plus 15 chemical abundances. They used plane-parallel LTE atmospheric models calculated with ATLAS12 (Kurucz, 1970, 1993, 2005) and were able to analyse 330 000 spectra. While \(T_{\rm eff}\) and log \(g\) are sensible for A and mid-late B dwarfs, the derived chemical abundances suffer from non-negligible systematics due to non-LTE effects. AB supergiants require spherical geometry (the atmospheric extension is no longer negligible relative to the stellar radius) including stellar winds, as these effects cannot be neglected. In hotter and more luminous stars (early B and O stars) a non-LTE treatment is necessary, such that The HotPayne results for \(T_{\rm eff}\) and log \(g\) are not reliable, including for stars with weak winds. Even the inclusion of a model error will not improve much on this, owing to the fundamental missing physics in the ATLAS12 models.
To make The HotPayne usable for OBA stars, the underlying stellar
Figure 5: Effective temperatures (left) and surface gravities (right) determined by the pipeline vs. the results from Bestenlehner et al. (2014); Sabin-Sanjulian et al. (2014, 2017); Ramirez-Agudelo et al. (2017).
Figure 6: Spectroscopic fit of an O2 II star (Mk 42, left) and a B2 Ib star (VFTS-417, right). The blue solid line is the observation, the red solid line the synthetic spectrum and the grey shaded area is the square root of the diagonal elements of the covariance matrix calculated by the pipeline. In the left panel the newly added N iv multiplet at \(\lambda\)\(7103-29\) is able to reproduce the observed line.
models must be replaced with models computed with more sophisticated and fully non-LTE stellar atmosphere codes designed for hot, massive stars with stellar winds, e.g. cmfgen (Hillier and Miller, 1998), fastwind (Santolaya-Rey et al., 1997; Puls et al., 2005) or PoWR (Grafener et al., 2012; Hamann and Grafener, 2003). The approach of The HotPayne needs to be expanded to include the model uncertainties in the error budget (e.g. this work), in order to account for the assumptions and parametrisations utilised in those complex stellar atmosphere codes. Additional stellar labels/parameters need to be incorporated, such as the mass-loss rate, velocity law, wind inhomogeneity, terminal and wind-turbulent velocity, plus the helium abundance. The helium abundance increases due to the CNO cycle, which in turn increases the mean molecular weight (\(\mu\)) and impacts the mass-luminosity relation (\(L\propto\mu^{4}M^{3}\)), the electron density, and therefore the structure and ionisation balance of the stellar atmosphere. When analysing optical spectra the wind parameters can be merged into a wind-strength parameter \(Q\) (Puls et al., 1996), a transformed radius \(R_{\rm t}\) (Schmutz et al., 1989; Grafener et al., 2002; Hamann and Grafener, 2004) or a transformed mass-loss rate \(\dot{M}_{\rm t}\) (Bestenlehner et al., 2014, \(\S\) 3.3).
The fully automated spectroscopic analysis tool of this work reduces human interaction to a minimum in order to cope with the amount of data. It is able to process \(\sim 250\) stars in less than half an hour (\(\sim 6\) CPU hours), delivering results comparable to those that Bestenlehner et al. (2014); Sabin-Sanjulian et al. (2014, 2017); Ramirez-Agudelo et al. (2017) obtained over a decade. Overall the quality of the spectroscopic fits is good, but around 15% of the stars need additional attention as a result of strong nebular contamination, low S/N, multiplicity, et cetera. The pipeline performs well over a wide parameter space, which is supported by the spectroscopic analysis of 3 benchmark stars by several groups within the X-Shooting ULLYSES collaboration (XShootU; Vink et al., 2023) based on optical VLT/X-shooter data (Sander et al., in preparation).
Weights for spectral lines could increase the accuracy, but they would need to be adjusted depending on the parameter space, which would again require human interaction. Determining weights for features (spectral lines) is a typical machine-learning problem and is often solved with neural networks (deep learning). However, to really take advantage of our statistical approach and optimise the pipeline we will require
Figure 8: Comparison of MUSE targets in common with VFTS: effective temperatures (left) and surface gravities (right) determined by the pipeline using VLT/MUSE data vs. the results from Bestenlehner et al. (2014); Sabin-Sanjulian et al. (2014); McEvoy et al. (2015); Sabin-Sanjulian et al. (2017); Ramirez-Agudelo et al. (2017) using VLT/FLAMES data.
Figure 7: Effective temperatures (left) and surface gravities (right) determined by the pipeline vs. the results from Castro et al. (2021).
much larger data sets, which will soon be provided by WEAVE and 4MOST. Future advances of our pipeline will be released on the pipeline's repository.
## Acknowledgements
JMB and PAC are supported by the Science and Technology Facilities Council research grant ST/V000853/1 (PI. V. Dhillon). MB is supported through the Lise Meitner grant from the Max Planck Society. We acknowledge support by the Collaborative Research centre SFB 881 (projects A5, A10), Heidelberg University, of the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation). This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant agreement No. 949173).
## Data Availability
The data underlying this article are available in the article and in its supplementary material. The pipeline will be made publicly available after acceptance of this manuscript. Spectroscopic data are available via the ESO archive facility while grids of synthetic spectra can be requested from the lead author.
|
2309.09021 | Pedestrian Trajectory Prediction Using Dynamics-based Deep Learning | Pedestrian trajectory prediction plays an important role in autonomous
driving systems and robotics. Recent work utilizing prominent deep learning
models for pedestrian motion prediction makes limited a priori assumptions
about human movements, resulting in a lack of explainability and explicit
constraints enforced on predicted trajectories. We present a dynamics-based
deep learning framework with a novel asymptotically stable dynamical system
integrated into a Transformer-based model. We use an asymptotically stable
dynamical system to model human goal-targeted motion by enforcing the human
walking trajectory, which converges to a predicted goal position, and to
provide the Transformer model with prior knowledge and explainability. Our
framework features the Transformer model that works with a goal estimator and
dynamical system to learn features from pedestrian motion history. The results
show that our framework outperforms prominent models using five benchmark human
motion datasets. | Honghui Wang, Weiming Zhi, Gustavo Batista, Rohitash Chandra | 2023-09-16T15:25:03Z | http://arxiv.org/abs/2309.09021v2 | # Pedestrian Trajectory Prediction Using Dynamics-based Deep Learning
###### Abstract
Pedestrian trajectory prediction plays an important role in autonomous driving systems and robotics. Recent work utilising prominent deep learning models for pedestrian motion prediction makes limited a priori assumptions about human movements, resulting in a lack of explainability and of explicit constraints enforced on predicted trajectories. This paper presents a dynamics-based deep learning framework in which a novel asymptotically stable dynamical system is integrated into a deep learning model. The asymptotically stable dynamical system is used to model human goal-targeted motion by enforcing that the human walking trajectory converges to a predicted goal position, and it provides the deep learning model with prior knowledge and explainability. Our deep learning model utilises recent innovations from transformer networks and is used to learn some features of human motion, such as collision avoidance, for our proposed dynamical system. The experimental results show that our framework outperforms recent prominent models in pedestrian trajectory prediction on five benchmark human motion datasets.
## I Introduction
Human motion analysis is required for safe human-human and human-robot interaction systems. Human trajectory prediction plays an important part in human motion analysis [1] and therefore has been widely used in various fields such as autonomous driving systems [2][3] and robot navigation [4].
The research on predicting human movements starts from physics-based methods, such as the social force model [5] and the constant velocity/acceleration model [6]. In recent years, deep learning methods have become appealing for pedestrian (human) trajectory prediction with the public availability of large-scale data [7]. In these approaches, pedestrian motion patterns extracted from trajectories of previously observed humans are used to predict human future movements. Deep learning methods applied to pedestrian trajectory prediction have evolved from Recurrent Neural Networks (RNN) and their variants [8] to Transformer models [9]. Recent efforts also have found notable performance improvements by encoding goals (position or desired location) in the deep neural network together with historically observed trajectories [10].
However, since coefficients cannot be physically understood and interpreted in neural networks, the above-mentioned deep learning methods lack explainability, making it unclear why the predicted trajectories have the given shapes [11]. In addition, there is a lack of constraints which are explicitly enforced when predicting desired trajectories [12]. The lack of explainability and the explicit enforcement of known rules of desired predicted trajectories can be addressed by introducing prior knowledge into the deep learning methods [12][13].
Therefore, in this paper, we propose a dynamics-based deep learning method by introducing a novel asymptotically stable dynamical system into a learning-based transformer model. Our proposed asymptotically stable dynamical system is used to model human motion and can enforce the human trajectory to converge to the equilibrium point [14], i.e. the destination of the pedestrian in the domain of pedestrian trajectory prediction. This property is not only in line with the goal-targeted feature of human movements, as depicted in Fig. 1, but also provides the transformer network we use with prior knowledge and explainability. In addition, our proposed method learns regularity properties of human motion such as continuity, smoothness and boundedness [15], as well as the temporal and spatial reactions of human movements, via the transformer-based model. Overall, our method has the potential to forecast pedestrian trajectories more precisely than conventional methods by accurately capturing the properties of human motion.
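To illustrate the stability property invoked here, the sketch below rolls out a simple linear system \(\dot{x}=-k(x-g)\), which is asymptotically stable at the goal \(g\) for any \(k>0\) (with Lyapunov function \(V=\|x-g\|^{2}\)), so that every trajectory converges to the predicted goal. This is only a minimal illustration of goal-convergent dynamics, not the dynamical system proposed in this paper.

```python
import numpy as np

def rollout_goal_dynamics(x0, goal, k=1.5, dt=0.1, steps=40):
    """Euler rollout of dx/dt = -k (x - goal), which is asymptotically stable
    at x = goal for any k > 0: V(x) = |x - goal|^2 gives dV/dt = -2 k V <= 0."""
    traj = [np.asarray(x0, dtype=float)]
    for _ in range(steps):
        x = traj[-1]
        traj.append(x + dt * (-k * (x - goal)))
    return np.stack(traj)

# A pedestrian starting at (0, 0) heading towards a predicted goal (5, 3)
goal = np.array([5.0, 3.0])
traj = rollout_goal_dynamics([0.0, 0.0], goal)
print("final position:", np.round(traj[-1], 3))
print("distance to goal:", np.linalg.norm(traj[-1] - goal))   # close to zero
```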
To summarize, the main contributions of this paper are as follows:
(1) We propose to integrate asymptotically stable dynamical systems within a neural network to model human goal-directed motion, thereby enforcing that the human walking trajectory converges to a predicted goal position and providing the neural network with prior knowledge and explainability.
(2) We provide an extensive evaluation of our proposed
Fig. 1: Pedestrian trajectories are generally _goal-driven_, that is pedestrians seek to arrive around a determined goal. |
2309.10570 | Unifying inflationary and reheating solution | The conventional background solution for the evolution of a single canonical
inflaton field performs admirably in extreme scenarios such as the slow-roll
phase (where the slow-roll parameter is much less than one) and the deep
reheating era (where the Hubble parameter is much smaller than the effective
mass of the potential and the field oscillates around the minimum of the
potential), but fails to accurately depict the dynamics of the Universe around
the end of inflation and the initial oscillatory phases. This article proposes
a single, unified, model-independent, parametrized analytical solution for such
models that bridges the gap between these two extremes, providing a
near-accurate comprehensive description of the evolution of the Universe. This
novel strategy has the potential to substantially enhance both quantitative and
qualitative cosmological observational predictions, and, as a consequence, can
further constrain the inflationary models more effectively using future
observations. | Manjeet Kaur, Debottam Nandi, Sharath Raghavan B | 2023-09-19T12:28:06Z | http://arxiv.org/abs/2309.10570v3 | # Unifying inflationary and reheating solution
###### Abstract
The conventional background solution for the evolution of a single canonical inflaton field performs admirably in extreme scenarios such as the slow-roll phase (where the slow-roll parameter is much less than one) and deep reheating era (where the Hubble parameter is much smaller than the effective mass of the potential and the field oscillates around the minimum of the potential), but fails to accurately depict the dynamics of the Universe near the end of inflation and the initial oscillatory phases. This article proposes a single, unified, model-independent analytical solution for such a model that bridges the gap between these extremes, providing a comprehensive description of the evolution of the Universe. This novel strategy has the potential to substantially enhance both quantitative and qualitative cosmological observational predictions.
## I Introduction
The inflationary paradigm [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24], a brief period of accelerated expansion during the early Universe, not only overcomes the early Universe puzzles, such as the horizon and the flatness problems, but also explains the observational constraints [25; 26; 27]. Within the paradigm, the single canonical scalar field-driven slow-roll inflationary models are the most successful ones, where the inflaton (scalar) field slowly rolls down its potential, resulting in the quasi-exponential expansion of the Universe. As the field approaches the bottom of the potential, the inflationary stage ends, and the field smoothly starts oscillating around the minimum of its effective potential. At this stage, it couples to other (standard) particles and the inflaton field decays into those elementary particles, resulting in the transfer of energy from the inflaton field to those particles. This era is referred to as the reheating epoch [28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52]. When the inflaton field completely decays, the reheating era ends, and the Universe enters into the known radiation-dominated era [29; 30; 31; 53].
For slow-roll inflation to occur, the magnitudes of the slow-roll parameters \(\epsilon_{1}\) and \(\epsilon_{2}\) must be extremely close to zero. These conditions simply mean that the Hubble parameter changes (decreases) exceedingly slowly, which, in turn, leads to the quasi-exponential expansion of the Universe. To better comprehend the dynamics, consider the chaotic inflation model with the following potential [3]:
\[V(\phi)=\frac{1}{2}m^{2}\phi^{2},\]
where \(m\) is the mass of the inflaton field. The slow-roll condition is met for \(|\phi|\gg 1\), and at this stage, the evolution of the Hubble parameter as a function of cosmic time \(t\) can be expressed as \(H(t)=H_{0}-\frac{1}{3}m^{2}t\), where \(H_{0}\) is a constant (as will be demonstrated later). It results in the scale factor solution \(a(t)\propto\exp\left(H_{0}t-\frac{1}{6}m^{2}t^{2}\right)\), i.e., a solution close to exponential. As is evident, during the slow-roll era, \(H\gg m\). The inflationary era ends when \(\epsilon_{1}=1\), which implies \(H\sim m\). Shortly after the end of inflation, the field oscillates, and the reheating epoch begins. During the reheating period, the scalar field \(\phi\) oscillates around the minimum, \(\phi=0\), and as a consequence, the Hubble parameter decreases as \(H(t)\sim 2/(3t)\), indicating \(H\ll m\).
These characteristics can be generalized to any inflationary potential: during the slow-roll epoch, the magnitude of the Hubble parameter is considerably higher than the effective mass of the potential and nearly constant. The analytical solution in this era, i.e., the slow-roll solution, is well known. Inflation ends approximately when the Hubble parameter equals the mass and, during reheating, the Hubble parameter falls significantly below the mass and decreases very quickly with time. In this period, too, we can write down an approximate analytical solution for the reheating epoch. However, both the slow-roll inflationary solution and the standard reheating solution fail to address the smooth transition from the slow-roll regime to the oscillations, and thus the dynamics, as well as related aspects such as the study of the perturbations at the end of inflation and the beginning of the reheating process, remain incomplete. This is because the slow-roll approximation fails near the end of inflation, and hence the slow-roll solution does not reproduce the genuine solution there. Similarly, in solving the dynamics of the reheating epoch, we use the approximation that the Hubble parameter is subdominant with respect to the effective mass of the potential (e.g., \(H\ll m\) in the case of chaotic inflation). Thus, the reheating solution only accounts for the asymptotic oscillatory solution, while the solution near the first and subsequent oscillations misses the true dynamics.
Such difficulties also affect the inflationary and reheating constraints. As we know, there are primarily two ways to investigate the reheating era: quantitative [38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52; 53; 54] and qualitative analysis [28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38]. In quantitative analysis, we presume the effective behavior of the Universe during this stage and attempt to constrain both the reheating time \(N_{\rm re}\) (or the temperature \(T_{\rm re}\) at the
end of reheating) and the effective equation of state \(w_{\rm re}\)[45; 50; 55; 56] using the perturbations generated during the inflationary regime. In the case of chaotic inflation, for instance, the evolution during the preliminary stage of reheating, i.e., the preheating stage, behaves like a matter-dominated solution, so \(w_{\rm re}=0\). In contrast, for qualitative analysis, we approximate the evolution of the background and analyze the process of decay of the inflaton field. In this scenario, parametric resonance serves as the mechanism for the production of elementary particles. Thus, since the dynamics near the end of inflation and around the first few oscillations are not known analytically, both the qualitative and the quantitative analysis of reheating are extremely difficult to carry out analytically.
This paper addresses the analytical unification of the slow-roll inflationary solution and the reheating solution into a single solution, which captures the true dynamics not only during the slow-roll and asymptotic reheating epochs but also at the intermediate junction between them, i.e., at the end of inflation and the beginning of reheating. Additionally, the suggested method is model-independent, meaning it can account for both small and large field models. If the full solution with a smooth transition from slow-roll to reheating can be achieved, it will not only provide a broader view of the unified early-Universe solution but can also provide better constraints on parameters using both the qualitative and the quantitative methods of reheating. In other words, the study can improve the accuracy of the \(n_{\rm s}\) (scalar spectral index of curvature perturbations) vs. \(T_{\rm re}\) relation in the quantitative picture of reheating [45; 50]. The qualitative analysis, in turn, can account for the effects of the smooth transition from inflation to reheating on the perturbed variables. If improved, it can even be applied to special cases such as the production of primordial black holes (PBHs), primordial gravitational waves (PGWs), and other scenarios where, through parametric resonance during reheating, perturbed modes can be enhanced [57; 58; 59; 60; 61; 62]. It is crucial to acknowledge that the work in this article focuses only on the background dynamics of the Universe; the analysis of the perturbations is reserved for future work, as it is beyond the scope of this paper.
For such an analysis, in this article, we consider a single canonical scalar field minimally coupled to gravity and provide a single analytical solution for the early Universe. In doing so, rather than working in the phase space consisting of \(\{\phi,\dot{\phi}\}\), we conveniently choose \(\theta\), which represents the phase of the oscillatory solution, and the Hubble parameter \(H\). Since the particle production (resonance) occurs at the minimum of the potential, and it is difficult to pinpoint this moment using cosmic time \(t\), the introduction of the coordinate \(\theta\) mitigates this issue. This method systematically shows how to arrive at the Hubble parameter values at different instances using \(\theta\): \(\theta=(2n+1)\pi/2,\ n\in N\) indicates that the field is at the bottom of the potential, whereas \(\theta=n\pi\), \(n\in N\) denotes that the field reaches the highest point of the potential, where the field velocity \(\dot{\phi}\) vanishes during reheating. Knowing these instances helps in solving the system and achieving a detailed picture, as we will show in later sections.
The article is organized as follows. The action responsible for the early Universe dynamics is defined in Sec. II, and the generic background equations are provided. Sec. III demonstrates how to obtain the usual slow-roll and reheating solutions. In this part, we also present the phase space, which comprises \(\theta\) and \(H\), as well as the asymptotic reheating solution, which is well known. In Sec. IV, we extend the phase space solution, which is used to obtain the reheating oscillatory solution, to the slow-roll phase, and we then present our primary work in Sec. V. In this section, we unify the inflationary and reheating solutions, i.e., the complete, yet model-independent, solution of the homogeneous Universe dominated by the canonical inflaton field, and demonstrate the result for the chaotic inflationary model using a simple yet straightforward method. We demonstrate that different values of \(\theta\) represent different instances of the early Universe. Thus, by providing the solution of \(H\) (and other background variables) in terms of \(\theta\), we explicitly imply that, in each of the instances listed, we know the value of \(H\) (and other background variables), which clearly aids in understanding the dynamics. We also present the solution of \(\theta\) in terms of cosmic time \(t\), completing the solution. In Sec. VI, we extend our result and investigate several inflationary models, demonstrating that our method provides the entire background solution of the dynamics, i.e., from the slow-roll to the reheating solution with a smooth transition, and we study the observational consequences in Sec. VII. Finally, in Sec. VIII, we conclude our work.
A few words about our conventions and notations are in order at this stage of our discussion. In this work, we work with the natural units such that \(\hbar=c=1\), and we define the reduced Planck mass to be \(M_{\rm pl}\equiv(8\pi G)^{-1/2}=1\). We adopt the metric signature of \((-,+,+,+)\). Also, we should mention that, while the Greek indices are contracted with the metric tensor \(g_{\mu\nu}\), the Latin indices are contracted with the Kronecker delta \(\delta_{ij}.\) Moreover, we shall denote the partial and the covariant derivatives as \(\partial\) and \(\nabla\). The overdots and overprimes denote derivatives with respect to the cosmic time \(t\) and the conformal time \(\eta\) associated with the Friedmann-Lemaitre-Robertson-Walker (FLRW) line-element, respectively.
## II General equations
Let us first consider a single canonical scalar field \(\phi\) minimally coupled to the gravity with a potential \(V(\phi)\), specified by the action
\[S=\frac{1}{2}\int d^{4}x\sqrt{-g}\;\left(R-g^{\mu\nu}\partial_{\mu}\phi\ \partial_{\nu}\phi-2V(\phi)\right), \tag{1}\]
where \(R\) is the Ricci scalar. The corresponding equations of motion, i.e., Einstein's equations and the equation of
the scalar field, can be written as
\[R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R = T_{\mu\nu(\phi)}, \tag{2}\] \[\nabla_{\mu}T^{\mu\nu}_{(\phi)} = 0, \tag{3}\]
where \(T^{\mu}_{\nu(\phi)}\) is the stress-energy tensor corresponding to the \(\phi\) field:
\[T_{\mu\nu(\phi)}=\partial_{\mu}\phi\ \partial_{\nu}\phi-g_{\mu\nu}\left(\frac{1} {2}\partial_{\lambda}\phi\ \partial^{\lambda}\phi+V(\phi)\right). \tag{4}\]
Using the FLRW line element, describing the homogeneous and isotropic Universe in cosmic time \(t\):
\[ds^{2}=-\mathrm{d}t^{2}+a^{2}(t)\mathrm{d}\mathbf{x}^{2}, \tag{5}\]
where \(a(t)\) is the scale factor, Eqs. (2) and (3) can be reduced to the following forms:
\[3H^{2}=\frac{1}{2}\dot{\phi}^{2}+V(\phi), \tag{6}\] \[\dot{H}=-\frac{1}{2}\dot{\phi}^{2},\] (7) \[\ddot{\phi}+3H\dot{\phi}+V_{,\phi}=0. \tag{8}\]
where \(H\equiv\dot{a}/a\) is the Hubble parameter and \(A_{,\phi}\equiv\partial A/\partial\phi\). As one can see, the first one is a constraint equation, and only one of the other two is independent, leaving the system with a single degree of freedom governed by a single evolutionary equation:
\[\ddot{\phi}+\sqrt{\frac{3}{2}}\sqrt{\dot{\phi}^{2}+2V}\,\dot{\phi}+V_{,\phi}=0. \tag{9}\]
The above equation is highly nonlinear; therefore, obtaining its general solution is exceedingly challenging. Using certain approximations, Eq. (9) can be solved under various conditions, as demonstrated in the following section. The primary objective of this article is, contrary to the conventional method of solving in various epochs (or conditions), to provide a complete solution of the above equation for a variety of models, as will be demonstrated later.
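As a reference point for the analytical results that follow, the background system (6)-(8) can always be integrated numerically. The sketch below does so for the quadratic potential with illustrative values of the mass and the initial conditions (our choices, not values used in this paper) and recovers a value of \(H\) at the end of inflation close to \(0.5\,m\).

```python
import numpy as np
from scipy.integrate import solve_ivp

m = 1e-5                       # illustrative inflaton mass in Planck units

def rhs(t, y):
    phi, dphi = y
    H = np.sqrt((0.5 * dphi**2 + 0.5 * m**2 * phi**2) / 3.0)   # Eq. (6)
    return [dphi, -3.0 * H * dphi - m**2 * phi]                # Eq. (8)

# Start deep in slow roll, with the field velocity on the slow-roll attractor
phi_i, dphi_i = 16.0, -m * np.sqrt(2.0 / 3.0)
sol = solve_ivp(rhs, [0.0, 4.0e6], [phi_i, dphi_i], rtol=1e-9, atol=1e-12)

phi, dphi = sol.y
H    = np.sqrt((0.5 * dphi**2 + 0.5 * m**2 * phi**2) / 3.0)
eps1 = 0.5 * dphi**2 / H**2                                    # first slow-roll parameter
i_end = np.argmax(eps1 >= 1.0)                                 # end of inflation
print("H_end / m ~", round(H[i_end] / m, 3))                   # roughly 0.5
```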
Finally, we now can define the two slow-roll parameters \(\epsilon_{1}\) and \(\epsilon_{2}\) as
\[\epsilon_{1}\equiv-\frac{\dot{H}}{H^{2}},\qquad\epsilon_{2}\equiv\frac{\dot{ \epsilon_{1}}}{H\epsilon_{1}}. \tag{10}\]
These slow-roll parameters play a crucial role in defining the dynamics in the early Universe, mainly during slow-roll inflationary evolution. In the next section, with the help of these parameters, we will establish the inflationary as well as the reheating dynamics.
## III Scalar field solutions in different regimes
Given that the generic background equations are now known and have already been presented in the preceding section, one can obtain the evolution of the Universe using those equations for a given potential as well as the initial conditions. Let us first discuss the slow-roll inflation.
### Slow-roll Equations
In order to achieve a slow-roll inflation in the early Universe, the above-mentioned slow-roll parameters have to be extremely small, i.e.,
\[\epsilon_{1}\ll 1,\quad\epsilon_{2}\ll 1. \tag{11}\]
The first condition in the above equation leads to \(\dot{\phi}^{2}\ll H^{2}\), meaning that the kinetic energy is small compared to the potential energy, hence the name slow-roll, and the second condition leads to \(\ddot{\phi}\ll\dot{\phi}H\), implying that the field acceleration is extremely small, i.e., the first condition stays relevant for a sufficiently long time. These constraints reduce Eqs. (6) and (8) to
\[3H^{2}\simeq V(\phi),\qquad 3H\dot{\phi}\simeq-V_{,\phi}. \tag{12}\]
These equations define the dynamics corresponding to the slowly rolling scalar fields. The two slow-roll conditions can then also be expressed directly in terms of the shape of inflationary potential as
\[\epsilon_{1}\simeq\frac{1}{2}\left(\frac{V_{,\phi}}{V}\right)^{2}\qquad \epsilon_{2}\simeq 2\left(\frac{V_{,\phi}^{2}}{V^{2}}-\frac{V_{,\phi\phi}}{V} \right), \tag{13}\]
where, \(A_{,xx}=\frac{\partial^{2}A}{\partial x^{2}}\). Given the potential as well as the field value, if the aforementioned slow-roll parameters satisfy the slow-roll conditions (11), then one can ensure the Universe is in the slow-roll stage, and the specific dynamics can be obtained by solving the slow-roll equations (12).
To illustrate the slow-roll inflationary scenario, consider the simplest model with quadratic potential, i.e., the chaotic inflation model:
\[V(\phi)=\frac{1}{2}m^{2}\phi^{2}, \tag{14}\]
where, \(m\) is the mass of the scalar field \(\phi\). Using Eqs. (13), the slow-roll parameters are:
\[\epsilon_{1}\simeq\frac{2}{\phi^{2}},\quad\epsilon_{2}\simeq\frac{4}{\phi^{2}}, \tag{15}\]
which implies that, only when \(|\phi|\gg 1\), the slow-roll conditions are met. Only in this regime the slow-roll equations (12) can be used to obtain the dynamics, and they are given as
\[3H^{2}\simeq\frac{1}{2}m^{2}\phi^{2},\qquad\dot{\phi}\simeq-m\sqrt{\frac{2}{ 3}}. \tag{16}\]
As slow-roll can be achieved only during \(|\phi|\gg 1\), during this regime, \(H\gg m.\) As a result, the solution to these equations, i.e., the slow-roll solutions, can be obtained as
\[\phi\simeq\phi_{\rm i}-\sqrt{\frac{2}{3}}mt, \tag{17}\] \[H\simeq\left(\frac{1}{\sqrt{6}}m\phi_{\rm i}-\frac{1}{3}m^{2}t \right), \tag{18}\]
and the solution of the scale factor during slow-roll can now be expressed as
\[a(t)\simeq a_{\rm i}\exp{\left(\frac{1}{\sqrt{6}}m\phi_{\rm i}t-\frac{1}{6}m^{ 2}t^{2}\right)}. \tag{19}\]
\(\phi_{\rm i}\) and \(a_{\rm i}\) are the initial values of \(\phi\) and \(a\) at \(t=0\). Note that the inflation ends at \(\epsilon_{1}=1\), and by assuming the slow-roll dynamics holds till the end of inflation, with the help of Eq. (15), one can find the field value at the end of inflation, and it is \(|\phi_{\rm end}|\simeq\sqrt{2}\). In that case, we can also solve for cosmic time \(t_{\rm end}\) denoting the end of inflation as
\[t_{\rm end}\simeq\sqrt{\frac{3}{2}}\ \frac{\phi_{\rm i}}{m} \tag{20}\]
where \(t=t_{\rm end}\) corresponds to the end of slow-roll inflation, and we assume \(|\phi_{\rm i}|\gg|\phi_{\rm end}|\) which is required for slow-roll inflation. The Hubble parameter, then, at the end of inflation, \(H_{\rm end}\), is
\[H_{\rm end}\simeq\frac{1}{\sqrt{3}}m. \tag{21}\]
Please note that very close to the end of inflation, i.e., \(|\phi|=\sqrt{2}\), the slow-roll parameters do not obey the slow-roll condition as \(\epsilon_{1},\epsilon_{2}\sim 1.\) In fact, \(\epsilon_{2}\) becomes one at \(|\phi|\simeq 2\), even before the end of inflation. Therefore, these solutions do not represent the true solutions at the end of inflation and thereafter.
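The statements above are easy to verify symbolically from Eq. (13); the short sketch below reproduces Eq. (15) and the field values \(|\phi|\simeq\sqrt{2}\) and \(|\phi|\simeq 2\) at which \(\epsilon_{1}\) and \(\epsilon_{2}\) reach unity. The code is only a cross-check added here for illustration.

```python
import sympy as sp

phi, m = sp.symbols('phi m', positive=True)

def potential_slow_roll(V):
    """Potential slow-roll parameters of Eq. (13), in units M_pl = 1."""
    eps1 = sp.simplify(sp.Rational(1, 2) * (sp.diff(V, phi) / V)**2)
    eps2 = sp.simplify(2 * ((sp.diff(V, phi) / V)**2 - sp.diff(V, phi, 2) / V))
    return eps1, eps2

V = sp.Rational(1, 2) * m**2 * phi**2           # chaotic potential, Eq. (14)
eps1, eps2 = potential_slow_roll(V)
print(eps1, eps2)                               # 2/phi**2 and 4/phi**2, cf. Eq. (15)
print(sp.solve(sp.Eq(eps1, 1), phi))            # [sqrt(2)]: field value where eps1 = 1
print(sp.solve(sp.Eq(eps2, 1), phi))            # [2]: eps2 reaches unity before eps1
```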
### Reheating
Deep within the slow-roll regime, the first slow-roll parameter is very close to zero, i.e., \(\epsilon_{1}\ll 1\), by definition. Nonetheless, as the field value decreases, \(\epsilon_{1}\) and \(\epsilon_{2}\) increase (see, for instance, Eq. (15)), and inflation ceases when \(\epsilon_{1}\) equals 1. Consequently, during the slow-roll epoch, the potential energy predominates over the kinetic energy, and as the inflation approaches the end of it, the contribution to the kinetic energy increases while the contribution to the potential energy decreases until they are almost equal at the end. The field then begins to oscillate around the minimum of the potential, and the reheating phase commences. To derive an analytical solution for this regime, it is easier to work with the phase space orientation of the field, i.e., \(\theta\) and \(H\), as opposed to \(\phi\) and \(\dot{\phi}\)[63; 64; 65; 15; 66]. To illustrate this, let us define
\[\frac{\dot{\phi}}{\sqrt{6}}\equiv-H\sin\theta,\quad\sqrt{\frac{V}{3}}\equiv H \cos\theta, \tag{22}\]
such that the energy equation (6) is automatically satisfied. Differentiating with respect to the cosmic time \(t\) and rearranging terms, we get
\[\dot{\theta}=\frac{V_{,\phi}}{\sqrt{2V}}-\frac{3}{2}H\sin 2\theta, \tag{23}\] \[\dot{H}=-3H^{2}\sin^{2}\theta. \tag{24}\]
Such a choice of orientation simply implies that, for \(\theta\ll 1\), \(\dot{\phi}\) is negative and its magnitude is significantly smaller than the Hubble parameter, indicating the slow-roll regime. On the other hand, \(\theta=\theta_{\rm end}=\sin^{-1}\left(\frac{1}{\sqrt{3}}\right)\) defines the exact epoch of the end of inflation; for \(\theta=(2n+1)\frac{\pi}{2},\ n\in N,\) the potential vanishes, which corresponds to the bottom of the potential; and for \(\theta=n\pi,\ n\in N,\) the field velocity is zero and the field reaches the highest point of its excursion in the potential. Unlike the slow-roll approximation, these values are exact, which is one of the key reasons for using such a formulation.
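The change of variables in Eq. (22) is straightforward to apply in practice. The sketch below (with illustrative numbers of our own) maps a slow-roll point of the chaotic model to a small \(\theta\), and a point at the bottom of the potential to \(\theta=\pi/2\), as described above.

```python
import numpy as np

m = 1e-5
V = lambda phi: 0.5 * m**2 * phi**2

def to_theta_H(phi, dphi):
    """Map (phi, dphi) to (theta, H) using Eq. (22); units M_pl = 1."""
    H = np.sqrt((0.5 * dphi**2 + V(phi)) / 3.0)                 # Eq. (6)
    theta = np.arctan2(-dphi / (np.sqrt(6.0) * H),
                       np.sqrt(V(phi) / 3.0) / H)
    return theta, H

# Deep slow roll (phi >> 1, small negative dphi): theta is small
print(to_theta_H(16.0, -m * np.sqrt(2.0 / 3.0))[0])   # ~0.05, i.e. theta << 1
# Bottom of the potential (phi = 0): theta = pi/2 exactly
print(to_theta_H(0.0, -1e-6)[0], np.pi / 2)
```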
Therefore, instead of using the background equations (6), (7) and (8), here we analyze the full solution of the system by solving the Eqs. (23) and (24). Keeping that in mind, let us again consider the case of chaotic inflation. Then, Eq. (23) can be rewritten as
\[\dot{\theta}=m-\frac{3}{2}H\sin 2\theta. \tag{25}\]
As mentioned earlier, after the end of inflation, the Hubble parameter falls significantly below effective mass, i.e., \(H\ll m.\) As a consequence, during reheating, the Eq. (23) can be approximated and solved as
\[\dot{\theta}\simeq m,\quad\theta=\theta_{0}+m(t-t_{0}), \tag{26}\]
where \(\theta(t=t_{0})=\theta_{0}\). Integrating Eq. (24), we can write the solution of Hubble parameter \(H\) as a function of \(\theta\) as
\[H=\frac{H_{0}}{1+\frac{3H_{0}}{4m}\left(2(\theta-\theta_{0})-(\sin 2\theta- \sin 2\theta_{0})\right)}, \tag{27}\]
where \(H=H_{0}\) at \(\theta=\theta_{0}.\) In the deep oscillating stage, \(\theta\gg\theta_{0}\) and \(t\gg t_{0}\), which can approximate the above solution to
\[H\simeq\frac{2m}{3\theta}\left(1+\frac{\sin 2\theta}{2\theta}\right), \tag{28}\]
where \(\theta_{0}\) is chosen at any time at the bottom of the potential. Substituting the solution of \(\theta\) from Eq. (26) in above equation we get,
\[H\simeq\frac{2}{3t}\left(1+\frac{\sin 2mt}{2mt}\right). \tag{29}\]
Notice that the time average of the Hubble parameter behaves as \(2/(3t)\), i.e., like a dust-matter-dominated solution with the effective equation of state \(w_{\rm eff}=0\). The
corresponding solution for the field \(\phi\) in this regime can be written as
\[\phi \simeq\frac{2\sqrt{2}}{\sqrt{3}mt}\cos mt \tag{30}\] \[\dot{\phi} \simeq-\frac{2\sqrt{2}}{\sqrt{3}t}\sin mt \tag{31}\]
Also, the first and the second slow-roll parameters can be written as
\[\epsilon_{1} = 3\sin^{2}mt, \tag{32}\] \[\epsilon_{2} = 3mt\cot mt. \tag{33}\]
This is the complete solution for the reheating era for the case of the chaotic inflationary model.
Please note that the inflation ends at \(\theta_{\rm end},\) and the field, for the first time, reaches the bottom of the potential, making the first oscillation at \(\theta=\pi/2.\) At and around this stage, \(H\sim m.\) Therefore, the reheating solution (26) and the solutions thereafter cannot be trusted as the above solutions are obtained using the approximation \(H\ll m\). Only after a few oscillations, when \(H\) falls significantly below the mass of the potential \(m,\) the reheating solutions asymptotically merge with the solutions given above.
To summarize, in this section, for the chaotic inflation model, we derive the dynamics of the Universe in two distinct regimes. For \(|\phi|\gg 1,\) slow-roll conditions are met, and using these conditions, we derive the slow-roll dynamics, which leads to a quasi-exponential scale factor solution. For \(|\phi|\ll 1,\) however, the field oscillates around the minimum of the potential and decays into other particles, referred to as the reheating epoch, and using \(H\ll m\) approximations, we also obtain the asymptotic solution in this epoch. The two approaches to achieving these two extreme solutions are also entirely distinct. As previously stated, the solution when \(H\sim m\) is still not well understood, and the two solutions given above do not justify around this regime. And because the methodologies are distinct, extrapolating these two solutions into a single solution is also exceedingly challenging. In the following section, we will demonstrate that this is, in fact, possible if we contemplate a single method of solving these two regimes, which in our case is identical to the method used to solve the reheating era, which is characterized by the variable \(\{\theta,H\}.\)
## IV Extending the phase space solution method in slow-roll regime for chaotic inflation
Let us now focus on the method to analyze the evolution of the Universe during the slow roll. As mentioned earlier, during this epoch, \(\theta\) is small, and as a consequence, using Eq. (24), we can approximate the first slow-roll parameter as
\[\epsilon_{1}\equiv-\frac{\dot{H}}{H^{2}}\simeq 3\theta^{2}. \tag{34}\]
Using Eq. (13), one can immediately obtain the relation between the variable \(\theta\) and the scalar field \(\phi\) as
\[\theta\simeq\frac{V_{,\phi}}{\sqrt{6}V}, \tag{35}\]
which, in turn, leads to
\[\dot{\theta}\simeq-\frac{1}{3\sqrt{2}}\left(\frac{V_{,\phi\phi}}{V}-\frac{V_{,\phi}^{2}}{V^{2}}\right)\frac{V_{,\phi}}{\sqrt{V}}. \tag{36}\]
In the case of chaotic inflation, the above equations take the following form:
\[\theta\simeq\sqrt{\frac{2}{3}}\frac{1}{\phi},\qquad\dot{\theta}\simeq m \theta^{2}. \tag{37}\]
Note that, \(\phi\gg 1\) leads to \(\theta\ll 1,\) and also \(\dot{\theta}\ll 1,\) which make the above assumptions self-consistent. We now can integrate the above equation as:
\[\theta\ \simeq\ \frac{\sqrt{2}}{\sqrt{3}\phi_{\rm i}-\sqrt{2}mt}, \tag{38}\]
where \(\phi(t=0)\equiv\phi_{\rm i}.\) We can also integrate Eq. (34) and obtain the relation between \(H\) and \(\theta\) as
\[H\simeq\frac{m}{3\theta}, \tag{39}\]
where we use the initial condition for chaotic inflation \(H(\theta\to 0)\rightarrow\infty.\) It can now be seen that Eqs. (38) and (39) are in agreement with the Eqs. (17) and (18).
Please note that since we now solve the system using the variables \(\{\theta,H\}\) even during the slow-roll regime, the fundamental difference in the dynamics of the chaotic inflationary model appears only in the expression for \(\dot{\theta}\): during reheating, it is simply \(m,\) a constant, whereas, during slow-roll, it takes the form \(m\theta^{2}.\) In the following section, we will propose a method for obtaining the entire solution using this information to our advantage.
## V Proposed full solution for chaotic inflation
For chaotic inflation, we have discussed the dynamics of the Universe in two different regimes, i.e., during the slow-roll and reheating era. Let us just summarize the method in brief. Instead of Eqs. (6), (7), and (8), expressed in \(\phi\) and \(\dot{\phi},\) we redefine these equations in terms of the variable \(\theta\) and \(H,\) and equivalently, obtain two generalized equations (23) and (24). Then, in the case of
either inflation or reheating, we express \(\dot{\theta}\) as a function of \(\theta\), i.e., during reheating epoch, \(\dot{\theta}\) is constant, whereas, during the slow-roll phase, it goes as \(\propto\theta^{2}\) (_viz._ Eqs. (26) and (37)):
\[\dot{\theta}\simeq\left\{\begin{array}{ll}m\theta^{2}&\text{Slow-roll}\\ m&\text{Reheating}\end{array}\right. \tag{40}\]
By solving these equations together with Equation (24), we obtain the dynamics in these two distinct regimes. Please note, however, that, as previously remarked, these approximations do not hold at the end of the inflation era and the beginning of the reheating era, and the solution can only be completed if we know how \(\dot{\theta}\) behaves in this adjacent era.
Therefore, to obtain a complete solution from slow-roll inflation to reheating, we require a form of \(\dot{\theta}\) that behaves as \(m\theta^{2}\) for \(\theta\ll 1\) and as \(m\) for \(\theta\gg 1\), with a seamless transition between these two regimes. Without worrying about the actual solutions, one can conjecture the form of such a function, and the possibilities are limitless. In this paper, we adopt one basic yet effective form for which the system can be solved analytically in terms of simple functions:
\[\dot{\theta} = \frac{m\theta^{2}}{1+\theta^{2}}. \tag{41}\]
Note that inflation ends exactly at \(\theta_{\text{end}}\simeq 0.6\), and the field, for the first time, reaches the bottom of the potential at \(\theta=\pi/2\sim 1.6.\) Therefore, one can verify that, for the inflationary as well as the reheating solution, the above assumption depicts near-accurate dynamics of \(\dot{\theta}\) with the solution of \(\theta\) as
\[\theta=\frac{\left(\theta_{i}^{2}+m\theta_{i}t-1\right)+\sqrt{4\theta_{i}^{2}+\left(\theta_{i}^{2}+m\theta_{i}t-1\right)^{2}}}{2\theta_{i}} \tag{42}\]
where \(\theta(t=0)\equiv\theta_{i}\equiv\sqrt{\frac{2}{3}}\frac{1}{\phi_{i}}\) is the initial condition, chosen during the deep slow-roll regime.
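Equation (42) is simply the positive root of the integrated form of Eq. (41), \(\theta-1/\theta=mt+\theta_{i}-1/\theta_{i}\), and can be cross-checked against a direct numerical integration; the sketch below does this for illustrative values of \(m\) and \(\theta_{i}\) (our choices, not parameter values from this paper).

```python
import numpy as np
from scipy.integrate import solve_ivp

m, theta_i = 1e-5, 0.05          # illustrative values (Planck units)

def theta_closed_form(t):
    """Eq. (42): positive root of theta - 1/theta = m t + theta_i - 1/theta_i."""
    a = theta_i**2 + m * theta_i * t - 1.0
    return (a + np.sqrt(4.0 * theta_i**2 + a**2)) / (2.0 * theta_i)

t_check = np.array([0.0, 1.0e6, 2.0e6, 3.0e6])
numeric = solve_ivp(lambda t, y: m * y**2 / (1.0 + y**2),        # Eq. (41)
                    [0.0, 3.0e6], [theta_i], t_eval=t_check,
                    rtol=1e-10, atol=1e-14).y[0]

print(np.round(theta_closed_form(t_check), 4))   # analytic
print(np.round(numeric, 4))                      # numerical, should agree
```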
Now that we know the solution for \(\dot{\theta}\), one can again use the equation for the first slow-roll parameter, i.e., Eq. (24), and solve for the Hubble parameter and the subsequent dynamics. Similar to the previous scenario, integrating Eq. (24) and using Eq. (41), the evolution of \(H\) as a function of \(\theta\) can be obtained for the above model as
\[H=\frac{4m\theta}{(6(\theta^{2}-1)-3\theta\sin 2\theta+6\cos 2\theta+12\theta \text{ Si}(2\theta))}, \tag{43}\]
where, once again, we use the initial condition \(H(\theta\to 0)\rightarrow\infty\) for chaotic inflation, and \(\text{Si}(x)\) is the sine integral function. Note that \(\text{Si}(x)\to x\) for \(x\to 0\); as a result, in the limit \(\theta\ll 1\), one can verify that the above solution coincides with the slow-roll solution for chaotic inflation given in Eqs. (18) and (39). In the other extreme, \(\text{Si}(x)\rightarrow\pi/2\) for \(x\gg 1\), and thus, for \(\theta\gg 1\), the above solution coincides with the reheating solution (28). Therefore, the above solution of the Hubble parameter is consistent with both the slow-roll and the reheating solutions discussed in the previous section. Similarly, the general solutions of \(\phi\) and \(\dot{\phi}\), by using Eq. (22), can be written as:
\[\phi = \frac{4\sqrt{6}\theta\cos\theta}{(6(\theta^{2}-1)-3\theta\sin 2 \theta+6\cos 2\theta+12\theta\text{ Si}(2\theta))}, \tag{44}\] \[\dot{\phi} = -\frac{4\sqrt{6}m\theta\sin\theta}{(6(\theta^{2}-1)-3\theta\sin 2 \theta+6\cos 2\theta+12\theta\text{ Si}(2\theta))}, \tag{45}\]
and the two slow-roll parameters can be expressed as:
\[\epsilon_{1}=3\sin^{2}\theta, \tag{46}\] \[\epsilon_{2}=\frac{\theta\cot\theta\left(6(\theta^{2}-1)-3\theta \sin 2\theta+6\cos 2\theta+12\theta\text{ Si}(2\theta)\right)}{2(1+\theta^{2})}. \tag{47}\]
Please note that the effect of the \(\text{Si}(x)\) function may appear to be irrelevant. Nonetheless, it can be demonstrated that this function plays a crucial role in the
Figure 1: On the top, we plot the Hubble parameter \(H\) as a function of \(\theta\) for the chaotic inflation model. At the bottom, we plot \(\theta\) as a function of cosmic time \(t\).
transition between the two epochs and provides greater precision; therefore, it cannot be neglected.
Let us now discuss the impact of the full solution, which is demonstrated in Fig. 1. As can be seen, since the expressions for all variables are now given in terms of \(\theta\), we know the precise values of these variables at each physical instance, as specified by the variable \(\theta\). Consider, for example, the Hubble parameter \(H\) given in Eq. (43). At \(\theta=\sin^{-1}(1/\sqrt{3})\), inflation ends exactly, and the above analytical solution yields \(H_{\rm end}\simeq 0.503m\). In contrast, using the slow-roll approximation, we previously obtained \(H_{\rm end}\simeq m/\sqrt{3}\simeq 0.577m\). Using numerical simulations, we determine that \(H_{\rm end}\simeq 0.504m\), which demonstrates that our method provides a much higher degree of accuracy. On the other hand, \(\theta=\pi/2\) denotes the moment when the field reaches the bottom of the potential for the first time; using our approach, it is now obvious that \(H\simeq 0.167m\) there. When it reaches the bottom for the second time, \(H\simeq 0.087m\); for the third time, \(H\simeq 0.061m\); and so on. Similarly, when the field reaches its first maximum, \(H\simeq 0.112m\); at the second maximum, \(H\simeq 0.072m\); at the third, \(H\simeq 0.053m\); and so on. In Fig. 2, we compare our result with the numerical simulations, and it can be seen that our proposed method provides near-accurate results. In fact, using the above expressions (42) and (43), once we know the initial conditions (i.e., \(\phi\) and \(\dot{\phi}\)), we can easily evaluate \(\theta\) and, subsequently, the value of the Hubble parameter. Other variables such as \(\epsilon_{1}\) (Fig. 3) and \(\phi\), \(\dot{\phi}\) (see Fig. 4) are then straightforward to evaluate. Thus, our analysis shows analytically, with great accuracy and without the need for numerical simulations, how the Universe evolves with time during any epoch, be it inflation or reheating (see Table 1).
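The quoted values follow directly from Eq. (43); the short check below evaluates the closed form with the sine integral and reproduces \(H_{\rm end}\simeq 0.503m\), \(H\simeq 0.167m\) at the first minimum and \(H\simeq 0.112m\) at the first maximum.

```python
import numpy as np
from scipy.special import sici

def H_over_m(theta):
    """Eq. (43) divided by m, using the sine integral Si."""
    Si = sici(2.0 * theta)[0]
    denom = (6.0 * (theta**2 - 1.0) - 3.0 * theta * np.sin(2.0 * theta)
             + 6.0 * np.cos(2.0 * theta) + 12.0 * theta * Si)
    return 4.0 * theta / denom

theta_end = np.arcsin(1.0 / np.sqrt(3.0))
for label, th in [("end of inflation", theta_end),
                  ("first minimum (theta = pi/2)", np.pi / 2),
                  ("first maximum (theta = pi)", np.pi)]:
    print(f"{label}: H/m = {H_over_m(th):.3f}")
```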
Figure 4: Evolution of the scalar field \(\phi\) as a function of cosmic time \(t\). The blue (solid) line corresponds to the numerical solution obtained by solving the equations of motion, Eqs. (6) and (7), numerically. The red (dotted) line corresponds to the evolution of the scalar field \(\phi\) obtained analytically with the proposed model.
Figure 3: Evolution of the first slow-roll parameter \(\epsilon_{1}\) as a function of cosmic time \(t\). The blue (solid) line corresponds to the numerical solution obtained by solving the equations of motion, Eqs. (6) and (7), numerically. The red (dotted) line corresponds to the evolution of \(\epsilon_{1}\) obtained analytically with the proposed model.
Figure 2: The Hubble parameter \(H\) for the chaotic inflation model as a function of cosmic time \(t\), computed both numerically and analytically, showing that the analytical solution provides a good level of accuracy in evaluating the background dynamics.
This is one of the main results of this work.
## VI Extended general solution
In this section, we extend the solution for chaotic inflation to other inflationary models. As we saw earlier, during the slow-roll stage of chaotic inflation, \(\dot{\theta}\propto\theta^{2}\); in the generalized scenario, we can extend the exponent from two to an arbitrary real positive number \(n\). During reheating, however, we already know that if the potential behaves nearly as \(V(\phi)\propto\phi^{2}\), then \(\dot{\theta}\) remains constant during this period. For simplicity and to derive simple analytical expressions, we also want to maintain this relationship in the generalized scenario, i.e., near the bottom of the potential, we want it to behave as \(\propto\phi^{2}\). A prime example, discussed later in this section, is Starobinsky inflation, where the potential is nearly flat during the slow-roll regime but behaves as \(\phi^{2}\) near the bottom. Hence, for generalized inflationary models, \(\dot{\theta}\) can be expressed as
\[\dot{\theta}\simeq\left\{\begin{array}{cc}\mu\theta^{n}&\text{ Slow-roll}\\ \nu&\text{Reheating}\end{array}\right. \tag{48}\]
where \(\mu,\ \nu\), and \(n\) are all positive constants that can be correlated with the model parameters, i.e., the inflationary potential. Here, we assume the potential's nature is simple and that the transition from the slow roll to the reheating scenario is seamless. Any feature of the potential or deviation from the slow-roll, such as ultra slow-roll, is not taken into account, as \(\dot{\theta}\) may differ from the above expression in such cases.
Similar to chaotic inflation discussed in the previous section, we now can combine both cases and propose the general solution of \(\dot{\theta}\) as a function of \(\theta\) as
\[\dot{\theta}=\frac{\mu\theta^{n}}{1+\frac{\mu}{\nu}\theta^{n}}. \tag{49}\]
The above equation can be integrated to get the solution of \(\theta\) in terms of the cosmic time as
\[\mu(1-n)\theta+\nu\theta^{1-n}=C_{1}+\mu\nu(1-n)t, \tag{50}\]
where \(C_{1}\equiv(1-n)\mu\theta_{\text{i}}+\nu\theta_{\text{i}}^{1-n}\) is the constant of integration, and \(\theta(0)\equiv\theta_{\text{i}}\). The dependence of \(\theta_{\text{i}}\) on \(\phi_{\text{i}}\) depends on the form of potential. Again, using Eq. (24) along with the above Eq. (49) and other equations, we now can obtain the solution corresponding to \(H\), \(\epsilon_{1}\), \(\epsilon_{2}\), \(\phi\), and \(\dot{\phi}\) as a function of \(\theta\) as
\[H =\frac{4\mu\nu(n-1)\theta^{n}}{\mu(n-1)(4\nu C_{2}+6\theta-\ 3\sin 2 \theta)\ \theta^{n}+3\nu(n-1)\left(E_{\text{n}}(2i\theta)+E_{\text{n}}(-2i\theta) \right)\theta-6\nu\theta}, \tag{51}\] \[\epsilon_{1} =3\sin^{2}\theta,\] (52) \[\epsilon_{2} =\frac{\mu(n-1)(4\nu C_{2}+6\theta-\ 3\sin 2\theta)\ \theta^{n}+3\nu(n-1) \left(E_{\text{n}}(2i\theta)+E_{\text{n}}(-2i\theta)\right)\theta-6\nu \theta}{4(n-1)(\nu+\mu\theta^{n})}\cot\theta,\] (53) \[\dot{\phi} =-\frac{4\sqrt{6}\mu\nu(n-1)\theta^{n}\sin\theta}{\mu(n-1)(4\nu C _{2}+6\theta-\ 3\sin 2\theta)\ \theta^{n}+3\nu(n-1)\left(E_{\text{n}}(2i\theta)+E_{\text{n}}(-2i\theta) \right)\theta-6\nu\theta},\] (54) \[V(\phi) =\frac{48\mu^{2}\nu^{2}(n-1)^{2}\theta^{2}n\cos^{2}\theta}{\left( \mu(n-1)(4\nu C_{2}+6\theta-\ 3\sin 2\theta)\ \theta^{n}+3\nu(n-1)\left(E_{\text{n}}(2i\theta)+E_{\text{n}}(-2i\theta) \right)\theta-6\nu\theta\right)^{2}}, \tag{55}\]
where \(E_{\text{n}}(x)\) is the exponential integral function, and \(C_{2}\) is the integration constant, which again depends on the
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline \(\theta\) & \(H\) & \(\phi\) & \(\dot{\phi}\) & \(\epsilon_{1}\) & \(\epsilon_{2}\) \\ \hline \hline \(0\) & \(\infty\) & \(\infty\) & \(-0.816\ m\) & \(0\) & \(0\) \\ \hline \(\sin^{-1}(1/\sqrt{3})\) & \(0.503\ m\) & \(1.006\) & \(-0.711\ m\) & \(1\) & \(1.544\) \\ \hline \(\pi/2\) & \(0.167\ m\) & \(0\) & \(-0.408\ m\) & \(3\) & \(0\) \\ \hline \(\pi\) & \(0.111\ m\) & \(-0.273\) & \(0\) & \(0\) & \(\infty\) \\ \hline \(3\pi/2\) & \(0.087\ m\) & \(0\) & \(0.213\ m\) & \(3\) & \(0\) \\ \hline \(2\pi\) & \(0.072\ m\) & \(0.176\) & \(0\) & \(0\) & \(\infty\) \\ \hline \end{tabular}
\end{table}
Table 1: Background quantities for the chaotic inflationary model at different values of \(\theta\), obtained from the unified solution.
shape of the potential.
The background variable solutions given above describe the evolution not only during the slow-roll and reheating phases but also during the transition between them. Additionally, they are model-independent, meaning that they may be used with both small and large field models, with the values of \(\mu\), \(\nu\), and \(n\) determined by the model parameters associated with the potential. As a result, they represent the comprehensive, model-independent solution for all the dynamical variables during the entire evolution from slow-roll to reheating, which is the main outcome of this article.
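For a general \(n\) the relation (50) is implicit in \(\theta\), but \(\theta(t)\) is monotonic, so it can be recovered with a one-dimensional root find; the sketch below does so for illustrative values of \(\mu\), \(\nu\), \(n\), and \(\theta_{\rm i}\) chosen by us purely for demonstration.

```python
import numpy as np
from scipy.optimize import brentq

mu, nu, n, theta_i = 5.0e-6, 1.0e-5, 2.5, 0.05   # illustrative values only

C1 = (1.0 - n) * mu * theta_i + nu * theta_i**(1.0 - n)

def theta_of_t(t):
    """Invert Eq. (50): mu (1-n) theta + nu theta^(1-n) = C1 + mu nu (1-n) t."""
    rhs = C1 + mu * nu * (1.0 - n) * t
    f = lambda th: mu * (1.0 - n) * th + nu * th**(1.0 - n) - rhs
    return brentq(f, 1e-8, 1e4)     # bracket chosen wide enough for these values

for t in (0.0, 1.0e6, 5.0e6):
    print(f"t = {t:.1e}  ->  theta = {theta_of_t(t):.4f}")
```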
Before proceeding to the next section, let us now discuss the parameters \(C_{2}\) and \(n.\) In order to determine their values or ranges, let us again consider the slow-roll regime. During slow-roll, i.e., for \(\theta\ll 1,\) the Hubble and the slow-roll parameters take the form:
\[H =\frac{1}{C_{2}+\frac{3}{(3-n)\mu}\ \theta^{3-n}}. \tag{58}\] \[\epsilon_{1} =3\theta^{2}\] (59) \[\epsilon_{2} =2\mu\theta^{n-1}\left(C_{2}+\frac{3}{(3-n)\mu}\ \theta^{3-n}\right) \tag{60}\]
Therefore, for \(n<3,\) the Hubble parameter approaches the value \(1/C_{2}\) as \(\theta\) approaches zero, whereas if we set \(C_{2}=0,\) then \(H\rightarrow\infty\) for \(\theta\to 0\). Hence, in the case of large field models, where \(H\rightarrow\infty,\) \(C_{2}\) equals zero. For small field models, on the other hand, \(V(\phi)\) saturates to a value, say \(V_{0},\) as \(\theta\to 0,\) and \(H\) takes the form \(\sqrt{V_{0}/3}\). Therefore, \(C_{2}\) can be associated with these values for large and small field models as
\[C_{2}=\left\{\begin{aligned} 0,&\text{large fields,}\\ \sqrt{\frac{3}{V_{0}}},&\text{small fields.}\end{aligned}\right. \tag{61}\]
The solution immediately translates to:
\[\dot{\phi}=-\frac{\sqrt{6}\theta}{C_{2}+\frac{3}{(3-n)\mu}\ \theta^{3-n}}, \tag{62}\]
and
\[V(\phi)\simeq\left\{\begin{aligned} \frac{\mu^{2}(3-n)^{2}}{3} \frac{1}{\theta^{6-2n}},&\text{large fields,}\\ \frac{3}{C_{2}^{2}+\frac{6C_{2}}{(3-n)\mu}\theta^{3-n}},& \text{small fields.}\end{aligned}\right. \tag{63}\]
where \(n<3\) and \(C_{2}\) is given by Eq. (61). The above expression, based on the functional form of the potential, leads to the functional dependence of \(\phi\) over \(\theta.\) One can also immediately notice that \(n>3\) is prohibited as \(H\) becomes negative. At the same time, \(n>0\) is required as \(\dot{\theta}\to 0\) as \(\theta\to 0.\) Therefore, the constraint on \(n\) is
\[0<n<3. \tag{64}\]
Now that the generic solution has been provided, let us examine various inflationary models. There are typically two types of inflationary potential: large field potentials and small field potentials. Observations have, however, already ruled out large field inflationary potentials such as chaotic inflation. Among the small field models, we will discuss two classes and determine their complete solutions in this paper.
### First kind of small field inflationary models
This kind of model, during inflation, for \(\phi\gg 1\) can be expressed as \(V(\phi)\simeq A(1-B\phi^{-\alpha}),\ \alpha>0.\) Again, we assume the potential has a minimum at \(\phi=0,\) and around it, it has a form \(V(\phi)\propto\phi^{2}\) for \(\phi\ll 1.\) Therefore, the potential that we are interested in can be expressed as:
\[V(\phi)\simeq\left\{\begin{aligned} A\left(1-B\phi^{-\alpha} \right)&\text{Slow-roll}\\ \frac{1}{2}m^{2}\phi^{2}&\text{Reheating}\end{aligned}\right. \tag{65}\]
where \(A,\)\(B,\)\(m,\) and \(\alpha\) are constants. This kind of model is called the polynomial \(\alpha\)-attractor model [67; 68; 69; 70; 71; 72]. In this case, one can relate the model parameter \(\alpha\) to the exponent \(n,\) given in Eq. (49) as
\[n=\frac{3+2\alpha}{1+\alpha}. \tag{66}\]
It is now obvious that \(\alpha>0\) implies:
\[2<n<3. \tag{67}\]
Similarly, \(\mu\) and \(\nu\) can also be expressed in terms of the model parameters \(A\) and \(B\) as
\[\mu=\frac{1}{3}\sqrt{\frac{A}{2}}B^{2}\alpha^{2}(1+\alpha)\left(\frac{\alpha B }{\sqrt{6}}\right)^{-\frac{3+2\alpha}{1+\alpha}},\quad\nu=m. \tag{68}\]
Using the above forms of \(n,\mu\) and \(\nu,\) along with Eqs. (50), (51), (52), (53), (54), and (56), we can obtain the full solution of the dynamics using our proposed method. However, two other pieces of information are needed to fully solve these equations. The first is how \(\theta_{\text{i}},\) i.e., the initial condition for \(\theta,\) depends on the initial condition of the field \(\phi_{\text{i}},\) such that Eq. (50) can be properly solved. This can be obtained by using Eq. (35) as:
\[\theta_{\text{i}}=\frac{\alpha B}{\sqrt{6}}\frac{1}{\phi_{\text{i}}^{1+\alpha}}. \tag{69}\]
Note that \(\phi_{\text{i}}>\phi_{*},\) where \(\phi_{*}\) relates to the pivot scale \(k=0.05\ \text{Mpc}^{-1}.\) The other information needed is the
constant appearing in the general solution of the Hubble parameter in Eq. (51). Since the model is categorized under small field models, as mentioned earlier, \(C_{2}\) can then be expressed as
\[C_{2}=\sqrt{\frac{3}{A}}. \tag{70}\]
The evolution corresponding to the above model for a specific choice of \(A\), \(B\), and \(\alpha\) can be seen in Fig. 5.
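As a compact summary of this parameter mapping, the relations in Eqs. (66), (68), (69) and (70) can be collected into a single helper. The sketch below does exactly that; the function name and the numerical values of \(A\), \(B\), \(\alpha\), \(m\) and \(\phi_{\text{i}}\) in the example call are arbitrary placeholders, not fitted parameters.

```python
import numpy as np

def polynomial_attractor_params(A, B, alpha, m, phi_i):
    """Map the potential parameters of Eq. (65) to (n, mu, nu, theta_i, C2)."""
    n = (3 + 2 * alpha) / (1 + alpha)                               # Eq. (66)
    mu = ((np.sqrt(A / 2) / 3) * B**2 * alpha**2 * (1 + alpha)
          * (alpha * B / np.sqrt(6)) ** (-(3 + 2 * alpha) / (1 + alpha)))  # Eq. (68)
    nu = m                                                          # Eq. (68)
    theta_i = (alpha * B / np.sqrt(6)) / phi_i ** (1 + alpha)       # Eq. (69)
    C2 = np.sqrt(3 / A)                                             # Eq. (70)
    return n, mu, nu, theta_i, C2

# Placeholder numbers chosen only to exercise the function
print(polynomial_attractor_params(A=1e-10, B=1.0, alpha=2.0, m=1e-5, phi_i=20.0))
```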
### Second kind of small field inflationary models
In this case, the potential can be expressed as:
\[V(\phi)\simeq\left\{\begin{array}{ll}A\left(1-Be^{-\alpha\phi}\right)&\text{ Slow-roll}\\ \frac{1}{2}m^{2}\phi^{2}&\text{Reheating}\end{array}\right. \tag{71}\]
where \(A\), \(B\), \(m\), and \(\alpha\) are positive constants. This kind of model can be categorized as an \(\alpha\)-attractor model [67; 68; 69; 70; 71; 72], where, again, we use the approximation \(Be^{-\alpha\phi}\ll 1\) during slow-roll. Similar to the earlier case, \(n,\mu,\nu,\theta_{\text{i}}\) and \(C_{2}\) can be expressed in terms of the model parameters as
\[n =2,\quad\mu=\alpha\sqrt{2}\sqrt{A},\quad\nu=m, \tag{72}\] \[\theta_{\text{i}} =\frac{\alpha\exp{(-\alpha\phi_{\text{i}})}}{\sqrt{6}},\quad C_ {2}=\sqrt{\frac{3}{A}}. \tag{73}\]
One example of such a model is the famous Starobinsky model of inflation with the potential given by
\[V(\phi)=\frac{3}{4}m^{2}\left(1-e^{-\sqrt{\frac{2}{3}}\phi}\right)^{2}. \tag{74}\]
For \(\phi\gg 1\), this potential reduces to the form of Eq. (71) with \(A=3m^{2}/4\), \(B=2\), and \(\alpha=\sqrt{2/3}\). The evolution corresponding to the above example can be seen in Fig. 6.
## VII Observations
As mentioned earlier, during slow-roll, \(\theta\ll 1\). During this epoch, Eqs. (49) and (58) lead to
\[\theta_{\text{N}}\simeq\mu\theta^{n}\left(C_{2}+\frac{3}{(3-n)\mu}\ \theta^{3-n}\right). \tag{75}\]
Here, \(N\equiv\ln{a(t)}\) is the e-folding number, \(\theta_{\text{N}}\equiv\text{d}\theta/\text{d}N=\dot{\theta}/H\). Please note that, in the case of large field models,
Figure 5: Plot of the evolution of \(\theta\) (top) as a function of cosmic time \(t\) and of the Hubble parameter \(H\) (bottom) as a function of \(\theta\), corresponding to the potential given by Eq. (71) with parameters \(A=10^{-10}\), \(B=1\), and \(\alpha=12\).
Figure 6: Plot of the evolution of \(\theta\) (top) as a function of cosmic time \(t\) and of the Hubble parameter \(H\) (bottom) as a function of \(\theta\), corresponding to the potential given by Eq. (74) with \(m=10^{-5}\).
\(C_{2}=0\). On the other hand, for small field models, \(C_{2}\) depends on the potential, as mentioned in Eq. (61). Using this feature of \(C_{2}\) during slow-roll, Eq. (75) can be rewritten as
\[\theta_{\rm N}\simeq\begin{cases}\dfrac{3}{3-n}\theta^{3},&\text{ Large fields},\\ C_{2}\mu\theta^{n}.&\text{Small fields}.\end{cases} \tag{76}\]
For these two separate cases, one can integrate the above equation using the approximation \(\theta\ll\theta_{\rm end}\), and obtain the relation between \(\theta\) and \(N\) as
\[\theta\simeq\begin{cases}\sqrt{\dfrac{3-n}{6N}},&\text{Large fields},\\ \left(\dfrac{1}{C_{2}\mu N}\right)^{\frac{1}{n-1}},&\text{Small fields}. \end{cases} \tag{77}\]
Here \(N\) denotes the number of e-folds elapsed between \(\theta\) and \(\theta_{\rm end}\). This relation is needed to evaluate the perturbations for a specific \(k\) mode. Observationally, the perturbations can be characterized mainly by four parameters: the scalar spectral index \(n_{\rm s}\), the tensor spectral index \(n_{\rm t}\), the tensor-to-scalar ratio \(r\) and the scalar power spectrum \(\mathcal{P}_{\rm s}\). For a single canonical scalar field minimally coupled to gravity that leads to slow-roll inflation, these parameters can be written in terms of the Hubble parameter and the slow-roll parameters as
\[n_{\rm s}\simeq 1-2\epsilon_{1}-\epsilon_{2},\quad n_{\rm t} \simeq-2\epsilon_{1}, \tag{78}\] \[\mathcal{P}_{\rm s}\simeq\dfrac{H^{2}}{8\pi^{2}\epsilon_{1}}, \qquad\quad r\simeq 16\epsilon_{1}. \tag{79}\]
Using Eqs. (58), (59) and (60), the above parameters can be expressed in terms of \(\theta\) as
\[n_{\rm s}\simeq 1+\dfrac{6(4-n)\theta^{2}}{n-3}-2C_{2}\mu \theta^{n-1},\quad\ n_{\rm t}\simeq-6\theta^{2},\] \[\mathcal{P}_{\rm s}\simeq\dfrac{1}{24\pi^{2}\theta^{2}\left(C_{2} +\frac{3}{(3-n)\mu}\ \theta^{3-n}\right)^{2}},\quad r\simeq 48\theta^{2}. \tag{80}\]
Observations (BICEP/Keck [26; 27] and PLANCK [25]) suggest that, at the pivot scale (\(k_{*}=0.05\,\text{Mpc}^{-1}\)), the amplitude of the scalar power spectrum is \(\mathcal{P}_{\rm s}\simeq 2.101^{+0.031}_{-0.034}\times 10^{-9}\,(68\%\ \text{CL})\) with the scalar spectral index being \(n_{\rm s}=0.9649\pm 0.0042\,(68\%\ \text{CL})\), while the tensor-to-scalar ratio \(r\) is bounded from above by \(r<0.028\,(95\%\ \text{CL})\). As of yet, there is no bound on the tensor spectral index \(n_{\rm t}\). To evaluate Eq. (80), an additional ingredient is necessary: the relation between the value of \(\theta\) at the pivot scale, i.e., \(\theta_{*}\), and the e-folding number \(N\), given in Eq. (77). In general, the pivot scale leaves the Hubble horizon \(50-60\) e-folds before the end of inflation, i.e., \(N_{*}\sim 50-60\). Therefore, by using this information, one can evaluate the observational parameters for any model of inflation.
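For concreteness, once \((n,\mu,C_{2})\) and \(N_{*}\) are specified, the slow-roll observables of Eq. (80) can be evaluated directly by combining it with the small-field branch of Eq. (77). The minimal sketch below does this; the parameter values in the example call are illustrative assumptions chosen only to produce numbers of a realistic order of magnitude, not values derived in this work.

```python
import numpy as np

def theta_at(n, mu, C2, N):
    # Eq. (77), small-field branch: theta when N e-folds remain before the end of inflation
    return (1.0 / (C2 * mu * (n - 1) * N)) ** (1.0 / (n - 1))

def observables(n, mu, C2, N):
    # Eq. (80): slow-roll observables expressed through theta
    th = theta_at(n, mu, C2, N)
    ns = 1 + 6 * (4 - n) * th**2 / (n - 3) - 2 * C2 * mu * th**(n - 1)
    nt = -6 * th**2
    r = 48 * th**2
    Ps = 1.0 / (24 * np.pi**2 * th**2 * (C2 + 3 * th**(3 - n) / ((3 - n) * mu))**2)
    return ns, nt, r, Ps

# Illustrative small-field parameters (assumed, not fitted to data)
print(observables(n=2.0, mu=1e-5, C2=1.7e5, N=55.0))
```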
Let us first discuss the large field models with \(V(\phi)\propto\phi^{\alpha},\ \alpha>0\). In this case, one can verify that
\[n=3-\dfrac{\alpha}{2}. \tag{81}\]
It leads to \(r=4\alpha/N_{*}\) and \(n_{\rm s}=1-(2+\alpha)/(2N_{*})\); one can then quickly verify that, for \(N_{*}\sim 50-60\), these relations do not obey the observational constraints, as already mentioned in the previous section. On the other hand, in the case of small field models, all observational parameters can be expressed in terms of \(C_{2},\mu\) and \(n\) as
\[n_{\rm s}\simeq 1-\dfrac{2}{(n-1)N_{*}}, \tag{82}\] \[n_{\rm t}\simeq -\dfrac{6}{(C_{2}\mu(n-1)N_{*})^{\frac{2}{n-1}}},\] (83) \[r\simeq \dfrac{48}{(C_{2}\mu(n-1)N_{*})^{\frac{2}{n-1}}},\] (84) \[\mathcal{P}_{\rm s}\simeq \dfrac{(C_{2}\mu(n-1)N_{*})^{\frac{2}{n-1}}}{24\pi^{2}C_{2}^{2}}. \tag{85}\]
Again, by using the observational constraints with \(N_{*}\sim 50-60\), one can, in general, obtain the constrained values of these parameters as
\[1.84<n<2.29,\quad C_{2}>5.87\times 10^{4}, \tag{86}\] \[0.83\times 10^{-5}<\mu<3.21\times 10^{-5}. \tag{87}\]
These are the most general constraints on small-field inflationary models.
Let us now focus separately on the two different and special cases of small field models that we discussed in the previous section. For the first kind of small field models with \(V(\phi)=A(1-B\phi^{-\alpha})\), all observational parameters can be expressed as
\[n_{\rm s}\simeq 1-\dfrac{2(1+\alpha)}{(2+\alpha)N_{*}} \tag{88}\] \[n_{\rm t}\simeq -\dfrac{(B\alpha)^{\frac{2}{2+\alpha}}}{((2+\alpha)N_{*})^{\frac {2(1+\alpha)}{2+\alpha}}}\] (89) \[r\simeq \dfrac{8(B\alpha)^{\frac{2}{2+\alpha}}}{((2+\alpha)N_{*})^{\frac {2(1+\alpha)}{2+\alpha}}}\] (90) \[\mathcal{P}_{\rm s}\simeq \dfrac{A}{12\pi^{2}}\dfrac{((2+\alpha)N_{*})^{\frac{2(1+\alpha)} {2+\alpha}}}{(B\alpha)^{\frac{2}{2+\alpha}}}. \tag{91}\]
Please note that these expressions are obtained by using Eqs. (77) and (80); one can verify them by using the standard relation between \(\phi\) and \(N\). This shows that our method also provides consistent results for the perturbations. Using the observational constraints, the constraints on the model parameters for this special case can be obtained as:
\[2.40\leq\alpha\leq 55.14,\quad A\leq 8.70\times 10^{-10}, \tag{92}\]
and \(B\) can be as large as \(10^{126}\).
In the second special case with \(V(\phi)=A(1-B\exp(-\alpha\phi))\), similarly, the observable parameters can be written in terms of the e-folding number as
\[n_{\mathrm{s}}\simeq \ 1-\frac{2}{N_{\mathrm{*}}}, \tag{93}\] \[n_{\mathrm{t}}\simeq \ -\frac{1}{\alpha^{2}N_{\mathrm{*}}^{2}},\] (94) \[r\simeq \ \frac{8}{\alpha^{2}N_{\mathrm{*}}^{2}},\] (95) \[\mathcal{P}_{\mathrm{s}}\simeq \ \frac{A\alpha^{2}N_{\mathrm{*}}^{2}}{12\pi^{2}}. \tag{96}\]
Using the observational constraints, the constraint on the model parameters can be obtained as
\[\alpha\geq 0.34,\quad A\leq 6.33\times 10^{-10}. \tag{97}\]
It is important to note that the model parameter constraints mentioned above do not account for the effect of reheating; therefore, they do not reflect the actual limits. To include this effect, we must analyze the effective equation-of-state parameter during reheating, \(w_{\mathrm{re}}\), and the duration of reheating, \(N_{\mathrm{re}}\), which is governed by the equation [49]:
\[N_{\mathrm{re}}=\frac{4}{3w_{\mathrm{re}}-1}\left(\log\left( \frac{k}{a_{0}T_{0}}\right)+N_{k}-\log(H_{k})+\right.\] \[\left.\frac{1}{4}\log(\rho_{\mathrm{end}})+\frac{1}{3}\log\left( \frac{11g_{\mathrm{s,re}}}{43}\right)+\frac{1}{4}\log\left(\frac{30}{\pi^{2}g _{\mathrm{reh}}}\right)\right). \tag{98}\]
Here, \(\{a_{0},T_{0}\}\) are the present values of the scale factor and the temperature of the Universe, respectively. \(H_{k}\) is the Hubble scale when the mode leaves the horizon; \(\rho_{\mathrm{end}}\) is the energy density at the end of inflation and \(\{g_{\mathrm{reh}},g_{\mathrm{s,re}}\}\) are the effective number of relativistic species upon thermalization and the effective number of light species for entropy during reheating, respectively. Since, as a result of the proposed new solution that smooths the transition from slow-roll to deep oscillations, the reheating epoch deviates slightly from what has previously been assumed in the literature, we anticipate that the constraints on the model parameters and the duration of reheating will also be modified. This necessitates an in-depth analysis, and we therefore reserve it for future work.
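For completeness, Eq. (98) can be transcribed directly into a small numerical helper, so that \(N_{\mathrm{re}}\) can be evaluated once a specific model supplies its inputs. In the sketch below, natural logarithms are assumed, and every numerical input in the example call (the pivot-scale ratio, \(N_{k}\), \(H_{k}\), \(\rho_{\mathrm{end}}\) and the effective numbers of species) is a placeholder rather than a value taken from this work.

```python
import numpy as np

def reheating_efolds(w_re, k_over_a0T0, N_k, H_k, rho_end, g_s_re, g_reh):
    """Duration of reheating N_re, a direct transcription of Eq. (98).

    Natural logarithms are assumed; the expression is singular at w_re = 1/3.
    """
    bracket = (np.log(k_over_a0T0) + N_k - np.log(H_k)
               + 0.25 * np.log(rho_end)
               + (1.0 / 3.0) * np.log(11.0 * g_s_re / 43.0)
               + 0.25 * np.log(30.0 / (np.pi**2 * g_reh)))
    return 4.0 / (3.0 * w_re - 1.0) * bracket

# All inputs below are placeholders in Planck units, for illustration only
print(reheating_efolds(w_re=0.0, k_over_a0T0=1e-27, N_k=55.0, H_k=1e-5,
                       rho_end=1e-12, g_s_re=100.0, g_reh=100.0))
```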
## VIII Conclusions
We considered a single canonical scalar field model minimally coupled to gravity with a potential \(V(\phi)\) that leads to an evolution of the Universe consisting of both slow-roll inflation and oscillatory behavior around the potential's minimum, also known as the reheating era. The complete solution for the background field in this scenario remains elusive. Traditionally, it has been solved using two discrete regimes, each with its own set of conditions.
The first regime is characterized by the slow-roll condition, in which the slow-roll parameters are considerably less than unity. In this regime, the slow-roll solution is a well-established analytical solution. In contrast, the second regime takes effect when the field enters the reheating phase and the Hubble parameter falls substantially below the effective mass of the potential.
The difficulty arises, however, when attempting to bridge the gap between these two regimes during a phase in which both the slow-roll and reheating conditions fail. In order to resolve this dilemma, our work seeks to present a model-independent, unified solution. With this objective in mind, we assume the following:
1. **Simple potential:** the nature of the potential is simple.
2. **Minimum of the potential:** the potential has a minimum at \(\phi=0\).
3. **Exact de-Sitter:**\(|\phi|\to\infty\) leads to \(\epsilon_{1}=0\), implying the de-Sitter Universe.
4. **Slow-roll:** the potential, for \(|\phi|\gg 1\) leads to slow-roll inflation.
5. **Near-minimum behavior:** In the vicinity of the minimum, the potential can be approximated as \(V(\phi)\propto\phi^{2}\) for simplicity.
To address this difficulty, we adopt a new viewpoint by employing the phase-space variables \(\{\theta,H\}\) rather than \(\{\phi,\dot{\phi}\}\). This change is advantageous because the variable \(\theta\) corresponds directly to various phases of cosmic evolution. For example, \(\theta\ll 1\) (implying \(|\phi|\gg 1\)) denotes a period of slow-roll inflationary evolution, whereas \(\theta=\sin^{-1}(1/\sqrt{3})\) (equivalent to \(\epsilon_{1}=1\)) denotes the precise end of the inflationary phase. In addition, \(\theta=\pi/2\) corresponds to \(\phi=0\), indicating that the field has reached the minimum of the potential. This method is particularly significant because we know that particle production, specifically resonance, occurs at the potential's minimum -- a location that is difficult to pinpoint using cosmic time, \(t\).
In our work, we first propose a unified solution for the Universe within the chaotic inflation model, where the potential is \(V(\phi)=\frac{1}{2}m^{2}\phi^{2}\). We provide comprehensive solutions for critical parameters such as the Hubble parameter \(H\), slow-roll parameters \(\epsilon_{1}\) and \(\epsilon_{2}\), the scalar field \(\phi\), and its time derivative \(\dot{\phi}\) -- all expressed in terms of \(\theta\). To complete the dynamics, we also furnish the solution for \(\theta\) as a function of cosmic time, \(t\). We further extend our methods to broader models of inflation. This accomplishment addresses three essential concerns:
1. **Full Dynamics:** We now possess the complete dynamical evolution of the Universe, spanning from
slow-roll inflation to reheating, including the intermediate junction between these phases, rendered smoothly.
2. **Intermediate identification:** As the full solution is now known, one can immediately identify the phase of evolution simply by knowing the value of \(\theta\) and analyzing the solution for the complete evolution of the Universe.
3. **Model Independence:** Our solution is not tied to a specific model; instead, it can be applied across a wide spectrum of inflationary models.
This integrated solution also provides insightful qualitative and quantitative analysis of reheating. On the qualitative front, we can now incorporate the effects of the end of inflation and the onset of reheating during the creation of particles via parametric resonance, a process effectively described by the Mathieu equation [35]. On the other hand, as stated previously, for quantitative analysis, the equation that relates the reheating e-folding number \(N_{\rm re}\) (or the temperature at the end of reheating, i.e., \(T_{\rm re}\)) to the scalar spectral index \(n_{\rm s}\), i.e., Eq. (98), can be modified by incorporating the proposed solution, which we believe can help improve the observational constraints. By combining these two analyses, the theoretical predictions can be substantially enhanced.
In conclusion, although our work focuses predominantly on simple models such as chaotic inflation and small-field models like \(\alpha\)-attractors, we recognize that more complex scenarios exist. These include models that deviate from slow-roll during inflation, such as those that can generate primordial black holes, as well as models whose behavior around the minimum does not conform to \(\phi^{2}\). Exploring these complex models and undertaking an in-depth perturbation analysis during the reheating period are promising future research directions.
## Acknowledgements
DN is supported by the DST, Government of India through the DST-INSPIRE Faculty fellowship (04/2020/002142). MK is supported by a DST-INSPIRE Fellowship under the reference number: IF170808, DST, Government of India. DN and MK are also very thankful to the Department of Physics and Astrophysics, University of Delhi. MK and DN also acknowledge facilities provided by the IUCAA Centre for Astronomy Research and Development (ICARD), University of Delhi.
|
2309.13079 | MiChao-HuaFen 1.0: A Specialized Pre-trained Corpus Dataset for
Domain-specific Large Models | With the advancement of deep learning technologies, general-purpose large
models such as GPT-4 have demonstrated exceptional capabilities across various
domains. Nevertheless, there remains a demand for high-quality, domain-specific
outputs in areas like healthcare, law, and finance. This paper first evaluates
the existing large models for specialized domains and discusses their
limitations. To cater to the specific needs of certain domains, we introduce
the ``MiChao-HuaFen 1.0'' pre-trained corpus dataset, tailored for the news and
governmental sectors. The dataset, sourced from publicly available internet
data from 2022, underwent multiple rounds of cleansing and processing to ensure
high quality and reliable origins, with provisions for consistent and stable
updates. This dataset not only supports the pre-training of large models for
Chinese vertical domains but also aids in propelling deep learning research and
applications in related fields. | Yidong Liu, FuKai Shang, Fang Wang, Rui Xu, Jun Wang, Wei Li, Yao Li, Conghui He | 2023-09-21T09:02:28Z | http://arxiv.org/abs/2309.13079v2 | # MiChao-HuaFen 1.0: A Specialized Pre-trained Corpus Dataset for Domain-specific Large Models
###### Abstract
With the advancement of deep learning technologies, general-purpose large models such as GPT-4 have demonstrated exceptional capabilities across various domains. Nevertheless, there remains a demand for high-quality, domain-specific outputs in areas like healthcare, law, and finance. This paper first evaluates the existing large models for specialized domains and discusses their limitations. To cater to the specific needs of certain domains, we introduce the "MiChao-HuaFen 1.0" pre-trained corpus dataset, tailored for the news and governmental sectors. The dataset, sourced from publicly available internet data from 2022, underwent multiple rounds of cleansing and processing to ensure high quality and reliable origins, with provisions for consistent and stable updates. This dataset not only supports the pre-training of large models for Chinese vertical domains but also aids in propelling deep learning research and applications in related fields.
## 1 Introduction
In the realm of general-purpose large models, models like GPT-4[7] have showcased formidable comprehensive capabilities, ranging from general knowledge Q&A and content creation to coding and reasoning, often matching or even surpassing human abilities. However, these general models still exhibit deficiencies in domain-specific knowledge, especially in areas like healthcare, law, and finance. This necessitates large models tailored for specific domains to achieve outputs that align with domain expertise, especially in the Chinese context. Currently, there are several studies on large models for specific domains, for instance, in healthcare: DoctorGLM[9], Huatuo-Llama-Med-Chinese[2]; in law: LaWGPT[6], ChatLaw[1], Lawyer LLaMA[5]; and in finance: FinGPT[10], Cornucopia-LLaMA-Fin-Chinese[11]. Most of these models are fine-tuned from open-source base models like ChatGLM and LLaMA. Studies have shown[8] that without incorporating relevant corpora during the pre-training phase and relying solely on fine-tuning, optimal model performance cannot be achieved. Hence, this research introduces the "MiChao-HuaFen 1.0" pre-trained corpus dataset for news and governmental
domain models, aiming to better support the corpus requirements during the pre-training phase of Chinese vertical domain models. While there are existing open-source pre-trained corpus datasets, such as "WanJuan 1.0"[3] by OpenDataLab[4], "MiChao-HuaFen" focuses on collecting data from news and governmental sources, curated from publicly accessible websites' historical data from 2022, ensuring reliable origins, high data quality, and consistent updates.
## 2 Dataset Statistics
The "MiChao-HuaFen 1.0" corpus is sourced from publicly accessible Chinese internet data, primarily from news and governmental domains. Through keyword filtering, image extraction, rule-based filtering, and format conversion, a high-quality text model corpus has been established. The final cleaned data consists of over 70 million entries, including over 1 million image links. Sample data is as follows:
* id: Unique document ID (String type).
* img_list: List of image URLs within the document (Array type).
* title: Document title (String type), in plain text or Markdown format.
* post_date: Document publication date (String type).
* content: Document content (String type), in plain text or Markdown format.
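To make the schema concrete, a single record could be represented as follows; every value shown is a fabricated placeholder used only to illustrate the field structure and does not come from the actual dataset.

```python
# A schematic record following the field list above; every value is a
# made-up placeholder used only to illustrate the structure.
sample_record = {
    "id": "doc-0000001",                                   # unique document ID (String)
    "img_list": ["https://example.com/images/1.jpg"],      # image URLs in the document (Array)
    "title": "Example news headline",                      # plain text or Markdown (String)
    "post_date": "2022-06-01",                             # publication date (String)
    "content": "Example body text in Markdown format...",  # plain text or Markdown (String)
}
```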
## 3 Methods
The data processing methods for "MiChao-HuaFen 1.0" largely adhere to existing pre-trained corpus cleaning methodologies, primarily following these principles:
* Source compliance: Initial data source screening ensures licensing and compliance, with language filters to guarantee Chinese content.
* Corpus diversity: Aim to select diverse types and sources of data.
Figure 1: Data Sample.
* Sensitive information removal: Keyword filtering and model classification are employed to filter out sensitive data sources and remove PII.
* Ensuring corpus quality: Multiple data processing techniques and rounds of review ensure corpus purity and quality, removing unsuitable training data based on our model training experience.
Based on these principles, the corpus underwent the following processing:
1. Keyword Filtering: As a primary step in web content processing, keyword filtering is crucial for ensuring content safety and accuracy. By building a keyword library, we can swiftly identify and remove content with sensitive or inappropriate terms, enhancing data quality and ensuring user safety.
2. Image Extraction: Images, as information carriers, often convey more than plain text. Thus, we extracted image links from the corpus. Using xpath technology, we can efficiently extract image links from complex web structures. All extracted image links are stored in an array field for further processing and application.
3. Rule-based Filtering: This key step refines web content to ensure corpus conciseness and relevance. We employed numerous rule strategies, such as using xpath to remove all HTML tags like <script> and <style>, and filtering out content shorter than 200 characters to ensure depth and information richness.
4. Quality Inspection: To further ensure data quality, we combined manual and automated model quality checks. Some data undergo manual sampling, while models scan the entire dataset to ensure accuracy and completeness.
5. Rule Refinement: During actual processing, there are always exceptions or special cases. For data that doesn't meet requirements, we further refine processing rules, adding new rules to our rule library. This way, the processing system continually learns and evolves, adapting to more scenarios and needs.
6. Formatting: The final step is to format the processed data, ensuring it adheres to standard Markdown format. We also retain previously extracted image links, providing content with both text and rich image information.
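A minimal sketch of how these processing steps could be wired together is shown below. It only mirrors the steps described above (keyword filtering, XPath-based image-link extraction and tag removal, a length rule, and formatting into the published schema); the keyword list, the helper name, and the application of the 200-character rule to the extracted text are illustrative assumptions rather than the authors' actual implementation.

```python
from lxml import html

BLOCKED_KEYWORDS = {"example_sensitive_term"}   # placeholder keyword library

def clean_page(doc_id, raw_html, post_date):
    tree = html.fromstring(raw_html)
    # Rule-based filtering: drop <script>/<style> nodes via XPath
    for node in tree.xpath("//script | //style"):
        node.getparent().remove(node)
    # Image extraction: collect image links into an array field
    img_list = [str(src) for src in tree.xpath("//img/@src")]
    title = tree.xpath("string(//title)").strip()
    content = tree.text_content().strip()
    # Keyword filtering: reject documents containing blocked terms
    if any(kw in content for kw in BLOCKED_KEYWORDS):
        return None
    # Length rule: discard content shorter than 200 characters
    if len(content) < 200:
        return None
    # Formatting: emit a record that matches the published schema
    return {"id": doc_id, "img_list": img_list, "title": title,
            "post_date": post_date, "content": content}
```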
## 4 Conclusion
As large models become more prevalent, providing them with more specialized and targeted pre-training data has become a pivotal research direction. The release of the "MiChao-HuaFen 1.0" pre-trained corpus dataset aims to meet this demand, especially in the news and governmental verticals. The primary audience for this dataset includes, but is not limited to:
* AI researchers and scholars: For those researching Chinese domain model pre-training, this dataset offers more professional and high-quality pre-training corpora.
* Enterprises and institutions: Especially news agencies and government departments can utilize this dataset for their model pre-training, achieving more precise and compliant model outputs.
Figure 2: Data processing pipeline.
The "MiChao-HuaFen 1.0" pre-trained corpus dataset can be accessed and downloaded at [https://opendatalab.org.cn/OpenDataLab/WanJuan1_dot_0](https://opendatalab.org.cn/OpenDataLab/WanJuan1_dot_0).
To ensure data compliance and privacy, we rigorously screened and reviewed all content before release, ensuring no sensitive information is included. However, it's crucial to note that any organization or individual using the dataset must adhere to the respective usage agreement and cite our related publications when conducting research or applications.
|
2309.14299 | Intraday variations of polarization vector in blazars: a key to the
optical jet structure? | This report presents the results of optical polarimetric observations carried
out with 6-m and 1-m telescopes at SAO RAS. The study of the blazar S5 0716+714
radiation showed the presence of a period of the variability of brightness and
polarization vector variations on scales of $\sim$1.5 hours, constant on a long
time scale; multi-colour monitoring of BL Lac polarization before, during and
after the flare demonstrates the difference in the patterns of polarization
vector variability depending on the wavelength. Several geometrical models and
physical descriptions are discussed. | Elena Shablovinskaya, Eugene Malygin, Dmitry Oparin | 2023-09-25T17:14:04Z | http://arxiv.org/abs/2309.14299v1 | [
###### Abstract
This report presents the results of optical polarimetric observations carried out with 6-m and 1-m telescopes at SAO RAS. The study of the blazar S5 0716+714 radiation showed the presence of a period of the variability of brightness and polarization vector variations on scales of \(\sim\)1.5 hours, constant on a long time scale; multi-colour monitoring of BL Lac polarization before, during and after the flare demonstrates the difference in the patterns of polarization vector variability depending on the wavelength. Several geometrical models and physical descriptions are discussed.
BL Lacertae objects: BL Lac & S5 0716+714; polarization; methods: observational

IAU Symposium No. 375, 2023

# Intraday variations of polarization vector in blazars: a key to the optical jet structure?

Elena Shablovinskaya, Eugene Malygin, Dmitry Oparin
## 1 Introduction
For several decades, the optical variability of blazars, a special type of active galactic nuclei (AGNs) with the jet oriented towards the observer, has been actively investigated. The most popular approach is the accumulation of long-term photometric data series (e.g. Blinov et al. 2021). However, the variability of blazars is violent and stochastic; and just as it is impossible to unambiguously reconstruct the three-dimensional flow of air masses in the Earth's atmosphere from a graph of wind speed versus time, it is difficult to infer from the blazar light curves alone what physical and dynamical processes drive the plasma.
Fortunately, blazars are highly polarized in all spectral ranges. Their synchrotron polarization is related both to the dynamics of the emitting plasma in the unresolved jet region at scales \(<\)0.01 pc from the nucleus and to the magnetic field in which the plasma moves. This motivates the study and mapping of the rotation of the polarization vector, which is expected to be fast due to the relativistic velocities and the small linear size of the jet.
Intraday variations (IDV) of the polarization vector direction are observed in the optical and radio bands. In the case of radio observations, IDV clearly revealed the helical structure of the jet (e.g. Li et al. 2018). Patterns of optical polarization are not clear enough to unambiguously reveal the same structure; however, long-term intraday polarization monitoring is, in perspective, a good tool for testing models of the inner optical jet. Moreover, IDV remains little studied to date, which leaves a number of open questions, e.g., how IDV depends on the activity state and which physical processes dominate. To address such issues, we have been conducting intraday polarization monitoring of a sample of blazars for several years. To date, we have obtained the most interesting results for two of them: S5 0716+714 and BL Lac. We briefly describe these results below.
## 2 Brief comments on observational technique
The observations were conducted at the 6-m BTA with SCORPIO-2 (Afanasiev & Moiseev 2011) and 1-m Zeiss-1000 of SAO RAS with StoP (Afanasiev et al. 2021) and MAGIC
(Afanasiev et al. 2022) devices. Regardless of the device, for polarimetric observations we use a double Wollaston prism (Oliva 1997; Geyer et al. 1993) to conduct observations in the so-called one-shot polarimetry mode, unaffected by rapid atmospheric flickering. Thus, the Stokes parameters \(I\), \(Q\), \(U\) are measured independently for each frame. Combining this with differential measurements relative to local standard stars in the field, we achieve a high accuracy of polarimetry (typically 0.1% for AGNs). A detailed description of the prism parameters and of the observational and reduction techniques can be found in Afanasiev & Amirkhanyan (2012); Shablovinskaya & Afanasiev (2019); Shablovinskaya, Malygin, & Oparin (2023).
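For reference, a double Wollaston prism typically splits each exposure into four beams measured at polarization angles of 0°, 90°, 45° and 135°, from which the normalized Stokes parameters and the polarization degree and angle follow directly. The sketch below shows this standard reduction step in simplified form; it assumes this beam configuration and omits the differential correction with local field stars described above.

```python
import numpy as np

def stokes_from_wollaston(I0, I90, I45, I135):
    """Normalized Stokes q, u from the four beams of a double Wollaston prism."""
    q = (I0 - I90) / (I0 + I90)
    u = (I45 - I135) / (I45 + I135)
    return q, u

def polarization(q, u):
    """Polarization degree (fraction) and position angle (degrees)."""
    P = np.hypot(q, u)
    PA = 0.5 * np.degrees(np.arctan2(u, q))
    return P, PA

# Illustrative beam intensities (arbitrary counts)
q, u = stokes_from_wollaston(1020.0, 980.0, 1005.0, 995.0)
print(polarization(q, u))
```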
## 3 S5 0716+714 and geometrical model
S5 0716+714 is one of the brightest and most variable blazars of the northern sky, attracting special attention due to the lack of unambiguous estimates of its redshift. For the first time, we conducted its polarimetric monitoring with a 1 min cadence during a whole night on the 6-m telescope, which allowed us to detect changes of the polarization vector on small time scales: switches occurred with a period of \(\sim\)1.5 hours, and the same period was demonstrated by the total flux variations (more details in Shablovinskaya & Afanasiev 2019). This corresponds to a cross-section size of the optical jet of the order of 10 a.u. Two years later, we repeated similar observations of S5 0716+714 on the 1-m telescope (initially for testing a new device). The analysis of the latest data confirmed the result obtained earlier at the BTA and showed that the size of the jet emitting in the optical range is stable1 (Afanasiev et al. 2021). Variations of the polarization vector on the \(QU\)-plane are given in Fig. 1.
Footnote 1: or, at least, typical if we were able to detect it in observations spaced in time by several years.
To explain the rotation of the polarization vector based on the works by Steffen et al. (1995); Nalewajko (2009), we have constructed a simple geometric model of plasma motion in a helical magnetic field in a conical jet. It is shown that the behaviour of the polarization vector on the \(QU\)-plane can be described by this model taking into account the precession of the magnetic field with a period of \(\sim\)15 days. Moreover, following the calculations of Butuzova (2021), observed magnitude variations are predicted due to the
Figure 1: Overnight variations of the normalized Stokes parameters \(Q\) and \(U\) projected onto the \(QU\)-plane in S5 0716+714: 2018 (left, Shablovinskaya & Afanasiev 2019) and 2020 (right, Afanasiev et al. 2021) observation runs.
Doppler factor changes. However, such a simple model is not entirely satisfactory because it resembles Ptolemy's epicycles: the more parameters are introduced, the more complex (and, in general, arbitrary) patterns of polarization variations can be reproduced. This prompted us to expand our observations in order to test the physical processes in the plasma.
## 4 BL Lac and physical insights
For S5 0716+714, the \(QU\)-plane revealed complex and smooth polarization IDV trajectories. But what would the picture look like in different colours? Generally speaking, due to synchrotron losses, the patterns of polarization variability should be the same, but slightly shifted in time, with the shift proportional to the magnetic field strength. In the case of a contribution of external polarization mechanisms, the trajectories will be stably shifted relative to each other; if the physical processes in the plasma are radically different in different optical bands, then the behaviour of the polarization vector in different colours will not correlate.
To test this, we carried out observations in two optical filters (conditionally "red" and conditionally "blue") of the blazar BL Lac in 2020-2022, when the object showed extraordinary variability. Due to the activity of the blazar, other authors also observed its multi-wavelength polarization during the same period, but our epochs did not coincide, although were often close (see Fig. 2, left).
In 4 of the 9 epochs, when the blazar was in a relatively quiet state, we observed variations of brightness and rotations of the polarization vector without a pronounced period. The polarization chromatism was moderate, with the dominance of the "blue" component. However, during the two extreme states of the blazar, the picture was different: during the flare, the polarization vector showed sharper rotations, but the polarization degree was weaker, \(\sim\)1-8%, and dominantly "red". During the deep minimum of the blazar, on the contrary, the variability in integral and polarized light was weak, but the maximum "blue" chromatism with a degree of polarization up to \(\sim\)30% was observed [see Fig. 2, on the right, and (Shablovinskaya, Malygin, & Oparin 2023) for details].
The obtained observational data allow us to draw some conclusions (i.e. exclude some models) about the physical processes inside the optical jet. First, due to the polarization
Figure 2: Left: BL Lac light curves according to AAVSO data. Our monitoring data is marked with circles. The green vertical bars indicate optical observations from (Imazawa et al., 2023), and orange dashed lines are for _IXPE_ epochs from (Middei et al., 2023). Right: \(QU\)-diagram for the polarization difference between the ”redder” and ”bluer” bands (details in Shablovinskaya, Malygin, & Oparin 2023)
IDV and its chromatism on scales of an hour, assumptions about an external origin of the wavelength dependence of the polarization are excluded. In particular, any influence of the accretion disk is excluded due to the high polarization degree in the minimum state. To explain the polarization chromatism, it is possible to assume processes in a synchrotron-emitting plasma, but then the electron energy distribution must be broken and must change on short timescales. Qualitatively, the polarization behaviour can be described by the model of a turbulent plasma with a shock (Marscher & Jorstad, 2021) or by magnetic reconnection (Zhang et al., 2022). However, in order to determine which acceleration process takes place, new numerical models of both processes are required.
## 5 Further perspectives
From the observations of S5 0716+714 and BL Lac, it can be seen that polarization IDV can not only resolve the plasma motion in the optical jet but may also, in the future, serve as a critical test for physical models of plasma acceleration and emission. However, this still requires more extensive statistics, including different phases of activity of blazars of different types, on longer time scales and in a wider spectral range. On the other hand, the observations demonstrate the need to combine models of synchrotron radiation, the evolution of turbulent cells, the magnetic field, etc.
Observations with the SAO RAS telescopes are supported by the Ministry of Science and Higher Education of the Russian Federation. The renovation of telescope equipment is currently provided within the national project "Science and Universities". We obtained part of the observed data on the unique scientific facility "Big Telescope Alt-azimuthal" of SAO RAS as well as made data processing with the financial support of grant No075-15-2022-262 (13.MNPMU.21.0003) of the Ministry of Science and Higher Education of the Russian Federation.
|
2305.19468 | Efficient Implementation of a Multi-Layer Gradient-Free Online-Trainable
Spiking Neural Network on FPGA | This paper presents an efficient hardware implementation of the recently
proposed Optimized Deep Event-driven Spiking Neural Network Architecture
(ODESA). ODESA is the first network to have end-to-end multi-layer online local
supervised training without using gradients and has the combined adaptation of
weights and thresholds in an efficient hierarchical structure. This research
shows that the network architecture and the online training of weights and
thresholds can be implemented efficiently on a large scale in hardware. The
implementation consists of a multi-layer Spiking Neural Network (SNN) and
individual training modules for each layer that enable online self-learning
without using back-propagation. By using simple local adaptive selection
thresholds, a Winner-Takes-All (WTA) constraint on each layer, and a modified
weight update rule that is more amenable to hardware, the trainer module
allocates neuronal resources optimally at each layer without having to pass
high-precision error measurements across layers. All elements in the system,
including the training module, interact using event-based binary spikes. The
hardware-optimized implementation is shown to preserve the performance of the
original algorithm across multiple spatial-temporal classification problems
with significantly reduced hardware requirements. | Ali Mehrabi, Yeshwanth Bethi, André van Schaik, Andrew Wabnitz, Saeed Afshar | 2023-05-31T00:34:15Z | http://arxiv.org/abs/2305.19468v1 | Efficient Implementation of a Multi-Layer Gradient-Free Online-Trainable Spiking Neural Network on FPGA
###### Abstract
This paper presents an efficient hardware implementation of the recently proposed Optimized Deep Event-driven Spiking Neural Network Architecture (ODESA). ODESA is the first network to have end-to-end multi-layer online local supervised training without using gradients and has the combined adaptation of weights and thresholds in an efficient hierarchical structure. This research shows that the network architecture and the online training of weights and thresholds can be implemented efficiently on a large scale in hardware. The implementation consists of a multi-layer Spiking Neural Network (SNN) and individual training modules for each layer that enable online self-learning without using back-propagation. By using simple local adaptive selection thresholds, a Winner-Takes-All (WTA) constraint on each layer, and a modified weight update rule that is more amenable to hardware, the trainer module allocates neuronal resources optimally at each layer without having to pass high-precision error measurements across layers. All elements in the system, including the training module, interact using event-based binary spikes. The hardware-optimized implementation is shown to preserve the performance of the original algorithm across multiple spatial-temporal classification problems with significantly reduced hardware requirements.
Spiking Neural Networks, Supervised Learning, Neuromorphic Hardware.
## 1 Introduction
Artificial Neural Networks (ANNs) and multi-layer perceptrons were developed as highly simplified models of biological neural computation through the use of distributed interconnected computing nodes, or neurons, which operate as a network, in contrast to the sequential architecture of conventional modern processors [1, 2]. Deep ANNs have been developed, widely used, and optimized in the past two decades, resulting in significant advances in many scientific fields. As universal function approximators, ANNs can be applied to complex problems such as pattern recognition, classification, time series analysis, and speech recognition using the backpropagation algorithm [3] for training.
During the same period, artificial Spiking Neural Networks (SNNs) have been investigated and explored extensively; they are closer models of biological neural networks because they incorporate the spiking behavior of neurons observed in larger biological neural networks [4]. The investigation of SNNs is often motivated by the idea that the spiking behavior of biological nervous systems is functionally essential and provides computational and efficiency benefits [5, 6, 7, 8].
In contrast to ANNs, neurons in SNNs use precisely timed binary-valued pulse streams or spikes to transfer information. SNNs can perform sparse computations due to the inherent sparsity in their data. The ability to operate in an event-driven fashion, rather than the traditional synchronous clock-driven computational approach in ANNs, makes SNNs suitable for processing continuous-time spatio-temporal data. However, training SNNs is still an open research question, and a universal training algorithm akin to error backpropagation for ANNs is yet to be found.
The spiking outputs generated by spiking neurons can be modeled as a train of Dirac's delta functions which do not have a derivative. The hard thresholding operation that is one of the key elements of function in spiking neuron models is also not differentiable. This non-differentiability of computations poses a fundamental challenge in assigning credit to earlier nodes in a network of spiking neurons to optimize synaptic weights. SpikeProp [9], Tempotron [10], Chronotron [11], ReSuMe [12], and DL-ReSuMe [13] are some early methods introduced to apply gradient descent to train single-layered SNN models using various loss functions. More recent works focused on approximating error backpropagation to SNN architectures, like using surrogate gradients for the different non-differentiable computations in an SNN [14, 15, 16, 17, 18]. All the existing approximations of error backpropagation in SNNs batch data to accumulate gradients. They also require a symmetric backward data pathway to transfer continuous-valued gradients from the output layer through to the input layer to
update neuronal weights in hidden layers. Some even rely on non-causal operations like Back Propagation Through Time (BPTT) to update the synaptic weights. However, such non-local and non-causal operations in learning are not biologically plausible, and no evidence of symmetric backward pathways in biological nervous systems is likely to be found. Despite the lack of bio-plausibility, error backpropagation methods have become popular tools to train SNNs for specific tasks. The error backpropagation methods are often computationally expensive, requiring energy-intensive GPUs to train them offline.
Feedback alignment [19] is one of the few alternatives to error backpropagation for SNNs, and it also requires passing continuous valued error values to each neuron. Local learning rules that do not require access to the weights of other neurons and communication of continuous-valued error gradients have been desirable for training SNNs. Variations of Spike Time-Dependent Plasticity (STDP) rules were applied to perform unsupervised feature extraction to classify spatio-temporal patterns [20, 21, 22, 23, 24]. Mozafari et al. [25] used reward-modulated STPD to perform object recognition. Paredes-Valles et al. [26] used STDP rules to perform the optical-flow estimation. Local learning rules close to STDP, like Supervised Hebbian Learning [27] and ReSuMe [28], were also developed to perform supervised learning. However, multilayer versions [29, 30] rely on backpropagating continuous-valued feedback across hidden layers.
In addition to training concerns, von Neumann computer architectures are not well suited for SNN implementations due to the massive parallelism inherent in an SNN network, where a large number of neurons must be processed simultaneously. While graphics processing units (GPUs) can implement parallelism to some extent, the kernel-launch programming paradigm makes them unsuitable for these applications. On the other hand, Field Programmable Gate Arrays (FPGAs) provide flexibility in designing parallel processing and re-configurable hardware architectures.
In many applications, SNNs can provide significant efficiency in power consumption due to the sparsity of inter-neuronal communication using binary-valued spikes. Significant research has been done on implementing SNNs on FPGA and Application Specific Integrated Circuits (ASIC). Munoz et al. [31] implemented an SNN using the Spike Response Model (SRM) and temporal coding on a Xilinx SPARTAN 3 FPGA to detect simple patterns. Wang et al. [32] introduced a re-configurable, polychronous SNN with Spike Timing Dependent Delay Plasticity to fine-tune and add dynamics to the network. Time multiplexing was used to fit 4096 neurons and up to 1.15 million programmable delay axons on a VIRTEX 6 FPGA. Currently, most hardware SNN systems involve a preconfigured network that is implemented on an FPGA device to accelerate a specific task, leveraging the parallel processing capabilities of FPGAs. In other words, the parameters of the SNN, i.e., weights and thresholds of the neurons, are calculated using a simulator and are fixed in the hardware implementation.
Bethi et al. [33] introduced an Optimized Deep Event-driven SNN Architecture (ODESA) that can be trained end-to-end using STDP-like rules which do not require continuous-valued gradients to be back-propagated. The ODESA training algorithm solves the credit assignment problem in SNNs by using the activity of the next layer in a network as a layer's supervisory signal. The synaptic weight adjustment in each layer only depends on the layer's trace and not on the weights of the other layers in the network. The feedback between the layers is causal and performed via binary event signals. The network does not require a symmetric backward pathway to perform training. This paper presents an efficient hardware implementation of a new SNN architecture utilizing the ODESA algorithm [33]. Each layer has its own training hardware module with minimal communication links with other layers. ODESA is an event-driven algorithm and has a very sparse activity due to the hard Winner-Takes-All (WTA) constraints on the layers. All the communication between the layers and the training modules is event-based and binary-valued. The ODESA architecture and its training algorithm provide an efficient, low-power, low-resource-consuming hardware implementation of SNNs that can be trained online and on-chip.
The remainder of the paper is organized as follows: Section 2 reviews the background of Optimized Deep Event-driven Spiking Neural Network Architecture (ODESA). Section 3 provides a detailed presentation of our heuristic SNN hardware implementation combined with its training hardware using the hardware-optimized ODESA algorithm. We will present two implementations of ODESA hardware experiments and their results. Finally, Section 6 presents the conclusion and directions for future works.
## 2 Background
As the adoption of neuromorphic vision sensors increases, various dense tensor representations for sparse asynchronous event data have been investigated for learning spatio-temporal features in the data [34, 35, 36]. A time surface, a term introduced by Lagorce et al. in [37], is the trace of recent spiking activity at a given synapse at any time \(t\). The event-based time surface representations have been used in extracting features in tasks like space object detection and tracking [38, 39], neuromorphic object recognition on UAVs [40], and processing data from SPAD sensors [41]. Afshar et al. [42] introduced an algorithm to extract features from event data using neuronal layers in an unsupervised manner called Feature Extraction using Adaptive Selection Thresholds (FEAST). FEAST is a highly abstracted and computationally optimized model of the SKAN method [43, 44]. The FEAST method has been used and extended for a range of applications such as event-based
object tracking [45], activity-driven adaptation in SNNs [46], and feature extraction to solve an isolated spoken digit recognition task [47, 48].
In addition to weights representing the features learned by each FEAST neuron, each neuron has a threshold parameter that represents the size of the receptive field around the features represented by the weights. For every input event, the dot product of the time surface context and the synaptic weight vector of a neuron is calculated. The dot products of all neurons in a layer are compared to their respective thresholds. Only the neurons with dot products crossing their respective thresholds are eligible for selection. The neuron with the largest dot product in the eligible neurons is regarded as the winner for the given input event. If there is no winner, or in other terms, no neuron can cross its threshold, then the thresholds of all the neurons are reduced by a constant value. However, if a neuron becomes the winner, the weights of the neuron are updated with the current event context using an exponential moving average. The threshold of the winner neuron is also increased by a fixed value.
FEAST is an online learning algorithm that clusters the incoming event contexts of all the input events into clusters equal to the number of neurons used in the FEAST layer. The neurons' thresholds represent the clusters' boundaries (see Section 2.2 in [42]). Since there is no information about the significance of an individual event, FEAST treats each receiving event with equal priority, which results in learning features representing the most commonly observed spatio-temporal patterns in the input data. However, this may not be ideal for tasks that depend on more infrequent task-specific features.
The Optimized Deep Event-driven Spiking neural network Architecture (ODESA) [33] is a supervised training method that locally trains hierarchies of well-balanced Excitatory-Inhibitory (EI) networks on event-based data. ODESA is an extension and generalization of FEAST. The output classification layer in ODESA has \(m\cdot N_{c}\) neurons (\(m\in\mathbb{N}\)) for a classification task with \(N_{c}\) classes. The output layer is divided into \(N_{c}\) groups (with \(m\) neurons each), each responsible for one of the \(N_{c}\) classes. Each layer has a hard Winner-Takes-All (WTA) condition, which ensures only one neuron can fire in response to
Fig. 1: Multi-Layer Supervision in ODESA using Spike-Timing-Dependent Threshold Adaptation. The shaded vertical lines represent the binary Global Attention Signal generated for each output label spike. The dotted vertical lines represent the binary Local Attention Signals sent to each layer from its next layer. The up and down arrows represent the reward and punishment of the individual neurons. Case 1: The predicted output spike matches the label spike, and the corresponding output neuron is rewarded. Case 2: The corresponding output neuron for the correct class is punished as it failed to spike in the presence of input from Layer 2. Case 3: All neurons in Layer 2 are punished as they failed to spike for an input spike from Layer 1 in the presence of the Global Attention Signal. Case 4: The active neuron in Layer 2 is rewarded in the presence of the Global Attention Signal. Case 5: The neurons with trace above the resent threshold are rewarded and the other neurons are punished in the presence of Local Attention Signal from Layer 2. Figure reproduced from [33].
any input spike to a layer. The supervisory label spikes drive the threshold adaptation in an ODESA output layer for a given input spike stream. Since ODESA is event-driven, it is assumed that an input spike exists for every label spike. The labeled input spikes are treated with additional attention. For the labeled input spike, if there is no spike from the correct class neuron group, the thresholds for all the neurons in the class group are lowered. If there is a spike from any of the neurons in the correct class group, the winner neuron's weights are updated with the input spike's event context, and its threshold is also updated based on the dot product. Alternatively, in the absence of an output spike from the correct class group, thresholds of all neurons in the group are reduced. This weight update and threshold increase in a neuron can be considered "rewarding a neuron" for its correct classification. Similarly, a decrease in the threshold of a neuron to make it more receptive can be considered as "punishing a neuron" for not being active.
The ODESA architecture can use multiple hidden layers with different time constants to learn hierarchical spatio-temporal features simultaneously at different timescales [33]. Each hidden layer goes through a similar threshold adaptation as the output layer based on the spiking activity of its next layer in the hierarchy. A binary attention signal is generated by each layer to its previous layer whenever a neuron in the layer is active. All the neurons which were recently active in the previous layer are rewarded and the rest of the neurons are punished. These binary signals called Local Attention Signals (LAS), help provide the necessary feedback required to train the hidden layers. This architecture is well suited for enabling online learning in hardware as the communication between layers is through binary attention signals only, and there is no need to calculate loss functions and pass continuous-valued gradients across the layers during training. A Global Attention Signal (GAS) is generated when a label is assigned to an input spike. The GAS is accessible by all layers. Each layer also has access to the LAS generated by its next layer in the hierarchy. There is no LAS for the output layer. The output layer compares the generated spikes with the labels to reward or punish activated neurons. Fig. 1 depicts the multi-layer supervision of ODESA architecture. The condensed ODESA algorithm is depicted in Fig.2 and Fig. 3 flowcharts.
When a LAS is active, the training algorithm determines the participation of a neuron in generating a spike in the next layer based on its eligibility trace. If the trace of a neuron is above a certain limit (generally set to \(10\%\) of its full scale), it is rewarded. The neurons with traces lower than the limit are punished.
## 3 ODESA hardware implementation
### _Primitive building blocks of the ODESA network_
In this Section, we introduce the primitive building blocks of the ODESA network. The primitive building blocks are reusable in different ODESA network architectures.
#### 3.1.1 Synchronizer
The Synchronizer is used to synchronize the asynchronous input events (spikes) with the system clock. The input spikes to the ODESA network are not necessarily synchronous with the system clock and can be missed. Fig.4 shows the design of a Synchronizer module. If an event happens at the input of the Synchronizer, the output will be asserted at the rising edge of the next clock. The Synchronizer will not respond to new events until it is reset by a logic through its '_rst_n' input signal. This will let the system have control over accepting or rejecting events.
#### 3.1.2 Leaky accumulator
The Leaky accumulator is a modified digital implementation of the Leaky Integrate and Fire (LIF) model of a neuron [49]. It is used to model the synaptic response to an incoming spike in the ODESA network. In our modified design, the leaky accumulator can model either linear or exponential decay. The linear decay accumulator consists of a down-counter with an adjustable bit-width. Here, if an event/spike is received, the counter is reloaded with the output value of the adder and starts to count down at the rising edges of the following clock cycles until it decays to zero. Equation 1 shows the value of the counter in the linear decay accumulator at time \(t\) given a spike at time \(t^{\prime}\). Thus, for a spike \(\delta(t-t^{\prime})\) that arrived at any time \(t^{\prime}\) between two consecutive clock cycles with time period \(T\) (i.e. \((k-1)T<t^{\prime}\leq kT\mid k\in\mathbb{N}\)), the counter value of the Leaky accumulator, \(a(t)\), at time \(t=nT\), \(n\in[k,C+k]\), can be expressed as:
\[a(t)=\left(C-\frac{t-kT}{T}\right)\left(u(t-kT)-u(t-(C+k)T)\right), \tag{1}\]
where \(C\) is the linear decaying constant that will be loaded to the counter when a spike happens and \(u(t)\) is the unit step function. The decay rate is controlled by either the value of the decaying constant \(C\) or the clock frequency. If a new event happens when the decaying counter is not zero, it will be reloaded by the sum of the current counter value and the constant \(C\). For a stimulus \(\delta(t-t_{1})+\delta(t-t_{2})\), where two spikes \(\delta(t-t_{1})\) and \(\delta(t-t_{2})\) occur close to each other at times \(t_{1}\), and \(t_{2}\) respectively, such that \(t_{1}<t_{2}\leq t_{1}+CT\), the counter value is calculated as:
\[a(t)=a(t_{1})+a(t_{2}). \tag{2}\]
Fig. 5 shows the block diagram of a linear decaying Leaky accumulator and a sample waveform. The Leaky accumulator activates a clear signal ('o_clr') three clock cycles after receiving a synchronized event. This signal is used to reset the Synchronizer and make it ready to receive new input events. The exponential decay is estimated by a divide by two (a shift right) at each clock cycle after loading constant \(C\) into the shift register. The output of an exponential decaying Leaky accumulator, \(a(t)\), will be:
\[a(t)=\frac{C}{2^{\frac{t}{T}-k}}\left(u(t-kT)-u(t-(\tau+k)T)\right). \tag{3}\]
For the exponential decay, \(C\) is set to \(2^{\tau}-1\), where \(\tau\) is the decay constant. In this work, we used linear decay accumulators only. Fig. 6 shows the architecture of an exponential decaying leaky accumulator.
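As an illustration of the timing behaviour described by Equations 1–3, the following minimal Python sketch models the accumulator at clock-cycle granularity. The class and method names are illustrative and are not taken from the RTL.

```python
class LeakyAccumulator:
    """Behavioural sketch of the leaky accumulator (Eqs. 1-3)."""

    def __init__(self, C=63, mode="linear"):
        self.C = C          # decay constant loaded on a spike (2**tau - 1 for "exp")
        self.mode = mode    # "linear" (down-counter) or "exp" (shift right per clock)
        self.value = 0      # counter / shift-register contents

    def tick(self, spike):
        """Advance one clock cycle; 'spike' is the synchronized input event."""
        if spike:
            # reload with the sum of the current value and C, so two close spikes
            # superpose their responses (Eq. 2)
            self.value = self.value + self.C
        elif self.value > 0:
            if self.mode == "linear":
                self.value -= 1      # linear decay: count down by one (Eq. 1)
            else:
                self.value >>= 1     # exponential decay: divide by two (Eq. 3)
        return self.value


# Example: two spikes arriving 10 clocks apart; the second reloads value + C.
acc = LeakyAccumulator(C=63, mode="linear")
trace = [acc.tick(t in (0, 10)) for t in range(80)]
```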
#### 3.1.3 Synapse
The Synapse module consists of a leaky accumulator and a weight multiplier. Capturing an asynchronous event forces the leaky accumulator to generate a decaying output amplified by the weight multiplier. The Synapse weight is stored in a register ('r_weight'), and its value is determined during the network training process. The 'r_weight' register resides in the Training hardware, which will be detailed in Section 4. The output of a Synapse \(i\), \(b_{i}(t)\), will be:
\[b_{i}(t)=w_{i}\cdot a(t), \tag{4}\]
where \(w_{i}\) is the value saved in the 'r_weight' register of the Synapse \(i\). The 'TRACE' register contains the time surface value (the output of the leaky accumulator) at every clock cycle. This value is used for training the previous network layer if it exists. Fig. 7 illustrates the architecture of a Synapse.
#### 3.1.4 Neuron
Each Neuron comprises several Synapses. The outputs of all Neuron Synapses are added together. The resulting value is equivalent to the dot product calculated in ODESA [33] and it is referred to as "membrane potential" throughout this paper as used in a LIF neuron model. The membrane potential is compared with the Threshold register value. The output of the Neuron is the membrane potential value if it exceeds the Threshold register value; otherwise, it is set to zero. The Threshold register is also located in the training hardware, and its value will be assigned during the training phase. The output of a Neuron with \(m\) Synapses can be written as:
\[d(t)=\begin{cases}\sum_{i=1}^{m}b_{i}(t),&\text{if }\sum_{i=1}^{m}b_{i}(t)\geq\text{Threshold},\\ 0,&\text{otherwise}.\end{cases} \tag{5}\]
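To make Equations 4–5 concrete, a behavioural Python sketch of a Synapse and a Neuron follows, reusing the `LeakyAccumulator` sketch above; the per-clock interface and names are illustrative assumptions, not the hardware interface.

```python
class Synapse:
    """Weighted leaky accumulator (Eq. 4): b_i(t) = w_i * a(t)."""

    def __init__(self, weight, C=63):
        self.weight = weight
        self.acc = LeakyAccumulator(C=C, mode="linear")
        self.trace = 0                       # models the TRACE register

    def tick(self, spike):
        self.trace = self.acc.tick(spike)    # time surface value at this clock
        return self.weight * self.trace


class Neuron:
    """Sum of Synapse outputs compared against the Threshold register (Eq. 5)."""

    def __init__(self, weights, threshold, C=63):
        self.synapses = [Synapse(w, C) for w in weights]
        self.threshold = threshold

    def tick(self, spikes):
        membrane = sum(s.tick(e) for s, e in zip(self.synapses, spikes))
        return membrane if membrane >= self.threshold else 0
```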
Fig. 4: Synchronizer module and its timing diagram. The ‘i_spike’ signal will be synchronized with the rising edge of ‘i_clk’. If ‘i_rst_n’ is not activated new spike will be ignored.
Fig. 3: ODESA training algorithm for an output layer
Fig. 2: ODESA training algorithm for a hidden layer.
In an ODESA layer, the comparator and prioritizing module compare the output values of the neurons. The neuron with the highest membrane potential and the lowest index in the layer is declared as the winner. Subsequently, the comparator module generates a spike corresponding to the index of the winning neuron. The membrane potential of the winner Neuron is latched in its 'LAST_VALUE' register. The 'LAST_VALUE' register is used during training. We will discuss the training process in detail in Section 4.
Fig. 8 depicts an 8-input neuron block diagram. The 'i_spike' input of the Neuron receives feedback from the output spike ('o_neuron_out'). The membrane potential is latched at the rising edge of the 'i_spike' input and can be accessed via the 'o_lv' output of the Neuron.
#### 3.1.5 Comparator and Spike generator
The Neurons' outputs are received by the Comparator module, which detects which Neuron has the highest membrane potential and prioritizes the neuron outputs based on their input index to the Comparator module. The lower the index, the higher the priority of the Neuron. The Comparator output for an \(n\)-neuron layer is the post-synaptic spike stream of all the Neurons in the layer, and it can be mathematically modeled as:
\[e_{i}(t)=\begin{cases}\delta(t),&\text{if }\begin{cases}&\text{IS\_EVENT}=1,\\ &\{d_{1}(t),\ldots,d_{i-1}(t)\}<d_{i}(t)\geq\{d_{i+1}(t),\ldots,d_{n}(t)\},\end{cases}\\ 0,&\text{otherwise},\end{cases} \tag{6}\]
where \(i\) is the Neuron index in an ODESA layer with \(n\) Neurons and the 'IS_EVENT' signal indicates whether any input event has occurred during the recent clock cycles. The Comparator output is one-hot encoded indicating the winner Neuron (the one with the highest membrane potential and the lowest index). Due to the
Fig. 5: Leaky Accumulator architecture with linear decay and the circuit timing diagram with two subsequent input spikes.
Fig. 8: 8-input ODESA Neuron
Fig. 6: Leaky Accumulator architecture with exponential decay and the circuit timing diagram with two subsequent input spikes.
Fig. 7: Synapse architecture
event-driven computation, the Comparator must have an output only if the Neuron becomes a winner due to an input event. This is critical to avoid generating unwanted or unrelated spikes at the output of the ODESA layer and to remove unintended spurs generated by the Comparator's combinatorial logic, which can cause intermediate spikes even when there is no input spike to the layer. The Spike generator is sequential logic that receives the Comparator's output and allows a spike to appear at the output only if an input event was recorded a few clock cycles before. The number of clock cycles that the Spike generator module can look back on is adjustable for each module. In our design, after detecting an input event, the spike generator waits up to four clock cycles to receive a signal from the Comparator module. Fig. 9 shows the block diagram and the function of the Comparator. Fig. 10 illustrates the Spike generator logic and a sample waveform.
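The winner-take-all selection of Equation 6 reduces to an argmax with index-based tie-breaking; a Python sketch follows, in which the event gating performed by the Spike generator is condensed into a single `is_event` flag (an illustrative simplification).

```python
def select_winner(potentials, is_event):
    """One-hot winner vector (Eq. 6): the highest membrane potential wins,
    ties broken in favour of the lowest neuron index; no output without an event."""
    n = len(potentials)
    spikes = [0] * n
    if not is_event or max(potentials) == 0:   # all zero: no neuron crossed threshold
        return spikes
    # ranking by (value, -index) makes the lower index win on equal potentials
    winner = max(range(n), key=lambda i: (potentials[i], -i))
    spikes[winner] = 1
    return spikes
```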
#### 3.1.6 ODESA SNN Layers
All ODESA layers, either an input, a hidden, or an output layer, have a homogeneous architecture. That is, a number of Neurons are connected to a Comparator module. Neurons can have different numbers of synapses. However, every Neuron has only one output. The number of layer outputs is equal to the number of Neurons in the layer. As discussed, only one of the layer outputs can be active at any given time. Neurons within a layer share the inputs to the layer. The outputs of a layer are fully connected to the inputs of the following layer, except for the output layer whose outputs indicate the classes in the classification problem the ODESA Network is designed to solve. Normally, different ODESA layers operate at different clock periods. The ratios of the hidden and output layers' clock period to the input layer's clock period are part of the network's configuration parameters.
We use a naming convention throughout this paper to reference the architecture of the ODESA network. The input layer is always called 'L1'. Then, we increment the layer number for the following layers up to the output layer, e.g., 'L2', 'L3', and so forth. Any ODESA network architecture can be described using the following naming convention: ODESA {number of input spike channels}_{number of neurons at level 1}_..._{number of neurons at level n}_{number of output classes}. Fig. 11 shows an example of the ODESA 8_2_4_4 architecture.
## 4 Training hardware
ODESA is a multi-layer supervised Spiking Neural Network architecture that can be trained to map an input spatio-temporal spike pattern to an output spatio-temporal spike pattern without requiring access to the weights and thresholds of other neurons or batching of the input data. The training algorithm is distinct for hidden layers (including the input layer) and the output layer. At each layer, the training is done through the guiding signals produced by the successive layer, the layer's output spikes, and the Label spikes. The original algorithm is detailed and implemented in software in [33]. In this work, we present a revised version of the algorithm enhanced for hardware implementation. If any layer fires a spike, an 'IS_WINNER' signal is generated for that layer's training logic. Each layer's training logic also receives a Local Attention Signal (LAS) and a Global Attention Signal (GAS). If there is a spike at the output of a layer, a LAS signal will be generated for its preceding
Fig. 10: Spike generator module and sample waveform
Fig. 9: 2-input Comparator and Spike generator and sample waveform
layer. The GAS signal, however, is generated when a label spike exists for the current input spike and propagates through all layers. The training set which includes the input spikes and their corresponding labels is stored in the RAM. During the training phase, the input spikes are read from the RAM and injected into the input layer of the ODESA SNN. Likewise, the training hardware reads labels from the RAM and compares them with the output spikes generated by the output layer. Fig. 12 illustrates an ODESA network with the network layers and training logic for each layer.
The training hardware for each layer receives the last value of the membrane potential and the trace of all Synapses in that layer. When a Neuron becomes a winner (the 'IS_WINNER' signal is asserted), a (post-synaptic) spike is generated at the output of the layer, and the value of each Synapse's Trace register (Fig. 7) is latched in a time surface (TS) register. The value of the membrane potential (the adder output in Fig. 8) is also registered in the Last Value (LV) register. The TS register is implemented in the Training module (not visible in Fig. 12 for simplicity) and its value represents the contribution of the Synapse to the generation of the winner's membrane potential. If the Neuron remains silent in the presence of an input event and a GAS signal, then the trace of the Synapse is latched to the 'NO_WINNER' register that is implemented in the Training module. A high value of the 'NO_WINNER' register indicates that the layer failed to spike for an input spike to the layer.
The update of weights and thresholds happens in the presence of the GAS signal through a "reward" or "punish" process. The "reward" and "punish" processes are the same for all neurons across the layers. For a Neuron \(j\) with threshold \(T_{j}\) and \(s\) Synapses, the reward process is defined according to Equation 7.
\[\begin{split} Reward:\begin{cases}w_{ij}&\gets w_{ij}+ \eta_{w}\cdot(TS_{ij}-w_{ij}),\forall i\in[1,s],\\ T_{j}&\gets T_{j}+\eta_{T}\cdot(LV_{j}-T_{j}),\end{cases}\end{split} \tag{7}\]
where \(w_{ij}\) is the synaptic weight, and \(TS_{ij}\) is the time surface register of Synapse \(i\) of Neuron \(j\). \(LV_{j}\) is the Last Value register of Neuron \(j\). The Neuron "punish" process is just lowering Neuron's threshold by the constant value \(\Delta_{T}\) as stated in Equation 8.
\[\begin{split} Punish:\ T_{j}\gets T_{j}-\Delta_{T},\end{split} \tag{8}\]
where \(\eta_{w}<1\) and \(\eta_{T}<1\) are the learning rates of the layer, and \(\Delta_{T}\geq 1\); these are the network hyper-parameters. For the sake of a low-cost hardware implementation, learning rate values are chosen as negative powers of two; therefore, the "reward" can be performed by simple shift and addition operations. Usually, \(\eta_{w}\) and \(\eta_{T}\) are set to the same value.
Since the Weight and Threshold registers contain unsigned integer values, special care has to be taken to ensure that the product terms of \(\eta_{w}\) and \(\eta_{T}\) in Equation 7 never become zero, which would leave the training process in a locked state. Additionally, when experiments require learning rates that are too small to perform weight updates via shift operations, the Weight and Threshold update steps are reduced to Equation 9. The \(\mathrm{sign}(\cdot)\) function is used to determine the direction of the weight (or threshold) change, which is then updated by a fixed step equal to \(\eta_{w}\) (or \(\eta_{T}\)). The \(\eta_{w}\) and \(\eta_{T}\) values are set to the lowest possible step changes (e.g., 1, 2, 3, ...), as used in Section 5.2.
\[\begin{split} Reward:\begin{cases}w_{ij}&\gets w_{ij}+ \eta_{w}\cdot\mathrm{sign}(TS_{ij}-w_{ij}),\forall i\in[1,s],\\ T_{j}&\gets T_{j}+\eta_{T}\cdot\mathrm{sign}(LV_{j}-T_{j}),\end{cases} \end{split} \tag{9}\]
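A compact Python sketch of the reward and punish rules of Equations 7–9; the plain functions and names are illustrative, and in hardware these operations reduce to shifts, additions, and comparisons.

```python
def sign(x):
    """Three-valued sign used for the fixed-step variant (Eq. 9)."""
    return (x > 0) - (x < 0)


def reward(weights, traces, threshold, last_value, eta_w, eta_T, use_sign=False):
    """Pull each weight toward its latched time surface and the threshold toward
    the latched membrane potential (Eq. 7); with use_sign=True the update becomes
    the fixed-step rule of Eq. 9."""
    if use_sign:
        new_w = [w + eta_w * sign(ts - w) for w, ts in zip(weights, traces)]
        new_T = threshold + eta_T * sign(last_value - threshold)
    else:
        new_w = [w + eta_w * (ts - w) for w, ts in zip(weights, traces)]
        new_T = threshold + eta_T * (last_value - threshold)
    return new_w, new_T


def punish(threshold, delta_T):
    """Lower the neuron's threshold by a constant (Eq. 8) so it fires more easily."""
    return threshold - delta_T
```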
Algorithm 1 and 2 show the hardware-friendly ODESA training algorithms for the Hidden and Output layers, respectively.
Since the 'IS_WINNER' and GAS signals have no overlap, we use a latched version of these signals in the
Fig. 11: Two layer ODESA implementation. Layer one (L1) is the input layer with two 8-input neurons. Layer 2 (L2) is the output layer with four 2-input neurons that classify the inputs into four classes.
hardware. Fig. 13 shows the training waveforms for a hidden layer. The GAS signal is asserted at the same time 'IS_EVENT' becomes active (events and labels are read simultaneously from the RAM). The 'IS_WINNER' spikes after \(\Delta t_{1}\) time passed from 'IS_EVENT'. The 'r_IS_WINNER', and 'r_GAS' signals are latched and verified after \(\Delta t_{pass}\) time on the rising edge of the ODESA level's clock. The LAS signal is asserted after \(\Delta t_{2}\) time from 'IS_WINNER'. The 'IS_WINNER' also indicates there exists an input event for the next layer. The values of \(\Delta t_{1}\), \(\Delta t_{2}\), and \(\Delta t_{pass}\) are configurable at the Neuron's architecture. Specifically, \(\Delta t_{i}\) represents the time required for a spike to appear at the output of ODESA layer \(i\) following any input event to the layer. On the other hand, \(\Delta t_{pass}\) represents the time that the training module waits to observe the winner and update the weights and thresholds. In our design, all of these parameters are set to 3 clock cycles of the corresponding layer's clock. For an output layer, however, the WINNER is compared with the LABEL in the event of a GAS signal. If the WINNER and LABEL match, then the winner Neuron is rewarded; otherwise, the winner Neuron's weights are suppressed by a negative weight update. The negative weight update is considered the reverse of the weight reward process, i.e.,
\[w_{ij}\gets w_{ij}+\eta_{w}\cdot w_{ij}-\eta_{w}\cdot TS_{ij}\ \ \forall i\in[1,s]. \tag{10}\]
Fig. 14 demonstrates the training signals for the output layer. The LABEL is latched at the rising edge of the GAS signal. It takes \(\Delta t_{pass}\) time for the WINNER to appear at the output layer, which is then compared with the LABEL at the rising edge of the next clock and performs weight and thresholds update.
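For the output layer, the step described above and the negative update of Equation 10 can be summarized as follows, assuming the `reward` helper from the previous sketch; leaving the threshold unchanged in the mismatch case is an assumption based on the weight-only description above.

```python
def output_layer_step(winner_idx, label_idx, weights, traces, threshold,
                      last_value, eta_w, eta_T):
    """Training step for the winner Neuron of the output layer: reward it when the
    WINNER matches the LABEL, otherwise push its weights away from the current time
    surface (the reverse of the reward direction, Eq. 10)."""
    if winner_idx == label_idx:
        return reward(weights, traces, threshold, last_value, eta_w, eta_T)
    new_w = [w + eta_w * w - eta_w * ts for w, ts in zip(weights, traces)]
    return new_w, threshold
```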
Fig. 12: ODESA network, Neuron layers, training hardware, and connections.
## 5 ODESA network Experiments
### _Experiment 1, Detection of four classes of spike patterns_
Our first experiment uses ODESA to detect four patterns consisting of 16 spikes split into two sub-patterns of 8 spikes each, which appear at a uniform time distance of \(\nu\). A label spike was attached to the last spike of each of the four patterns. The four input event patterns can be mathematically expressed in Equation 11. Fig. 15 visualizes the four spike patterns in time and the assigned class label for each pattern.
\[\forall i\in[1,8]: \tag{11}\] \[\begin{cases}Pattern\ 1:i\_event[i]=\delta(t-(i-1)\nu)+\delta(t-(8+i) \nu)\\ Pattern\ 2:i\_event[i]=\delta(t-(9-i)\nu)+\delta(t-(17-i)\nu)\\ Pattern\ 3:i\_event[i]=\delta(t-(i-1)\nu)+\delta(t-(17-i)\nu)\\ Pattern\ 4:i\_event[i]=\delta(t-(9-i)\nu)+\delta(t-(8+i)\nu)\end{cases}\]
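The four patterns of Equation 11 can be generated directly; the sketch below emits (channel, timestamp) pairs for an arbitrary spike spacing `nu` (a stand-in for \(\nu\)); the function name is illustrative.

```python
def make_pattern(pattern_id, nu=1.0):
    """Return the (channel, time) spike pairs of one of the four patterns (Eq. 11)."""
    events = []
    for i in range(1, 9):                      # 8 input channels
        if pattern_id == 1:
            times = [(i - 1) * nu, (8 + i) * nu]
        elif pattern_id == 2:
            times = [(9 - i) * nu, (17 - i) * nu]
        elif pattern_id == 3:
            times = [(i - 1) * nu, (17 - i) * nu]
        else:                                  # pattern 4
            times = [(9 - i) * nu, (8 + i) * nu]
        events += [(i, t) for t in times]
    return sorted(events, key=lambda e: e[1])  # chronological spike stream
```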
The ODESA network implemented for this application is configured with two fully connected layers. The input layer (L1) has two Neurons with eight inputs, and the output layer (L2) has four Neurons with two inputs. The network architecture is illustrated in Fig. 11. The network parameters used for detecting the four pattern classes are listed in Table I.
In this experiment, we use 6-bit linear decaying counters at each Neuron and a decaying constant \(C=63\). The clock frequency for L1 and L2 is set to 64 kHz and 32 kHz, respectively. Thus, the linear decay to zero will take one millisecond for Neurons at L1 and two milliseconds for Neurons at L2. The distance between two spikes is set to \(\nu=8\times\) (L1 clock period).
As shown in Fig. 16, for each pattern, two spikes are generated by L1. The position and time of the spike determine the input pattern injected into the ODESA network. The L1 output spikes are used as inputs to L2. As depicted in Fig. 17, L2 comprises four 2-input neurons that perform the classification task.
The ODESA 8_2_4_4 implementation was performed on an Intel Cyclone V (part no. 5CSEBA6U23I7) using the Quartus 18.0 Lite design tool. Table II reports the implementation results.
The implemented ODESA network achieved an accuracy of 100% after completing self-training. The accuracy did not change when the spike distance \(\nu\) of a trained network was randomly varied by \(\pm 10\%\).
### _Experiment 2, Iris dataset classification_
The Iris dataset [50] is one of the best-known databases in the pattern recognition literature, notable for not being linearly separable. Different spike-encoding schemes are used to convert the Iris dataset into spikes to test the local learning rules of SNNs [9][13][29]. The dataset contains three classes of 50 instances each, where each class refers to a type of iris flower. The four features
Fig. 14: ODESA Output Layer training signals.
\begin{table}
\begin{tabular}{l l l} \hline _Parameter_ & _L1_ & _L2_ \\ \hline \(\eta_{w}\) & \(2^{-3}\) & \(2^{-2}\) \\ \hline \(\eta_{T}\) & \(2^{-3}\) & \(2^{-2}\) \\ \hline \(\Delta_{T}\) & \(2^{6}-1\) & \(2^{6}-1\) \\ \hline _Weight register (bits)_ & 8 & 8 \\ \hline _Decaying counter (bits)_ & 6 & 6 \\ \hline _Clock frequency (MHz)_ & 0.064 & 0.032 \\ \hline _Input events time distance \(\nu\) (ms)_ & 31.25 & - \\ \hline \end{tabular}
\end{table} TABLE I: ODESA 8\(2\)4_4 parameters for experiment 1
Fig. 13: ODESA Hidden layer training signals.
are sepal length (d1), sepal width (d2), petal length (d3), and petal width (d4), all in centimeters within the range \([0.1,7.7]\). To convert the input feature values into spatio-temporal spike patterns, we used a latency coding that maps the value of each input dimension to the time of a spike generated from a corresponding input channel:
\[L\rightarrow\delta(t-L). \tag{12}\]
However, the length \(L\) for \(d_{1}\), \(d_{2}\), \(d_{3}\), and \(d_{4}\) has to be scaled to fit in a fixed-length frame. In our case, that is the time frame within the range \([0,30]\). The dataset conditioning we applied to the original Iris dataset follows the offset and compress formulae in Equation 13.
\begin{table}
\begin{tabular}{l l} \hline _Architecture_ & ODESA 8_2_4_4 \\ \hline _Used ALM_ & 1192 \\ \hline _Used registers_ & 976 \\ \hline _Used DSP units_ & 20 \\ \hline _L1 max. clock frequency (MHz)_ & 28.25 \\ \hline _Dynamic power consumption (mW)_ & 1 \\ \hline \end{tabular}
\end{table} TABLE II: ODESA 8_2_4_4 implementation results on Intel Cyclone V
Fig. 16: ODESA Layer L1 response to input events.
Fig. 17: ODESA Layer L2 response to input events.
Fig. 15: Experiment 1 input spike patterns.
\[\begin{cases}d_{1}=\lceil 3.8\big{(}\frac{(d_{1}-1)}{2}+4\big{)}\rceil,\\ d_{2}=\lceil 3.8\big{(}\frac{(d_{2}-2)}{3}+2.5\big{)}\rceil,\\ d_{3}=\lceil 3.8d_{3}\rceil,\\ d_{4}=\lceil 9(d_{4}+0.5)\rceil.\end{cases} \tag{13}\]
Each sample in the new dataset contains the four features scaled to timestamps in the range \([0,30]\). Using Equation 12, the lengths \(d_{1}\), \(d_{2}\), \(d_{3}\), and \(d_{4}\) are converted to timestamps. The dataset with timestamped events is shown in Fig. 18.
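A Python sketch of the encoding pipeline of Equations 12–13, using the scaling constants listed above; the channel numbering and function name are illustrative choices.

```python
from math import ceil

def iris_to_spikes(d1, d2, d3, d4):
    """Map the four Iris features (cm) to spike times in the frame [0, 30]
    using the offset-and-compress formulae of Eq. 13 and latency coding (Eq. 12)."""
    t1 = ceil(3.8 * ((d1 - 1) / 2 + 4))
    t2 = ceil(3.8 * ((d2 - 2) / 3 + 2.5))
    t3 = ceil(3.8 * d3)
    t4 = ceil(9 * (d4 + 0.5))
    # one spike per input channel, delayed by the scaled feature value (L -> delta(t - L))
    return [(channel, t) for channel, t in enumerate((t1, t2, t3, t4), start=1)]

# Example: the first sample of the Iris dataset (5.1, 3.5, 1.4, 0.2)
print(iris_to_spikes(5.1, 3.5, 1.4, 0.2))
```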
The ODESA architecture we designed for the Iris dataset with four input spikes within the timeframe \([0,30]\) is ODESA 4_6_3_3. L1 includes 6 Neurons with 4 Synapses, and L2 comprises 3 Neurons with 6 Synapses each. The clock frequency for L1 is 2.5 MHz, i.e., \(\frac{1}{20}\) of the FPGA system clock (50 MHz). The clock frequency for L2 is set to 0.625 MHz, i.e., \(\frac{1}{4}\) of the L1 clock frequency. The ratio of the level-one clock to the level-two clock is a network parameter, which in this experiment is set to four. Samples are injected at the same clock frequency as L1. Therefore, each sample's timeframe takes a maximum of \(30\times 0.4=12\) microseconds.
The decaying counter designed for this application is eight bits wide, and the decaying constant is set to its maximum value \(C=255\) for both L1 and L2. The ODESA 4_6_3_3 was implemented on Intel's Cyclone V FPGA, and the results are reported in Table III.
#### 5.2.1 Training ODESA 4_6_3_3 for the Iris dataset
Since the Iris dataset is more complex than our previous experiment with the patterns of Fig. 15, it requires smaller learning rates than can be achieved by shift operations. For weights with small values, the shift operation can lead to no update at all. We have used the weight and threshold update steps introduced in Equation 9. The \(\eta_{w}\) for L1 is set to 1. The resulting Weight register update for each Synapse \(i\) of Neuron \(j\) follows the rule in Equation 14.
\[\begin{cases}w_{ij}\gets w_{ij}+1,\text{if }TS_{ij}>w_{ij},\\ w_{ij}\gets w_{ij}-1,\text{if }TS_{ij}<w_{ij}.\end{cases} \tag{14}\]
The new weight update guarantees that the Synapse's weight value moves smoothly towards the time surface of that Synapse. Rewarding the threshold is also performed by incrementing the threshold value by a fine-tuned constant value \(\eta_{T}\) according to Equation 9. This constant value \(\eta_{T}\) is determined by trial. In our test, we set \(\eta_{T}\) equal to 127 in decimal (0x7F in hexadecimal).
At L2, the weight updates require higher learning rates, and a larger step size is used for L2 to achieve the same. According to Equation 15, each Synapse's weight is incremented or decremented by two in a rewarding process.
\[\begin{cases}w_{ij}\gets w_{ij}+2,\text{if }TS_{ij}>w_{ij},\\ w_{ij}\gets w_{ij}-2,\text{if }TS_{ij}<w_{ij}.\end{cases} \tag{15}\]
The threshold update is done by employing Equation 7 with \(\eta_{T}=2^{-10}\). The "punish" process uses an adaptive \(\Delta_{T}\) value according to Equation 16 to ensure the Threshold register will never cross zero.
\[\Delta_{T}=\begin{cases}2^{10}-1,\text{ if }T_{j}>2^{16}-1,\\ 2^{8}-1,\text{ if }T_{j}>2^{12}-1,\\ 2^{4}-1,\text{ if }T_{j}>2^{8}-1,\\ 1,\text{ otherwise.}\end{cases} \tag{16}\]
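The integer-friendly update rules of Equations 14–16 amount to a few comparisons per step; a Python sketch with illustrative helper names follows.

```python
def step_weight(w, ts, step):
    """Move the weight one fixed step toward the time surface (Eqs. 14-15);
    step = 1 for L1 and step = 2 for L2 in this experiment."""
    if ts > w:
        return w + step
    if ts < w:
        return w - step
    return w


def adaptive_delta_T(T):
    """Punish step that shrinks with the threshold (Eq. 16) so the Threshold
    register never crosses zero."""
    if T > 2**16 - 1:
        return 2**10 - 1
    if T > 2**12 - 1:
        return 2**8 - 1
    if T > 2**8 - 1:
        return 2**4 - 1
    return 1
```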
During our experiments, we noticed that training can be significantly accelerated by masking the LAS signals of the output layer, except for the ones that occur after a GAS signal. In other terms, generating LAS signals only when we anticipate a response from the network to the classification problem leads to fewer training epochs being necessary.
To evaluate our network's performance and accuracy, we chose 20 random splits of the Iris dataset (30% training and 70% test splits) and ran the hardware with the training dataset splits stored in its RAM. The same dataset was used for the software version of ODESA with a similar ODESA 4_6_3_3 configuration. We have used a smaller ODESA network with fewer input and hidden neurons compared to the network architecture
\begin{table}
\begin{tabular}{l l} \hline _Architecture_ & ODESA 4_6_3_3 \\ \hline _Used ALM_ & 2805 \\ \hline _Used registers_ & 1195 \\ \hline _Used DSP units_ & 42 \\ \hline _L1 max. clock frequency (MHz)_ & 39.88 \\ \hline _Dynamic power consumption (mW)_ & \(<\) 2 \\ \hline \end{tabular}
\end{table} TABLE III: ODESA 4_6_3_3 implementation results on Intel Cyclone V
Fig. 18: Experiment 2, Iris dataset input spike patterns.
(ODESA 20_10_3_3) used in [33] to fit the area available on Intel's Cyclone V. The input dimension of the data is 4, compared to 20 used in the original work [33]. The dataset was converted to spikes using latency coding as described in Section 5.2, as opposed to the population code used in [33], to reduce the number of multipliers required for the hardware implementation. The software version of ODESA is the original algorithm from [33], which uses floating-point operations and normalized weights and input time surfaces for calculating the dot products of the neurons. The dot product in the software version is always bounded between 0 and 1. We let the two networks (our ODESA hardware and software ODESA) run for 400 epochs on each random split. The accuracy performance of the networks is summarized in Fig. 19. The average and maximum achieved accuracy and the standard deviations are reported in Table IV. The software ODESA shows a consistent accuracy of around 83% with a small variation, while the hardware version's accuracy varies between a maximum of 86.6% and a minimum of 65%.
The hardware version of ODESA does not show a considerable drop in accuracy compared to the software version even with the usage of non-normalized integer-based weights and fixed-step weight and threshold updates as described in Equations 14, 15, and 16. But it does have a larger standard deviation compared to the software version, which is a result of using a fixed-length integer numbering system and fixed-step updates that can limit the convergence. The results show that our hardware ODESA and training algorithms perform very closely to the software version of ODESA.
## 6 Conclusion
For the first time, we presented an FPGA implementation of ODESA SNN that can be trained online in a supervised manner on hardware reset. The training data is stored in the internal RAM of the FPGA device and will be used on hardware restart to assign SNN parameters. This trainable hardware is efficient in terms of hardware resources and computing costs, making it appealing for streaming pattern detection applications, e.g., intrusion detection on IoT devices. The architecture can asynchronously update the neuron parameters at each layer independent of other layers in a network. All the communication in the hardware is event-based and via binary spikes. The architecture is capable of performing on-chip online learning and is a promising next step toward building energy-efficient continual learning edge devices.
Our work aims to draw attention to designing autonomous hardware that makes decisions based on receiving sensory inputs. Our approach could be extended to handle more complex pattern detection and classification tasks in near real-time.
## Acknowledgment
This research is supported by the Commonwealth of Australia as represented by the Defense Science and Technology Group of the Department of Defense.
|
2307.16644 | NEON: Living Needs Prediction System in Meituan | Living needs refer to the various needs in human's daily lives for survival
and well-being, including food, housing, entertainment, etc. On life service
platforms that connect users to service providers, such as Meituan, the problem
of living needs prediction is fundamental as it helps understand users and
boost various downstream applications such as personalized recommendation.
However, the problem has not been well explored and is faced with two critical
challenges. First, the needs are naturally connected to specific locations and
times, suffering from complex impacts from the spatiotemporal context. Second,
there is a significant gap between users' actual living needs and their
historical records on the platform. To address these two challenges, we design
a system of living NEeds predictiON named NEON, consisting of three phases:
feature mining, feature fusion, and multi-task prediction. In the feature
mining phase, we carefully extract individual-level user features for
spatiotemporal modeling, and aggregated-level behavioral features for enriching
data, which serve as the basis for addressing two challenges, respectively.
Further, in the feature fusion phase, we propose a neural network that
effectively fuses two parts of features into the user representation. Moreover,
we design a multi-task prediction phase, where the auxiliary task of
needs-meeting way prediction can enhance the modeling of spatiotemporal
context. Extensive offline evaluations verify that our NEON system can
effectively predict users' living needs. Furthermore, we deploy NEON into
Meituan's algorithm engine and evaluate how it enhances the three downstream
prediction applications, via large-scale online A/B testing. | Xiaochong Lan, Chen Gao, Shiqi Wen, Xiuqi Chen, Yingge Che, Han Zhang, Huazhou Wei, Hengliang Luo, Yong Li | 2023-07-31T13:25:58Z | http://arxiv.org/abs/2307.16644v1 | # NEON: Living Needs Prediction System in Meituan
###### Abstract.
Living needs refer to the various needs in human's daily lives for survival and well-being, including food, housing, entertainment, etc. On life service platforms that connect users to service providers, such as Meituan, the problem of living needs prediction is fundamental as it helps understand users and boost various downstream applications such as personalized recommendation. However, the problem has not been well explored and is faced with two critical challenges. First, the needs are naturally connected to specific locations and times, suffering from complex impacts from the spatiotemporal context. Second, there is a significant gap between users' actual living needs and their historical records on the platform. To address these two challenges, we design a system of living **NE**eds predicti**ON** named NEON, consisting of three phases: feature mining, feature fusion and multi-task prediction. In the feature mining phase, we carefully extract individual-level user features for spatiotemporal modeling, and aggregated-level behavioral features for enriching data, which serve as the basis for addressing two challenges, respectively. Further, in the feature fusion phase, we propose a neural network that effectively fuses two parts of features into the user representation. Moreover, we design a multitask prediction phase, where the auxiliary task of needs-meeting way prediction can enhance the modeling of spatiotemporal context. Extensive offline evaluations verify that our NEON system can effectively predict users' living needs. Furthermore, we deploy NEON into Meituan's algorithm engine and evaluate how it enhances the three downstream prediction applications, via large-scale online A/B testing. As a representative result, deploying our system leads to a 1.886% increase _w.r.t._ CTCVR in Meituan homepage recommendation. The results demonstrate NEON's effectiveness in predicting fine-grained user needs, needs-meeting way, and potential needs, highlighting the immense application value of NEON.
Living Needs Prediction; Deep Neural Networks; Multi-task Learning
## 1. Introduction
_Living needs_ are the various needs generated by individuals in their daily routines for daily survival and well-being. Typical living needs include necessities such as food, housing, and personal care as well as leisure activities such as entertainment, for which there exists a diverse array of life service providers. Meituan is a large platform connecting customers to life service providers, in which users can meet almost all kinds of living needs. Unlike traditional information systems such as e-commerce websites, where users can only purchase products (_i.e._, meeting one kind of living needs), Meituan allows access to various living services, such as booking
* **The impact of spatiotemporal context is complex.** As discussed above, for the same user, living needs at different locations or times are totally different. For example, the user in Figure 1 eats food delivery at noon and watches a movie at night. Additionally, the lifestyles, _i.e., the spatiotemporal pattern of living needs_, are extremely various for different users, further making it more difficult to model the spatiotemporal context.
* **There is a significant gap between users' actual living needs and their historical records on the platform.** Typically, users will face various kinds of situation in their life, and will generate multiple living needs. But they may only choose to satisfy one or a few of them on the platform, leading to a significant gap between their actual living needs and their historical records. We refer to the needs that can not be observed from historical records as _potential needs_. The case here is different from that in most recommender systems, where the actual interests and the historical record generally do not differ greatly for users, leading to another critical challenge.
To address these challenges, in this work, we describe our deployed NEON system (short for living **NE**eds predicti**ON**) in Meituan, which includes three phases: **feature mining, feature fusion**, and **multitask prediction**. First, in the feature mining phase, to address the first challenge, we carefully design the spatial and temporal features for individual-level users, and to address the second challenge, we extract the behavioral-pattern features for group-level users. Second, in the feature fusion phase, we develop a feature-fusion neural network that combines internal preferences, impact from spatiotemporal context, and group behavior patterns to generate user representations, addressing both challenges. Last, as the complement to the main task of living needs prediction, we introduce the auxiliary task of needs-meeting way prediction to enhance the model's learning of spatiotemporal context, further addressing the first challenge.
The proposed NEON system plays a critical role in Meituan's recommendation engine with various downstream applications, including homepage recommendation, _Guess-you-like_ page recommendation, and message pop-up recommendation, which require different aspects of the living needs prediction capability. After the deployment of the proposed system, we obtain stable and significant gains in all three applications, providing strong real-world evidence for NEON's effectiveness from different perspectives.
The contribution of this work can be summarized as follows.
* To the best of our knowledge, we take the first step to study the problem of living needs prediction, which is a critical problem in real-world life service platforms but has not been well explored.
* We propose the NEON system, which includes the three phases of feature mining, feature fusion, and multitask prediction, and which addresses the two challenges of the complex impact of spatiotemporal context and missing behavioral data.
* We deploy NEON in Meituan's recommendation engine and conduct large-scale online A/B tests on three typical downstream applications, along with extensive offline evaluations. Offline experimental results verify that NEON can accurately predict users' living needs. The downstream evaluations strongly confirm NEON's high application value, with significant performance improvement in three downstream applications, among which a representative result is a 1.886% increase _w.r.t._ CTCVR for Meituan homepage recommendation.
## 2. Problem Statement
As we discussed above, in users' daily life, they generate various living needs, such as eating, accommodation, entertainment, beauty, etc. These needs can be fulfilled by life service providers in the city, which can be accessed through platforms connecting life service providers to customers. To enhance the user experience, it's crucial for these platforms to accurately predict users' needs and recommend appropriate services. This leads to the problem of living needs prediction.
As defined in the introduction, living needs prediction aims to predict the specific living needs of a user given the spatiotemporal context in which they are located (in the following we also refer to this as given the _user scene_). To clearly define the problem, with the help of experts, we divide all the living needs that users can satisfy on the platform into ten categories, shown in Table 1. We use \(\mathcal{N}\) as the symbol for the set of all living needs. In response to these needs, all the life services on Meituan are also divided into 10 categories. The problem can be defined as follows.
**Input:** A dataset \(\mathcal{O}^{+}\) of real-world life service consumption records that reflect users' living needs. Each instance in the dataset tells the
Figure 1. Living needs prediction aims to predict the specific living need of a user given the spatiotemporal context. By predicting users’ living needs, life service platforms can recommend life services that can fulfill these needs.
kind of life service a specific user purchases, which indicates the specific living need \(n\) (\(n\in\mathcal{N}\)) of the user in a specific user scene \(i\). **Output:** A model to estimate the probability that a user will generate the living need \(n\) in user scene \(i\), formulated as \(f(i,n|\mathcal{O}^{+})\). Here \(f(\cdot)\) denotes the function that the model aims to learn.
## 3. Our Neon System
To address the challenges mentioned in the introduction, we develop the NEON system made up of three phases: feature mining, feature fusion layer, and multitask prediction. First, in the feature mining phase, we address the first challenge by carefully designing spatiotemporal features for individual-level users and address the second challenge by extracting behavioral-pattern features for group-level users. The feature fusion phase then employs a feature-fusion neural network to seamlessly integrate internal preferences, spatiotemporal context impact, and group behavior patterns to generate complete user representations, overcoming both challenges. Last, in the multitask prediction stage, to enhance the model's understanding of spatiotemporal context, we introduce an auxiliary task of needs-meeting way prediction to the main goal of predicting living needs, providing additional support in addressing the first challenge. The deep feature fusion layer and the multi-task prediction parts of our system are illustrated in Figure 2.
### Feature Mining
First of all, we use features that directly reflect user traits, such as users' profile, their recent behavior sequence, and their historical behaviors, as inputs for the model.
As mentioned above, for a specific user, his living needs are greatly affected by the spatial and temporal scenarios in which he is located. For example, _on a rainy midday, a person at work probably has the need to order food delivery; but on a sunny noon, he/she may have another need of going out to eat in the restaurant_. This complexity and variability of human living needs, driven by the flux of time and space, pose a considerable challenge in accurately modeling the impact of the spatiotemporal context. In order to tackle this challenge, we incorporate spatiotemporal context features as an integral part of our system's input.
What's more, on the platform, users may have potential needs with sparse or even non-existent history records. For example, _a person who never buys medicine on the platform may have a cold and need to buy medicine online one day_. Such potential needs are difficult for the model to grasp. To address the challenge of modeling potential needs, we introduce group behavior pattern features to help the model learn the potential living needs of users.
Below we give a detailed description of the three categories of features.
#### 3.1.1. User Features
This group of features includes user profiles and user history behavior sequences.
* **User profiles \(f_{p}^{U}\)**. The user's profile, including their age, gender, etc.
* **User recent online behavior sequence \(f_{rb}^{U}\)**. The sequence of items recently clicked by the user in the platform; the sequence of items recently ordered by the user in the platform.
* **User aggregated historical online behavior \(f_{hb}^{U}\)**. The percentage of times users buy each type of life service.
* **User offline visitation record \(f_{op}^{U}\)**. The 50 most visited POIs (point of interest) by the user in the last six months; the 50 most visited AOIs (area of interest) by the user in the last six months.
We concatenate all the mentioned features above to get a sparse user feature vector \(f^{U}\), formulated as follows:
\[f^{U}=\left[f_{p}^{U},f_{rb}^{U},f_{hb}^{U},f_{ow}^{U}\right]. \tag{1}\]
#### 3.1.2. Spatiotemporal Context Features
Users' living needs are greatly affected by time, location, and other environmental factors. Thus, we introduce spatiotemporal context features as part of the input of our system to help our system model the complex impact of spatiotemporal context, which can be listed as follows.
* **Time \(f_{t}^{ST}\)**. The current time period. Multiple time period features of different granularities are used, including hour, day, whether it is a holiday, etc.
* **Location \(f_{l}^{ST}\)**. The POI (point of interest) embedding of the user's real-time location; the AOI (area of interest) embedding of the user's real-time location; the city embedding of the user's real-time location. The location features are hourly real-time features.
* **Weather \(f_{w}^{ST}\)**. Weather information for the user's city or region, including wind, humidity, temperature, and weather type (sunny, rainy, snowy, etc.). Weather features are refined to hourly granularity.
* **Travel state \(f_{Is}^{ST}\)**. Information about whether the user is located in his/her resident city. Possible states include _based in resident city, about to travel_, and _on travel_.
The dense spatiotemporal context feature vector \(f^{ST}\) is created by concatenating all previously mentioned context features, formulated as follows:
\[f^{ST}=\left[f_{t}^{ST},f_{l}^{ST},f_{w}^{ST},f_{Is}^{ST}\right]. \tag{2}\]
\begin{table}
\begin{tabular}{|l|} \hline Ordering food delivery, Eating in a restaurant, \\ Booking a hotel, Buying medicine, \\ Specialty shopping online, Hair-dressing, \\ Grocery shopping online, Beauty, \\ Tourism and Entertainment \\ \hline \end{tabular}
\end{table}
Table 1. 10 types of living needs that can be satisfied in Meituan
Figure 2. Illustration of living needs prediction system NEON.
#### 3.1.3. Group Behavior Pattern Features
We introduce group behavior pattern features to supplement the sparse individual behavior of users, in order to assist in identifying the potential living needs of individual users.
* **Group aggregated behavior \(f_{a}^{G}\)**. We first segment users into groups based on their profiles. In each group, we get the group aggregated behavior by calculating the percentage of views, clicks, and purchases of each type of life service among all views, clicks, and purchases initiated by the group. For each user, the group aggregated behaviors of the groups the user belongs to are used as features. For example, a middle-aged user is assigned the group aggregated behavior features of the middle-aged group and of the other groups he/she belongs to.
* **Popularity in the current time period \(f_{ct}^{G}\)**. We divide all time into time periods according to different criteria, such as whether it is a holiday, whether it is morning, noon, or night, etc. Then in each time period, we calculate the popularity of each type of life service by calculating the percentage of times the life service is viewed/clicked/purchased among all views/clicks/purchases happening in this time period. We determine the time periods in which the current time is located, and use the popularity of each type of life service in these time periods as a feature. For example, if the user opens the app on Christmas night, the holiday popularity and the nighttime popularity of each kind of life service are set as features.
* **Group behavior pattern in spatiotemporal context \(f_{st}^{G}\)**. By discovering group preferences in different spatiotemporal contexts, we further capture more fine-grained group behavior patterns. We calculate the percentage of views/clicks/purchases of each type of life service initiated by each group in each kind of spatiotemporal context. These fine-grained patterns are used as features of the model. For example, the group preference of middle-aged people at work at noon on working days is used to enrich the representation of every individual within this demographic in such a spatiotemporal scenario.
* **User behaviors augmented by inter-need correlation \(f_{ic}^{G}\)**. There is an inherent association across different types of users' living needs. This association can be leveraged to improve prediction performance. For example, _a user who frequently purchases hairdressing services may also be inclined to purchase beauty services_. We use the association rule mining algorithm to analyze the co-occurrence of different life service categories, filter out high-correlation relationships, and employ them to augment user behavior as input features (a minimal sketch of this mining step is given at the end of this subsection).
We combine all previously mentioned group behavior pattern features to generate the dense group behavior pattern feature vector \(f^{G}\), which is formulated as follows:
\[f^{G}=\left[f_{a}^{G},f_{ct}^{G},f_{st}^{G},f_{ic}^{G}\right]. \tag{3}\]
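The following is a minimal sketch of the pairwise co-occurrence mining referenced in the inter-need correlation feature above; the function name, thresholds, and toy categories are illustrative and do not reflect the production pipeline.

```python
from collections import Counter
from itertools import combinations

def mine_need_rules(user_orders, min_support=0.01, min_confidence=0.3):
    """Simple pairwise association rules 'need A -> need B' over per-user
    purchased category sets; high-confidence rules can be used to augment
    sparse individual behavior."""
    n_users = len(user_orders)
    single, pair = Counter(), Counter()
    for categories in user_orders:             # categories: set of purchased need types
        for c in categories:
            single[c] += 1
        for a, b in combinations(sorted(categories), 2):
            pair[(a, b)] += 1
    rules = []
    for (a, b), cnt in pair.items():
        if cnt / n_users < min_support:
            continue
        for head, tail in ((a, b), (b, a)):
            conf = cnt / single[head]
            if conf >= min_confidence:
                rules.append((head, tail, conf))
    return rules

# Toy example: users who buy hairdressing often also buy beauty services.
print(mine_need_rules([{"hairdressing", "beauty"}, {"hairdressing", "beauty"},
                       {"food_delivery"}], min_support=0.3))
```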
### Feature Fusion Layer
As mentioned in Section 2, we refer to a user in a specific spatiotemporal context as a user scene. For a user scene \(i\), after feature mining, we have dense spatiotemporal features \(f_{i}^{ST}\), dense group pattern features \(f_{i}^{G}\), and sparse user features \(f_{i}^{U}\). For brevity of presentation, we omit the subscript \(i\) in some of the expressions below. We designed a feature fusion layer to integrate these features into the input of the subsequent prediction module.
We first set up an embedding layer, which processes the high-dimensional sparse user feature vector \(f^{U}\) into a low-dimensional dense vector \(v^{U}\). To address the challenges of complex impact of spatiotemporal context and users' potential needs, we mine spatiotemporal features and group behavior pattern features in the feature mining phase, respectively. With these features as input, we use a feature merging network to model the interaction between spatiotemporal contexts, group behavior patterns, and users as follows,
\[x^{M}=h^{M}\left(\left|f^{ST},f^{G},v^{U}\right|\right), \tag{4}\]
where \(\left[\cdot\right]\) denotes concatenation operation. Here \(h^{M}\) is the feature merging network, which merges three information sources of spatiotemporal contexts, group behavior patterns, and user preference into a fusion representation \(x^{M}\).
Moreover, users have their own internal characteristics that are independent of the spatiotemporal scene they are in and the group they belong to. To model the internal characteristics of users, we generate a representation as follows,
\[x^{U}=h^{U}\left(f^{U}\right), \tag{5}\]
where \(h^{U}\) denotes the user preference network that turns raw user features into dense user preference representation. We then concatenate the two parts of representations into the full representation of the user scene:
\[x=\left[x^{M},x^{U}\right]. \tag{6}\]
In brief, we design a feature fusion layer to tackle both challenges by considering the influence of spatiotemporal context, incorporating group behavior patterns, as well as extracting individual preferences.
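A minimal PyTorch sketch of the feature fusion layer of Equations 4–6 follows; layer sizes are placeholders, and the single linear projection standing in for the embedding layer of sparse user features is a simplifying assumption.

```python
import torch
import torch.nn as nn

class FeatureFusionLayer(nn.Module):
    """Sketch of Eqs. 4-6: embed sparse user features, merge them with the
    spatiotemporal and group-pattern features, and keep a separate
    user-preference branch."""

    def __init__(self, user_dim, st_dim, group_dim, emb_dim=64, hidden=128):
        super().__init__()
        self.user_embedding = nn.Linear(user_dim, emb_dim)        # sparse f^U -> dense v^U
        self.merge_net = nn.Sequential(                           # h^M
            nn.Linear(st_dim + group_dim + emb_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden))
        self.user_net = nn.Sequential(                            # h^U
            nn.Linear(user_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden))

    def forward(self, f_user, f_st, f_group):
        v_user = self.user_embedding(f_user)
        x_m = self.merge_net(torch.cat([f_st, f_group, v_user], dim=-1))  # Eq. 4
        x_u = self.user_net(f_user)                                       # Eq. 5
        return torch.cat([x_m, x_u], dim=-1)                              # Eq. 6
```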
### Multitask Prediction
We further design a prediction module which takes the user scene representation as input to predict users' living needs. The module is tasked with two objectives: _fine-grained need prediction_ and _needs-meeting way prediction_. Fine-grained need prediction is to predict the specific living need of the user. Needs-meeting way prediction is to predict the user's preferred way to meet their needs.
Specifically, among the ten kinds of needs which we mentioned in the problem formulation, there are two ways to satisfy the needs: in-store and via-delivery. In other words, consumers can choose to satisfy their needs by visiting a physical store or by ordering online and then receiving goods via delivery. Each type of the 10 needs can be classified into one of two categories, in-store needs or via-delivery needs. We show the classification in Table 2. Actually, needs-meeting way prediction is to predict whether the preferred way of the user to meet their needs is in-store or via-delivery.
Users' preferences for needs-meeting ways are strongly affected by the spatiotemporal context. For example, _a person at work during lunchtime on a weekday is more likely to have the need to order food delivery (via delivery), while the same person in a shopping district on a weekend evening is more likely to visit a store for a meal (in-store)_. With this in mind, we include the needs-meeting way prediction task, which is jointly trained with the main task of need
prediction to enhance the model's ability to learn spatiotemporal context information. Next, we describe how we get the prediction results of the two tasks. We use \(y^{W}\) and \(y^{N}\) to denote the prediction result of needs-meeting way and specific need. \(y^{k},k\in\{W,N\}\) can be generated as follows,
\[y^{k} =t^{k}(z^{k}),\] \[\text{where }z^{k} =g^{k}(x)_{0}E^{k}(x)+g^{k}(x)_{1}E^{S}(x). \tag{7}\]
Here \(t^{k}\) is the prediction neural network for task \(k\). There are a variety of choices in the specific structure of the neural network. In Section A.1, we will state our specific choice. To avoid verbosity, we will use _network_ to replace _neural network_ in the following text. The output of \(t^{N}\), \(y^{N}\), is the scores of ten types of living needs, and the output of \(t^{W}\), \(y^{W}\), is the scores of in-store and via-delivery needs-meeting ways. We use \(s^{N}_{im}\) to denote the score of need \(m\) for user scene \(i\), and use \(s^{W}_{im}\) to denote the score of needs-meeting way \(n\) for user scene \(i\). \(E^{k}\) is the expert network (Zhou et al., 2017; Wang et al., 2018) for task \(k\). \(E^{S}\) is the shared network between the two tasks. \(E^{S}\) is responsible for generating general representations that are common to both tasks, while \(E^{k}\) is responsible for learning task-specific representations that are more fine-tuned to the specific task \(k\). The gating network \(g_{k}\) determines what proportion of information input each task's prediction network receives from the shared network and the expert network. We formulate the gating network as follows,
\[g^{k}(x)=\text{Softmax}(W_{k}x), \tag{8}\]
where \(W_{k}\in\mathbb{R}^{2\times d}\) are the trainable weights for task \(k\). The gating network takes \(x\) as input and outputs the relative importance of the shared and task-specific representations for a given task, allowing the model to selectively attend to the most relevant information and improve its performance. In summary, to address the complexity of spatiotemporal context impact, we introduce an auxiliary task of needs-meeting way prediction which is jointly trained with the main task of fine-grained living needs prediction to enhance our system's learning of spatiotemporal context. The multitask prediction module in our system produces a score for each living need and needs-meeting way.
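A matching PyTorch sketch of the multitask prediction module of Equations 7–8; the single-linear-layer experts, gates, and towers are placeholders for the actual network choices stated in Section A.1.

```python
import torch
import torch.nn as nn

class MultiTaskHead(nn.Module):
    """Sketch of Eqs. 7-8: one shared expert E^S, one expert E^k per task, and a
    softmax gate g^k mixing them before each task tower t^k.  Output sizes follow
    the paper: 10 living-need scores and 2 needs-meeting-way scores."""

    def __init__(self, in_dim, expert_dim=64):
        super().__init__()
        self.shared_expert = nn.Linear(in_dim, expert_dim)                  # E^S
        self.experts = nn.ModuleDict({k: nn.Linear(in_dim, expert_dim)
                                      for k in ("need", "way")})            # E^k
        self.gates = nn.ModuleDict({k: nn.Linear(in_dim, 2)
                                    for k in ("need", "way")})              # g^k
        self.towers = nn.ModuleDict({"need": nn.Linear(expert_dim, 10),     # t^N
                                     "way": nn.Linear(expert_dim, 2)})      # t^W

    def forward(self, x):
        scores = {}
        for k in ("need", "way"):
            g = torch.softmax(self.gates[k](x), dim=-1)                     # Eq. 8
            z = g[..., :1] * self.experts[k](x) + g[..., 1:] * self.shared_expert(x)  # Eq. 7
            scores[k] = self.towers[k](z)
        return scores["need"], scores["way"]
```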
### Model Training
In this section we describe how our system is trained. Corresponding to the two tasks, we design two parts of loss. We design need prediction loss taking into account the fact that the frequency of different needs arising in users' lives is different. For example, _a user may need to order food delivery for lunch every workday, but rarely need to buy medicine_. In order to address the class imbalance issue for different living needs, we propose using a multi-class focal loss which can decrease the effect of needs with a high volume of training data on the final prediction loss. The need prediction loss can be formulated as follows,
\[\text{Loss}_{\text{need}}=-\sum_{i\in O}\left(\sum_{n=1}^{10}\left(1-q_{in}^{N}\right)^{\gamma}x_{in}^{N}\log\left(q_{in}^{N}\right)\right), \tag{9}\]
\[\text{where }q_{in}^{N}=\text{Softmax}\left(s_{in}^{N}\right)=\frac{e^{s_{in}^{N}}}{\sum_{n}e^{s_{in}^{N}}}. \tag{10}\]
Here \(O\) is the training set, \(s_{in}^{N}\) is the score of living need \(n\) for user scene \(i\), \(\gamma\) is the hyperparameter which decides the importance of difficult samples, and \(x_{in}^{N}\) is \(1\) if \(n\) is the ground truth need for user scene \(i\), else it is \(0\). For the needs-meeting way prediction task, we use BCE loss as the prediction loss. We formulate it as follows,
\[\text{Loss}_{\text{way}}=-\sum_{i\in O}\left(\sum_{m=1}^{2}x_{im}^{W}\log\left(q_{im}^{W}\right)\right), \tag{11}\]
\[\text{where }q_{im}^{W}=\text{Softmax}\left(s_{im}^{W}\right)=\frac{e^{s_{im}^{W}}}{\sum_{m}e^{s_{im}^{W}}}. \tag{12}\]
Here \(O\) is the training set, \(s_{im}^{W}\) is the score of needs-meeting way \(m\) (in store or via delivery) for user scene \(i\). \(x_{im}^{W}\) is \(1\) if \(m\) is the ground truth needs-meeting way for user scene \(i\), else it is \(0\). In our system, the feature integration module and multitask prediction module are trained end to end. The entire loss function is:
\[\text{Loss}=\lambda_{1}\text{Loss}_{\text{need}}+\lambda_{2}\text{Loss}_{ \text{way}}. \tag{13}\]
\(\lambda_{1}\) and \(\lambda_{2}\) are hyperparameters that control the importance of the two parts of loss.
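The two losses and their weighted combination (Equations 9–13) can be sketched in PyTorch as follows; `gamma`, `lambda1`, and `lambda2` are the hyperparameters above with placeholder values, and the batch handling is simplified.

```python
import torch
import torch.nn.functional as F

def neon_loss(need_scores, way_scores, need_labels, way_labels,
              gamma=2.0, lambda1=1.0, lambda2=1.0):
    """Sketch of Eqs. 9-13: a multi-class focal loss on the 10 need scores, a
    cross-entropy loss on the 2 needs-meeting-way scores, and their weighted sum."""
    q_need = torch.softmax(need_scores, dim=-1)
    p_true = q_need.gather(1, need_labels.unsqueeze(1)).squeeze(1)      # q of ground-truth need
    loss_need = -((1 - p_true) ** gamma * torch.log(p_true)).sum()      # focal loss (Eq. 9)
    loss_way = F.cross_entropy(way_scores, way_labels, reduction="sum") # Eqs. 11-12
    return lambda1 * loss_need + lambda2 * loss_way                     # Eq. 13
```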
## 4. Offline Evaluation
### Experimental Settings
#### 4.1.1. Dataset
We conduct an offline experiment on a real-world dataset at a billion-scale. The dataset comprises a sampling of all 2022 purchase records on the platform, based on the percentage of purchases for each type of life service. It consists of over 7 billion actual purchase records from 65 million users. The details of datasets are provided in Appendix A.2.
#### 4.1.2. Metrics
We design three metrics, namely Sort Accuracy (SA), Via-delivery Sort Accuracy (VDSA), and In-store Sort Accuracy (ISSA), to measure the performance of our system and baseline systems in predicting living needs. SA measures the overall accuracy of sorting living needs based on their scores. VDSA focuses on the accuracy of predicting needs that are satisfied via delivery, while ISSA focuses on the accuracy of predicting needs for in-store scenarios. Detailed definitions of these metrics are provided in Appendix A.3.
#### 4.1.3. Baselines
To illustrate the effectiveness of our system, we compare it with five baselines widely used in actual production environments: **DIN (Dang et al., 2018)**, **DNN (Dang et al., 2018)**, **DCN (Wang et al., 2018)**, **ESMM (Dang et al., 2018)**, and **MMOE (Zhou et al., 2017)**. We will provide a detailed description of these baselines in Appendix A.4.
\begin{table}
\begin{tabular}{|p{85.4pt}|p{142.3pt}|} \hline Living needs that can be satisfied via delivery & Specialty shopping online, \\ \hline Living needs that can be satisfied in store & Grocery shopping online, \\ \hline \end{tabular}
\end{table}
Table 2. The classification of the 10 living needs that can be satisfied on Meituan
### Overall Performance
We test the performance of our proposed system and baselines on the living needs prediction task, and show the results in Table 3. We can have the following observations.
* **Our system steadily outperforms all baselines on all metrics.** The improvement of our system compared to the best baseline is 0.86%, 9.30%, and 3.57% _w.r.t._ VDSA, ISSA, and SA, respectively. The significant performance gain confirms the effectiveness of our system on the living needs prediction task. Furthermore, such a significant improvement in the ability to predict living needs will result in a huge benefit in real-world production scenarios, which will be further confirmed through online evaluation.
* **Our system achieves greater improvement on ISSA.** The task of predicting users' in-store living needs is relatively difficult, since the in-store consumption data is sparser, and the relationships between spatiotemporal context and in-store needs are more complex. On this task, all methods perform the worst, and our system outperforms the baselines by a large margin, with an improvement of 9.30% _w.r.t._ ISSA. This further confirms our model's ability to tackle the complex impact of spatiotemporal context and to discover potential needs.
### Ablation Study
As mentioned in Section 3.3, we introduce the task of needs-meeting way prediction to enhance the system's learning of spatiotemporal context. To study the effectiveness of the multitask prediction design, we remove it from our system and observe the impact of the design on the system performance. Specifically, we change the system structure by removing the needs-meeting way prediction network \(t^{W}\) and taking the sum of \(z^{N}\) and \(z^{W}\) as the input of the need prediction network \(t^{N}\), and test the performance of the changed system. The results are shown in Table 4. The results show that our proposed system outperforms the system without the multitask prediction design. Our system performs better _w.r.t._ VDSA by 0.95%, _w.r.t._ ISSA by 2.39%, and _w.r.t._ SA by 1.14%. The significant performance improvement confirms the validity of the multitask prediction design.
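To make the ablated variant concrete, here is a minimal PyTorch-style sketch of the two prediction heads and of the variant used in the ablation; the layer sizes and module names (`t_N`, `t_W`) are illustrative assumptions and not the production architecture.

```python
import torch.nn as nn

class MultiTaskHeads(nn.Module):
    """Full system: t^N predicts the living need from z^N, while
    t^W predicts the needs-meeting way from z^W."""
    def __init__(self, dim=128, n_needs=10):
        super().__init__()
        self.t_N = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, n_needs))
        self.t_W = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, 2))

    def forward(self, z_N, z_W):
        return self.t_N(z_N), self.t_W(z_W)

class AblatedHead(nn.Module):
    """Ablation: the needs-meeting way network t^W is removed and
    the sum z^N + z^W is fed to the need prediction network t^N."""
    def __init__(self, dim=128, n_needs=10):
        super().__init__()
        self.t_N = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, n_needs))

    def forward(self, z_N, z_W):
        return self.t_N(z_N + z_W)
```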
## 5. Online Evaluation
For life service platforms such as Meituan, understanding and predicting users' living needs is important in all business scenarios. In this section, we evaluate the performance of NEON on three downstream recommendation tasks when it is deployed in Meituan's recommender engine; the deployment is illustrated in Figure 3. Specifically, the three typical applications are homepage recommendation, _Guess-you-like_ recommendation, and message pop-up recommendation. The performance of NEON on these applications reflects its effectiveness on fine-grained need prediction, needs-meeting way prediction, and potential need prediction, respectively. We elaborate on the online testing of the three tasks one by one as follows.
### Homepage Recommendation
In this section, we evaluate the performance of NEON when deployed to homepage recommendation, which reflects its ability on fine-grained living need prediction.
#### 5.1.1. Deployment Scenario
Typically, users open the Meituan mobile app to meet a certain living need. In the overall recommendation list on the homepage offered by Meituan, users probably only focus on the items which belong to the type of life service that can fulfill their living need, and choose one item among them. Whether an item is in the category of life service that the user _needs_ is at least as important as whether the user _likes_ the item. To emphasize the importance of recommending _needed_ items, the engine follows a two-step approach to generate the final recommendation
\begin{table}
\begin{tabular}{c c c c} \hline & **VDSA** & **ISSA** & **SA** \\ \hline w.o. & 0.9089 & 0.8084 & 0.8968 \\ with & 0.9175 & 0.8277 & 0.9070 \\ \hline Improvement & 0.95\% & 2.39\% & 1.14\% \\ \hline \end{tabular}
\end{table}
Table 4. Performance of NEON with/without multitask prediction.
Figure 3. Online deployment of NEON in Meituan Recommendation Engine. In summary, NEON helps to generate the quotas of categories of life services in the recommendation list (homepage recommendation and _guess-you-like_ recommendation), or decide the one category to be recommended to the user (message pop-up recommendation).
\begin{table}
\begin{tabular}{c c c c} \hline
**Method** & **VDSA** & **ISSA** & **SA** \\ \hline DIN & 0.9044 & 0.7467 & 0.8700 \\ DNN & 0.9060 & 0.7500 & 0.8718 \\ DCN & 0.9051 & 0.7466 & 0.8701 \\ ESMM & 0.9080 & 0.7476 & 0.8708 \\ MMOE & 0.9097 & 0.7573 & 0.8757 \\ NEON & **0.9175** & **0.8277** & **0.9070** \\ \hline Improvement & 0.86\% & 9.30\% & 3.57\% \\ \hline \end{tabular}
\end{table}
Table 3. Offline experimental performance of NEON and baselines.
list from the pre-ranking result. For a user scene, it first decides the quotas for all the ten categories of life services and then generates the final list according to the quotas and the prediction scores of the items. The supply quotas are generated based on the scores our system outputs along with other criteria.
In short, our NEON system is used to generate the _quotas_ of different kinds of life services in the homepage recommendation list.
#### 5.1.2. Experiment Setting
We first give a detailed description of how our system is used in generating the quotas for each category of life services. First, we recall local life service items into the recall pool using various strategies, such as popularity and collaborative filtering. Then the engine outputs a preliminary recommendation list based on the recall pool, called the pre-ranking list. After that, we take the Softmax-normalized scores output by our system as the proportional quotas for each category of life service. We further adjust the quotas taking into account the proportion of each category in the pre-ranking list, the supply distribution by category, and the order distribution by category. The generated quotas are the proportion of each category in the lists received by users. The lists are generated considering both the quotas of categories and the prediction scores of items.
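A minimal sketch of this quota-generation step is given below; the blending weights and the exact way the pre-ranking, supply, and order distributions are mixed are assumptions made for illustration only.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def category_quotas(neon_scores, preranking_share, supply_share, order_share,
                    weights=(0.4, 0.2, 0.2, 0.2)):
    """Blend the Softmax-normalized NEON scores with the proportion of each
    category in the pre-ranking list and the supply/order distributions,
    then renormalize so that the quotas sum to one."""
    w_neon, w_pre, w_sup, w_ord = weights
    blended = (w_neon * softmax(neon_scores)
               + w_pre * preranking_share
               + w_sup * supply_share
               + w_ord * order_share)
    return blended / blended.sum()
```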
We compare the homepage recommendation performance of our whole recommendation engine with and without our system through online A/B tests. The tests lasted one week and involved around 4.5 million users.
In our tests, the users are randomly divided into two buckets of similar size and assigned different methods for calculating quotas for each category. Specifically, for the first group, we use the method described previously, while for the second group, we generate quotas based only on the proportion of each category in the pre-ranking list, supply distribution by category, and order distribution by category. We maintain consistency across all other modules to ensure a fair comparison.
For metrics, we use _Click Through Rate_ (CTR), _Conversion Rate_ (CVR), _Click to Conversion Rate_ (CTCVR) to measure the quality of the final recommendation list, which are widely-used measurements (Zhou et al., 2017; Wang et al., 2018; Wang et al., 2019).
#### 5.1.3. Performance
The results of our A/B tests are shown in Table 5. From the results, we can have the following observations:
* There is a significant improvement with respect to all metrics. The increase _w.r.t._ CTR and CVR are 0.230% and 1.653%, respectively, which is a notable improvement.
* The increase _w.r.t._ CTCVR is 1.886%. CTCVR indicates how likely users are to purchase the recommended items. Such improvement can result in a substantial rise in total consumption on the platform.
* The rise _w.r.t._ GTV-CC is 3.627%. The remarkable uplift demonstrates the outstanding capability of our system in predicting users' living needs that have no historical record.
We further calculate the _Kullback-Leibler divergence_(Kullback and Leibler, 2015) (KLD) between real user order distribution by category and the average proportional allocation given by the online engine with/without our system. Lower KLD indicates the quotas match better with real user order distribution, which can be regarded as real user needs distribution. The results of different time periods throughout the day are shown in Table 6. The time periods are separated based on the business characteristics of the platform during each hour. Similar hours are grouped within a single period.
From the results, it can be seen that in all time periods, the Kullback-Leibler Divergence between the actual user order distribution by category and the average proportional allocation generated by the online engine with NEON is significantly less than that without our NEON. The percentage of decrease (improvement) is 12.90% on average. This indicates that the quotas produced by the online engine with our system are more aligned with the actual user consumption distribution, or the real-life user living needs, compared to the ones generated without our system. This can be regarded as evidence of our model's effectiveness in addressing the intricate impact of spatiotemporal context, and further confirms our system's strong ability for predicting fine-grained user needs.
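The KLD values reported in Table 6 can be reproduced with a short routine such as the one below, where `order_dist` and `quota_dist` are the per-category real order distribution and the average quota allocation for a given time period (both assumed to be non-negative and normalizable).

```python
import numpy as np

def kl_divergence(order_dist, quota_dist, eps=1e-12):
    """Kullback-Leibler divergence KL(order || quota) over the 10 categories."""
    p = np.asarray(order_dist, dtype=float)
    q = np.asarray(quota_dist, dtype=float)
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))
```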
### Guess-you-like Recommendation
This section assesses NEON's performance in guess-you-like recommendation, highlighting its ability on needs-meeting way prediction.
#### 5.2.1. Deployment Scenario
Meituan designs a _Guess-you-like_ page, to which users may be guided during their leisure time. Typically, users browse this page to satisfy their _non-essential_ living needs, such as entertainment or beauty. On this page, they purchase items they need and also _like_. To provide more choices for users in the categories of life services they need, the recommendation list should have more items in those categories. On this page, we assume that users are more concerned with being recommended items they like. Therefore, we do not maintain the proportion of a category in the "Guess-you-like" recommendation list just because there is a slight possibility that it is essential, as users can use modules other than Guess-you-like, such as the search button, to find necessary life services. To ensure enough items are in the needed categories, the online engine generates new quotas for the Guess-you-like page based on the quotas for the homepage recommendation list and the in-store/via-delivery scores output by our system.
\begin{table}
\begin{tabular}{c c c c} \hline \hline \multirow{2}{*}{Time Period} & \multicolumn{2}{c}{Kullback-Leibler Divergence} & \multirow{2}{*}{Improvement} \\ \cline{2-3} & w/o NEON & with NEON & \\ \hline
0–4 & 0.2578 & 0.2201 & -14.62\% \\
5–8 & 0.2237 & 0.1907 & -14.75\% \\
9-10 & 0.2179 & 0.1902 & -12.71\% \\
11-12 & 0.2408 & 0.2138 & -11.21\% \\
13-16 & 0.2323 & 0.2053 & -11.62\% \\
17-19 & 0.2353 & 0.2096 & -10.92\% \\
20-24 & 0.2333 & 0.1995 & -14.49\% \\ \hline \hline \end{tabular}
\end{table}
Table 6. KLD of different time periods between real user order distribution and quotas given with/without NEON.
\begin{table}
\begin{tabular}{c c c c} \hline \hline Metric & w/o NEON & with NEON & Improvement \\ \hline CTR & 3.0601\(\times 10^{-2}\) & 3.0672\(\times 10^{-2}\) & +0.230\% \\ CVR & 1.1497\(\times 10^{-1}\) & 1.1687\(\times 10^{-1}\) & +1.652\% \\ CTCVR & 3.5183\(\times 10^{-3}\) & 3.5846\(\times 10^{-3}\) & +1.886\% \\ \hline \hline \end{tabular}
\end{table}
Table 5. Results of A/B tests on homepage recommendation
In summary, the needs-meeting way prediction results of NEON are used to further adjust the _quotas_ of different life services in _Guess-you-like_ recommendation list.
#### 5.2.2. Experiment Setting
We conduct online A/B tests involving about 7 million users over a period of two weeks. We randomly divide users into two buckets, each of which has a similar number of users, and assign them different methods of generating the quotas of categories in the Guess-you-like page. Specifically, for the first bucket we adopt the same strategy as in the homepage recommendation. For the second bucket, we calculate the scores of both needs-meeting ways, and increase the quotas of categories whose needs-meeting way gets a higher score. As for metrics, we use CTR, CVR, CTCVR, _Negative Feedback Rate for Unique Visitor_ (NFR-UV), and _Negative Feedback Rate for Page View_ (NFR-PV) to measure the quality of the recommendation list in the Guess-you-like page. NFR-UV and NFR-PV emphasize that the recommendation list should not contain items users dislike.
#### 5.2.3. Performance
Under the aforementioned settings, we conduct extensive online A/B experiments. The results are shown in Table 7. We list our observations as follows:
* The performance gains _w.r.t._ CVR, CTCVR, and OV are 0.280%, 0.218%, and 0.310%, respectively. Such improvements, resulting from adjusting the quotas with the assistance of the needs-meeting way prediction module, can lead to a significant increase in the consumption amount.
* The NFR-UV and NFR-PV decrease by 1.57% and 3.31% respectively, indicating that the recommendation engine is able to recommend fewer items that users dislike in the Guess-you-like page by adjusting the quotas with the aid of the needs-meeting way prediction module in our system.
We further calculate the performance increase on some popular categories of life services in the Guess-you-like page. The results are shown in Table 8. From the results, we can observe that:
* For all these categories there is an average relative improvement of 3.801% _w.r.t._ CVR and 3.765% _w.r.t._ CTCVR. With such improvements, all these categories can enjoy a notable increase in order volume on the Meituan platform.
* Among all these categories, the rise in the Kids category is the most significant, at 8.723% _w.r.t._ CVR and 9.691% _w.r.t._ CTCVR.
* There are also slight decreases _w.r.t._ CTR: 0.034% for Food Delivery, 0.449% for Beauty, and 0.595% for Hotel. These decreases are within the typical fluctuations observed in the market.
The above results, which demonstrate the successful implementation of our system in guess-you-like recommendation, further illustrate the efficacy of our system, particularly in needs-meeting way prediction.
### Message Pop-up Recommendation
In this section, we test our model's performance in message pop-up recommendation to observe its potential need prediction ability.
#### 5.3.1. Deployment Scenario
In the case where a user has a living need that has never or rarely been fulfilled on the Meituan platform, they probably won't launch the Meituan mobile app, as they may not be aware that this need can be satisfied on Meituan or may not be accustomed to fulfilling this need on Meituan. So, if our engine runs in the background of the phone and detects the user's potential living need, it will send a message pop-up to the user with a recommendation for a life service solution, if the user allows it. The message pop-up recommends to the user a life service that is in the category with the highest score output by our system.
In brief, with the ability of potential needs prediction, NEON is deployed for message pop-up recommendation.
#### 5.3.2. Experiment Setting
We conduct online A/B tests involving one million users over a period of two weeks. The users are randomly divided into two buckets of similar volume and are assigned different strategies for selecting items to be sent in message pop-ups. For the first bucket, the message pop-up recommends items that belong to the category with the highest score output by our system. For the second bucket, in each hour of the day, we calculate the popularity and average CTR of each category. We distribute the traffic for message pop-ups to the categories with the most popularity and highest average CTR.
We use CTR, CVR, CTCVR, OV, _Number of Cold-start Customers_ (NCC) to measure the performance of our system in potential need prediction. NCC represents the number of customers who purchase a lifestyle service that they have never acquired on the platform previously.
#### 5.3.3. Performance
The results of the online A/B tests are shown in Table 9. We can observe that:
* By replacing the algorithm based on popularity and average CTR with our living need prediction system NEON to determine the category of recommendation in message pop-up, all metrics show
\begin{table}
\begin{tabular}{c c c c} \hline Metric & w/o NEON & with NEON & Improvement \\ \hline CTR & 9.5707\(\times 10^{-2}\) & 9.5647\(\times 10^{-2}\) & -0.063\% \\ CVR & 1.3568\(\times 10^{-1}\) & 1.3606\(\times 10^{-1}\) & +0.280\% \\ CTCVR & 1.3014\(\times 10^{-2}\) & 1.2985\(\times 10^{-2}\) & +0.218\% \\ NFR-UV & 1.0128\(\times 10^{-4}\) & 1.0294\(\times 10^{-4}\) & +1.605\% \\ NFR-PV & 8.3283\(\times 10^{-5}\) & 8.6726\(\times 10^{-5}\) & +3.342\% \\ \hline \end{tabular}
\end{table}
Table 7. Overall results of A/B tests on _Guess-you-like_ page recommendation.
\begin{table}
\begin{tabular}{c c c c c} \hline Category & Metric & w/o NEON & with NEON & Improvement \\ \hline \multirow{3}{*}{Food} & CTR & 1.8298\(\times 10^{-2}\) & 1.8292\(\times 10^{-2}\) & -0.034\% \\ & CVR & 2.8841\(\times 10^{-1}\) & 2.8987\(\times 10^{-1}\) & +0.506\% \\ & CTCVR & 5.2774\(\times 10^{-3}\) & 5.3023\(\times 10^{-3}\) & +0.472\% \\ \hline \multirow{3}{*}{Beauty} & CTR & 1.8769\(\times 10^{-2}\) & 1.8684\(\times 10^{-2}\) & -0.449\% \\ & CVR & 1.2583\(\times 10^{-3}\) & 1.2889\(\times 10^{-3}\) & +2.427\% \\ & CTCVR & 2.3616\(\times 10^{-4}\) & 2.4081\(\times 10^{-4}\) & +1.967\% \\ \hline \multirow{3}{*}{Kids} & CTR & 1.7717\(\times 10^{-2}\) & 1.7874\(\times 10^{-2}\) & +0.890\% \\ & CVR & 7.8825\(\times 10^{-3}\) & 8.5701\(\times 10^{-3}\) & +8.723\% \\ & CTCVR & 1.3965\(\times 10^{-4}\) & 1.5319\(\times 10^{-4}\) & +9.691\% \\ \hline \multirow{3}{*}{Hotel} & CTR & 1.6567\(\times 10^{-2}\) & 1.6469\(\times 10^{-2}\) & -0.595\% \\ & CVR & 4.1454\(\times 10^{-2}\) & 4.2923\(\times 10^{-2}\) & +3.546\% \\ & CTCVR & 6.8678\(\times 10^{-4}\) & 7.0690\(\times 10^{-4}\) & +2.930\% \\ \hline \end{tabular}
\end{table}
Table 8. The A/B tests results on several popular categories of life services in _Guess-you-like_ page.
significant improvement. CTR, CVR, CTCVR, and OV increase by 8.21%, 78.64%, 85.71%, and 95.92%, respectively.
* NCC increases by 74.26%, indicating that the online deployment of our system in message pop-up recommendations results in a 74.26% increase in the number of customers purchasing a life service that they have not previously acquired on the Meituan platform, via the message pop-up feature.
Our system is able to accurately detect the potential needs of users for a specific lifestyle service, even in instances where they have never previously purchased that service on the Meituan platform, and accordingly deliver targeted message pop-ups to them. This result strongly demonstrates the exceptional capability of our system in predicting users' potential needs.
In summary, the success achieved by our NEON system on three downstream recommendation tasks proves its effectiveness on fine-grained need prediction, needs-meeting way prediction, and potential need prediction, respectively. The significant performance gain in real applications can lead to huge benefits.
## 6. Related Work
As discussed above, in this work we explore the problem of living needs prediction, defined as predicting the specific living **needs** of a user given the **spatiotemporal** context. Thus, there are two closely related research topics: demand forecasting and spatiotemporal activity prediction.
### Demand Forecasting
Demand forecasting aims at predicting the quantity of a product or service that consumers will purchase. It helps in making informed decisions on inventory management, production scheduling, pricing strategy, etc. The problem of demand forecasting is broad and multifaceted, affecting many different industries, including restaurant (Han et al., 2015), manufacturing (Shen et al., 2016), retail (Shen et al., 2016), tourism (Shen et al., 2016), energy (Garshan et al., 2016), transportation (Shen et al., 2016), etc.
To address the problem of demand forecasting, researchers have proposed various methods which can be broadly classified into three categories: statistical models (Han et al., 2015; Han et al., 2015; Han et al., 2015; Li et al., 2016; Wang et al., 2016), machine learning models (Han et al., 2015; Li et al., 2016; Wang et al., 2016; Li et al., 2016), and deep learning models (Han et al., 2015; Han et al., 2015; Li et al., 2016; Wang et al., 2016). Statistical models, such as exponential smoothing (Wang et al., 2016), are well-suited for long-term demand forecasting as they are based on historical trends and patterns. However, they are not adept at handling variations or outliers in the data, making them unsuitable for volatile or short-term demand forecasting. On the other hand, machine learning models for demand forecasting, such as Random Forest based models (Han et al., 2015; Li et al., 2016), are efficient at short-term demand forecasting, but their performance drops when it comes to long-term forecasting. Deep learning models such as LSTM based models (Li et al., 2016) and GAN based models (Li et al., 2016) have the ability to capture complex patterns and dependencies in the data, making them suitable for both short-term and long-term demand forecasting. However, they require a large amount of data to work well.
Existing demand forecasting methods cannot handle the problem of living needs prediction. These methods focus on the overall demand for a particular product or service in a market, but in this work, we aim at predicting the need of a specific consumer. What's more, demand forecasting methods predict demand over horizons of several months or years, while in this work we predict a user's need at a specific time and location.
### Spatiotemporal Activity Prediction
Spatiotemporal activity prediction aims to predict the activity of a user at a given time and location. Previous works have employed various methods to perform spatiotemporal activity prediction. One popular approach is to build a tensor using historical data and then conduct tensor factorization to learn intrinsic association (Fan et al., 2016; Li et al., 2016; Wang et al., 2016). For example, Fan _et al._(Fan et al., 2016) propose to integrate tensor factorization with transfer learning for online activity prediction. Additionally, WDGTC (Fan et al., 2016) proposes a low-rank tensor decomposition and completion framework for passenger flow prediction by introducing L1-norm and Graph Laplacian penalties. Recently, researchers have introduced Graph Convolutional Networks (Hu et al., 2015) (GCNs) to achieve high performance in spatiotemporal activity prediction. For example, SA-GCN (Wang et al., 2016) develops a Graph Convolutional Network with meta path-based objective function for App-usage prediction task. Furthermore, DisenHCN (Hu et al., 2015) utilizes a heterogeneous hypergraph to model fine-grained user similarities, resulting in significant performance gains.
However, existing works on spatiotemporal activity prediction focus on predicting the specific activities of people, while our work focuses on the general living needs which are the driving force behind specific consumption behaviors. What's more, these works mainly focus on either online or offline activities, but in our work, we predict living needs that can be satisfied both in store (offline) and via delivery (online) by different kinds of life services, which is beyond the capabilities of existing methods.
## 7. Conclusion and Future Work
In this work, we approach the new problem of living needs prediction, which is critical in life services platforms. We present the NEON system in Meituan, consisting of three phases, feature mining, feature fusion, and multitask prediction. Large-scale online A/B testing in three downstream applications, along with extensive offline evaluation, strongly confirm the effectiveness of our system. As for future work, we plan to test NEON's performance in more downstream applications.
## Acknowledgement
This work is supported in part by National Key Research and Development Program of China under 2022YFB3104702. This work is supported in part by National Natural Science Foundation of China under 62272262, 61971267, and 61972223. This work is supported in part by a grant from the Guoqiang Institute, Tsinghua University under 2021GQG1005. This work is supported in part by Beijing National Research Center for Information Science and Technology. This work is also supported by Meituan.
\begin{table}
\begin{tabular}{l l l l l l} \hline Metric & CTR & CVR & CTCVR & OV & NCC \\ \hline Improv. & +8.21\% & +78.64\% & +85.71\% & +95.92\% & +74.26\% \\ \hline \end{tabular}
\end{table}
Table 9. Results of A/B tests on message pop-up recommendation. |
2306.17673 | Properties of the $T_{cc}(3875)^+$ and $T_{\bar c\bar c}(3875)^-$ (and
their heavy-quark spin partners) in nuclear matter | We discuss the modification of the properties of the tetraquark-like
$T_{cc}(3875)^+$ and $T_{\bar c\bar c}(3875)^-$ states in dense nuclear matter.
We consider the $T_{cc}^+$ and $T_{\bar c\bar c}^-$ in vacuum as purely
isoscalar $D^{\ast} D$ and $\overline{D}{}^{\ast} \overline{D}$ $S$-wave bound
states, respectively, dynamically generated from a heavy-quark effective
interaction between the charmed mesons. We compute the $D$, $\overline{D}$,
$D^*$, and $\overline{D}{}^{*}$ spectral functions embedded in a nuclear medium
and use them to determine the corresponding $T_{cc}^+$ and $T_{\bar c\bar c}^-$
self energies and spectral functions. We find important modifications of the
$D^{\ast} D$ and $\overline{D}{}^{\ast} \overline{D}$ scattering amplitudes and
of the pole position of these exotic states already for $\rho_0/2$, with
$\rho_0$ the normal nuclear density. We also discuss the dependence of these
results on the $D^{\ast} D$ ($\overline{D}{}^{\ast} \overline{D}$) molecular
component in the $T_{cc}^+$ ($T_{\bar c\bar c}^-$ ) wave-function. Owing to the
different nature of the $D^{(*)}N$ and $\overline{D}{}^{(*)}N$ interactions, we
find characteristic changes of the in-medium properties of the $T_{cc}(3875)^+$
and $T_{\bar c\bar c}(3875)^-$, which become increasingly visible as the
density increases. The experimental confirmation of the found distinctive
density-pattern will give support to the molecular picture of these
tetraquark-like states, since in the case they were colourless compact quark
structures the density behaviour of their respective nuclear medium spectral
functions would likely be similar. Finally, we perform similar analyses for the
isoscalar $J^P=1^+$ heavy-quark spin symmetry partners of the $T_{cc}^+$
($T_{cc}^{*+}$) and the $T_{\bar c\bar c}^-$ ($T_{\bar c\bar c}^{*-}$) by
considering the $D^{*0}D^{*+}$ and $\overline{D}{}^{*0} D^{*-}$ scattering
$T-$matrices. | Victor Montesinos, Miguel Albaladejo, Juan Nieves, Laura Tolos | 2023-06-30T14:03:13Z | http://arxiv.org/abs/2306.17673v2 | Properties of the \(T_{cc}(3875)^{+}\) and \(T_{\overline{cc}}(3875)^{-}\) (and their heavy-quark spin partners) in nuclear matter
###### Abstract
We discuss the modification of the properties of the tetraquark-like \(T_{cc}(3875)^{+}\) and \(T_{\overline{cc}}(3875)^{-}\) states in dense nuclear matter. We consider the \(T_{cc}^{+}\) and \(T_{\overline{cc}}^{-}\) in vacuum as purely isoscalar \(D^{*}D\) and \(\overline{D}^{*}\overline{D}\)\(S\)-wave bound states, respectively, dynamically generated from a heavy-quark effective interaction between the charmed mesons. We compute the \(D\), \(\overline{D}\), \(D^{*}\), and \(\overline{D}^{*}\) spectral functions embedded in a nuclear medium and use them to determine the corresponding \(T_{cc}^{+}\) and \(T_{\overline{cc}}^{-}\) self energies and spectral functions. We find important modifications of the \(D^{*}D\) and \(\overline{D}^{*}\overline{D}\) scattering amplitudes and of the pole position of these exotic states already for \(\rho_{0}/2\), with \(\rho_{0}\) the normal nuclear density. We also discuss the dependence of these results on the \(D^{*}D\) (\(\overline{D}^{*}\overline{D}\)) molecular component in the \(T_{cc}^{+}\) (\(T_{\overline{cc}}^{-}\) ) wave-function. Owing to the different nature of the \(D^{(*)}\) and \(\overline{D}^{(*)}N\) interactions, we find characteristic changes of the in-medium properties of the \(T_{cc}(3875)^{+}\) and \(T_{\overline{cc}}(3875)^{-}\), which become increasingly visible as the density increases. The experimental confirmation of the found distinctive density-pattern will give support to the existence of molecular components in these tetraquark-like states, since in the case they were mostly colorless compact quark structures (\(cc\overline{\ell}\bar{\ell}\) and \(\bar{c}\bar{c}\ell\ell\), with \(\ell=u,d\)), the density behaviors of the \(T_{cc}(3875)^{+}\) and \(T_{\overline{cc}}(3875)^{-}\) nuclear medium spectral functions, though different, would not likely be the same as those found in this work for molecular scenarios. Finally, we perform similar analyses for the isoscalar \(J^{P}=1^{+}\) heavy-quark spin symmetry partners of the \(T_{cc}^{+}\) (\(T_{cc}^{*+}\)) and the \(T_{\overline{cc}}^{-}\) (\(T_{\overline{cc}}^{*-}\)) by considering the \(D^{*0}D^{*+}\) and \(\overline{D}^{*0}D^{*-}\) scattering \(T-\)matrices.
## I Introduction
Over the past decades a plethora of new hadronic states has been experimentally observed. More precisely, the spectroscopy of charmonium-like states, the so-called \(XYZ\), has received an incredible boost, with the \(X(3872)\)[1] playing a prominent and pioneering role. Also the discovery of the \(P_{c}\) and \(P_{cs}\) baryonic states by LHCb [2; 3; 4; 5; 6] and, more recently, of mesons such as \(T_{cs}(2900)\)[7; 8] and \(T_{cc}(3875)^{+}\)[9; 10] has captured the attention of the hadronic community, as different theoretical interpretations of their nature have been postulated: they can be understood as multiquark states (tetraquarks or pentaquarks), hadroquarkonia states, hadronic molecules, cusps due to kinematic effects, or a mixture of different components (see, for example, the recent reviews [11; 12; 13; 14; 15; 16; 17]).
In particular, the interest in the properties and nature of the \(T_{cc}(3875)^{+}\) state is growing by the day within the hadronic community. This very narrow state is observed in the \(D^{0}D^{0}\pi^{+}\) mass distribution, with a mass of \(m_{\rm thr}+\delta m_{\rm exp}\), where \(m_{\rm thr}=3875.09\) MeV is the \(D^{*+}D^{0}\) threshold and \(\delta m_{\rm exp}=-360\pm 40^{+4}_{-0}\) keV, and a width \(\Gamma=48\pm 2^{+0}_{-14}\) keV [10]. Among the possible interpretations, the molecular picture [18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35] is supported by its closeness to the \(D^{0}D^{*+}\) and \(D^{+}D^{*0}\) thresholds, whereas the tetraquark interpretation had been put forward [36; 37] even before its discovery. However, the proximity of the state to the \(D^{0}D^{*+}\) and \(D^{+}D^{*0}\) thresholds makes it necessary to consider the hadronic degrees of freedom for the analysis of the experimental data [15].
More information on this state in different experimental setups is therefore very welcome in order to further learn about its nature and properties. Recently, the femtoscopic correlation functions for the \(D^{0}D^{*+}\) and \(D^{+}D^{*0}\) channels in heavy-ion collisions (HICs) have become of major interest. Work in that direction has recently been performed for the \(T_{cc}(3875)^{+}\) state in Ref. [38], using coordinate space wave functions and potentials, or even more recently in Ref. [39] using momentum space interactions.
Another possible way to gain some insight about the nature of the \(T_{cc}(3875)^{+}\) is to analyze its behavior under the extreme density and/or temperature conditions present in HICs at RHIC, LHC or FAIR energies. Indeed, analyses of that type have been performed, for example, for the \(X(3872)\) state. Using the coalescence model, the ExHIC collaboration [40; 41; 42] showed that considering the \(X(3872)\) as a molecular state implies a production yield much larger than for the tetraquark configuration, in particular if one also takes into account the evolution in the hadronic phase [43; 44], due to the fact that the production and absorption cross sections in HICs are expected to be larger for a molecular state. Moreover, the nature of the \(X(3872)\) in HICs has been also studied with instantaneous coalescence models [45; 46], a statistical hadronization approach [47; 48], or using a thermal-rate equation scheme [48]. However, these analyses on the production of \(X(3872)\) in HICs did not take into account the possible in-medium modification of the \(X(3872)\) in the hadronic phase. The inclusion of these modifications has been performed in posterior studies of the \(X(3872)\) in a hot meson bath [49; 50] and in a dense nuclear medium [51]. The in-medium mass shifts of heavy mesons such as the \(X(3872)\) and \(Z_{c}(3900)\) have also been studied by means of sum rules [52; 53].
In this work we address the behavior of \(T_{cc}(3875)^{+}\) in a nuclear environment, with the objective of analyzing the finite-density regime of HICs in experiments such as CBM at FAIR. We follow our previous work on the \(X(3872)\) in dense nuclear matter [51]. We start from a picture of the \(T_{cc}(3875)^{+}\) generated as a bound state from the leading-order interaction of the \(D\) and \(D^{*}\) mesons, constrained by heavy-quark spin symmetry (HQSS). HQSS also allows us to have access to the \(D^{*}D^{*}\) partner of the \(T_{cc}(3875)^{+}\), which we name the \(T_{cc}^{*}(4016)^{+}\), and has been predicted by several theoretical groups [23; 24; 54]. We then implement the changes of the \(D\) and \(D^{*}\) propagators in nuclear matter in order to obtain the in-medium \(T_{cc}(3875)^{+}\) and \(T_{cc}^{*}(4016)^{+}\) scattering amplitudes and spectral functions. Later on, we consider generalizations of the \(DD^{*}\) and \(D^{*}D^{*}\) interactions, allowing for scenarios in which the \(T_{cc}(3875)^{+}\) and \(T_{cc}^{*}(4016)^{+}\) are not purely molecular states. In this manner, we can extract the modification of the mass and the width of these states in nuclear matter for different scenarios, in view of the forthcoming results on HICs at CBM (FAIR).
In addition, we also pay attention to the \(T_{\bar{c}c}(3875)^{-}\) and \(T_{\bar{c}c}^{*}(4016)^{-}\), antiparticles of the \(T_{cc}(3875)^{+}\) and \(T_{cc}^{*}(4016)^{+}\), and whose properties in vacuum are trivially related to those of the \(T_{cc}^{(\epsilon)+}\) by the charge-conjugation symmetry. If these exotic states had a predominant molecular origin, the nuclear environment would induce different modifications to charmed \(D^{(*)}D^{*}\) than to anti-charmed \(\overline{D}^{(*)}\overline{D}^{*}\) pairs of interacting mesons. This is due to the different strength of the \(D^{(*)}N\) and \(\overline{D}^{(*)}N\) interactions, which should lead to visible changes among the medium-properties of the \(T_{cc}^{(*)+}\) and \(T_{\bar{c}c}^{(*)-}\). These differences become larger as the density increases. The nuclear medium breaks the particle-antiparticle symmetry leading to quite different \(D^{(*)}\) and \(\overline{D}^{(*)}\) spectral functions. This is similar to what occurs in the strange sector when one studies the properties of kaons and anti-kaons embedded in dense matter. Kaons (\(K^{0},K^{+}\)) contain a \(\bar{s}\) antiquark and therefore their strong interaction with nucleons cannot produce hyperons, which however can be excited by \(\overline{K}^{0}\) and \(K^{-}\) anti-kaons that provide the negative unit of strangeness (quark \(s\)) needed to conserve flavor.1 In the case of \(D^{(*)}N\) interactions, there exists the possibility of exciting the odd-parity spin \(J=1/2\) and \(3/2\)\(\Lambda_{c}(2595)\) and \(\Lambda_{c}(2625)\) resonances [56; 57], while in the \(\overline{D}^{(*)}N\) case, only exotic pentaquarks with negative charm quantum number could be excited [58].
Footnote 1: Strangeness measurements exploiting the distinct \(K^{0}\) and \(\overline{K}^{0}\) strong interactions on nucleons have been employed to derive new Bell’s inequalities for entangled \(K^{0}\overline{K}^{0}\) pairs produced in \(\phi\)–decays [55]. Indeed, if a dense piece of ordinary (nucleonic) matter is inserted along the neutral kaon trajectory, by detecting the products from strangeness conserving strong reactions, the incoming state is projected either into \(K^{0}\) (\(K^{0}p\to K^{+}n\)) or into \(\overline{K}^{0}\) (\(\overline{K}^{0}p\to\Lambda\pi^{+}\), \(\overline{K}^{0}n\to\Lambda\pi^{0}\), \(\overline{K}^{0}n\to pK^{-}\)). Due to the different size of the corresponding cross sections, the slab of nuclear matter acts as a \(K^{0}\) regenerator since the probability of disappearance of the neutral antikaon \(\overline{K}^{0}\) is significantly larger.
However, if the \(T_{cc}(3875)^{+}\) and \(T_{\bar{c}c}(3875)^{-}\) were colorless compact tetraquark structures (\(cc\bar{\ell}\bar{\ell}\) and \(\bar{c}\bar{c}\ell\ell\), with \(\ell=u,d\)), the density behavior of their nuclear medium spectral functions would presumably be different. In this case, the interaction with the medium depends on whether the tetraquark state is composed of two light quarks or two light antiquarks, as they will behave differently in the presence of density. In fact, within the quark model picture, the interaction with the medium would be at the quark level via different color-spin interactions between a \(T_{cc}(3875)^{+}\) and a nucleon or a \(T_{\bar{c}c}(3875)^{-}\) and a nucleon. Hence, the study of the asymmetrical density-pattern of the properties of the \(T_{cc}(3875)^{+}\) and \(T_{\bar{c}c}(3875)^{-}\) inside of a nuclear environment could become an interesting additional tool to disentangle the structure (compact or molecular) of the exotic \(T_{cc}(3875)^{+}\). This is a novel and important result of this work, which did not apply to our previous study of the \(X(3872)\) in nuclear matter carried out in Ref. [51], because the latter state has well-defined charge conjugation.2
Footnote 2: Note that the behavior of both \(T_{cc}(3875)^{+}\) and \(T_{\bar{c}c}(3875)^{-}\) in a hot pion bath will be identical since \(D^{(*)}\pi\) and \(\overline{D}^{(*)}\pi\) interactions are equal in the SU(2) limit.
The manuscript is organized as follows. In Sec. II we present the \(D^{*}D\) and \(\overline{D}^{*}\overline{D}\) scattering amplitudes and the dynamical generation of the \(T_{cc}(3875)^{+}\), \(T_{\bar{c}c}(3875)^{-}\) and their heavy-quark spin partners in vacuum and finite density. We start by discussing the \(T_{cc}(3875)^{+}\) and \(T_{\bar{c}c}(3875)^{-}\) in the vacuum (Subsec. II.1) and embedded in isospin-symmetric nuclear matter (Subsec. II.2). In Subsec. II.3, we show the in-medium pseudo-scalar and vector heavy-light meson spectral functions, which determine the density modifications of the \(D^{*}D\) and \(\overline{D}^{*}\overline{D}\) amplitudes. We also introduce
the \(T_{cc}(3875)^{+}\) and \(T_{\bar{c}c}(3875)^{-}\) self-energies both in vacuum and in nuclear matter (Subsec. II.4), we discuss the type of interaction kernels to be used in our work (Subsec. II.5), and we connect to the pole positions in the nuclear medium (Subsec. II.6). In Subsec. II.7 we introduce the heavy-quark spin partners of the \(T_{cc}(3875)^{+}\) and \(T_{\bar{c}c}(3875)^{-}\), that is, \(T_{cc}^{*}(4016)^{+}\) and \(T_{\bar{c}c}^{*}(4016)^{-}\). In Sec. III we present our results for \(T_{cc}(3875)^{+}\) (Subsec. III.1), \(T_{\bar{c}c}(3875)^{-}\) (Subsec. III.2), as well as \(T_{cc}^{*}(4016)^{+}\) and \(T_{\bar{c}c}^{*}(4016)^{-}\) (Subsec. III.3). The conclusions are given in Sec. IV.
## II Formalism
In this work we closely follow Ref. [51], in which the in-medium modifications of \(D^{*}\overline{D}\) scattering and the \(X(3872)\) properties are described. We briefly summarize here this formalism focusing on the appropriate modifications.
### Vacuum \(D^{*}D\) and \(\overline{D}^{*}\overline{D}\) scattering amplitudes
We start by considering the \(T_{cc}(3875)^{+}\) as a \(D^{*}D\) state with isospin and spin-parity quantum numbers \(I(J^{P})=0(1^{+})\). The \(T_{cc}^{+}\) is thus assumed to be an isoscalar, with a minor isospin breaking from the different masses of the channels involved. This is consistent with the experimental analysis, where no trace of a peak is seen in the partner isospin \(I=1\) channel \(D^{+}D^{*+}\)[9; 10; 23]. We consider the particle basis \(\left\{D^{*+}D^{0},\,D^{*0}D^{+}\right\}\) and a heavy-quark effective field theory (HQET) interaction diagonal in the isospin basis. We only take into account the \(S\)-wave part of the interaction since the \(T_{cc}(3875)^{+}\) is located almost at the \(DD^{*}\) threshold. In the particle basis, the interaction reads
\[\mathcal{V}=\frac{1}{2}\begin{pmatrix}V_{0}+V_{1}&V_{1}-V_{0}\\ V_{1}-V_{0}&V_{0}+V_{1}\end{pmatrix}, \tag{1}\]
where \(V_{0}\) and \(V_{1}\) are HQET contact interactions in the isospin 0 and isospin 1 channels, respectively. We have used the isospin convention \(\bar{u}=|1/2,-1/2\rangle\) and \(\bar{d}=-|1/2,+1/2\rangle\), which induces \(D^{0}=|1/2,-1/2\rangle\) and \(D^{+}=-|1/2,+1/2\rangle\). The potentials \(V_{0}\) and \(V_{1}\) will be, in general, functions of \(s=E^{2}\), the square of the total energy of the two-meson pair in the center of mass (c.m.) frame.
The unitary \(T-\)matrix, denoted as \(\mathcal{T}(s)\), is obtained by solving the Bethe-Salpeter equation (BSE) in the so-called on-shell approximation [59]:
\[\mathcal{T}^{-1}(s)=\mathcal{V}^{-1}-\mathcal{G}(s)\,, \tag{2}\]
where the diagonal \(\mathcal{G}(s)\) matrix is constructed out of the two-meson loop functions,
\[\mathcal{G}(s)=\begin{pmatrix}G_{D^{*+}D^{0}}(s)&0\\ 0&G_{D^{*0}D^{+}}(s)\end{pmatrix}\,, \tag{3}\]
and where
\[G_{UW}(s)=i\int\frac{d^{4}q}{(2\pi)^{4}}\Delta_{U}(P-q)\Delta_{W}(q),\qquad \Delta_{Y}(q)=\frac{1}{(q^{0})^{2}-\vec{q}^{\;2}-m_{Y}^{2}+i\varepsilon}, \tag{4}\]
with \(\Delta_{Y}\) the propagator of a certain \(Y\) meson of mass \(m_{Y}\) in the free space3 and \(P^{2}=s\). We will need to introduce an ultraviolet cutoff to regularize the \(d^{3}q\) integration and render the two-point function \(G_{UW}\) finite.
Footnote 3: For simplicity, we neglect the widths of the \(D^{*}\) and \(\overline{D}^{*}\) mesons in the vacuum.
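For orientation, the free-space loop function of Eq. (4) can be evaluated numerically after carrying out the \(q^{0}\) integration analytically in the c.m. frame; the Python sketch below uses the resulting one-dimensional integral with a sharp three-momentum cutoff, and the meson masses in the example are only approximate isospin-averaged values.

```python
import numpy as np
from scipy.integrate import quad

def loop_function(E, m1, m2, cutoff=0.7, eps=1e-4):
    """Two-meson loop G(s=E^2) of Eq. (4) in the c.m. frame, with a sharp
    three-momentum cutoff (all quantities in GeV-based units)."""
    def integrand(q):
        w1 = np.sqrt(m1**2 + q**2)
        w2 = np.sqrt(m2**2 + q**2)
        return q**2 * (w1 + w2) / (2.0 * w1 * w2 * (E**2 - (w1 + w2)**2 + 1j * eps))
    re = quad(lambda q: integrand(q).real, 0.0, cutoff)[0]
    im = quad(lambda q: integrand(q).imag, 0.0, cutoff)[0]
    return (re + 1j * im) / (2.0 * np.pi**2)

# Example: D*D loop slightly below threshold (approximate masses in GeV).
mD, mDstar = 1.867, 2.009
print(loop_function(mD + mDstar - 0.0008, mDstar, mD))
```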
The formalism for the \(T_{\bar{c}c}(3875)^{-}\) state runs in parallel to that of the \(T_{cc}(3875)^{+}\), making use of invariance under charge-conjugation symmetry in the free-space. Thus, the \(\overline{D}^{*}\overline{D}\) unitary \(T-\)matrix is given also by Eq.(2) taking \(\left\{D^{*-}\overline{D}^{0},\overline{D}^{*0}D^{-}\right\}\) now as the particle basis.
Isospin breaking effects in the unitary \(T-\)matrices are generated by the kinetic terms in the two-meson loop functions, which disappear when the mass splittings between mesons with different charges are neglected, i.e. \(m_{D^{(*)+}}=m_{D^{(*)0}}=m_{D^{(*)-}}=m_{\overline{D}^{(*)0}}\equiv m_{D^{(*)}}\). In that limit, \(\mathcal{G}(s)\) becomes proportional to the identity matrix, and the \(T-\)matrix is diagonal in the isospin basis.
### Isoscalar \(D^{*}D\) and \(\overline{D}^{*}\overline{D}\) scattering amplitudes in isospin-symmetric nuclear matter
For simplicity, we will work here in the isospin limit and will only consider the modifications of the \(T-\)amplitudes due to the changes of the two-particle loop-function \(G_{UW}\) induced by the self-energies \(\Pi_{Y}(q^{0},\,\vec{q}\,;\,\rho)\) that the \(D^{(*)}\) and \(\overline{D}^{(*)}\) will acquire as a result of their interactions with the nucleons of the medium. The self-energies vanish in the vacuum (\(\rho=0\)), but they produce significant changes in the dispersion relations of the mesons inside of nuclear matter of density \(\rho\).
Indeed, when the mesons are embedded in the nuclear medium, their spectral functions depart from pure delta functions, with the position of the quasi-particle peaks being displaced with respect to the free mass position, and becoming broader as the density increases. Moreover, richer structures are found produced by several resonant-hole excitations that appear around the quasi-particle peaks [60; 61; 62].
The meson spectral functions, \(S_{Y=D,\overline{D},D^{*},\overline{D}^{*}}\), are defined through the Kallen-Lehmann representation of the propagators,
\[\Delta_{Y}(q\,;\rho)=\frac{1}{(q^{0})^{2}-\omega_{Y}^{2}(\vec{q}\,^{2})-\Pi_{ Y}(q^{0},\vec{q}\,;\,\rho)}=\int_{0}^{\infty}d\omega\left(\frac{S_{Y}(\omega,| \vec{q}\,|)}{q^{0}-\omega+i\varepsilon}-\frac{S_{\bar{Y}}(\omega,|\vec{q}\,|) }{q^{0}+\omega-i\varepsilon}\right) \tag{5}\]
with \(\omega_{Y}(\vec{q}\,^{2})=\sqrt{m_{Y}^{2}+\vec{q}\,^{2}}\). From the above equation, it follows that for \(q^{0}>0\)
\[S_{D^{(*)},\overline{D}^{(*)}}(q^{0},\,\vec{q}\,;\,\rho)=-\frac{1}{\pi}\text{ Im}\;\Delta_{D^{(*)},\overline{D}^{(*)}}(q^{0},\,\vec{q}\,;\,\rho)=-\text{Im}\Pi_{D^{(*)}, \overline{D}^{(*)}}(q^{0},\vec{q}\,;\,\rho)\,\frac{\left|\Delta_{D^{(*)}, \overline{D}^{(*)}}(q^{0},\vec{q}\,;\,\rho)\right|^{2}}{\pi} \tag{6}\]
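Given a model for the self-energy \(\Pi_{Y}(q^{0},\vec{q}\,;\rho)\), Eqs. (5) and (6) translate directly into code; in the sketch below the self-energy is a user-supplied complex-valued callable, and the toy values are purely illustrative.

```python
import numpy as np

def propagator(q0, q, m, Pi):
    """In-medium meson propagator of Eq. (5); Pi(q0, q) returns the complex
    self-energy in GeV^2."""
    return 1.0 / (q0**2 - (m**2 + q**2) - Pi(q0, q))

def spectral_function(q0, q, m, Pi):
    """Eq. (6): S = -Im(Delta)/pi, valid for q0 > 0."""
    return -propagator(q0, q, m, Pi).imag / np.pi

# Toy self-energy: constant attraction and a small constant width term.
Pi_toy = lambda q0, q: -0.05 - 0.01j
print(spectral_function(1.9, 0.0, 1.867, Pi_toy))
```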
The \(S-\)wave meson self-energies and the spectral functions can be found, for example, in Ref. [51]. These depend on \(q^{0}\) and on the modulus of \(\vec{q}\), but not on any direction, since the spherically symmetric nuclear medium is taken at rest in the laboratory frame.4 In the isospin limit, the isoscalar \(D^{*}D\) [\(T(s\,;\,\rho)\)] and \(\overline{D}^{*}\overline{D}\) [\(\overline{T}(s\,;\,\rho)\)] scattering amplitudes inside of the nuclear environment are obtained from the solution of the corresponding single-channel BSE in the on-shell approximation
Footnote 4: From now on, we also consider the center of mass of the meson pair system to be at rest in the laboratory frame, and take \(\vec{P}=0\), so that \(P^{2}=(P^{0})^{2}=s\).
\[T^{-1}(s\,;\,\rho) =V_{0}^{-1}(s)-\Sigma(s\,;\,\rho) \tag{7a}\] \[\overline{T}^{-1}(s\,;\,\rho) =V_{0}^{-1}(s)-\overline{\Sigma}(s\,;\,\rho) \tag{7b}\]
where \(\Sigma(s\,;\,\rho)\) and \(\overline{\Sigma}(s\,;\,\rho)\) are the density dependent \(D^{*}D\) (\(G_{D^{*}D}\)) and \(\overline{D}^{*}\overline{D}\) (\(G_{\overline{D}^{*}\overline{D}}\)) loop functions, respectively, calculated using Eq. (4) with the nuclear dressed meson propagators \(\Delta_{Y}(q\,;\rho)\) introduced in Eq. (5). From the spectral representation of the meson propagators, it follows for \(E>0\)[51]
\[\Sigma(s=E^{2}\,;\,\rho) =\frac{1}{2\pi^{2}}\left\{\mathcal{P}\int_{0}^{\infty}d\Omega \left(\frac{f_{D^{*}D}(\Omega\,;\,\rho)}{E-\Omega+i\varepsilon}-\frac{f_{ \overline{D}^{*}\overline{D}}(\Omega\,;\,\rho)}{E+\Omega-i\varepsilon} \right)-i\pi f_{D^{*}D}(E\,;\,\rho)\right\} \tag{8a}\] \[\overline{\Sigma}(s=E^{2}\,;\,\rho) =\frac{1}{2\pi^{2}}\left\{\mathcal{P}\int_{0}^{\infty}d\Omega \left(\frac{f_{\overline{D}^{*}\overline{D}}(\Omega\,;\,\rho)}{E-\Omega+i \varepsilon}-\frac{f_{D^{*}D}(\Omega\,;\,\rho)}{E+\Omega-i\varepsilon} \right)-i\pi f_{\overline{D}^{*}\overline{D}}(E\,;\,\rho)\right\} \tag{8b}\]
where \(\mathcal{P}\) stands for the principal value of the integral and in addition
\[f_{UW}(\Omega\,;\,\rho)=\int_{0}^{\Lambda}dq\,q^{2}\int_{0}^{\Omega}d\omega\; S_{U}\left(\omega,\,|\,\vec{q}\,|;\,\rho\right)S_{W}\left(\Omega-\omega,\,|\, \vec{q}\,|;\,\rho\right). \tag{9}\]
In Eq. (9) we have included a sharp cutoff \(\Lambda=0.7\,\)GeV in the integral over momentum to regularize the ultraviolet divergence, as explained in Subsec. II.1. In the free space, the spectral functions of charmed \(D^{(*)}\) and anti-charmed \(\overline{D}^{(*)}\) mesons are equal and reduce to \(\delta(q^{2}-m_{Y}^{2})\). Hence \(\Sigma(s\,;\,\rho=0)=\overline{\Sigma}(s\,;\,\rho=0)\), from which it follows that the free-space masses and widths of the \(T_{cc}^{+}\) and \(T_{\overline{cc}}^{-}\) are the same, as required by charge-conjugation symmetry. However, in a nuclear environment \(S_{D^{*}}\neq S_{\overline{D}^{*}}\), since the charmed and anticharmed meson-nucleon interactions are quite different, as discussed in the Introduction.
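Once the in-medium spectral functions are available as (vectorized) numerical callables, the overlap integral of Eq. (9) can be evaluated on a simple grid, as in the sketch below; the grid sizes are illustrative.

```python
import numpy as np

def f_UW(Omega, S_U, S_W, cutoff=0.7, nq=200, nw=200):
    """Eq. (9): sharp-cutoff momentum integral of the energy convolution of the
    spectral functions S_U(omega, q) and S_W(Omega - omega, q)."""
    q = np.linspace(1e-3, cutoff, nq)
    w = np.linspace(0.0, Omega, nw)
    Q, W = np.meshgrid(q, w, indexing="ij")
    integrand = Q**2 * S_U(W, Q) * S_W(Omega - W, Q)
    inner = np.trapz(integrand, w, axis=1)   # energy convolution
    return np.trapz(inner, q)                # dq q^2 integral
```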
### Pseudo-scalar and vector heavy-light meson self-energies and spectral functions
The meson self-energies are computed following a unitarized self-consistent procedure in coupled channels, as described in Refs. [60; 61] for the \(D^{(*)}\) mesons and in Ref. [62] for the \(\overline{D}^{(*)}\) mesons (see also Ref. [63] for a review). The needed \(D^{(*)}N\) and \(\overline{D}^{(*)}N\) \(T\)-matrices in the free space are obtained by solving a coupled-channels BSE defined by an \(S\)-wave transition meson-baryon kernel, in the charm \(C=\pm 1\) sectors, derived from an effective Lagrangian that implements HQSS [56; 57; 58]. The effective Lagrangian accounts for the lowest-lying pseudoscalar and vector mesons and the \(1/2^{+}\) and \(3/2^{+}\) baryons; it reduces to the Weinberg-Tomozawa interaction term in the sector where Goldstone bosons are involved, and it incorporates HQSS in the sector where (anti-)charm quarks participate.
The whole theoretical scheme, both in the vacuum and in the nuclear medium, is briefly summarized in Section IID of Ref. [51], where some details can be found. We will only highlight here some of the results found in Refs. [60; 61; 62] for the in-medium \(D^{(*)}\) and \(\overline{D}^{(*)}\) spectral functions. They are plotted in Fig. 1 for zero momentum as a function of the (anti-)charmed meson energy \(E=q^{0}\) for three different densities, \(\rho/\rho_{0}=0.1\), \(0.5\), and \(1\), with \(\rho_{0}\) the normal nuclear density (\(\rho_{0}=0.17\,\text{fm}^{-3}\)).
The most prominent structure in all cases corresponds to the so-called quasi-particle peak, whose position (\(q^{0}=E_{\text{qp}}\)) is obtained from the self-consistent solution of
\[E_{\text{qp}}^{2}\left(\,\vec{q}\,\right)=\,\vec{q}^{\,\,2}+m_{Y=\left\{D^{(*)}, \,\overline{D}^{(*)}\right\}}^{2}+\text{Re}\,\,\Pi(E_{\text{qp}}(\,\vec{q}\,), \,\vec{q}\,). \tag{10}\]
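Eq. (10) is solved self-consistently; a minimal fixed-point iteration, with the real part of the self-energy supplied as a callable, could look as follows (the toy self-energy is an assumption for illustration).

```python
import numpy as np

def quasiparticle_energy(q, m, re_Pi, tol=1e-8, max_iter=200):
    """Solve E^2 = q^2 + m^2 + Re Pi(E, q), Eq. (10), by fixed-point iteration."""
    E = np.sqrt(m**2 + q**2)                 # free dispersion as starting point
    for _ in range(max_iter):
        E_new = np.sqrt(m**2 + q**2 + re_Pi(E, q))
        if abs(E_new - E) < tol:
            return E_new
        E = E_new
    return E

# Toy example: weakly attractive, energy-independent Re Pi (in GeV^2).
print(quasiparticle_energy(0.0, 1.867, lambda E, q: -0.05))
```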
In addition, these spectral functions show a much richer structure as a result of the presence of several resonance-hole excitations.
More precisely, the \(D\) meson spectral function is depicted in the upper left-hand side panel of Fig. 1. We observe that the \(D\) meson quasiparticle peak moves to lower energies with respect to the free mass with increasing density, as already shown in Ref. [60]. Furthermore, several resonance-hole states appear around the quasiparticle peak. On the one hand, the \(\Lambda_{c}(2556)N^{-1}\) and \(\Lambda_{c}(2595)N^{-1}\) excitations appear in the low-energy side of the \(D\) spectral function. On the other hand, the \(\Sigma_{c}^{*}N^{-1}\) state shows up on the right-hand side of the quasiparticle peak.
As for the \(D^{*}\) meson spectral function shown in the upper right-hand side panel, the quasiparticle peak moves to higher energies with density while fully mixing with the sub-threshold \(J=3/2\) \(\Lambda_{c}(2941)\) state. The mixing of \(J=1/2\) \(\Sigma_{c}(2868)N^{-1}\) and \(J=3/2\) \(\Sigma_{c}(2902)N^{-1}\) is seen on the left-hand side tail of the peak. We also observe other dynamically-generated resonance-hole states at lower and higher energies, as described in [60].
With regards to the \(\overline{D}\) and \(\overline{D}^{*}\) spectral functions, those are shown in the lower left-hand side panel and lower right-hand side one, respectively. The \(\overline{D}\) spectral function results from the self-energy of \(\overline{D}\), shown in Ref. [62]. The quasiparticle peak is located below the \(\overline{D}\) mass and also below the \(\Theta_{c}(2805)N^{-1}\) state. The \(C=-1\) pentaquark-like \(\Theta_{c}(2805)\) corresponds to a weakly bound state, seen in the \(I=0\), \(J=1/2\) amplitude that strongly couples to \(\overline{D}N\) and \(\overline{D}^{*}N\), although it has not been detected experimentally yet (see Ref. [58] for more details). Also, the upper energy tail of the \(\overline{D}\) spectral function shows \(I=1\) resonance-hole states. As for the \(\overline{D}^{*}\) spectral function, it depicts the contribution of several \(I=0\) and \(I=1\) resonant-hole states close to the quasiparticle peak, which is located slightly above to \(2\,\text{GeV}\). Those states are described in Ref. [58].
### \(T_{cc}(3875)^{+}\) [\(T_{\bar{c}\bar{c}}(3875)^{-}\)] self-energy in the free space and in the nuclear medium
As in Ref. [51], let us consider a bare \(\widehat{T}_{cc}^{+}\) field with bare mass \(\hat{m}\) and coupling \(\hat{g}\) to the \(D^{*}D\) meson pair. We perform the re-summation of the diagrams in Fig. 2, which account for the effects induced by the insertion of internal loops on the \(D^{*}D\) interaction driven by the exchange of the bare \(\widehat{T}_{cc}^{+}\) particle. In a first step the bare parameters are renormalized to obtain the known values of the physical mass (\(m_{0}\)) and \(D^{*}D\) coupling (\(g_{0}\)) of the \(T_{cc}^{+}\) in the vacuum. Next, we additionally take into account the renormalization of the heavy-light charmed mesons inside of the nuclear medium of density \(\rho\). The dressed \(T_{cc}^{+}\) propagator is determined by its self-energy, and it reads
\[\Delta_{T_{cc}^{+}}(p^{2};\,\rho)=\frac{i}{p^{2}-m_{0}^{2}-\Pi_{T_{cc}^{+}}(p^ {2};\,\rho)+i\varepsilon},\qquad\Pi_{T_{cc}^{+}}(p^{2};\,\rho)=\frac{g_{0}^{2} }{1+g_{0}^{2}\Sigma_{0}^{\prime}(m_{0}^{2})}\left[\Sigma(p^{2};\,\rho)-\Sigma _{0}(m_{0}^{2})\right]. \tag{11}\]
The in-medium pole position of the resonance \(m^{2}(\rho)\) and its density dependent coupling to the meson pair inside of the nuclear environment are given by [51]
\[m^{2}(\rho) =m_{0}^{2}+\frac{g_{0}^{2}}{1+g_{0}^{2}\Sigma_{0}^{\prime}(m_{0}^ {2})}\left[\Sigma[m^{2}(\rho);\,\rho]-\Sigma_{0}(m_{0}^{2})\right], \tag{12}\] \[g^{2}(\rho) =\frac{g_{0}^{2}}{1-g_{0}^{2}\left[\Sigma^{\prime}[m^{2}(\rho);\, \rho]-\Sigma_{0}^{\prime}(m_{0}^{2})\right]}, \tag{13}\]
Figure 2: Contributions to the self-energy of the \(T_{cc}^{+}\). The circles represent the bare coupling (\(\hat{g}\)) of the \(T_{cc}^{+}\) to the meson pairs, and the squares stand for the interaction of the charm mesons with nuclear matter.
where we have defined \(\Sigma_{0}(s)=\Sigma(s;\,\rho=0)\), and the symbol \({}^{\prime}\) stands for the derivative with respect to \(s\). Note that \(m(\rho)\) is in general a complex quantity, with its imaginary part originating from that of \(\Sigma[m^{2}(\rho);\,\rho]\) calculated using Eq. (8a). Even assuming that in the free space the \(T_{cc}(3875)^{+}\) is bound, and therefore \(\Sigma_{0}(m_{0}^{2})\) is real, the in-medium self-energy might acquire an imaginary part since new many-body decay modes, induced by the quasielastic interactions of the \(D\) and \(D^{*}\) mesons with nucleons, are open. The \(T_{cc}^{+}\) spectral function can be evaluated from \(S_{T_{cc}^{+}}(p^{2};\,\rho)=-\text{Im}\ \Delta_{T_{cc}^{+}}(p^{2};\,\rho)/\pi\). The corresponding expressions for the \(T_{\overline{c}\overline{c}}^{-}\) are straightforwardly obtained from those given above by simply replacing \(\Sigma(s;\,\rho)\) with \(\overline{\Sigma}(s;\,\rho)\), calculated using the \(\overline{D}\) and \(\overline{D}^{*}\) propagators inside of the nuclear medium.
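Eqs. (12) and (13) can be solved iteratively once the vacuum and in-medium loop functions are available as numerical callables (accepting complex arguments); the sketch below uses a simple finite-difference derivative and is only meant to illustrate the structure of the calculation.

```python
def dSigma_ds(Sigma, s, h=1e-4):
    # Finite-difference derivative of the loop function with respect to s.
    return (Sigma(s + h) - Sigma(s - h)) / (2.0 * h)

def in_medium_pole(Sigma_rho, Sigma0, m0, g0_sq, n_iter=50):
    """Iterate Eq. (12) for m^2(rho), then evaluate Eq. (13) for g^2(rho).
    Sigma_rho(s) and Sigma0(s) are the in-medium and vacuum loop functions."""
    pref = g0_sq / (1.0 + g0_sq * dSigma_ds(Sigma0, m0**2))
    m2 = complex(m0**2)
    for _ in range(n_iter):
        m2 = m0**2 + pref * (Sigma_rho(m2) - Sigma0(m0**2))
    g2 = g0_sq / (1.0 - g0_sq * (dSigma_ds(Sigma_rho, m2) - dSigma_ds(Sigma0, m0**2)))
    return m2, g2   # complex pole position squared and coupling squared
```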
### Isoscalar \(D^{*}D\) and \(\overline{D}^{*}\overline{D}\) interactions and \(T_{cc}(3875)^{+}\) [\(T_{\bar{c}\bar{c}}(3875)^{-}\)] molecular contents in the free space
As we have already mentioned, we work in the isospin limit, and set the in-vacuum masses as \(m_{D^{(*)}}=(m_{D^{(*)+}}+m_{D^{(*)0}})/2\). Thus, we cannot consider the physical \(T_{cc}(3875)^{+}\) mass and we will take instead a binding energy of \(B=0.8\) MeV with respect to the \(D^{*}D\) threshold, \(m_{0}=(m_{D}+m_{D^{*}}-B)\). This is motivated by the analysis of the \(T_{cc}(3875)^{+}\) as a molecular \(D^{*}D\) state in the isospin limit performed in Ref. [23]. To guarantee the existence of a pole below threshold at \(s=m_{0}^{2}\) in the first Riemann sheet of the isoscalar \(D^{*}D\) and \(\overline{D}^{*}\overline{D}\) amplitudes, it follows from Eqs. (7) that:
\[V_{0}^{-1}(s=m_{0}^{2})=\Sigma_{0}(m_{0}^{2})=\Sigma(m_{0}^{2};\,\rho=0)= \overline{\Sigma}_{0}(m_{0}^{2})=\overline{\Sigma}(m_{0}^{2};\,\rho=0) \tag{14}\]
We remind here that a sharp three-momentum cutoff \(\Lambda=0.7\) GeV is used to evaluate the two-meson loop function in Subsec. II.2, and hence the numerical value of \(\Sigma_{0}(m_{0}^{2})\) is completely fixed. For the sake of simplicity, from now on we will drop the subindex "0" in the potential, since we will always refer to the isoscalar amplitudes.
If the potential \(V\) were a constant, that is to say, if it did not depend on \(s\), then the \(T_{cc}(3875)^{+}\) and \(T_{\bar{c}\bar{c}}(3875)^{-}\) would be pure \(D^{*}D\) and \(\overline{D}^{*}\overline{D}\) hadronic molecules [64]. As done in the previous analysis of nuclear medium effects on the \(X(3872)\)[51], we will consider two families of energy-dependent interactions,
\[V_{A}(s) = \frac{1}{\Sigma_{0}(m_{0}^{2})}+\frac{\Sigma_{0}^{\prime}(m_{0}^{2})}{\left[\Sigma_{0}(m_{0}^{2})\right]^{2}}\frac{1-P_{0}}{P_{0}}(s-m_{0}^{2}), \tag{15}\] \[V_{B}^{-1}(s) = \Sigma_{0}(m_{0}^{2})-\Sigma_{0}^{\prime}(m_{0}^{2})\frac{1-P_{0}}{P_{0}}(s-m_{0}^{2}). \tag{16}\]
where
\[P_{0}=-g_{0}^{2}\Sigma_{0}^{\prime}(m_{0}^{2})\,, \tag{17}\]
according to the Weinberg compositeness condition [65], re-discussed in [64], is the molecular probability content of the \(D^{*}D\) bound state of mass \(m_{0}\), and \(g_{0}^{2}\) is the residue of the vacuum \(T-\)matrix [\(T(s\,;\,\rho=0)\)] at the pole \(s=m_{0}^{2}\). These interactions correspond to retaining the first two orders of the Taylor expansion around \(s=m_{0}^{2}\) of either the potential \(V(s)\) (type \(A\)) or the inverse of the potential \(V^{-1}(s)\) (type \(B\)). Moreover, it can be shown [51] that \(V_{B}(s)=\hat{g}^{2}/(s-\hat{m}^{2})\), and hence this interaction between the \(D^{*}D\) mesons is generated by the exchange of the bare \(\hat{T}_{cc}^{+}\) introduced in the previous section. The two types of kernels are diagrammatically represented in Fig. 3. The \(V_{A}(s)\) potential (left panel of Fig. 3) also depends on energy and thus might also contain some contributions related to the exchange of genuine compact quark-model structures, beyond the constant terms which would give rise to purely molecular states [64].
Figure 3: Diagrammatic representation of \(V_{A}\) (left-hand side) and \(V_{B}\) (right-hand side) \(D^{*}D\) potentials.
### Pole positions of the isoscalar \(D^{*}D\) and \(\overline{D}^{*}\overline{D}\) amplitudes in the nuclear medium
One could also define the in-medium renormalized pole position and coupling of the \(T_{cc}^{+}\) to the \(D^{*}D\) meson pair from the solution of the BSE of Eq. (7a) in nuclear matter using the kernel potentials \(A\) or \(B\)
\[0=T_{A,B}^{-1}[m^{2}(\rho)\,;\,\rho] = V_{A,B}^{-1}[m^{2}(\rho)]-\Sigma[m^{2}(\rho)\,;\,\rho], \tag{18}\] \[\frac{1}{g^{2}(\rho)} = \frac{dT_{A,B}^{-1}(s\,;\rho)}{ds}\Big{|}_{s=m^{2}(\rho)} \tag{19}\]
In the case of the \(V_{B}\) potential (right panel of Fig. 3), the above equations lead exactly to Eqs. (12) and (13) obtained after dressing in the dense medium the \(D^{*}D\) interaction driven by the exchange of a bare \(\hat{T}_{cc}^{+}\). However, for the type \(A\) interaction, there appear some further density corrections [51] governed by the factor \(\xi(\rho)=\Sigma_{0}(m_{0}^{2})/\Sigma[m^{2}(\rho);\,\rho]\):
\[m^{2}(\rho) = m_{0}^{2}+\frac{g_{0}^{2}}{1+g_{0}^{2}\Sigma_{0}^{\prime}(m_{0} ^{2})}\left[\Sigma[m^{2}(\rho);\,\rho]-\Sigma_{0}(m_{0}^{2})\right]\xi(\rho), \tag{20}\] \[g^{2}(\rho) = \frac{g_{0}^{2}\xi^{2}(\rho)}{1-g_{0}^{2}\left[\Sigma^{\prime}[m ^{2}(\rho);\,\rho]\xi^{2}(\rho)-\Sigma_{0}^{\prime}(m_{0}^{2})\right]}. \tag{21}\]
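As a quick consistency check, not included in the original derivation, note that in the vacuum limit these expressions reduce to the free-space values: for \(\rho\to 0\) one has \(\Sigma[s;\,\rho]\to\Sigma_{0}(s)\), so \(m^{2}(0)=m_{0}^{2}\) solves Eq. (20), which gives \(\xi(0)=1\) and, from Eq. (21), \(g^{2}(0)=g_{0}^{2}/\big(1-g_{0}^{2}\,[\Sigma_{0}^{\prime}(m_{0}^{2})-\Sigma_{0}^{\prime}(m_{0}^{2})]\big)=g_{0}^{2}\).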
We should bear in mind that the \(V_{A}\) potential contains additional physics beyond the exchange of a bare \(\widehat{T}_{cc}^{+}\) state, which is all that \(V_{B}\) encodes, and therefore it should not be surprising that in this case the in-medium \(D^{*}D\) \(T-\)matrix is not completely determined by the \(T_{cc}^{+}\) self-energy.
In order to obtain the \(T_{cc}(3875)^{+}\) pole position \([m(\rho)]\) when it is produced in a nuclear environment we could either make use of Eqs. (12) and (20) or we could perform an analytic continuation of the \(T-\)matrix obtained by solving the BSE (Eq. (7a) with interactions of type \(A\) or \(B\)) and search for a pole on the complex plane. In this work we choose the latter of the options and we look for poles of the \(T-\)matrix in the complex plane. However, independently of the chosen method we face the problem that we need to evaluate the in-medium loop function \(\Sigma(s\,;\,\rho)\) for complex values of \(s\), and the formula given in Eq. (8a) is only valid in the real axis. Using it in the complex plane would require knowing the meson spectral functions \(S_{U}\) for complex values of its arguments, which cannot be computed within the standard scheme presented above in Subsec. II.3. We follow here Ref. [51], and we approximate the in-medium loop function as the vacuum two-meson one, but evaluated with complex meson masses. For the case of the \(D^{*}D\) loop function we would write:
\[\Sigma(E;\,\rho)\simeq G\left[E;\,m_{D^{*}}^{\rm(eff)}(\rho),\,m_{D}^{\rm(eff) }(\rho)\right]\equiv G^{\rm(eff)}\left(E;\,\rho\right)\,. \tag{22}\]
Even though we treat this as an approximation, the effective loop function \(G^{\rm(eff)}\) with complex meson masses should closely reproduce the numerical calculation of \(\Sigma\), since the in-medium modifications of the loop function mainly originate from the widths that the mesons develop when embedded in the medium.
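To illustrate how Eq. (22) can be evaluated in practice, the following minimal Python sketch computes a two-meson loop function with a sharp three-momentum cutoff for complex effective masses \(m^{\rm(eff)}=m-i\,\Gamma/2\). The overall normalization, the numerical masses and the widths used below are assumptions made purely for illustration (in particular, the normalization must be matched to the convention of Subsec. II.2), and the helper name `loop_G` is introduced only for this sketch.

```python
import numpy as np

def loop_G(E, m1, m2, Lam=0.7, n=4000):
    """Two-meson loop with sharp three-momentum cutoff Lam (GeV) for
    (possibly complex) meson masses m1, m2.  Assumed normalization:
    G(E) = 1/(2 pi^2) int_0^Lam dq q^2 (w1+w2)/(2 w1 w2) / (E^2-(w1+w2)^2)."""
    q = np.linspace(1e-6, Lam, n)
    w1 = np.sqrt(q**2 + np.asarray(m1, dtype=complex)**2)
    w2 = np.sqrt(q**2 + np.asarray(m2, dtype=complex)**2)
    f = q**2 * (w1 + w2) / (2.0 * w1 * w2) / (E**2 - (w1 + w2)**2 + 1e-8j)
    # simple trapezoidal quadrature
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(q)) / (2.0 * np.pi**2)

# Isospin-averaged vacuum D, D* masses (GeV) and illustrative, hypothetical
# in-medium widths; E is taken 0.8 MeV below the vacuum D*D threshold.
mD, mDst = 1.867, 2.009
E = mD + mDst - 0.0008
print("vacuum    :", loop_G(E, mDst, mD))
print("in-medium :", loop_G(E, mDst - 0.020j, mD - 0.010j))
```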
The discussion in this subsection carries over completely to the \(T_{\bar{c}\bar{c}}^{-}\) case, replacing \(\Sigma[s;\,\rho]\) with \(\overline{\Sigma}[s;\,\rho]\).
### The HQSS partner of the \(T_{cc}(3875)^{+}\)
It is a known result that HQSS predicts the existence of degenerate doublets of states. For the case of the \(T_{cc}(3875)^{+}\), its HQSS partner, which we will name \(T_{cc}^{*}(4016)^{+}\), would be a \(I(J^{P})=0(1^{+})\) state near the \(D^{*}D^{*}\) production threshold [23; 24; 54]. The formalism developed above for the \(T_{cc}(3875)^{+}\) is easily adapted to describe its HQSS sibling, which will show up as a pole in the isoscalar \(D^{*}D^{*}\) channel. We will assume that the form of the new potential \(V_{*}\) is equal to the one used to describe the \(T_{cc}(3875)^{+}\) [Eqs. (15) and (16)], which should be correct up to order \(\Lambda_{\rm QCD}/m_{c}\), and we will only change the \(T_{cc}^{*}(4016)^{+}\) vacuum mass (\(m_{0}^{*}\)) and the two-meson loop function. The latter is now constructed employing only the nuclear-medium \(D^{*}\) and \(\overline{D}^{*}\) spectral functions,
\[\Sigma_{*}(E;\,\rho) = \frac{1}{2\pi^{2}}\int_{0}^{\infty}d\Omega\left(\frac{f_{D^{*}D^{*}}(\Omega;\,\rho)}{E-\Omega+i\varepsilon}-\frac{f_{\overline{D}^{*}\overline{D}^{*}}(\Omega;\,\rho)}{E+\Omega-i\varepsilon}\right), \tag{23}\]
with \(f_{D^{*}D^{*}}\) and \(f_{\overline{D}^{*}\overline{D}^{*}}\) defined in Eq. (9). Given that the interaction potential is the same in both cases, the most notable source of HQSS breaking comes from the fact that \(m_{D^{*}}-m_{D}\sim m_{\pi}\). For the illustrative purposes of this
work, we will assume that the vacuum mass \(m_{0}^{*}\) of the \(T_{cc}^{*+}\) state will be shifted from the mass \(m_{0}\) of the \(T_{cc}^{+}\) by a similar amount, \(m_{0}^{*}-m_{0}\sim m_{D^{*}}-m_{D}\sim m_{\pi}\).
One can similarly compute the in medium \(\overline{D}^{*}\overline{D}^{*}\) loop function \(\overline{\Sigma}_{*}(E;\,\rho)\). It will deviate from \(\Sigma_{*}(E;\,\rho)\) for finite nuclear densities because of the different interactions of the \(D^{*}\) and \(\overline{D}^{*}\) vector mesons with nucleons.
## III Results
### Results for the \(T_{cc}(3875)^{+}\)
Let us now discuss the results that we obtain for the \(I(J^{P})=0(1^{+})\)\(D^{*}D\) amplitude in the nuclear medium \(|T(E;\,\rho)|^{2}\) [Eq. (7a)]. For the different plots we use the energy \(E\) of the \(D^{*}D\) pair in the c.m. frame, with \(s=E^{2}\). In order to do so, first we need to calculate the in-medium modified \(D^{*}D\) loop function \(\Sigma(E;\,\rho)\) [Eq. (8a)]. Actually the \(T-\)matrix in the medium of Eq. (7a) can be rewritten as (we recall here that the subindex "0" in the potential has been suppressed since we will always refer to the isospin zero amplitudes),
\[T^{-1}(s\,;\,\rho) = V_{\rm eff}^{-1}(s\,;\,\rho)-\Sigma(s\,;\,\rho=0)\,, \tag{24a}\] \[V_{\rm eff}^{-1}(s\,;\,\rho) = V^{-1}(s)+\delta\Sigma(s\,;\,\rho)\,, \tag{24b}\]
where \(\delta\Sigma(s\,;\,\rho)=[\Sigma(s\,;\,\rho=0)-\Sigma(s\,;\,\rho)]\).
In Fig. 4 we show \(\Sigma(E;\,\rho)\) for different values of the nuclear density \(\rho\) ranging from zero to \(\rho_{0}\), where \(\rho_{0}=0.17\) fm\({}^{-3}\) is the normal nuclear matter density. On the one hand, for the imaginary part (dashed lines) we observe that the unitarity cut starting sharply at the \(D^{*}D\) threshold in vacuum gets smoothed out as the density increases. We also observe that the loop function develops an imaginary part even for energies below threshold. This is because the \(D\) and \(D^{*}\) mesons acquire some width, given by their spectral functions, when they are embedded in the medium due to the collisions of the \(D\) and \(D^{*}\) mesons with nucleons. On the other hand, the real part (solid lines) also flattens for increasing densities, and shifts towards a larger value, \(\mathop{\rm Re}\delta\Sigma(s\,;\,\rho)<0\). This would imply that the effect of the medium is to generate repulsion in the \(D^{*}D\) interaction, when it is attractive in vacuum. We also note the imaginary part of the self-energy is sizable and comparable to the shift in the real part and therefore cannot be neglected.
Having calculated the in-medium modified \(D^{*}D\) loop function \(\Sigma(E;\,\rho)\), the \(D^{*}D\)\(T\)-matrix in the nuclear environment can then be determined from the \(T_{cc}(3875)^{+}\) mass and its \(D^{*}D\) probability (\(m_{0}\) and \(P_{0}\)) in vacuum (\(\rho=0\)). For the present analysis, we compute the in-medium effects that enter into the calculation of the amplitude through the
Figure 4: \(D^{*}D\) loop function (Eq. (8a)) for various values of the nuclear matter density \(\rho\) as a function of the \(D^{*}D\) pair energy \(E\) in the c.m. frame. The solid and dashed lines stand for the real and imaginary parts, respectively.
vacuum potentials \(V_{A}(E)\) or \(V_{B}(E)\), Eqs. (15) and (16), respectively, for different values of the molecular probability \(P_{0}\).
In Fig. 5 we show, for different densities and molecular probabilities \(P_{0}=0.2\) and \(0.8\), the squared modulus of the amplitudes \(T(E;\,\rho)\) normalized to be one at the maximum using the potentials \(V_{A}(s)\) (left column) and \(V_{B}(s)\) (right column). When comparing the amplitudes computed using the \(V_{A}(E)\) potential and the ones obtained from the \(V_{B}(E)\) potential, we conclude that for high values of the molecular \(D^{*}D\) component the predictions of both potentials are very similar. As discussed in Ref. [51], this results from the fact that the zero of \(V_{A}(s)\) and the bare pole of \(V_{B}(s)\) are far from the energies considered. For small values of \(P_{0}\) (\(P_{0}=0.2\) in the upper plots) both potentials are very different leading to distinct in-medium \(T\)-matrices, despite giving rise to the same mass (\(m_{0}\)) and \(D^{*}D\) coupling (\(g_{0}\)) in the free space.
As for the density dependence of the in-medium \(T\)-matrices at small and large \(P_{0}\), we find that the medium effects on the \(T\)-matrices are significantly larger for the scenarios where a high molecular probability is considered. For large values of \(P_{0}\), the width increases with density and the maximum peak is shifted to larger energies. This behavior is correlated to the one discussed above for the self-energy in Fig. 4. When considering a small molecular component, the changes to the \(T_{cc}(3875)^{+}\) become less important but, as mentioned before, the \(T\)-matrices differ depending on the potential used. The amplitudes deduced from \(V_{A}(E)\) show the zero that this type of potential has below \(E_{0}\), with the position of the zero being independent of the nuclear density, as discussed in Ref. [51]. However, the amplitude below and above \(E_{0}\) shows a clear dependence on the density as the potential and scattering amplitude vanish. On the contrary, when using the \(V_{B}(E)\) interaction, we basically observe the peak induced by the bare pole present in the potential. The in-medium effects are in this case even smaller than when considering the \(V_{A}(E)\) potential, and for \(P_{0}=0.2\) the amplitude is almost density independent. Hence, any experimental input on \(|T(E;\,\rho)|^{2}\), in particular for energies below \(E_{0}\), might shed light on the dynamics of the interacting \(D^{*}D\) pair.
Figure 5: Squared moduli of the \(D^{*}D\) amplitudes obtained by solving the BSE using the \(V_{A}(s)\) potential of Eq. (15) (left column) and the \(V_{B}(s)\) potential of Eq. (16) (right column), as a function of the center-of-mass energy \(E\), for different values of the nuclear density \(\rho\) (different colors on the graphs) and for different values of the molecular probability \(P_{0}\). Note that the amplitudes have been normalized to be one at their maximum.
### Results for the \(T_{\bar{c}\bar{c}}(3875)^{-}\)
We now turn our attention to comparing the results obtained for the \(T_{\bar{c}\bar{c}}(3875)^{-}\) with those presented in Sec. III.1 for the \(T_{cc}(3875)^{+}\). We recall here that this is an important novelty with respect to the analysis in Ref. [51] for the \(X(3872)\), where this distinction, as explained earlier, does not apply since the \(X(3872)\) has well defined \(C-\)parity. Let us start by discussing Fig. 6, where we simultaneously show the energy dependence of the \(\overline{D}^{*}\overline{D}\) (solid lines) and the \(D^{*}D\) (dashed lines) loop functions for various nuclear densities. Both real and imaginary parts of the \(\overline{D}^{*}\overline{D}\) and \(D^{*}D\) loop functions are the same in vacuum (\(\rho=0\)) thanks to charge-conjugation symmetry, which ensures that the \(\overline{D}^{*}\overline{D}\) and the \(D^{*}D\) meson pairs have the same masses. However, when considering a density different from zero (even as small as \(0.1\,\rho_{0}\)), notable differences appear between both loop functions. This distinctive density pattern stems from the very different \(D^{(*)}N\) and \(\overline{D}^{(*)}N\) interactions, which were already apparent in the spectral functions presented in Fig. 1.
We can also define for the in-medium \(\overline{D}^{*}\overline{D}\) pair an effective potential \(\overline{V}_{\rm eff}(s\,;\,\rho)\), as done in Eq. (24) for the \(D^{*}D\) system, and since the free space terms are equal then it follows
\[\overline{V}_{\rm eff}^{-1}(s\,;\,\rho)-V_{\rm eff}^{-1}(s\,;\,\rho)=\frac{V_{ \rm eff}(s\,;\,\rho)-\overline{V}_{\rm eff}(s\,;\,\rho)}{V_{\rm eff}(s\,;\, \rho)\overline{V}_{\rm eff}(s\,;\,\rho)}=\Sigma(s\,;\,\rho)-\overline{\Sigma}( s\,;\,\rho) \tag{25}\]
Focusing now on the real part of the loop function for different densities shown in Fig. 6, we observe that the \(\overline{D}^{*}\overline{D}\) real parts always lie below the \(D^{*}D\), with the difference being more prominent for energies below threshold. The fact that the real part of the \(\overline{D}^{*}\overline{D}\) loop function for all densities is smaller than its \(D^{*}D\) counterpart, with both being negative, implies that \({\rm Re}\left[\Sigma(s\,;\,\rho)-\overline{\Sigma}(s\,;\,\rho)\right]>0\). This would mean that the medium generates in general a more repulsive interaction in the case of the \(T_{cc}^{+}\) than in the case of the \(T_{\bar{c}\bar{c}}^{-}\), as can be deduced from Eq. (25) above. Thus, we might expect to generate the \(T_{cc}^{+}\) at larger energies than the \(T_{\bar{c}\bar{c}}^{-}\). As for the imaginary part of the loop function, the one for the \(T_{\bar{c}\bar{c}}^{-}\) is comparable to the shift in the real part for all densities and should not be neglected, as already seen for the \(T_{cc}^{+}\) in Sec. III.1. We also see that the density-dependent imaginary parts of the \(\overline{D}^{*}\overline{D}\) loop change with energy in a more abrupt manner as compared to the ones for the \(D^{*}D\) case. As a consequence, for the smaller energies below the two-meson threshold we find that \(|{\rm Im}\overline{\Sigma}|<|{\rm Im}\Sigma|\), while for energies well above the threshold we have \(|{\rm Im}\overline{\Sigma}|>|{\rm Im}\Sigma|\). The imaginary parts for \(\overline{D}^{*}\overline{D}\) and \(D^{*}D\) become comparable for energies which are below but near the vacuum threshold. However, it is not possible to determine whether the \(T_{\bar{c}\bar{c}}^{-}\) or the \(T_{cc}^{+}\) will have a larger width. This is due to the fact that the widths of the states depend on the energy at which they are produced for a given density and we expect the energy to be different. It could happen that both states have similar widths if they are produced close to the two-meson threshold as the imaginary parts of the two-meson loop functions become alike.
Next, in Fig. 7 we show several plots containing the modulus squared of the in medium \(\overline{D}^{*}\overline{D}\) and \(D^{*}D\)\(T\)-matrices (solid and dashed lines, respectively), both computed using the BSE of Eqs. (7b) and (7a) as well as using the type-A (left column) and type-B (right column) interaction kernels. We consider three different values for the density (upper
Figure 6: Real (left) and imaginary (right) parts of the \(\overline{D}^{*}\overline{D}\) (solid lines) and \(D^{*}D\) (dashed lines) loop functions of Eqs. (8b) and (8a), respectively. We show results for different values of the nuclear medium density as a function of the c.m. energy of the meson pair.
rows for \(0.1\rho_{0}\), middle rows for \(0.5\rho_{0}\) and lower rows for \(\rho_{0}\)) and the values \(P_{0}=0.2\) (orange lines) and \(P_{0}=0.8\) (blue lines) for the molecular probability.
We observe that the width of the \(T_{\bar{c}\bar{c}}^{-}\) grows with increasing density, this effect being more notable for high values of \(P_{0}\), in a similar manner to the \(T_{cc}^{+}\) state, as already discussed in Sec. III.1. Differences between the position and the width of the \(T_{\overline{c}\overline{c}}^{-}\) and \(T_{cc}^{+}\) states arise with \(P_{0}\) and density. On the one hand, we find that the position of the \(T_{\overline{c}\overline{c}}^{-}\) peak always lies below the \(T_{cc}^{+}\) peak when considering high enough values of the molecular probability and density. However, the difference in energy between both states is barely noticeable for low values of \(P_{0}\) and density, as expected. On the other hand, we observe that the \(T_{\overline{c}\overline{c}}^{-}\) state tends to be narrower than the \(T_{cc}^{+}\) for high enough values of the molecular probability and density. However, this effect is not as pronounced as the shift of the peaks, and
Figure 7: In-medium \(\overline{D}^{*}\overline{D}\) (solid lines) and \(D^{*}D\) (dashed lines) squared-modulus amplitudes obtained by solving the BSE using the \(V_{A}(s)\) (left) and \(V_{B}(s)\) (right) potentials, for vacuum molecular probabilities \(P_{0}=0.2\) (orange) and \(P_{0}=0.8\) (blue), and for different nuclear densities \(\rho\).
it is difficult to appreciate in the plots of Fig. 7. In summary, we can conclude that the behaviors of the \(T_{cc}^{+}\) and \(T_{\overline{cc}}^{-}\) are quite different when they are embedded in a nuclear medium, and they are very sensitive to their molecular probability in the free space.
By means of the approximation in Eq. (22) for the \(D^{*}D\) loop function embedded in the nuclear medium, and a similar one for that of the \(\overline{D}^{*}\overline{D}\) meson pair, we can now compute the isoscalar \(D^{*}D\) [\(T(s\,;\,\rho)\)] and \(\overline{D}^{*}\overline{D}\) [\(\overline{T}(s\,;\,\rho)\)] scattering amplitudes inside of the nuclear environment in the whole complex plane, for different medium densities \(\rho\) and vacuum probabilities \(P_{0}\). We search for poles in the complex plane and find a pole on the first Riemann sheet (as defined in Ref. [51]) of the \(T(s\,;\,\rho)\) and \(\overline{T}(s\,;\,\rho)\) amplitudes, off the real axis.5 These complex poles are displayed in Fig. 8, reinforcing the conclusions of the previous paragraph. A simple visual inspection of the two top plots of the
Figure 8: Top: Complex pole positions of the \(T_{\overline{cc}}(3875)^{-}\) (left) and the \(T_{cc}(3875)^{+}\) (right) for different values of the density (\(\rho\)) and vacuum molecular probabilities (\(P_{0}\)) obtained using the potential \(V_{A}(s)\). The points that lie on the dashed lines correspond to results for different values of \(P_{0}\), which vary from 0 (right upper end) to 1 (left lower end) in steps \(\Delta P_{0}=0.1\). The zigzag lines represent the cut of the effective loop function \(G^{\rm(eff)}(s;\,\rho)\) for different densities, as detailed in Sect.IIIB of Ref. [51]. Bottom: Same as the top plots, but for the \(T_{\overline{cc}}^{*}(4016)^{-}\) (left) and the \(T_{cc}^{*}(4016)^{+}\), heavy quark spin partners of the \(T_{\overline{cc}}(3875)^{-}\) and the \(T_{cc}(3875)^{+}\).
figure clearly shows the quite different \((\rho,P_{0})\) pattern followed by the \(T_{cc}^{+}\) and \(T_{\bar{c}\bar{c}}^{-}\) poles produced by the presence of the nucleons. In general, the \(T_{cc}^{+}\) in the medium becomes broader than the \(T_{\bar{c}\bar{c}}^{-}\), with the effective mass of the former (latter) displaced to higher (smaller) values than its nominal mass position in free space. The future measurement of this behavior should certainly shed light on the intricate dynamics of the \(T_{cc}^{+}\) tetraquark-like state discovered by LHCb.
### The \(T_{cc}^{\star}(4016)^{+}\) and the \(T_{\bar{c}\bar{c}}^{\star}(4016)^{-}\)
HQSS makes plausible the existence of an isoscalar \(J^{P}=1^{+}\)\(D^{*}D^{*}\) partner of the \(T_{cc}(3875)^{+}\), which we have named as the \(T_{cc}^{\star}(4016)^{+}\). It has been predicted by several theoretical groups [23; 24; 54], and as discussed above in Subsect. II.7, one should expect its mass to be higher than that of the \(T_{cc}(3875)^{+}\) by an amount of the order \((m_{D^{\star}}-m_{D})\sim m_{\pi}\).6 In addition, the change of its properties inside of a nuclear medium will be also different to those described above for the \(T_{cc}^{+}\) since \(D\) and \(D^{*}\) spectral functions are different. From the comparison of the top-left and
Figure 9: Top: Real (left) and imaginary (right) parts of the \(D^{*}D^{*}\) (solid lines) and \(D^{*}D\) (dashed lines) loop functions of Eqs. (23) and (8a), respectively. We show results for different values of the nuclear medium density as a function of \(k\), the c.m. three momentum of the heavy-light meson pair, since the \(D^{*}D^{*}\) and \(D^{*}D\) thresholds are different. Bottom: Real (left) and imaginary (right) parts of the \(\overline{D}^{*}\overline{D}^{*}\) (solid lines) and \(D^{*}D^{*}\) (dashed lines) loop functions. We show results for different values of the nuclear medium density as a function of the c.m. energy of the heavy-light meson pair.
bottom-left plots of Fig. 8 and the solid and dashed curves in the top plots of Fig. 9, we conclude that medium effects are larger for the \(T_{cc}^{*}(4016)^{+}\) than for the \(T_{cc}(3875)^{+}\). This is because, within the model of Refs. [56] and [57], the \(D^{*}N\to D^{*}N\) interaction is stronger than the \(DN\to DN\) one.
As it happened for the \(T_{cc}(3875)^{+}\) and \(T_{\bar{c}\bar{c}}(3875)^{-}\), the nuclear environment would induce different modifications to charmed \(D^{*}D^{*}\) than to anti-charmed \(\overline{D}^{*}\overline{D}^{*}\) pairs of interacting mesons, which will result into a different \((\rho,P_{0})\) behavior for the \(T_{\bar{c}\bar{c}}^{*}(4016)^{-}\), antiparticle of the \(T_{cc}^{*}(4016)^{+}\), when it is produced in a nuclear medium. This is now due to the different strength of the \(D^{*}N\) and \(\overline{D}^{*}N\) interactions. The bottom plots of Figs. 8 and 9 illustrate the differences induced by the presence of nuclear matter, which become larger as the density and molecular probability increase. The nuclear medium breaks the particle-antiparticle symmetry leading to quite different \(D^{(*)}\) and \(\overline{D}^{(*)}\) spectral functions.
## IV Conclusions
We have studied the behavior of the \(T_{cc}(3875)^{+}\) and the \(T_{\bar{c}\bar{c}}(3875)^{-}\) in the nuclear environment. We have considered both states to be isoscalar \(S\)-wave bound states that are generated as poles in the \(D^{*}D\) and \(\overline{D}^{*}\overline{D}\) scattering amplitudes, respectively. The in-medium effects have been incorporated by dressing the \(D^{*}D\) and \(\overline{D}^{*}\overline{D}\) loop functions with the corresponding spectral functions of the charmed mesons. We have then analyzed the \(D^{*}D\) and \(\overline{D}^{*}\overline{D}\) amplitudes in matter for energies around the common in-vacuum mass of the \(T_{cc}(3875)^{+}\) and the \(T_{\bar{c}\bar{c}}(3875)^{-}\) states so as to determine the modification of the pole positions in the medium.
For the interaction kernel we have considered two families of energy dependent interactions, consistent with heavy-quark spin symmetry, that allow for the analysis of the molecular probability content of these states. Indeed, the different analytical properties of these interactions manifest clearly at finite density, thus permitting to explore the connection between the in-medium behavior of the \(T_{cc}(3875)^{+}\) and the \(T_{\bar{c}\bar{c}}(3875)^{-}\) states and their nature.
In contrast to low molecular probabilities, we have found that the medium effects on the \(T_{cc}(3875)^{+}\) and the \(T_{\bar{c}\bar{c}}(3875)^{-}\) amplitudes are sizable when large values of the molecular component are considered, leading to large widths for both states and shifts in mass at finite density with respect to their nominal values. In addition and due to the different nature of the \(D^{(*)}N\) and \(\overline{D}^{(*)}N\) interactions, the \(T_{cc}(3875)^{+}\) and \(T_{\bar{c}\bar{c}}(3875)^{-}\) states behave differently in matter. By analysing the evolution with density of the states in the complex energy plane we have seen very distinctive patterns. As a general rule, the \(T_{cc}(3875)^{+}\) in matter becomes broader than the \(T_{\bar{c}\bar{c}}(3875)^{-}\), whereas the mass of the former is moved to larger values than the nominal mass and the mass of the latter is displaced to smaller ones. Therefore, we expect that future measurements of these states in dense matter will give some important insights into their nature and their molecular content.
Finally, taking advantage of HQSS, we have also performed similar studies for the isoscalar \(J^{P}=1^{+}\) HQSS partners of the \(T_{cc}^{+}\) (\(T_{cc}^{*+}\)) and the \(T_{\bar{c}\bar{c}}^{-}\) (\(T_{\bar{c}\bar{c}}^{*-}\)) by considering the \(D^{*}D^{*}\) and \(\overline{D}^{*}\overline{D}^{*}\) scattering amplitudes. We have found that the medium effects become larger for the \(T_{cc}^{*}(4016)^{+}\) than for the \(T_{cc}(3875)^{+}\), as the \(D^{*}N\to D^{*}N\) interaction is stronger than the \(DN\to DN\) one. Also, similarly to the \(T_{cc}(3875)^{+}\) and \(T_{\bar{c}\bar{c}}(3875)^{-}\) states, the different strength of the \(D^{*}N\) and \(\overline{D}^{*}N\) interactions leads to a distinctive behavior of the \(T_{cc}^{*}(4016)^{+}\) and its antiparticle with density, especially for large values of the molecular content.
All in all, we can conclude that an interesting avenue to discern the molecular nature of the \(T_{cc}(3875)^{+}\), the \(T_{\bar{c}\bar{c}}(3875)^{-}\), and their HQSS partners would be to experimentally determine their behavior in a dense nuclear environment, such as the one generated in HICs under the expected conditions at the CBM (FAIR) or with fixed nuclear targets such as \(\bar{p}\)-nuclei in PANDA (FAIR).
###### Acknowledgements.
This work was supported by the Spanish Ministerio de Ciencia e Innovacion (MICINN) under contracts No. PID2019-110165GB-I00 and No. PID2020-112777GB-I00, by Generalitat Valenciana under contract PROME-TEO/2020/023, and from the project CEX2020-001058-M Unidad de Excelencia "Maria de Maeztu"). This project has received funding from the European Union Horizon 2020 research and innovation programme under the program H2020-INFRAIA-2018-1, grant agreement No. 824093 of the STRONG-2020 project. M. A. and V. M. are supported through Generalitat Valenciana (GVA) Grants No. CIDEGENT/2020/002 and ACIF/2021/290, respectively. L. T. also acknowledges support from the CRC-TR 211 'Strong-interaction matter under extreme conditions'- project Nr.
315477589 - TRR 211 and from the Generalitat de Catalunya under contract 2021 SGR 171.
|
2309.13284 | Strong resolving graph of the intersection graph in commutative rings | The intersection graph of ideals associated with a commutative unitary ring
$R$ is the graph $G(R)$ whose vertices are all non-trivial ideals of $R$, with an
edge between distinct vertices if and only if their intersection is non-zero.
In this paper, the structure of the resolving graph of $G(R)$
is characterized and as an application, we evaluate the strong metric dimension
of $G(R)$. | E. Dodongeh, A. Moussavi, R. Nikandish | 2023-09-23T06:51:44Z | http://arxiv.org/abs/2309.13284v1 | # Strong resolving graph of the intersection graph in commutative rings
###### Abstract
The intersection graph of ideals associated with a commutative unitary ring \(R\) is the graph \(G(R)\) whose vertices are all non-trivial ideals of \(R\), with an edge between distinct vertices if and only if their intersection is non-zero. In this paper, the structure of the resolving graph of \(G(R)\) is characterized and, as an application, we evaluate the strong metric dimension of \(G(R)\).
2020 _Mathematics Subject Classification_: 13A99; 05C78; 05C12.
\({}^{\dagger}\)Corresponding author
## 1 Introduction
The metric and strong metric dimension of a graph are two widely applicable parameters, with uses in robotics, computer science, chemistry, optimization, etc. Although these invariants have been computed for some classes of well-known graphs, they remain the subject of much research; for instance see [1, 6, 10, 11, 21]. Among the reasons for the considerable interest in characterizing these parameters for graphs associated with algebraic structures, one may cite their variety of uses and the complexity of their computation. Some examples in this direction may be found in [2, 4, 8, 9, 12, 15, 16, 18, 19, 25]. This paper
has such a goal and aims to discuss the strong metric dimension of intersection graphs of ideals of commutative rings.
For graph theory terminology, we follow [22]. Let \(G=(V,E)\) be a graph with \(V=V(G)\) as the vertex set and \(E=E(G)\) as the edge set. A complete graph of order \(n\) is denoted by \(K_{n}\). Also, distance between two distinct vertices \(x\) and \(y\) is denoted by \(d(x,y)\). By \(\mbox{diam}(G)\), we mean the diameter of \(G\). Moreover, the induced subgraph by \(V_{0}\subseteq V\) is denoted by \(G[V_{0}]\). The open and closed neighborhood of the vertex \(x\) are denoted by \(N(x)\) and \(N[x]\), respectively. The complement of \(G\) is denoted by \(\overline{G}\). The independence number and vertex cover number of the graph \(G\) are denoted by \(\beta(G)\) and \(\alpha(G)\), respectively. Let \(S=\{v_{1},v_{2},\ldots,v_{k}\}\) be an ordered subset of \(V\) and \(v\in V\setminus S\). Then the representation vector of \(v\) with respect to \(S\) is denoted by \(D(v|S)\) which is defined as follows: \(D(v|S)=(d(v,v_{1}),d(v,v_{2}),\ldots,d(v,v_{k}))\). An ordered subset \(S\subseteq V(G)\) is called _resolving_ provided that distinct vertices out of \(S\) have different representation vectors with respect to \(S\). The cardinality of any resolving set of minimum cardinality is called the _metric dimension of_ \(G\) and denoted by \(dim_{M}(G)\). Two different vertices \(u,v\)_are mutually maximally distant_ if \(d(v,w)\leq d(u,v)\), for every \(w\in N(u)\) and \(d(u,w)\leq d(u,v)\), for every \(w\in N(v)\). For a graph \(G\), _the strong resolving graph of_ \(G\), is denoted by \(G_{SR}\) and its vertex and edge set are defined as follow: \(V(G_{SR})=\{u\in V(G)|\,there\ exists\ v\in V(G)\ such\ that\ u,v\ are\ mutually\ maximally\ distant\}\) and \(uv\in E(G_{SR})\) if and only if \(u\) and \(v\) are mutually maximally distant. Two vertices \(u\) and \(v\) are _strongly resolved_ by some vertex \(w\) if either \(d(w,u)\) is equal to \(d(w,v)+d(v,u)\) or \(d(w,v)\) is equal to \(d(w,u)+d(v,u)\). A set \(W\) of vertices is a _strong resolving set of_ \(G\) if every two distinct vertices of \(G\) are strongly resolved by some vertex of \(W\) and a minimum strong resolving set is called _strong metric basis_ and its cardinality is _the strong metric dimension of_ \(G\). We denote the strong metric dimension of \(G\), by \(sdim(G)\).
Throughout this paper, all rings are assumed to be commutative with identity. The set of all non-trivial ideals of \(R\) is denoted by \(I(R)\). The ring \(R\) is called _reduced_ if it has no nilpotent elements other than \(0_{R}\). For undefined notions in ring theory, we refer the reader to [5].
_The intersection graph of ideals of a ring_ \(R\), denoted by \(G(R)\), is a simple and undirected graph whose vertex set is \(I(R)\) and two distinct vertices are adjacent if and only if they have non-zero intersection. This graph was first introduced and studied by Chakrabarty et al. in [7] and many beautiful properties of it were obtained. Later, many researchers investigated different aspects of this concept; see for instance [3, 13, 24]. In [14], the metric dimension of intersection graphs of rings was discussed. In this paper, we characterize the structure of the resolving graph of \(G(R)\) and, as an application, compute \(sdim(G(R))\).
## 2 \(G(R)_{SR}\) and \(sdim(G(R))\); \(R\) is reduced
In this section, for a given ring \(R\), first it is shown that \(sdim_{M}(G(R))\) is finite if and only if \(|I(R)|<\infty\). Then the graph \(G(R)_{SR}\) and its vertex cover number are determined, when \(R\) is reduced. Finally, \(sdim(G(R))\) is given in an explicit formula.
**Proposition 2.1**: _Let \(R\) be a ring that is not a field. Then \(sdim_{M}(G(R))<\infty\) if and only if \(|I(R)|<\infty\)._
**Proof.** First assume that \(sdim_{M}(G(R))\) is finite. Then \(dim_{M}(G(R))\) is finite too, as \(dim_{M}(G(R))\leq sdim_{M}(G(R))\). Let \(W=\{W_{1},\ldots,W_{n}\}\) be a metric basis for \(G(R)\), where \(n\) is a non-negative integer. By [3, Theorem 2.1], there exist \(2^{n}\) possibilities for \(D(X|W)\), for every \(X\in V(G(R))\setminus W\). Thus \(|V(G(R)|\leq 2^{n}+n\) and hence \(R\) has finitely many ideals. The converse is trivial. \(\Box\)
To compute \(sdim_{M}(G(R))\), it is enough to consider rings with finitely many ideals, by Proposition 2.1. Therefore, from now on, we suppose that all rings \(R\) have finitely many ideals. These rings, if reduced, are direct products of finitely many fields.
We state a series of lemmas to calculate \(sdim(G(R))\).
**Lemma 2.1**: ([17, Theorem 2.1]) _For any connected graph \(G\), \(sdim_{M}(G)=\alpha(G_{SR})\)._
**Lemma 2.2**: (Gallai's theorem) _For any graph \(G\) of order \(n\), \(\alpha(G)+\beta(G)=n\)._
The following remark introduces a notion which will be used several times.
**Remark 2.1**: Let \(R\cong\prod_{i=1}^{n}R_{i}\), where \(R_{i}\) is a ring for every \(1\leq i\leq n\), and \(I=I_{1}\times\cdots\times I_{n}\in V(G(R))\). Then by \(I^{c}=I_{1}^{c}\times\cdots\times I_{n}^{c}\), we mean a vertex of \(G(R)\) such that \(I_{i}^{c}=0\) if \(I_{i}\neq 0\) and \(I_{i}^{c}=R_{i}\) if \(I_{i}=0\), for every \(1\leq i\leq n\). The ideal \(I^{c}\) is called the complement of \(I\). We note that different ideals may have a same complement.
**Lemma 2.3**: _Let \(n\geq 2\) be a positive integer and \(R\cong\prod_{i=1}^{n}\mathbb{F}_{i}\), where \(\mathbb{F}_{i}\) is a field for every \(1\leq i\leq n\). Then the following statements hold._
1)_\(V(G(R)_{SR})=V(G(R))\)._
2) _Suppose that \(I,J\in V(G(R)_{SR})\), then \(IJ\in E(G(R)_{SR})\) if and only if \(IJ\notin E(G(R))\)._
**Proof.** 1) For every \(I=I_{1}\times\cdots\times I_{n}\in V(G(R))\), since \(I\cap I^{c}=0\), we deduce that \(d(I,I^{c})=2=diam(G(R))\). Thus \(I\) and \(I^{c}\) are mutually maximally distant and so \(I\in V(G(R)_{SR})\), i.e., \(V(G(R)_{SR})=V(G(R))\).
2) First suppose that \(IJ\notin E(G(R))\). Since \(d(I,J)=2\), obviously \(IJ\in E(G(R)_{SR})\).
Conversely, suppose that \(IJ\in E(G(R)_{SR})\), for some \(I,J\in V(G(R)_{SR})\). If \(I\sim J\), then since \(I\neq J\), we have \(I\sim J^{c}\) or \(J\sim I^{c}\). Thus \(d_{G(R)}(J,J^{c})=2>1=d(I,J)\) or \(d_{G(R)}(I,I^{c})=2>1=d(I,J)\), and so \(I,J\) are not mutually maximally distant, a contradiction. This completes the proof. \(\Box\)
Now, we have the following immediate corollary.
**Corollary 2.1**: _Let \(n\geq 2\) be a positive integer and \(R\cong\prod_{i=1}^{n}\mathbb{F}_{i}\), where \(\mathbb{F}_{i}\) is a field for every \(1\leq i\leq n\). Then \(G(R)_{SR}=\overline{G(R)}\)._
The next example explains Corollary 2.1 in case \(n=3\).
**Example 2.1**: Suppose that \(R\cong\prod_{i=1}^{3}\mathbb{F}_{i}\), where \(\mathbb{F}_{i}\) is a field for every \(1\leq i\leq 3\). Thus \(|V(G(R))|=6\). Let
\[V_{1}=\mathbb{F}_{1}\times\mathbb{F}_{2}\times 0,\quad V_{2}=\mathbb{F}_{1} \times 0\times\mathbb{F}_{3},\quad V_{3}=0\times\mathbb{F}_{2}\times\mathbb{F}_{3},\]
\[V_{4}=0\times 0\times\mathbb{F}_{3},\quad V_{5}=0\times\mathbb{F}_{2} \times 0,\quad V_{6}=\mathbb{F}_{1}\times 0\times 0\]
Then \(\overline{G(R)}\) and \(G(R)_{SR}\) are identical.
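Spelling the example out: the edges of \(G(R)_{SR}\) are exactly the pairs of vertices with zero intersection in \(G(R)\), namely \(V_{1}V_{4}\), \(V_{2}V_{5}\), \(V_{3}V_{6}\), \(V_{4}V_{5}\), \(V_{4}V_{6}\) and \(V_{5}V_{6}\). Hence \(G(R)_{SR}\) is a triangle on \(\{V_{4},V_{5},V_{6}\}\) together with the pendant edges \(V_{1}V_{4}\), \(V_{2}V_{5}\) and \(V_{3}V_{6}\), the set \(\{V_{1},V_{2},V_{3}\}\) is a largest independent set, and therefore \(\beta(G(R)_{SR})=3=2^{3-1}-1\) and \(sdim(G(R))=6-3=3=2^{3}-2^{2}-1\), in agreement with Lemma 2.4 and Theorem 2.1 below.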
**Lemma 2.4**: _Let \(n\geq 2\) be a positive integer and \(R\cong\prod_{i=1}^{n}\mathbb{F}_{i}\), where \(\mathbb{F}_{i}\) is a field for every \(1\leq i\leq n\). Then \(\beta(G(R)_{SR})=2^{n-1}-1\)._
**Proof.** By Lemma 2.3, \(V(G(R)_{SR})=V(G(R))\). Let \(I=I_{1}\times\cdots\times I_{n}\in V(G(R)_{SR})\) and \(NZC(I)\) be the number of zero components in \(I\). Obviously, \(1\leq NZC(I)\leq n-1\). Assume that
\(A_{1}=\{I\in V(G(R)_{SR})|NZC(I)=1\}\),
\(A_{2}=\{I\in V(G(R)_{SR})|NZC(I)=2\}\),
\(\vdots\)
and \(A_{n-1}=\{I\in V(G(R)_{SR})|NZC(I)=n-1\}\).
It is easily seen that \(V(G(R))=\cup_{i=1}^{n-1}A_{i}\) and \(A_{i}\cap A_{j}=\emptyset\), for every \(i\neq j\) and so \(\{A_{1},\ldots,A_{n-1}\}\) is a partition of \(V(G(R))\). Take the following facts into observation:
**Fact 1.** Let \(I,J\in A_{i}\), for some \(1\leq i\leq n-1\). If \(I\) is not adjacent to \(J\) in \(G(R)_{SR}\), then by Lemma 2.3, \(I\sim J\) in \(G(R)\).
**Fact 2.** Let \(1\leq i\leq[\frac{n}{2}]-1\), for even \(n\) and \(1\leq i\leq[\frac{n}{2}]\), otherwise. Then \(S_{i}\subseteq A_{i}\) is the largest subset of \(A_{i}\) such that \(IJ\notin E(G(R)_{SR})\), for every \(I,J\in S_{i}\) (indeed, \(S_{i}\) is the largest independent subset of \(A_{i}\) in \(G(R)_{SR}[A_{i}]\)). For every \(I,J\in A_{i}\), we have \(I\cap J\neq 0\), so by Fact 1, \(I\) is not adjacent to \(J\) in \(G(R)_{SR}\). Thus \(|S_{i}|=|A_{i}|={n\choose i}\).
**Fact 3.** Let \(t=\frac{n}{2}\), where \(n\) is even. Then for every \(I\in A_{t}\), the only vertex of \(A_{t}\) adjacent to \(I\) in \(G(R)_{SR}\) is \(I^{c}\). Thus \(|S_{t}|=\frac{|A_{t}|}{2}=\frac{{n\choose t}}{2}\), where \(S_{t}\subseteq A_{t}\) is the largest subset of \(A_{t}\) such that \(IJ\notin E(G(R)_{SR})\), for every \(I,J\in S_{t}\).
Now let \(S^{\prime}=\cup_{i=1}^{[\frac{n}{2}]}S_{i}\). For every \([\frac{n}{2}]+1\leq i\leq n-1\) and every \(I\in A_{i}\), there exists \(J\in S^{\prime}\) (for instance \(J=I^{c}\)) such that \(I\cap J=0\), so no vertex of \(\cup_{i=[\frac{n}{2}]+1}^{n-1}A_{i}\) can be added to \(S^{\prime}\) while keeping it independent in \(G(R)_{SR}\); for even \(n\), a vertex of \(A_{t}\setminus S_{t}\) cannot be added either, since its complement lies in \(S_{t}\). Furthermore, \(|S^{\prime}|={n\choose 1}+\cdots+{n\choose[\frac{n}{2}]}=2^{n-1}-1\), where \(n\) is odd and \(|S^{\prime}|={n\choose 1}+\cdots+{n\choose t-1}+\frac{{n\choose t}}{2}=2^{n-1}-1\), where \(n\) is even. Hence \(S^{\prime}\) is the largest independent subset of \(V(G(R)_{SR})\) in \(G(R)_{SR}\) and so \(\beta(G(R)_{SR})=|S^{\prime}|=2^{n-1}-1\). \(\Box\)
**Theorem 2.1**: _Let \(n\geq 2\) be a positive integer and \(R\cong\prod_{i=1}^{n}\mathbb{F}_{i}\), where \(\mathbb{F}_{i}\) is a field for every \(1\leq i\leq n\). Then \(sdim(G(R))=2^{n}-2^{n-1}-1\)._
**Proof.** The result follows from Lemmas 2.1, 2.4, Gallai's theorem and the fact that \(|V(G(R)_{SR})|=2^{n}-2\). \(\Box\)
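The count in Theorem 2.1 is easy to confirm by brute force for small \(n\). The following Python sketch, included only as an illustration and not as part of the proof, identifies each non-trivial ideal of \(\prod_{i=1}^{n}\mathbb{F}_{i}\) with its support and uses Corollary 2.1, under which an independent set of \(G(R)_{SR}\) is exactly a family of pairwise intersecting supports.

```python
from itertools import combinations

def beta_and_sdim(n):
    # Non-trivial ideals of F_1 x ... x F_n correspond to their supports,
    # i.e. the non-empty proper subsets of {0, ..., n-1}.
    V = [frozenset(c) for r in range(1, n) for c in combinations(range(n), r)]
    # By Corollary 2.1, G(R)_SR is the complement of G(R), so an independent
    # set of G(R)_SR is a family of pairwise intersecting supports.  Its
    # maximum size is found by a simple branch-and-bound search.
    best = 0
    def grow(size, candidates):
        nonlocal best
        best = max(best, size)
        for i, v in enumerate(candidates):
            rest = [w for w in candidates[i + 1:] if v & w]
            if size + 1 + len(rest) > best:
                grow(size + 1, rest)
    grow(0, V)
    return best, len(V) - best   # beta(G(R)_SR) and sdim(G(R)) = alpha(G(R)_SR)

for n in range(2, 6):
    beta, sdim = beta_and_sdim(n)
    assert beta == 2 ** (n - 1) - 1 and sdim == 2 ** n - 2 ** (n - 1) - 1
    print(n, beta, sdim)
```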
## 3 \(G(R)_{SR}\) and \(sdim(G(R))\); \(R\) is non-reduced
As it has been mentioned in Section 2, we consider rings \(R\) with finitely many ideals. Then there exists a positive integer \(m\) such that \(R\cong R_{1}\times\cdots\times R_{m}\), where \((R_{i},m_{i})\) is a local Artinian ring, for all \(1\leq i\leq m\). If every \(m_{i}\) is principal, then by [5, Proposition 8.8], every \(R_{i}\) is a principal ideal ring (PIR, for short) with finitely many ideals (**we suppose throughout this section that \(|I(R_{i})|=n_{i}\), for \(1\leq i\leq m\)**). Moreover, the ideals of every \(R_{i}\) form an inclusion chain.
In this section, we study the structure of \(G(R)_{SR}\) and compute \(sdim(G(R))\) for such rings \(R\).
First, the case in which no fields appear in decomposition of \(R\) is investigated.
**Remark 3.1**: Suppose that \(R\cong\prod_{i=1}^{m}R_{i}\), where \(R_{i}\) is a PIR non-field for every \(1\leq i\leq m\) and \(m\geq 2\) is a positive integer. Assume that \(I=I_{1}\times\cdots\times I_{m}\) and \(J=J_{1}\times\cdots\times J_{m}\) are vertices of \(G(R)\), where \(I_{i},J_{i}\) are ideals of \(R_{i}\), for every \(1\leq i\leq m\). Define the relation \(\thicksim\) on \(V(G(R))\) as follows: \(I\thicksim J\), whenever "\(I_{i}=0\) if and only if \(J_{i}=0\)", for every \(1\leq i\leq m\). It is easily seen that \(\thicksim\) is an equivalence relation on \(V(G(R))\). By \([I]\), we mean the equivalence class of \(I\). Let \(X_{1}\) and \(X_{2}\) be two distinct elements of \([X]\). Since \(X_{1}\thicksim X_{2}\) and the ideals of each \(R_{i}\) form a chain, \(X_{1}\cap X_{2}\neq 0\), i.e., \(X_{1}\) and \(X_{2}\) are adjacent. Moreover, \(N[X_{1}]=N[X_{2}]\) and the number of these equivalence classes is \(2^{m}-1\).
**Lemma 3.1**: _Suppose that \(R\cong\prod_{i=1}^{m}R_{i}\), where \(R_{i}\) is a PIR non-field, for every \(1\leq i\leq m\) and \(m\geq 2\) is a positive integer. Then the following statements hold:_
1)_\(V(G(R)_{SR})=V(G(R))\)._
2) _For every \(I,J\in V(G(R)_{SR})\), if \([I]=[J]\), then \(IJ\in E(G(R)_{SR})\)._
3) _For every \(I,J\in V(G(R)_{SR})\), if \([I]\neq[J]\), then \(IJ\in E(G(R)_{SR})\) if and only if \(IJ\notin E(G(R))\)._
**Proof.** 1) It is enough to show that \(V(G(R))\subseteq V(G(R)_{SR})\). Let \(I=I_{1}\times\cdots\times I_{m}\in V(G(R))\), \(NZC(I)\) be the number of zero components of \(I\) and \(A_{i}=\{I\in V(G(R))\mid NZC(I)=i\}\), for \(0\leq i\leq m-1\). Then \(V(G(R))=\cup_{i=0}^{m-1}A_{i}\). Suppose that \(I=I_{1}\times\cdots\times I_{m}\in V(G(R))\setminus A_{0}\). Since \(d(I,I^{c})=2=diam(G(R))\), we conclude that \(I,I^{c}\) are mutually maximally distant and so \(I\in V(G(R)_{SR})\). Now, let \(I\in A_{0}\). Then \(d(I,V)=d(J,V)=1\), for every \(J\in A_{0}\setminus\{I\}\) and \(V\in V(G(R))\setminus\{I,J\}\) (note that \(A_{0}\setminus\{I\}\neq\emptyset\), since \(|A_{0}|=\Pi_{i=1}^{m}(n_{i}+1)-1\geq 3\)). Thus \(I,J\) are mutually maximally distant and so \(I\in V(G(R)_{SR})\).
2) If \([I]=[J]\subseteq V(G(R)_{SR})\) and \(I\neq J\), then by Remark 3.1, \(N[I]=N[J]\). Thus \(I,J\) are mutually maximally distant and so \(IJ\in E(G(R)_{SR})\).
3) If \(IJ\notin E(G(R))\), then clearly \(IJ\in E(G(R)_{SR})\). To prove the other side, suppose to the contrary that \(IJ\in E(G(R))\). Since \([I]\neq[J]\), if \([I]=A_{0}\) (resp. \([J]=A_{0}\)), then \(J^{c}\in N(I)\) and \(d(J,J^{c})=2>d(I,J)=1\) (resp. \(I^{c}\in N(J)\) and \(d(I,I^{c})=2>d(I,J)=1\)), and so \(I,J\) are not mutually maximally distant, i.e., \(IJ\notin E(G(R)_{SR})\). Otherwise, since \([I]\neq[J]\), we conclude that \(I\sim J^{c}\) or \(J\sim I^{c}\), and hence \(d(J,J^{c})=2>d(I,J)=1\) or \(d(I,I^{c})=2>d(I,J)=1\), respectively. Hence \(I,J\) are not mutually maximally distant and \(IJ\notin E(G(R)_{SR})\), a contradiction. This completes the proof. \(\Box\)
**Lemma 3.2**: _Suppose that \(R\cong\prod_{i=1}^{m}R_{i}\), where \(R_{i}\) is a PIR non-field for every \(1\leq i\leq m\) and \(m\geq 2\) is a positive integer. Then \(G(R)_{SR}=K_{\prod_{i=1}^{m}(n_{i}+1)-1}+H\), where \(H\) is a connected graph._
**Proof.** Using the notations in the proof of Lemma 3.1, \(V(G(R)_{SR})=V(G(R))\). If \(I,J\in A_{0}\) are distinct, then \([I]=[J]\) and so \(IJ\in E(G(R)_{SR})\). Thus the induced subgraph \(G(R)_{SR}[A_{0}]\) is complete. Also, by Lemma 3.1, if \(I\in A_{0}\) and \(J\in V(G(R)_{SR})\setminus A_{0}\), then \(IJ\notin E(G(R)_{SR})\). Furthermore, \(|A_{0}|=\Pi_{i=1}^{m}(n_{i}+1)-1\). Thus \(G(R)_{SR}[A_{0}]=K_{\Pi_{i=1}^{m}(n_{i}+1)-1}\).
Next, we show that \(H\) is a connected graph, where \(V(H)=\cup_{i=1}^{m-1}A_{i}\). We have to find a path between arbitrary vertices \(I=I_{1}\times\cdots\times I_{m}\) and \(J=J_{1}\times\cdots\times J_{m}\) in \(V(H)\). To see this, we consider the following cases:
**Case 1.**\([I]=[J]\).
If \([I]=[J]\), then by Lemma 3.1, \(I\) and \(J\) are adjacent in \(G(R)_{SR}\).
**Case 2.**\([I]\neq[J]\).
If \(IJ\notin E(G(R))\), then by Lemma 3.1, \(IJ\in E(G(R)_{SR})\). Thus suppose that \(IJ\in E(G(R))\), so \(I\cap J\neq 0\). If \(I\) and \(J\) have a common zero component, say \(I_{i}=J_{i}=0\) for some \(1\leq i\leq m\) (this happens, in particular, whenever \(I\subset J\) or \(J\subset I\), as \(I,J\notin A_{0}\)), then \(I\sim V\sim J\) in \(G(R)_{SR}\), where \(V=0\times\cdots\times 0\times R_{i}\times 0\times\cdots\times 0\). Thus we may assume that \(I\) and \(J\) have no common zero component. Since \(I,J\notin A_{0}\), there exist \(1\leq i\neq j\leq m\) such that \(I_{i}\neq 0\neq J_{j}\) and \(I_{j}=0=J_{i}\). In this case \(I\sim V_{1}\sim V_{2}\sim J\) in \(G(R)_{SR}\), where \(V_{1}=0\times\cdots\times 0\times R_{j}\times 0\times\cdots\times 0\) and \(V_{2}=0\times\cdots\times 0\times R_{i}\times 0\times\cdots\times 0\). Thus \(H\) is a connected graph. \(\Box\)
The next example explains Lemma 3.2 in case \(m=2\).
**Example 3.1**: Suppose that \(R\cong R_{1}\times R_{2}\), where \(R_{i}\) is a PIR non-field for \(i=1,2\). Let \(I(R_{i})=\{I_{i1},I_{i2}\}\), for \(i=1,2\). Thus \(|V(G(R))|=14\). Suppose that
\[V_{1}=R_{1}\times 0,\quad V_{2}=0\times R_{2},\quad V_{3}=I_{11}\times 0,\quad V_{4 }=I_{11}\times I_{21},\]
\[V_{5}=I_{11}\times I_{22},\quad V_{6}=I_{11}\times R_{2},\quad V_{7}=I_{12} \times 0,\quad V_{8}=I_{12}\times I_{21},\quad V_{9}=I_{12}\times I_{22},\]
\(V_{10}=I_{12}\times R_{2},\quad V_{11}=0\times I_{21},\quad V_{12}=0\times I_{22}, \quad V_{13}=R_{1}\times I_{21},\quad V_{14}=R_{1}\times I_{22}\)
Then Figure 2 shows how \(G(R)_{SR}\) is obtained from \(G(R)\).
**Lemma 3.3**: _Suppose that \(R\cong\prod_{i=1}^{m}R_{i}\), where \(R_{i}\) is a PIR non-field for every \(1\leq i\leq m\) and \(m\geq 2\) is a positive integer. Then \(\beta(G(R)_{SR})=2^{m-1}\)._
**Proof.** By Lemma 3.2, \(G(R)_{SR}=K_{\prod_{i=1}^{m}(n_{i}+1)-1}+H\), where \(H=G(R)_{SR}[\cup_{i=1}^{m-1}A_{i}]\) is a connected graph. Thus \(\beta(G(R)_{SR})=1+\beta(H)\). We show that \(\beta(H)=2^{m-1}-1\). Clearly, for every \(I,J\in V(H)\) if \(d_{G(R)}(I,J)=diam(G(R))\), then \(IJ\in G(R)_{SR}\). Therefore, to find the largest independent set in \(H\), we have to investigate cliques in \(G(R)\). Let \(1\leq i\leq[\frac{m}{2}]-1\), for even \(m\) and \(1\leq i\leq[\frac{m}{2}]\), for odd \(m\) and \(I,J\in A_{i}\). Then \(I\cap J\neq 0\) and so \(I\) and \(J\) are adjacent, i.e., \(G(R)[A_{i}]\) is a complete graph. Moreover, if \(I\in A_{i}\) and \(J\in A_{j}\) with \(1\leq i\neq j\leq[\frac{m}{2}]-1\), for even \(m\) and \(1\leq i\neq j\leq[\frac{m}{2}]\), for odd \(m\), then \(I\cap J\neq 0\) and so \(I\) and \(J\) are adjacent. The above arguments show that \(G(R)[A]\) is a complete graph, if one let \(A=\cup_{i=1}^{[\frac{m}{2}]}A_{i}\), for odd \(m\) and \(A=\cup_{i=1}^{[\frac{m}{2}]-1}A_{i}\), for even \(m\). Now, let \(t=\frac{m}{2}\), where \(m\) is even. Then \(I\) and \(J\) are adjacent in \(G(R)\), for every \(I\in A_{t}\) and \(J\in A\).
Figure 2: \(G(R)\) and \(G(R)_{SR}\)
We note that if \(I\in A_{t}\), then \(I\cap V=0\) and so \(IV\in E(G(R)_{SR})\), for every \(V\in[I^{c}]\). This means that the largest independent set \(P\) in \(A_{t}\) contains exactly one element from either \([I]\) or \([I^{c}]\). Moreover, \(|P|=\dfrac{{m\choose t}}{2}\).
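For instance, for \(m=4\) and \(t=2\), the \({4\choose 2}=6\) equivalence classes in \(A_{2}\) split into the three complementary pairs \(\{[R_{1}\times R_{2}\times 0\times 0],[0\times 0\times R_{3}\times R_{4}]\}\), \(\{[R_{1}\times 0\times R_{3}\times 0],[0\times R_{2}\times 0\times R_{4}]\}\) and \(\{[R_{1}\times 0\times 0\times R_{4}],[0\times R_{2}\times R_{3}\times 0]\}\); the set \(P\) contains one vertex taken from one class of each pair, so \(|P|={4\choose 2}/2=3\).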
Now, we are ready to find the largest independent set in \(H\). By Lemma 3.1, if \([I]=[J]\), then \(IJ\in E(G(R)_{SR})\), for all \(I,J\in V(G(R)_{SR})\). Thus only one element of the equivalence class \([I]\) can be contained in the largest independent set in \(G(R)_{SR}[A]\), for every \(I\in A\). On the other hand, the number of equivalence classes in the subgraph induced by every \(A_{i}\) is \({m\choose i}\). Consider the independent set
\[S=\{I|\,I\ is\ representative\ of\ equivalence\ class\ [I],\ for\ every\ I\in A\},\]
in \(H\). Let \(S^{\prime}=S\), for odd \(m\) and \(S^{\prime}=S\cup P\), for even \(m\). Then \(S^{\prime}\) is an independent set in \(H\). Finally, if \(m\) is odd (or even), then there exists \(I\in S^{\prime}\) such that \(I\) and \(J\) are not adjacent in \(G(R)\), for every \(J\in V(H)\setminus A\) (or \(J\in V(H)\setminus(A\cup A_{t})\)). Hence \(IJ\in E(G(R)_{SR})\) and so \(S^{\prime}\cap(V(H)\setminus A)=\emptyset\) (or \(S^{\prime}\cap V(H)\setminus(A\cup A_{t})=\emptyset\)). Furthermore, \(|S^{\prime}|={m\choose 1}+\cdots+{m\choose t}=2^{m-1}-1\), where \(m\) is odd and \(|S^{\prime}|={m\choose 1}+\cdots+{m\choose t-1}+\dfrac{{m\choose t}}{2}=2^{m-1}-1\), where \(m\) is even. Thus \(S^{\prime}\) is the largest independent subset of \(V(H)\) of order \(2^{m-1}-1\) and so \(\beta(H)=|S^{\prime}|=2^{m-1}-1\). \(\square\)
**Theorem 3.1**: _Suppose that \(R\cong\prod_{i=1}^{m}R_{i}\), where \(R_{i}\) is a PIR non-field for every \(1\leq i\leq m\) and \(m\geq 2\) is a positive integer. Then \(sdim(G(R))=\Pi_{i=1}^{m}(n_{i}+2)-2^{m-1}-2\)._
**Proof.** By Lemma 3.3, \(\beta(G(R)_{SR})=2^{m-1}\). Since \(|V(G(R)_{SR})|=\Pi_{i=1}^{m}(n_{i}+2)-2\), Gallai's theorem and Lemma 2.1 show that \(sdim(G(R))=|V(G(R)_{SR})|-\beta(G(R)_{SR})=\Pi_{i=1}^{m}(n_{i}+2)-2^{m-1}-2\). \(\square\)
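Theorem 3.1 can also be confirmed computationally without using the intermediate lemmas, by constructing \(G(R)_{SR}\) directly from the definition of mutually maximally distant vertices and searching exhaustively for a maximum independent set. The Python sketch below, included only as an illustration, does this for the ring shape of Example 3.1 (\(m=2\), \(n_{1}=n_{2}=2\)); the coding of ideals by their position in the inclusion chain is an implementation choice, not notation from the paper.

```python
from itertools import product

def sdim_from_definition(n=(2, 2)):
    # Ideals of the chain ring R_i are coded 0 (zero ideal), ..., n_i + 1 (= R_i);
    # in a chain ring the intersection of two ideals is the smaller one, so
    # I ∩ J ≠ 0 in R iff some coordinate is non-zero in both tuples.
    V = [t for t in product(*[range(k + 2) for k in n])
         if any(t) and t != tuple(k + 1 for k in n)]
    adj = {u: {v for v in V if v != u and any(a and b for a, b in zip(u, v))}
           for u in V}
    # diam(G(R)) = 2 here: any ideal with full support meets every vertex.
    d = {(u, v): 0 if u == v else (1 if v in adj[u] else 2) for u in V for v in V}
    def mmd(u, v):   # mutually maximally distant, as defined in the introduction
        return (u != v and all(d[v, w] <= d[u, v] for w in adj[u])
                       and all(d[u, w] <= d[u, v] for w in adj[v]))
    Vsr = [u for u in V if any(mmd(u, v) for v in V)]
    Esr = {(u, v) for u in Vsr for v in Vsr if mmd(u, v)}
    best = 0   # independence number beta(G(R)_SR), by exhaustive search
    for mask in range(1 << len(Vsr)):
        S = [Vsr[i] for i in range(len(Vsr)) if (mask >> i) & 1]
        if all((u, v) not in Esr for u in S for v in S):
            best = max(best, len(S))
    return len(Vsr) - best   # sdim(G(R)) = alpha(G(R)_SR), by Lemma 2.1 and Gallai

print(sdim_from_definition((2, 2)))   # prints 12 = (2+2)*(2+2) - 2**(2-1) - 2
```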
Finally, we investigate \(sdim(G(R))\), where both of fields and non-fields appear in decomposition of \(R\).
**Lemma 3.4**: _Let \(R\cong S\times T\) such that \(S=\prod_{i=1}^{m}R_{i}\), \(T=\prod_{j=1}^{n}\mathbb{F}_{j}\), where \(R_{i}\) is a PIR non-field for every \(1\leq i\leq m\), \(\mathbb{F}_{j}\) is a field for every \(1\leq j\leq n\) and \(m,n\geq 1\) are positive integers. Then the following statements hold:_
1)_\(V(G(R)_{SR})=V(G(R))\)._
2) _For every \(I,J\in V(G(R)_{SR})\), if \([I]=[J]\), then \(IJ\in E(G(R)_{SR})\)._
3) _For every \(I,J\in V(G(R)_{SR})\), if \([I]\neq[J]\), then \(IJ\in E(G(R)_{SR})\) if and only if \(IJ\notin E(G(R))\)._
**Proof.** It is enough to apply a similar argument to that of Lemma 3.1. \(\Box\)
**Lemma 3.5**: _Let \(R\cong S\times T\) such that \(S=\prod_{i=1}^{m}R_{i}\), \(T=\prod_{j=1}^{n}\mathbb{F}_{j}\), where \(R_{i}\) is a PIR non-field for every \(1\leq i\leq m\), \(\mathbb{F}_{j}\) is a field for every \(1\leq j\leq n\) and \(m,n\geq 1\) are positive integers. Then \(G(R)_{SR}=K_{\Pi_{i=1}^{m}(n_{i}+1)2^{n}}+H\), where \(H\) is a connected graph._
**Proof.** By Lemma 3.4, \(V(G(R))=V(G(R)_{SR})\). Also, since for every \(I,J\in A_{0}\), \([I]=[J]\) so \(IJ\in E(G(R)_{SR})\). Thus induced subgraph \(G(R)[A_{0}]\) is a complete graph. Also, by Lemma 3.4, for every \(I\in A_{0}\) and for every \(J\in V(G(R)_{SR})\setminus A_{0}\), \(IJ\notin E(G(R)_{SR})\). Furthermore, \(|A_{0}|=\Pi_{i=1}^{m}(n_{i}+1)2^{n}\). Thus \(G(R)[A_{0}]=K_{\Pi_{i=1}^{m}(n_{i}+1)}2^{n}\).
To complete the proof, it is enough to apply a similar argument to that of Lemma 3.2. \(\Box\)
**Lemma 3.6**: _Let \(R\cong S\times T\) such that \(S=\prod_{i=1}^{m}R_{i}\), \(T=\prod_{j=1}^{n}\mathbb{F}_{j}\), where \(R_{i}\) is a PIR non-field for every \(1\leq i\leq m\), \(\mathbb{F}_{j}\) is a field for every \(1\leq j\leq n\) and \(m,n\geq 1\) are positive integers. Then \(\beta(G(R)_{SR})=2^{m+n-1}\)._
**Proof.** By Lemma 3.5, \(G(R)_{SR}=K_{\Pi_{i=1}^{m}(n_{i}+1)2^{n}}+H\), so \(\beta(G(R)_{SR})=\beta(H)+1\). Also, by a similar argument to that of Lemma 3.3 and case (1) of Lemma 3.4, \(S=\cup_{i=1}^{[\frac{m+n}{2}]}A_{i}\), where \(m+n\) is odd and \(S=\cup_{i=1}^{[\frac{m+n}{2}]-1}A_{i}\) union with half of the members of \(A_{\frac{m+n}{2}}\), where \(m+n\) is even is the largest independent subset of \(V(H)\) and \(|S|=2^{m+n-1}-1\). Hence \(\beta(G(R)_{SR})=|S|+1=2^{m+n-1}\). \(\Box\)
We close this paper with the following result.
**Theorem 3.2**: _Let \(R\cong S\times T\) such that \(S=\prod_{i=1}^{m}R_{i}\), \(T=\prod_{j=1}^{n}\mathbb{F}_{j}\), where \(R_{i}\) is a PIR non-field for every \(1\leq i\leq m\), \(\mathbb{F}_{j}\) is a field for every \(1\leq j\leq n\) and \(m,n\geq 1\) are positive integers. Then \(sdim(G(R))=\Pi_{i=1}^{m}(n_{i}+2)2^{n}-2^{m+n-1}-2\)._
**Proof.** By Lemma 3.6, \(\beta(G(R)_{SR})=2^{m+n-1}\). Since \(|V(G(R)_{SR})|=\Pi_{i=1}^{m}(n_{i}+2)2^{n}-2\), Gallai's theorem and Lemma 2.1 show that \(sdim(G(R))=|V(G(R)_{SR})|-\beta(G(R)_{SR})=\Pi_{i=1}^{m}(n_{i}+2)2^{n}-2^{m+n-1}-2\). \(\Box\)
|
2302.14699 | An Analysis of Tennenbaum's Theorem in Constructive Type Theory | Tennenbaum's theorem states that the only countable model of Peano arithmetic
(PA) with computable arithmetical operations is the standard model of natural
numbers. In this paper, we use constructive type theory as a framework to
revisit, analyze and generalize this result. The chosen framework allows for a
synthetic approach to computability theory, exploiting that, externally, all
functions definable in constructive type theory can be shown computable. We
then build on this viewpoint, and furthermore internalize it by assuming a
version of Church's thesis, which expresses that any function on natural
numbers is representable by a formula in PA. This assumption provides for a
conveniently abstract setup to carry out rigorous computability arguments, even
in the theorem's mechanization. Concretely, we constructivize several classical
proofs and present one inherently constructive rendering of Tennenbaum's
theorem, all following arguments from the literature. Concerning the classical
proofs in particular, the constructive setting allows us to highlight
differences in their assumptions and conclusions which are not visible
classically. All versions are accompanied by a unified mechanization in the Coq
proof assistant. | Marc Hermes, Dominik Kirst | 2023-02-28T16:12:16Z | http://arxiv.org/abs/2302.14699v6 | # An analysis of Tennenbaum's theorem
###### Abstract.
Tennenbaum's theorem states that the only countable model of Peano arithmetic (PA) with computable arithmetical operations is the standard model of natural numbers. In this paper, we use constructive type theory as a framework to revisit, analyze and generalize this result.
The chosen framework allows for a synthetic approach to computability theory, exploiting that, externally, all functions definable in constructive type theory can be shown computable. We then build on this viewpoint, and furthermore internalize it by assuming a version of Church's thesis, which expresses that any function on natural numbers is representable by a formula in PA. This assumption provides for a conveniently abstract setup to carry out rigorous computability arguments, even in the theorem's mechanization.
Concretely, we constructivize several classical proofs and present one inherently constructive rendering of Tennenbaum's theorem, all following arguments from the literature. Concerning the classical proofs in particular, the constructive setting allows us to highlight differences in their assumptions and conclusions which are not visible classically. All versions are accompanied by a unified mechanization in the Coq proof assistant.
Key words and phrases:first-order logic, Peano arithmetic, Tennenbaum's theorem, constructive type theory, Church's thesis, synthetic computability, Coq. \({}^{*}\) This is an extended version of a paper published at FSCD 2022 [1].
The presence of non-standard elements like this has interesting consequences. PA can prove that for every bound \(n\), sums of the form \(\sum_{k\leq n}a_{k}\) exist, so in particular for example the Gaussian sum \(\sum_{k\leq n}k\). The presence of the non-standard element \(c\) in \(\mathcal{M}\) allows for the creation of infinite sums like \(\sum_{k\leq c}k\), which includes a summation over all natural numbers. The general PA model \(\mathcal{M}\) therefore exhibits behaviors which disagree with the common intuition that computations in PA are finitary, which are - in the end - largely based on the familiarity with the standard model \(\mathbb{N}\).
These intuitions are still not too far off the mark, as was demonstrated by Stanley Tennenbaum [15] in a remarkable theorem. By being a little more restrictive on the models under consideration, \(\mathbb{N}\) regains a unique position:
**Tennenbaum's Theorem:** Apart from the standard model \(\mathbb{N}\), there is no countable non-_computable_ model of first-order PA.
A model is considered _computable_ if its elements can be coded by numbers in \(\mathbb{N}\), and the arithmetic operations on its elements can be realized by computable functions on these codes. Usually, this Tennenbaum's theorem is formulated in a classical framework such as ZF set theory, and the precise meaning of _computable_ is given by making reference to a concrete model of computation like Turing machines, \(\mu\)-recursive functions, or the \(\lambda\)-calculus [11, 12]. In a case like this, where computability theory is applied rather than developed, the computability of a function is rarely proven by exhibiting an explicit construction in the specific model, but rather by invoking the informal _Church-Turing thesis_, which states that every function intuitively computable will be computable in the chosen model. Proving the computability of a function is then reduced to giving an intuitive argument for its computability.
The focus of this paper lies on revisiting Tennenbaum's theorem and several of its proofs in a constructive type theory (CTT). In contrast to classical treatments, usage of a constructive meta-theory enables us to formally add _Church's thesis_[13, 14, 15] in the form of an axiom, stating that every total function is computable. By its usage, the elegant and succinct paper-style computability proofs can be reproduced, but in a fully formal manner, and allowing for straightforward mechanized proofs.
In the type theory that we will specify in Section 2, the addition of this axiom becomes possible since we can adapt the approach of _synthetic computability_[13, 1, 15]: Any function term that is definable in CTT by the virtue of its rules, can externally be observed to be a computable function. Following through on this external observation, it can be taken as a justification to also internally treat functions as if they were computable. For example, we will make use of this when defining a predicate on \(X\) to be _decidable_ if there exists a function \(f\colon X\to\mathbb{B}\) computing booleans which reflect the truth values of \(p\) (Definition 2.4).
This approach leads to a simplification when it comes to the statement of Tennenbaum's theorem itself: In the most natural semantics we can give in CTT1 all models are now automatically viewed as computable, so we no longer need "computable model" as part of the theorem statement.
Footnote 1: Where arithmetic operations are interpreted as type-theoretic functions.
In the framework sketched above, we follow the classical presentations of Tennenbaum's theorem [11, 12] and develop constructive versions that only assume a type-theoretic version of _Markov's principle_[14]. This is then complemented by the adaptation of an inherently more constructive variant given by McCarty [13, 14].
Concretely, our contributions can be summarized as follows:
* We review several existing proofs of Tennenbaum's theorem from the literature. We present them here, carried out in a constructive meta-theory, and work out subtle differences in the strengths of their conclusions, which are left invisible in any classical treatment, but become visible once viewed through a constructive lens.
* By considering models with a decidable divisibility relation (Corollary 7.11), we extend the theorem to models which do not have to be discrete or enumerable.
* We provide a Coq mechanization covering all the proofs and results that are presented in this paper.2 Footnote 2: The full mechanization is available on github [11] and can conveniently be viewed on a webpage [12]. We make use of two facts for which we gave no mechanized proofs in Coq. They are therefore clearly marked as “Hypothesis” in the paper (Section 7.3).
The present paper is an extended version of [12] and adds the following contributions:
* In [12], we only gave a reference to a possible proof strategy for showing the existence of HA-inseparable formulas (Definition 7.12). We have now mechanized a proof of it and also added it to the paper in Appendix B.
* We added a short discussion (Section 7.4) in which we try to pin down the main ingredients used in the derivation of Tennenbaum's theorem.
* The Coq mechanization has been re-based to depend on and contribute to a Coq library for first-order logic [13]. This enabled us to use existing definitions for \(\Delta_{1}\) and \(\Sigma_{1}\)-formulas and, more crucially, gives our usage of \(\mathsf{CT}_{\mathsf{Q}}\) further justification, as the library contains a derivation of \(\mathsf{CT}_{\mathsf{Q}}\) from a more conventional version of Church's thesis [14].
* The presentation of several proofs and definitions in Section 5 and Section 7 has been revised. Additionally, a mistake from [12] has been corrected; the original version of HA-coding (Hypothesis 7.18) was not constructively provable, while the new one is.
To conclude this introduction, we give a brief overview on the structure of the paper, in the order that we consider most suitable for a first reading:
The main results of the analysis are summarized in Section 8, where we give a tabulated overview on the different variants of Tennenbaum's theorem that we can get from the varying proofs. It clarifies which assumptions are made for each version, and we give a brief discussion of what to take from these differences. The complete proofs are covered in Section 7, ending with Section 7.4 in which we attempt to abstractly capture the essence of what enables Tennenbaum's theorem.
In Section 6 we motivate and introduce our chosen formulation of Church's thesis which is utilized as an axiom. Basic results about PA's standard and non-standard models are shown in Section 4 and then used in Section 5 to establish results that allow the encoding of predicates on \(\mathbb{N}\), which are essential in the proof of Tennenbaum's theorem.
To make the paper self-contained, we also give an introduction to the essential features of constructive type theory, synthetic computability, and the type-theoretic specification of first-order logic in Section 2. This is continued in Section 3 by the presentation of the first-order axiomatization of PA as given in previous work [12].
## 2. Preliminaries
### Constructive Type Theory
The chosen framework for this paper is a constructive type theory (\(\mathsf{CTT}\)). More specifically, it will be the calculus of inductive constructions (\(\mathsf{CIC}\)) [1, 10] which is implemented in the Coq proof assistant [11]. It provides a predicative hierarchy of _type universes_ above a single impredicative universe \(\mathbb{P}\) of _propositions_ and the capability of inductive type definitions. On the type level, we have the unit type \(\mathbb{1}\) with a single element, the void type \(\mathbb{0}\), function spaces \(X\to Y\), products \(X\times Y\), sums \(X+Y\), dependent products3\(\forall(x:X).\,A\,x\), and dependent sums \(\Sigma(x:X).\,A\,x\). On the propositional level, notions analogous to the ones listed for types above are present, but denoted by their usual logical notation (\(\top\), \(\bot\), \(\rightarrow\), \(\wedge\), \(\vee\), \(\forall\), \(\exists\)).4 It is important to note that the so-called _large eliminations_ from the impredicative \(\mathbb{P}\) into higher types of the hierarchy are restricted. In particular, it is generally not possible to show \((\exists x.\,p\,x)\to\Sigma x.\,p\,x\).5 The restriction does however allow for large elimination of the equality predicate \(=\;:\;\forall X.\,X\to X\to\mathbb{P}\), as well as function definitions by well-founded recursion.
Footnote 3: As is custom in Coq, we write \(\forall\) in place of the symbol \(\Pi\) for dependent products.
Footnote 4: Negation \(\neg A\) is used as an abbreviation for both \(A\to\bot\) and \(A\to\mathbb{0}\).
Footnote 5: The direction \((\Sigma x.\,p\,x)\to\exists x.\,p\,x\) is however always provable. Intuitively, one can think of \(\exists x.\,p\,x\) as stating the mere existence of a value satisfying \(p\), while \(\Sigma x.\,p\,x\) is a type that also carries a value satisfying \(p\).
We will also use the basic inductive types of _Booleans_ (\(\mathbb{B}:=\mathsf{tt}\mid\mathsf{ff}\)), _Peano natural numbers_ (\(n:\mathbb{N}:=0\mid n+1\)), the _option type_ (\(\mathcal{O}(X):=\,^{\circ}x\mid\emptyset\)) and _lists_ (\(l:\mathsf{List}(X):=[\,]\mid x::l\)). Furthermore, by \(X^{n}\) we denote the type of _vectors_\(\vec{v}\) of length \(n:\mathbb{N}\) over \(X\).
Given predicates \(P,Q\!:\!X\to\mathbb{P}\) on a type \(X\), we will occasionally use the set notation \(P\subseteq Q\) for expressing \(\forall x\!:\!X.\,P\,x\to Q\,x\).
**Definition 2.1**.: A proposition \(P\!:\!\mathbb{P}\) is called _definite_ if \(P\vee\neg P\) holds and _stable_ if \(\neg\neg P\to P\). The same terminology is used for predicates \(p\!:\!X\to\mathbb{P}\) given they are pointwise definite or stable. We furthermore want to recall the following logical principles:
\[\begin{array}{ll}\mathsf{LEM}:=\forall P\!:\!\mathbb{P}.\,\mathsf{definite}\,P&\text{(Law of Excluded Middle)}\\ \mathsf{DNE}:=\forall P\!:\!\mathbb{P}.\,\mathsf{stable}\,P&\text{(Double Negation Elimination)}\\ \mathsf{MP}:=\forall f\!:\!\mathbb{N}\to\mathbb{N}.\,\mathsf{stable}\,(\exists n.\,fn=0)&\text{(Markov's Principle)}\end{array}\]
all of which are not provable in \(\mathsf{CIC}\).
Note that \(\mathsf{LEM}\) and \(\mathsf{DNE}\) are equivalent while \(\mathsf{MP}\) is much weaker and has a constructive interpretation [12]. For convenience, and as used by Bauer [1], we adopt the reading of double negated statements like \(\neg\neg P\) as "_potentially \(P\)_".6
Footnote 6: \(\neg\neg P\) expresses the impossibility of \(P\) being wrong, and therefore representing a guarantee that \(P\) can potentially be shown correct.
**Remark 2.2** (Handling \(\neg\neg\)).: Given any propositions \(A,B\) we constructively have the equivalence \((A\to\neg B)\leftrightarrow(\neg\neg A\to\neg B)\), meaning that when trying to prove a negated goal, we can remove double negations in front of any assumption. More generally, any statement of the form \(\neg\neg A_{1}\to\ldots\to\neg\neg A_{n}\to\neg\neg C\) is equivalent to \(A_{1}\to\ldots\to A_{n}\to\neg\neg C\) and since \(C\to\neg\neg C\) holds, it furthermore suffices to show \(A_{1}\to\ldots\to A_{n}\to C\) in this case. In the following, we will make use of these facts without further notice.
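For illustration, these purely propositional equivalences can be checked directly in Coq; the following lines are a small stand-alone sanity check (not an excerpt from the accompanying mechanization) showing that tauto proves them without any classical axioms.

```coq
(* Remark 2.2: double negations in front of assumptions can be dropped
   when proving a negated (or doubly negated) goal. *)
Goal forall A B : Prop, (A -> ~ B) <-> (~ ~ A -> ~ B).
Proof. tauto. Qed.

Goal forall A C : Prop, (~ ~ A -> ~ ~ C) <-> (A -> ~ ~ C).
Proof. tauto. Qed.

(* ... and since C -> ~ ~ C holds, proving C suffices for a goal ~ ~ C. *)
Goal forall C : Prop, C -> ~ ~ C.
Proof. tauto. Qed.
```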
### Synthetic Computability
As already expressed in Section 1, constructive type theory permits us to take a viewpoint that considers all functions to be computable functions, yielding simple definitions [10] of many textbook notions of computability theory:
**Definition 2.3** (Enumerability).: Let \(p:X\to\mathbb{P}\) be some predicate. We say that \(p\) is _enumerable_ if there is an _enumerator_\(f:\mathbb{N}\to\mathcal{O}(X)\) such that \(\forall x\!:\!X.\,p\,x\leftrightarrow\exists n.\,fn={}^{\circ}x\).
**Definition 2.4** (Decidability).: Let \(p:X\to\mathbb{P}\) be some predicate. We call \(f:X\to\mathbb{B}\) a _decider_ for \(p\) and write \(\mathsf{decider}\,p\,f\) iff \(\forall x\!:\!X.\,p\,x\leftrightarrow fx=\mathsf{tt}\). We then define the following notions of decidability:
* \(\mathsf{Dec}\,p:=\exists f\!:\!X\to\mathbb{B}.\,\mathsf{decider}\,p\,f\)
* \(\mathsf{dec}(P\!:\!\mathbb{P}):=P+\neg P\).
In both cases we will often refer to the predicate or proposition simply as being _decidable_.
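To convey the synthetic style more concretely, the following Coq fragment is a minimal sketch of Definitions 2.3 and 2.4 together with a small example instance; the identifiers are ours and need not coincide with those of the actual mechanization.

```coq
(* Definition 2.3: enumerators, phrased with the option type *)
Definition enumerator {X} (p : X -> Prop) (f : nat -> option X) : Prop :=
  forall x, p x <-> exists n, f n = Some x.

(* Definition 2.4: deciders and the two notions of decidability *)
Definition decider {X} (p : X -> Prop) (f : X -> bool) : Prop :=
  forall x, p x <-> f x = true.

Definition Dec {X} (p : X -> Prop) : Prop := exists f, decider p f.

Definition dec (P : Prop) : Type := (P + ~ P)%type.

(* Example: being zero is a decidable predicate on nat in this sense. *)
Example Dec_zero : Dec (fun n => n = 0).
Proof.
  exists (fun n => Nat.eqb n 0). intros n.
  destruct n; simpl.
  - split; intros _; reflexivity.
  - split; intros H; discriminate H.
Qed.
```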
We also expand the synthetic vocabulary with notions for types. In the textbook setting, many of them can only be defined for sets which are in bijection with \(\mathbb{N}\), but synthetically they can be handled in a very uniform way.
**Definition 2.5**.: We call a type \(X\)
* _enumerable_ if \(\lambda x\!:\!X.\top\) is enumerable,
* _discrete_ if there exists a decider for equality = on \(X\),
* _separated_ if there exists a decider for apartness \(\neq\) on \(X\),
* _witnessing_ if \(\forall f\!:\!X\to\mathbb{B}.\,(\exists x.\,fx=\mathsf{tt})\to\Sigma x.\, fx=\mathsf{tt}\).
**Fact 2.6**.: In the particular type theory we use, \(\mathbb{N}\) is witnessing.
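Fact 2.6 amounts to an unbounded linear search whose termination is justified by an inductive certificate. As a self-contained illustration of this technique (and not an excerpt from the mechanization, whose definitions may differ), one possible Coq rendering is:

```coq
Section Witnessing.
  Variable f : nat -> bool.

  (* certificate: searching upwards from n eventually hits a true value *)
  Inductive T : nat -> Prop :=
  | T_intro n : (f n = false -> T (S n)) -> T n.

  (* structural recursion on the propositional certificate, as for Acc/Fix_F *)
  Fixpoint search (n : nat) (t : T n) {struct t} : { k | f k = true } :=
    match f n as b return f n = b -> { k | f k = true } with
    | true  => fun e => exist _ n e
    | false => fun e =>
        search (S n)
          (match t in T m return f m = false -> T (S m) with
           | T_intro _ h => h
           end e)
    end eq_refl.

  Lemma T_base n : f n = true -> T n.
  Proof. intros H. constructor. intros H'. congruence. Qed.

  Lemma T_step n : T (S n) -> T n.
  Proof. intros H. constructor. intros _. exact H. Qed.

  Lemma T_start n : T n -> T 0.
  Proof. induction n as [|n IH]; auto using T_step. Qed.

  (* N is witnessing in the sense of Definition 2.5 *)
  Theorem nat_witnessing : (exists n, f n = true) -> { n | f n = true }.
  Proof.
    intros H. apply (search 0).
    destruct H as [n Hn]. exact (T_start n (T_base n Hn)).
  Qed.
End Witnessing.
```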
### First-Order Logic
In order to study Tennenbaum's theorem, we need to give a description of the first-order theory of \(\mathsf{PA}\) and the associated intuitionistic theory of _Heyting arithmetic_ (\(\mathsf{HA}\)), which has the same axiomatization, but uses intuitionistic first-order logic. We follow prior work in [10, 10, 11] and describe first-order logic as embedded inside the constructive type theory, by inductively defining formulas, terms, and the deduction system. We then define a semantics for this logic, which uses Tarski models and interprets formulas over the respective domain of the model. The type of natural numbers \(\mathbb{N}\) will then naturally be a model of \(\mathsf{HA}\).
Before specializing to one particular theory, we keep the definition of first-order logic general and fix some arbitrary signature \(\Sigma=(\mathcal{F};\mathcal{P})\) for function and predicate symbols.
**Definition 2.7** (Terms and Formulas).: We define terms \(t\!:\!\mathsf{tm}\) and formulas \(\varphi\!:\!\mathsf{fm}\) inductively.
\[s,t\!:\!\mathsf{tm}::=x_{n}\mid f\,\vec{v}\qquad(n\!:\!\mathbb{N},\ f\!:\!\mathcal{F},\ \vec{v}\!:\!\mathsf{tm}^{|f|})\] \[\alpha,\beta\!:\!\mathsf{fm}::=\bot\mid P\,\vec{v}\mid\alpha\to\beta\mid\alpha\wedge\beta\mid\alpha\vee\beta\mid\forall\,\alpha\mid\exists\,\beta\qquad(P\!:\!\mathcal{P},\ \vec{v}\!:\!\mathsf{tm}^{|P|}).\]
Where \(|f|\) and \(|P|\) are the arities of the function symbol \(f\) and predicate symbol \(P\), respectively.
We use de Bruijn indexing to formalize the binding of variables to quantifiers. This means that the variable \(x_{n}\) at some position in a formula is _bound_ to the \(n\)-th quantifier preceding this variable in the syntax tree of the formula. If there is no quantifier binding the variable, it is said to be _free_.
**Definition 2.8** (Substitution).: Given a variable assignment \(\sigma:\mathbb{N}\to\mathsf{tm}\) we recursively define _substitution_ on terms by \(x_{k}[\sigma]\!:=\!\sigma\,k\) and \((f\,\vec{v})[\sigma]\!:=\!f(\vec{v}[\sigma])\), and extend it to formulas by
\[\bot[\sigma]\!:=\!\bot\quad(P\,\vec{v})[\sigma]\!:=\!P\,(\vec{v}[\sigma])\quad \quad(\alpha\,\dot{\Box}\,\beta)[\sigma]\!:=\!\alpha[\sigma]\,\dot{\Box}\, \beta[\sigma]\quad\quad(\dot{\nabla}\,\varphi)[\sigma]\!:=\!\dot{\nabla}(\varphi [x_{0};\lambda x.(\sigma x)[\uparrow]])\]
where \(\dot{\Box}\) is any logical connective and \(\dot{\nabla}\) any quantifier. The expression \(x;\sigma\) is defined by \((x;\sigma)\,0\!:=\!x\) as well as \((x;\sigma)(n+1)\!:=\!\sigma\,n\) and is simply appending \(x\) as the first element to \(\sigma\!:\!\mathbb{N}\to\mathsf{tm}\). By \(\uparrow\) we designate the substitution \(\lambda n.\,x_{n+1}\) shifting all variable indices by one.
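To give an impression of how this syntax can be represented concretely, the following Coq sketch fixes the signature to the arithmetical symbols used later, instead of an arbitrary \(\Sigma\); the actual mechanization is parametric in the signature and uses vectors, so its definitions look slightly different.

```coq
(* simplified first-order syntax with de Bruijn indices (cf. Definition 2.7) *)
Inductive term : Type :=
| t_var  : nat -> term                 (* variable x_n *)
| t_zero : term
| t_succ : term -> term
| t_plus : term -> term -> term
| t_mult : term -> term -> term.

Inductive form : Type :=
| f_bot : form
| f_eq  : term -> term -> form
| f_imp : form -> form -> form
| f_and : form -> form -> form
| f_or  : form -> form -> form
| f_all : form -> form                 (* binds variable 0 *)
| f_ex  : form -> form.

(* substitution on terms, as in Definition 2.8 *)
Fixpoint subst_t (sigma : nat -> term) (t : term) : term :=
  match t with
  | t_var n    => sigma n
  | t_zero     => t_zero
  | t_succ s   => t_succ (subst_t sigma s)
  | t_plus s u => t_plus (subst_t sigma s) (subst_t sigma u)
  | t_mult s u => t_mult (subst_t sigma s) (subst_t sigma u)
  end.

(* under a quantifier the substitution is shifted: 0 is mapped to itself
   and every sigma n is lifted by one, mirroring x_0 ; (sigma x)[shift] *)
Definition up (sigma : nat -> term) (n : nat) : term :=
  match n with
  | 0 => t_var 0
  | S m => subst_t (fun k => t_var (S k)) (sigma m)
  end.

Fixpoint subst_f (sigma : nat -> term) (phi : form) : form :=
  match phi with
  | f_bot     => f_bot
  | f_eq s u  => f_eq (subst_t sigma s) (subst_t sigma u)
  | f_imp a b => f_imp (subst_f sigma a) (subst_f sigma b)
  | f_and a b => f_and (subst_f sigma a) (subst_f sigma b)
  | f_or a b  => f_or (subst_f sigma a) (subst_f sigma b)
  | f_all a   => f_all (subst_f (up sigma) a)
  | f_ex a    => f_ex (subst_f (up sigma) a)
  end.
```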
**Definition 2.9** (Natural Deduction).: Natural deduction \(\vdash:(\mathsf{fm}\to\mathbb{P})\to\mathsf{fm}\to\mathbb{P}\) is characterized inductively by the usual rules (see Appendix A). We write \(\vdash\) for intuitionistic natural deduction and \(\vdash_{c}\) for the classical variant, which extends \(\vdash\) by adding every instance of Peirce's law \(((\varphi\to\psi)\to\varphi)\to\varphi\).
**Definition 2.10** (Tarski Semantics).: A _model_\(\mathcal{M}\) consists of a type \(D\) designating its domain together with functions \(f^{\mathcal{M}}:D^{|f|}\to D\) and \(P^{\mathcal{M}}:D^{|P|}\to\mathbb{P}\) for all symbols \(f\) in \(\mathcal{F}\) and \(P\) in \(\mathcal{P}\). We will also use \(\mathcal{M}\) to refer to the domain. Functions \(\rho:\mathbb{N}\to\mathcal{M}\) are called environments and are used as variable assignments to recursively give evaluations to terms:
\[\hat{\rho}\,x_{k}:=\rho\,k\qquad\quad\hat{\rho}\,(f\,\vec{v}):=f^{\mathcal{M}} (\hat{\rho}\,\vec{v})\qquad\quad(v\!:\!\mathsf{tm}^{n})\]
This interpretation is then extended to formulas via the satisfaction relation:
\[\mathcal{M}\vDash_{\rho}P\,\vec{v} :=\;P^{\mathcal{M}}(\hat{\rho}\,\vec{v}) \qquad\quad\mathcal{M}\vDash_{\rho}\alpha\to\beta :=\;\mathcal{M}\vDash_{\rho}\alpha\to\mathcal{M}\vDash_{\rho}\beta\] \[\mathcal{M}\vDash_{\rho}\alpha\wedge\beta :=\;\mathcal{M}\vDash_{\rho}\alpha\wedge\mathcal{M}\vDash_{\rho}\beta \qquad\quad\mathcal{M}\vDash_{\rho}\alpha\vee\beta :=\;\mathcal{M}\vDash_{\rho}\alpha\vee\mathcal{M}\vDash_{\rho}\beta\] \[\mathcal{M}\vDash_{\rho}\forall\alpha :=\;\forall x\!:\!D.\;\mathcal{M}\vDash_{x;\rho}\alpha \qquad\quad\mathcal{M}\vDash_{\rho}\exists\alpha :=\;\exists x\!:\!D.\;\mathcal{M}\vDash_{x;\rho}\alpha\]
We say that a formula \(\varphi\)_holds in the model_\(\mathcal{M}\) and write \(\mathcal{M}\vDash\varphi\) if for every \(\rho\) we have \(\mathcal{M}\vDash_{\rho}\varphi\). We extend this notation to theories \(\mathcal{T}\!:\!\mathsf{fm}\to\mathbb{P}\) by writing \(\mathcal{M}\vDash\mathcal{T}\) iff \(\forall\varphi.\,\mathcal{T}\,\varphi\to\mathcal{M}\vDash\varphi\), and we write \(\mathcal{T}\vDash\varphi\) if \(\mathcal{M}\vDash\varphi\) for all models \(\mathcal{M}\) with \(\mathcal{M}\vDash\mathcal{T}\).
**Fact 2.11** (Soundness).: For any formula \(\varphi\) and theory \(\mathcal{T}\), if \(\mathcal{T}\vdash\varphi\) then \(\mathcal{T}\vDash\varphi\).
From the next section on, we will use conventional notation with named variables instead of explicitly writing formulas with de Bruijn indices.
## 3. Axiomatization of Peano Arithmetic
We present \(\mathsf{PA}\) following [10], as a first-order theory with a signature consisting of symbols for the constant zero, the successor function, addition, multiplication and equality:
\[\Sigma_{\mathsf{PA}}:=(\mathcal{F}_{\mathsf{PA}};\mathcal{P}_{\mathsf{PA}})=(0,\,S,+,\times;=)\]
The finite core of \(\mathsf{PA}\) axioms consists of statements characterizing the successor function, as well as addition and multiplication:
\[\begin{array}{llll}\text{Disjointness:}&\forall\,x.\,Sx=0\to\bot&\text{Injectivity:}&\forall\,xy.\,Sx=Sy\to x=y\\ \text{$+$-base:}&\forall\,x.\,0+x=x&\text{$+$-recursion:}&\forall\,xy.\,Sx+y=S(x+y)\\ \text{$\times$-base:}&\forall\,x.\,0\times x=0&\text{$\times$-recursion:}&\forall\,xy.\,Sx\times y=y+x\times y\end{array}\]
These are complemented by the axiom scheme of induction, which for every formula \(\varphi(x)\) contains the axiom \(\varphi(0)\to(\forall\,x.\,\varphi(x)\to\varphi(Sx))\to\forall\,x.\,\varphi(x)\).
If instead of the induction scheme we add the axiom \(\forall\,x.\,x=0\vee\exists\,y.\,x=Sy\), we get the theory \(\mathsf{Q}\) known as _Robinson arithmetic_. Both \(\mathsf{PA}\) and \(\mathsf{Q}\) also contain axioms for equality:
\[\begin{split}&\text{Reflexivity: }\forall\,x.\,x=x\\ &\text{Symmetry: }\forall\,xy.\,x=y\to y=x\\ &\text{Transitivity: }\forall\,xyz.\,x=y\to y=z\to x=z\\ &\text{$S$-equality: }\forall\,xy.\,x=y\to Sx=Sy\\ &\text{$+$-equality: }\forall\,xyuv.\,x=u\to y=v\to x+y=u+v\\ &\text{$\times$-equality: }\forall\,xyuv.\,x=u\to y=v\to x\times y=u\times v\end{split}\]
The classical first-order theory of Peano arithmetic is described by \(\mathsf{PA}\vdash_{c}\), while its intuitionistic counterpart - Heyting arithmetic - is given by \(\mathsf{PA}\vdash\).7 Since the constructive type theory we have chosen to work in only gives us a model for Heyting arithmetic, we will only work with the intuitionistic theory \(\mathsf{PA}\vdash\). To emphasize this we will from now on write \(\mathsf{HA}\) instead of \(\mathsf{PA}\).
Footnote 7: Another way to treat the distinction between classical and intuitionistic theories would be to add all instances of Peirce’s law to the axioms of a theory, instead of building them into the deduction system.
For simplicity, we only consider models that interpret the equality symbol with the actual equality relation of its domain, so-called _extensional_ models. Note that in the Coq development we even make the equality symbol a syntactic primitive, therefore enabling the convenient behavior that the interpreted equality reduces to actual equality.
**Definition 3.1**.: We recursively define a function \(\overline{\cdot}:\mathbb{N}\to\mathsf{tm}\) by \(\overline{0}:=0\) and \(\overline{n+1}:=S\overline{n}\), giving every natural number a representation as a term. Any term \(t\) which is of the form \(\overline{n}\) will be called _numeral_.
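Continuing the illustrative syntax sketch from Section 2 (and reusing its hypothetical constructors t_zero and t_succ), the numeral function can be rendered as follows.

```coq
(* numerals: the closed term representing the natural number n *)
Fixpoint num (n : nat) : term :=
  match n with
  | 0 => t_zero
  | S m => t_succ (num m)
  end.
```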
We furthermore use notations for expressing _less than_\(x<y:=\exists\,k.\,S(x+k)=y\), _less or equal_\(x\leq y:=\exists\,k.\,x+k=y\) and for _divisibility_\(x\mid y:=\exists\,k.\,x\times k=y\).
The formulas of \(\mathsf{HA}\) can be classified in a hierarchy based on their computational properties. We will only consider two levels of this hierarchy:
**Definition 3.2** (\(\Delta_{1}\) and \(\Sigma_{1}\)-formulas (_cf._[13])).: A formula \(\varphi\) is \(\Delta_{1}\) if for every substitution \(\sigma\) that only substitutes closed terms, we have \(\mathsf{Q}\vdash\varphi[\sigma]\) or \(\mathsf{Q}\vdash\neg\varphi[\sigma]\). A formula is \(\Sigma_{1}\) if it is of the form \(\exists x_{1}\ldots\exists x_{n}.\,\varphi_{0}\), where \(\varphi_{0}\) is a \(\Delta_{1}\) formula.
Given a \(\Sigma_{1}\)-formula \(\exists x_{1}\ldots\exists x_{n}.\varphi\) where \(\varphi\) is \(\Delta_{1}\), we can prove it equivalent to the formula \(\exists x\,\exists x_{1}<x\ldots\exists x_{n}<x.\,\varphi\), which shows that it can be written as a \(\Delta_{1}\)-formula preceded by exactly one existential quantifier. We will occasionally make use of this fact and refer to it as _\(\Sigma_{1}\)-compression_. A more syntactic definition of \(\Delta_{1}\) would characterize them as the formulas which are equivalent to both a \(\Pi_{1}\) and \(\Sigma_{1}\)-formula. For our purposes the definition which only stipulates the necessary decidability properties is sufficient, as it implies the absoluteness and completeness properties we will need [13]:
**Fact 3.3** (\(\Delta_{1}\)-Absoluteness).: Let \(\mathcal{M}\vDash\mathsf{HA}\) and \(\varphi\) be any closed \(\Delta_{1}\)-formula, then we have \(\mathbb{N}\vDash\varphi\to\mathcal{M}\vDash\varphi\).
**Fact 3.4** (\(\Sigma_{1}\)-Completeness).: For any \(\Sigma_{1}\)-formula \(\varphi\) we have \(\mathbb{N}\vDash\varphi\) iff \(\mathsf{HA}\vdash\varphi\).
## 4. Standard and Non-standard Models of HA
From now on \(\mathcal{M}\) will always designate an HA model. Any model like this has an interpretation \(0^{\mathcal{M}}\) of the zero symbol, as well as an interpretation \(S^{\mathcal{M}}\!:\!\mathcal{M}\to\mathcal{M}\) of the symbol for the successor. By repeated application of \(S^{\mathcal{M}}\) we can therefore get the sequence of elements \(0^{\mathcal{M}},S^{\mathcal{M}}0^{\mathcal{M}},S^{\mathcal{M}}S^{\mathcal{M}}0^{\mathcal{M}},\ldots\) essentially giving us a copy of the standard numbers inside \(\mathcal{M}\). We now make this intuition formal.
**Fact 4.1**.: We recursively define a function \(\nu\,:\mathbb{N}\to\mathcal{M}\) by \(\nu\,0\!\!:=\!\!0^{\mathcal{M}}\) and \(\nu\,(n+1)\!\!:=\!\!S^{\mathcal{M}}(\nu\,\,n)\). Furthermore, we define the predicate \(\mathsf{std}\!:\!=\!\!\lambda e\,.\,\exists n.\,\overline{n}=e\) and refer to \(e\) as a _standard number_ if \(\mathsf{std}\,e\) and _non-standard_ if \(\neg\,\mathsf{std}\,e\). We then have
1. \(\hat{\rho}\,\overline{n}=\nu\,n\) for any \(n\!:\!\mathbb{N}\) and environment \(\rho\!:\!\mathbb{N}\to\mathcal{M}\).
2. \(\nu\,\) is an injective homomorphism and therefore an _embedding_ of \(\mathbb{N}\) into \(\mathcal{M}\).
Both facts are taken as justification to abuse notation and also write \(\overline{n}\) for \(\nu\,n\).
Usually we would have to write \(0^{\mathcal{M}},S^{\mathcal{M}},+^{\mathcal{M}},\times^{\mathcal{M}},=^{ \mathcal{M}}\) for the interpretations of the respective symbols in a model \(\mathcal{M}\). For better readability we will however take the freedom to overload the symbols \(0,S,+,\cdot,=\) to also refer to these interpretations.
**Definition 4.2**.: \(\mathcal{M}\) is called a _standard model_ if there is a bijective homomorphism \(\varphi:\mathbb{N}\to\mathcal{M}\). We will accordingly write \(\mathcal{M}\cong\mathbb{N}\) if this is the case.
We can show that \(\nu\,\) is essentially the only homomorphism from \(\mathbb{N}\) to \(\mathcal{M}\) we need to worry about, since it is unique up to functional extensionality:
**Lemma 4.3**.: _Let \(\varphi:\mathbb{N}\to\mathcal{M}\) be a homomorphism, then \(\forall x\!:\!\mathbb{N}\,.\,\varphi\,x=\nu\,x\)._
Proof.: By induction on \(x\) and using the fact that both are homomorphisms.
We now have two equivalent ways to express standardness of a model.
**Lemma 4.4**.: \(\mathcal{M}\cong\mathbb{N}\) _iff \(\forall e\!:\!\mathcal{M}.\,\mathsf{std}\,e\)._
Proof.: Given \(\mathcal{M}\cong\mathbb{N}\), there is an isomorphism \(\varphi:\mathbb{N}\to\mathcal{M}\). Since \(\varphi\) is surjective, Lemma 4.3 implies that \(\nu\,\) must also be surjective. For the converse: if \(\nu\,\) is surjective, it is an isomorphism since it is injective by Fact 4.1.
Having seen that every model contains a unique embedding of \(\mathbb{N}\), one may wonder whether there is a formula \(\varphi\) which could define and pick out precisely the standard numbers in \(\mathcal{M}\). Lemma 4.5 gives a negative answer to this question:
**Lemma 4.5**.: _There is a unary formula \(\varphi(x)\) with \(\forall e\!:\!\mathcal{M}.\left(\,\mathsf{std}\,e\,\leftrightarrow\,\mathcal{M }\vDash\varphi(e)\,\right)\) if and only if \(\mathcal{M}\cong\mathbb{N}\)._
Proof.: Given a formula \(\varphi\) with the stated property, we certainly have \(\mathcal{M}\vDash\varphi(\overline{0})\) since \(\overline{0}\) is a standard number, and clearly \(\mathcal{M}\vDash\varphi(x)\implies\mathsf{std}\,x\implies\mathsf{std}\,(Sx) \implies\mathcal{M}\vDash\varphi(Sx)\). Thus, by induction in the model, we have \(\mathcal{M}\vDash\forall x.\,\varphi(x)\), which is equivalent to \(\forall e\!:\!\mathcal{M}.\,\mathsf{std}\,e\). The converse implication holds by choosing the formula \(x=x\).
We now turn our attention to models which are not isomorphic to \(\mathbb{N}\).
**Fact 4.6**.: For any \(e\!:\!\mathcal{M}\), we have \(\neg\,\mathsf{std}\,e\) iff \(\forall n\!:\!\mathbb{N}.\,\,e>\overline{n}\).
**Definition 4.7**.: Founded on the result of Fact 4.6 we write \(e>\mathbb{N}\) iff \(\neg\,\mathsf{std}\,e\) and call \(\mathcal{M}\)
* _non-standard_ (written \(\mathcal{M}>\mathbb{N}\)) iff there is \(e\!:\!\mathcal{M}\) such that \(e>\mathbb{N}\),
* _not standard_ (written \(\mathcal{M}\not\cong\mathbb{N}\)) iff \(\neg\mathcal{M}\cong\mathbb{N}\).
We will also write \(e\colon\mathcal{M}>\mathbb{N}\) to express the existence of a non-standard element \(e\) in \(\mathcal{M}\).
Of course, we have \(\mathcal{M}>\mathbb{N}\to\mathcal{M}\not\cong\mathbb{N}\), but the converse implication does not hold constructively in general, so the distinction of both notions becomes meaningful.
**Lemma 4.8** (Overspill).: _If \(\mathcal{M}\not\cong\mathbb{N}\) and \(\varphi(x)\) is unary with \(\mathcal{M}\vDash\varphi(\overline{n})\) for every \(n\colon\mathbb{N}\), then_
1. \(\neg\big{(}\forall e\colon\mathcal{M}.\,\mathcal{M}\vDash\varphi(e)\to\mathsf{ std}\,e\big{)}\)__
2. \(\mathsf{stable}\,\,\mathsf{std}\to\neg\neg\,\exists\,e>\mathbb{N}.\,\mathcal{M} \vDash\varphi(e)\)__
3. \(\mathsf{DNE}\,\to\exists\,e>\mathbb{N}.\,\mathcal{M}\vDash\varphi(e)\)_._
Proof.: (1) Assuming \(\forall e\colon\mathcal{M}.\,\mathcal{M}\vDash\varphi(e)\to\mathsf{std}\,e\) and combining it with our assumption that \(\varphi\) holds on all numerals, Lemma 4.5 implies \(\mathcal{M}\cong\mathbb{N}\), giving us a contradiction. For (2) note that we constructively have that \(\neg\exists e\colon\mathcal{M}.\,\neg\mathsf{std}\,e\,\wedge\,\mathcal{M} \vDash\varphi(e)\) implies \(\forall e\colon\mathcal{M}.\,\mathcal{M}\vDash\varphi(e)\to\neg\neg\mathsf{ std}\,e\), and by using the stability of \(\mathsf{std}\) we therefore get a contradiction in the same way as in (1). Statement (3) immediately follows from (2).
From Lemma 4.8 we learn that under certain conditions, whenever a formula is satisfied on all standard numbers \(\overline{n}\), this satisfaction "spills over" into the non-standard part of the model, meaning there is a non-standard element which also satisfies the formula. In the next section, we will encounter our first application of this principle.
## 5. Coding Finite and Infinite Predicates
There is a standard way in which finite sets of natural numbers can be encoded by a single natural number. Assuming we have some injective function \(\pi:\mathbb{N}\to\mathbb{N}\) whose image consists only of prime numbers, and given a finite set of numbers like \(S:=\{4,13,21,33\}\), we can encode this set by the single number \(c:=\pi_{4}\cdot\pi_{13}\cdot\pi_{21}\cdot\pi_{33}\). It then satisfies \(n\in S\leftrightarrow\pi_{n}\mid c\), allowing us to reconstruct \(S\) by checking which primes are present in \(c\).
Instead of applying this to sets, we can also use it to encode bounded portions of predicates on \(\mathbb{N}\).
**Lemma 5.1**.: _Given \(n\colon\mathbb{N}\) and any predicate \(p:\mathbb{N}\to\mathbb{P}\) with \(\forall x<n.\,p\,x\vee\neg\,p\,x\), we have_
\[\exists c\colon\mathbb{N}\ \forall u\colon\mathbb{N}.\big{(}u<n\to(p\,u \leftrightarrow\pi_{u}\mid c)\big{)}\wedge\big{(}\pi_{u}\mid c\to u<n\big{)}\]
_The right part of the conjunction ensures that no primes \(\pi_{u}\) with \(u\geq n\) end up in the code \(c\)._
Proof.: We do a proof by induction on \(n\). For \(n=0\) we can choose \(c\vDash 1\). In the induction step, the induction hypothesis gives us a code \(c\vDash\mathbb{N}\) which codes \(p\) up to \(n\). Since by assumption, \(p\) is definite below \(Sn\), we know that \(p\,n\vee\neg p\,n\), allowing us to consider two cases: If \(p\,n\), we set the new code to be \(c^{\prime}\vDash c\cdot\pi_{n}\), if \(\neg p\,n\) we simply set \(c^{\prime}\vDash c\). In both cases one can now verify that \(c^{\prime}\) will correctly code \(p\) up to \(Sn\).
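The construction in this proof can also be played through computationally. The following toy Coq fragment (purely illustrative; all names are ours) does so for a decidable predicate given as a boolean function, with a fixed list of small primes standing in for the injective prime enumeration \(\pi\).

```coq
Require Import List.
Import ListNotations.

(* the first values of the prime enumeration pi, hard-coded for the example *)
Definition pi := [2; 3; 5; 7; 11; 13].

(* code of p below the bound n: product of all pi_u with u < n and p u = true *)
Fixpoint code (p : nat -> bool) (n : nat) : nat :=
  match n with
  | 0 => 1
  | S m => (if p m then nth m pi 1 else 1) * code p m
  end.

(* decoding: u lies in the coded set iff pi_u divides the code *)
Definition mem (c u : nat) : bool := Nat.eqb (Nat.modulo c (nth u pi 1)) 0.

(* example predicate holding exactly on 1 and 3 *)
Definition p_ex (u : nat) : bool := orb (Nat.eqb u 1) (Nat.eqb u 3).

Compute code p_ex 6.                          (* = 21 = pi_1 * pi_3 *)
Compute map (mem (code p_ex 6)) [0; 1; 2; 3; 4; 5].
(* = [false; true; false; true; false; false] *)
```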
**Corollary 5.2** (Finite Coding in \(\mathbb{N}\)).: _Given any \(p:\mathbb{N}\to\mathbb{P}\) and bound \(n\colon\mathbb{N}\), we have_
\[\neg\neg\exists c\colon\mathbb{N}\ \forall u\colon\mathbb{N}.\big{(}u<n\to(p\,u \leftrightarrow\pi_{u}\mid c)\big{)}\wedge\big{(}\pi_{u}\mid c\to u<n\big{)}\]
_Note that if \(p\) is definite, we can drop the \(\neg\neg\)._
Proof.: If \(p\) is definite, we trivially have \(\forall x<n.\,p\,x\vee\neg\,p\,x\), so Lemma 5.1 gives us the \(\neg\neg\)-free existence as claimed. Without assuming definiteness, we can still constructively show \(\neg\neg(\forall x<n.\,p\,x\vee\neg\,p\,x)\) by induction on \(n\), which combined with Lemma 5.1 gives us the existence, but behind a double negation.
With a proof of the encoding in \(\mathbb{N}\) we can give a straightforward proof that this is possible in any model of \(\mathsf{HA}\).
**Remark 5.3**.: To formulate the above result in a generic model \(\mathcal{M}\vDash\mathsf{HA}\), we require an object level representation of the prime function \(\pi\). For now, we will simply assume that we have such a binary formula \(\Pi(x,y)\) and defer the justification to Section 6.
The statement "\(\pi_{u}\) divides \(c\)" can now be expressed by \(\exists\,p.\)\(\Pi(u,p)\wedge p\mid c\), for which we will abuse notation and simply write \(\Pi(u)\mid c\).
**Lemma 5.4** (Finite Coding in \(\mathcal{M}\)).: _For any binary formula \(\alpha(x,y)\) and \(n\!:\!\mathbb{N}\) we have_
\[\mathcal{M}\vDash\forall\,e\!\neg\!\neg\,\exists\,c\,\forall\,u<\overline{n}. \ \alpha(u,e)\leftrightarrow\Pi(u)\mid c.\]
Proof.: Let \(e\!:\!\mathcal{M}\), and define the predicate \(p\!:=\!\lambda u\!:\!\mathbb{N}.\,\mathcal{M}\vDash\alpha(\overline{u},e)\). Then Corollary 5.2 potentially gives us a code \(a\!:\!\mathbb{N}\) for \(p\) up to the bound \(n\). It now suffices to show that the actual existence of \(a\!:\!\mathbb{N}\) already implies
\[\mathcal{M}\vDash\exists\,c\,\forall\,u<\overline{n}.\ \alpha(u,e)\leftrightarrow\Pi(u) \mid c.\]
And indeed, we can verify that \(c=\overline{a}\) shows the existential claim: given \(u\!:\!\mathcal{M}\) with \(\mathcal{M}\vDash u<\overline{n}\) we can conclude that \(u\) must be a standard number \(\overline{u}\). We then have the equivalences
\[\mathcal{M}\vDash\alpha(\overline{u},e)\iff p\,u\iff\pi_{u}\mid a\iff \mathcal{M}\vDash\Pi(\overline{u})\mid\overline{a}\]
since \(a\) codes \(p\) and \(\Pi\) represents \(\pi\).
Overspill now has interesting consequences when it comes to encoding, as for models that are not standard, it allows the potential encoding of a complete predicate \(p:\mathbb{N}\to\mathbb{P}\), and therefore also of infinite subsets.
**Lemma 5.5** (Infinite Coding in \(\mathcal{M}\)).: _If \(\mathsf{std}\) is stable, \(\mathcal{M}\not\cong\mathbb{N}\) and \(\alpha(x)\) unary, we have_
\[\neg\neg\exists c\!:\!\mathcal{M}\ \forall u\!:\!\mathbb{N}.\ \mathcal{M}\vDash \alpha(\overline{u})\leftrightarrow\Pi(\overline{u})\mid c.\]
Proof.: Using Lemma 5.4 for the present case where \(\alpha\) is unary, we get
\[\mathcal{M}\vDash\neg\neg\,\exists\,c\,\forall\,u<\overline{n}.\ \alpha(u)\leftrightarrow\Pi(u)\mid c\]
for every \(n\!:\!\mathbb{N}\), so by Lemma 4.8 (Overspill) we get
\[\neg\neg\,\exists\,e>\mathbb{N}.\ \mathcal{M}\vDash\neg\neg\,\exists\,c\,\forall\,u<e.\ \alpha(u)\leftrightarrow\Pi(u)\mid c\] \[\Longrightarrow\neg\neg\,\exists c\!:\!\mathcal{M}\,\forall u\!:\!\mathbb{N}.\ \mathcal{M}\vDash\alpha(\overline{u})\leftrightarrow\Pi(\overline{u})\mid c.\]
Where we used that since the equivalence holds for all \(u<e\) with \(e\) non-standard, it will in particular hold for all \(u\!:\!\mathbb{N}\).
**Lemma 5.6**.: _If \(\mathsf{std}\) is stable, \(\mathcal{M}\not\cong\mathbb{N}\), then for binary \(\alpha(x,y)\) and \(e\!:\!\mathcal{M}\) we have_
\[\neg\neg\,\exists c\!:\!\mathcal{M}\ \forall u\!:\!\mathbb{N}.\ \mathcal{M} \vDash\alpha(\overline{u},e)\leftrightarrow\Pi(\overline{u})\mid c.\]
Proof.: Analogous to the proof of Lemma 5.5.
These coding results allow us to connect a unary formula \(\alpha\) to an element \(c\!:\!\mathcal{M}\) of the model, in such a way that the decidability of the divisibility for \(c\) will entail the decidability of \(\mathcal{M}\vDash\alpha(\overline{\cdot})\).
## 6. Church's Thesis for First-Order Arithmetic
Church's thesis is an axiom of constructive mathematics which states that every total function is computable. We will assume a version of it in this paper, since by its addition to the ambient type theory, we merely need to show that a function can be defined at all, to prove its computability. This makes it possible to stay completely formal, yet achieve a textbook-style conciseness for proofs involving computability, even in their mechanization.
It is not safe to add a strong statement like this to just any theory. If we were to add it to \(\mathsf{ZF}\), it would immediately imply the computability of the function that solves the Halting problem, leading to an inconsistent theory. In general, however, theories that tend to the constructive side do allow for the consistent addition of this axiom. In the type theory we use in this paper, this is achieved by strictly distinguishing between functional relations and total functions: the aforementioned solution to the Halting problem can only be shown to exist as a functional relation, which means we can still safely assume total functions to be computable. There currently is no consistency proof for \(\mathsf{CT}\) and the exact type theory we are using, but there are proofs showing that it can be consistently added to very similar systems [20, 21, 22].
Since \(\mathsf{CT}\) makes reference to _computability_, its exact form as an axiom depends not only on the theory in which it is assumed, but also on the model of computation it refers to. Robinson's \(\mathsf{Q}\), as a finitely axiomatized arithmetical system, is expressive enough to serve as a computational model, and is a particularly well-suited choice in our case, leading us to the following formulation of \(\mathsf{CT}\) which we assume for the remainder of the paper:
**Axiom 6.1** (\(\mathsf{CT}_{\mathsf{Q}}\)).: _For every function \(f:\mathbb{N}\to\mathbb{N}\) there exists a binary \(\Sigma_{1}\) formula \(\varphi_{f}(x,y)\) such that for every \(n\!:\!\mathbb{N}\) we have \(\mathsf{Q}\vdash\forall y.\,\varphi_{f}(\overline{n},y)\leftrightarrow \overline{fn}=y\)._
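To illustrate the shape of this axiom on a concrete (and admittedly trivial) instance: for the doubling function \(f\,n:=2n\), the formula \(\varphi_{f}(x,y):=x+x=y\) is \(\Sigma_{1}\) (indeed \(\Delta_{1}\)), and since \(\mathsf{Q}\) proves \(\overline{n}+\overline{n}=\overline{2n}\) by computing with its addition axioms, the equality axioms yield \(\mathsf{Q}\vdash\forall y.\,\varphi_{f}(\overline{n},y)\leftrightarrow\overline{2n}=y\) for every \(n\!:\!\mathbb{N}\). The strength of \(\mathsf{CT}_{\mathsf{Q}}\) lies in guaranteeing such a representing formula for _every_ function \(\mathbb{N}\to\mathbb{N}\) of the type theory, not only for those with an evident arithmetical description.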
Using \(\mathsf{CT}_{\mathsf{Q}}\) we can get an internal representation \(\varphi_{f}\) of any computable function \(f\), allowing us to argue and reason about the function inside of first-order arithmetic. As a further justification for the validity of this version, we want to note that it can be derived from a more common version of Church's thesis for \(\mu\)-recursive functions [13].8 We also have an immediate use-case for \(\mathsf{CT}_{\mathsf{Q}}\), since applying it to the injective prime function \(\pi\) lets us settle the earlier Remark 5.3:
Footnote 8: In [13], the abbreviation \(\mathsf{CT}_{\mathsf{Q}}\) was used to refer to a version of Church's thesis which applies to partial functions. From it, a version for total functions was derived, which is what we refer to as \(\mathsf{CT}_{\mathsf{Q}}\).
**Fact 6.2**.: There is a binary formula representing the injective prime function \(\pi\) in \(\mathsf{Q}\).
Since we defined decidable and enumerable predicates in Section 2.1 by reference to computable functions, we can use \(\mathsf{CT}_{\mathsf{Q}}\) to give characterizations and representations of such predicates by formulas in \(\mathsf{Q}\)[14].
**Definition 6.3**.: We call \(p:\mathbb{N}\to\mathbb{P}\)_weakly representable_ if there is a \(\Sigma_{1}\) formula \(\varphi_{p}(x)\) such that \(\forall n:\mathbb{N}.\,p\,n\leftrightarrow\mathsf{Q}\vdash\varphi_{p}( \overline{n})\), and _strongly representable_ if \(p\,n\to\mathsf{Q}\vdash\varphi_{p}(\overline{n})\) and \(\neg p\,n\to\mathsf{Q}\vdash\neg\varphi_{p}(\overline{n})\) hold for every \(n\!:\!\mathbb{N}\).
**Lemma 6.4** (Representability Theorem (\(\mathsf{RT}\))).: _Assume \(\mathsf{CT}_{\mathsf{Q}}\), and let \(p:\mathbb{N}\to\mathbb{P}\) be given._
1. _If_ \(p\) _is decidable, it is strongly representable._
2. _If_ \(p\) _is enumerable, it is weakly representable._
Proof.: If \(p\) is decidable, then there is a function \(f:\mathbb{N}\to\mathbb{N}\) such that \(\forall x:\mathbb{N}.\,p\,x\leftrightarrow fx=0\), and by \(\mathsf{CT}_{\mathsf{Q}}\) there is a binary \(\Sigma_{1}\) formula \(\varphi_{f}(x,y)\) representing \(f\). We then define \(\varphi_{p}(x):=\varphi_{f}(x,\overline{0})\) and deduce
\[p\,n \implies fn=0 \implies\mathsf{Q}\vdash\overline{fn}=\overline{0} \implies\mathsf{Q}\vdash\varphi_{f}(\overline{n},\overline{0}) \implies\mathsf{Q}\vdash\varphi_{p}(\overline{n})\] \[\neg p\,n \implies fn\neq 0 \implies\mathsf{Q}\vdash\neg(\overline{fn}=\overline{0}) \implies\mathsf{Q}\vdash\neg\varphi_{f}(\overline{n},\overline{0}) \implies\mathsf{Q}\vdash\neg\varphi_{p}(\overline{n})\]
which shows that \(p\) is strongly representable.
If \(p\) is enumerable, then there is \(f:\mathbb{N}\to\mathbb{N}\) such that \(\forall x:\mathbb{N}.\,p\,x\leftrightarrow\exists n.\,fn=x+1\) and by \(\mathsf{CT}_{\mathsf{Q}}\) there is a binary \(\Sigma_{1}\) formula \(\varphi_{f}(x,y)\) representing \(f\). We then define \(\varphi_{p}(x):=\exists\,n.\,\varphi_{f}(n,Sx)\) giving us
\[\mathsf{Q}\vdash\varphi_{p}(\overline{x}) \iff\mathsf{Q}\vdash\exists\,n.\,\varphi_{f}(n,Sx)\iff\exists n \!:\!\mathbb{N}.\,\mathsf{Q}\vdash\varphi_{f}(\overline{n},Sx)\] \[\iff\exists n\!:\!\mathbb{N}.\,\mathsf{Q}\vdash\overline{fn}=S \overline{x}\iff\exists n\!:\!\mathbb{N}.\,fn=x+1\iff p\,x\]
which shows that \(p\) is weakly representable by a \(\Sigma_{1}\) formula.
## 7. Tennenbaum's Theorem
With our choice of \(\mathsf{CTT}+\mathsf{CT}_{\mathsf{Q}}\) for the meta-theory in place, we now begin with our analysis of Tennenbaum's theorem. We will present several proofs of the theorem from the literature. In a classical meta-theory all of these proofs would yield the same result, but in our constructive setting, they turn out to differ in the strength of their assumptions and conclusions. Almost all the proofs will make use of some coding results for non-standard models from Section 5, enabling us to use a single model element to fully encode the standard part of any predicate \(p:\mathcal{M}\to\mathbb{P}\).
For the proof in Section 7.1 we will assume enumerability of the model, enabling a very direct diagonal argument [1]. In Section 7.2 we look at the proof approach that is most prominently found in the literature [12, 1] and uses the existence of recursively inseparable sets.
Another variant of this proof was proposed in a post by Makholm [1] and comes with the advantage that it circumvents the usage of Overspill. It turns out that in the constructive setting, this eliminates the necessity for \(\mathsf{MP}\), which is required for the standard proof using inseparable sets. Additionally, we look at the consequences of Tennenbaum's theorem, once the underlying semantics is made explicitly constructive. The latter two variations are discussed in Section 7.3.
### Via a Diagonal Argument
We start by noting that every HA model validates the most basic fact about divisibility.
**Lemma 7.1** (Euclidean Lemma).: _Given \(e,d\!:\mathcal{M}\) we have_
\[\mathcal{M}\vDash\exists\,r\,q.\ e=q\cdot d+r\ \wedge\ (0<d\to r<d)\]
_and the uniqueness property telling us that if \(r_{1},r_{2}<d\) then \(q_{1}\cdot d+r_{1}=q_{2}\cdot d+r_{2}\) implies \(q_{1}=q_{2}\) and \(r_{1}=r_{2}\)._
Proof.: For Euclid's lemma, there is a standard proof by induction on \(e\!:\!\mathcal{M}\). The uniqueness claim requires some basic results about the strict order.
**Lemma 7.2**.: _If \(\mathcal{M}\) is enumerable and discrete, then \(\lambda nd.\,\mathcal{M}\vDash\overline{n}\mid d\) has a decider._
Proof.: Let \(n\colon\mathbb{N}\) and \(d\colon\mathcal{M}\) be given. By Lemma 7.1 we have \(\exists q^{\prime},r^{\prime}\colon\mathcal{M}.\,d=q^{\prime}\cdot\overline{n}+r^ {\prime}\). This existence is propositional, so presently we cannot use it to give a decision for \(\overline{n}\mid d\). Since \(\mathcal{M}\) is enumerable, there is a surjective function \(g:\mathbb{N}\to\mathcal{M}\) and the above existence therefore shows \(\exists q,r\colon\mathbb{N}.\,d=(g\,q)\cdot\overline{n}+(g\,r)\). Since equality is decidable in \(\mathcal{M}\) and \(\mathbb{N}^{2}\) is witnessing, we get \(\Sigma q,r\colon\mathbb{N}.\,d=(g\,q)\cdot\overline{n}+(g\,r)\), giving us computational access to \(r\), now allowing us to construct the decision. By the uniqueness part of Lemma 7.1 we have \(g\,r=0\leftrightarrow\overline{n}\mid d\), so the decidability of \(\overline{n}\mid d\) is entailed by the decidability of \(g\,r=0\).
**Lemma 7.3**.:
1. _If_ \(\mathsf{std}\) _is stable, then so is_ \(\mathcal{M}\cong\mathbb{N}\)_._
2. _Assuming_ \(\mathsf{MP}\) _and discreteness of_ \(\mathcal{M}\)_, then_ \(\mathsf{std}\) _is stable._
Proof.: The first statement is trivial by Lemma 4.4. For the second, recall that \(\mathsf{std}\,e\) stands for \(\exists n\colon\mathbb{N}.\,\overline{n}=e\). Since \(\overline{n}=e\) is decidable in \(\mathcal{M}\) by discreteness, the stability of this existence statement follows from \(\mathsf{MP}\).
**Lemma 7.4**.: _If \(\mathsf{std}\) is stable, \(\mathcal{M}\not\cong\mathbb{N}\), and \(p\colon\mathbb{N}\to\mathbb{P}\) decidable, then potentially there is a code \(c\colon\mathcal{M}\) such that \(\forall n\colon\mathbb{N}.\,p\,n\leftrightarrow\mathcal{M}\vDash\overline{\pi_ {n}}\mid c\)._
Proof.: By \(\mathsf{RT}\), there is a formula \(\varphi_{p}\) strongly representing \(p\). Under the given assumptions, we can use the coding Lemma 5.5, yielding a code \(c\colon\mathcal{M}\) for the formula \(\varphi_{p}\), such that \(\forall u\colon\mathbb{N}.\,\mathcal{M}\vDash\varphi_{p}(\overline{u}) \leftrightarrow\Pi(\overline{u})\mid c\). Overall this shows:
\[p\,n \implies\,\mathsf{Q}\vdash\varphi_{p}(\overline{n}) \implies\,\,\mathcal{M}\vDash\varphi_{p}(\overline{n}) \implies\,\,\mathcal{M}\vDash\Pi(\overline{n})\mid c\] \[\neg p\,n \implies\,\mathsf{Q}\vdash\neg\,\varphi_{p}(\overline{n}) \implies\neg\,\mathcal{M}\vDash\varphi_{p}(\overline{n})\implies\neg\, \mathcal{M}\vDash\Pi(\overline{n})\mid c.\]
Since \(p\) is decidable, the latter implication entails \(\mathcal{M}\vDash\Pi(\overline{n})\mid c\implies p\,n\), which overall shows the desired equivalence.
This gives us the following version of Tennenbaum's theorem:
**Theorem 7.5**.: _Assuming \(\mathsf{MP}\) and discrete \(\mathcal{M}\), enumerability of \(\mathcal{M}\) implies \(\mathcal{M}\cong\mathbb{N}\)._
Proof.: By Lemma 7.3 it suffices to show \(\neg\neg\mathcal{M}\cong\mathbb{N}\). So assume \(\mathcal{M}\not\cong\mathbb{N}\) and try to derive \(\bot\). Given the enumerability, there is a surjective function \(g\colon\mathbb{N}\to\mathcal{M}\), allowing us to define the predicate \(p:=\lambda n\colon\mathbb{N}.\,\neg\,\mathcal{M}\vDash\overline{\pi_{n}}\mid g\,n\), which is decidable by Lemma 7.2. By the coding result in Lemma 7.4 there is an \(e\colon\mathcal{M}\) which codes \(p\), and by the surjectivity of \(g\), there is some \(c\colon\mathbb{N}\) with \(g\,c=e\). Combined, these facts give us
\[\neg\,\mathcal{M}\vDash\overline{\pi_{c}}\mid g\,c\,\,\stackrel{{ \mathrm{def.}}}{{\iff}}\,\,p\,c\,\,\stackrel{{\mathrm{coding}}}{{ \iff}}\,\,\mathcal{M}\vDash\overline{\pi_{c}}\mid g\,c\]
leading to the desired contradiction.
### Via Inseparable Predicates
The most frequently reproduced proof of Tennenbaum's theorem [20, 21] uses the existence of recursively inseparable sets and non-standard coding to establish the existence of a non-recursive set.
**Definition 7.6**.: A pair \(A,B:\mathbb{N}\to\mathbb{P}\) of predicates is called _inseparable_ if they are disjoint and \(A\subseteq D\subseteq\neg B\) implies the undecidability of \(D\).
**Lemma 7.7**.: _There are inseparable enumerable predicates \(A,B:\mathbb{N}\to\mathbb{P}\)._
Proof.: We use an enumeration \(\Phi_{n}:\mathsf{fm}\) of formulas to define disjoint predicates \(A:=\lambda n\,\colon\,\mathbb{N}\,.\,\mathsf{Q}\vdash\neg\,\Phi_{n}(\overline{n})\) and \(B:=\lambda n\,\colon\,\mathbb{N}\,.\,\mathsf{Q}\vdash\Phi_{n}(\overline{n})\). Since proofs over \(\mathsf{Q}\) can be enumerated, \(A\) and \(B\) are enumerable. Assuming a predicate \(D\) satisfying \(A\subseteq D\subseteq\neg B\) were decidable, \(\mathsf{RT}\) would give us a formula strongly representing \(D\), and by the enumeration there is \(d\,\colon\,\mathbb{N}\) such that \(\Phi_{d}\) is said formula. Since \(D\subseteq\neg B\) is equivalent to \(B\subseteq\neg D\) this gives us the following chain of implications:
\[D\,d\stackrel{{\mathrm{s.repr.}}}{{\Longrightarrow}}\mathsf{Q} \vdash\Phi_{d}(\overline{d})\,\,\stackrel{{\mathrm{def.}}}{{ \Longrightarrow}}\,B\,d\,\stackrel{{\subseteq}}{{\Longrightarrow}} \,\neg D\,d\stackrel{{\mathrm{s.repr.}}}{{\Longrightarrow}}\, \mathsf{Q}\vdash\neg\Phi_{d}(\overline{d})\,\,\stackrel{{ \mathrm{def.}}}{{\Longrightarrow}}\,A\,d\,\stackrel{{ \subseteq}}{{\Longrightarrow}}\,D\,d\]
Since this shows \(D\,d\Longleftrightarrow\neg D\,d\), we can conclude that \(D\) is undecidable.
**Corollary 7.8**.: _There is a pair \(\alpha(x),\beta(x)\) of unary \(\Sigma_{1}\)-formulas such that \(A\,n\leftrightarrow\mathsf{Q}\vdash\alpha(\overline{n})\) and \(B\,n\leftrightarrow\mathsf{Q}\vdash\beta(\overline{n})\) hold for every \(n\!:\!\mathbb{N}\)._

Proof.: Since \(A\) and \(B\) are enumerable by Lemma 7.7, they are weakly representable by \(\Sigma_{1}\)-formulas by Lemma 6.4.
**Theorem 7.10**.: _If \(\mathsf{std}\) is stable, then \(\forall d\!:\!\mathcal{M}.\,\neg\neg\mathsf{Dec}(\,\overline{\cdot}\mid d)\) implies \(\mathcal{M}\cong\mathbb{N}\)._

Proof sketch.: By Lemma 7.3 it suffices to show \(\neg\neg\,\mathcal{M}\cong\mathbb{N}\), so we assume \(\mathcal{M}\not\cong\mathbb{N}\) and derive a contradiction. Combining the \(\Sigma_{1}\)-formulas \(\alpha,\beta\) of Corollary 7.8 with the coding results of Section 5, one obtains (behind double negations, which are harmless here by Remark 2.2) an element \(c\!:\!\mathcal{M}\) such that the predicate \(D:=\lambda n.\,\mathcal{M}\vDash\Pi(\overline{n})\mid c\) satisfies \(A\subseteq D\subseteq\neg B\), so \(D\) is undecidable by Lemma 7.7. On the other hand, the assumption \(\neg\neg\mathsf{Dec}(\,\overline{\cdot}\mid c)\) makes \(D\) potentially decidable.
It leaves us with the contradiction that \(D\) is both potentially decidable and undecidable.
By usage of Lemma 7.3, we then get:
**Corollary 7.11**.: _Assuming \(\mathsf{MP}\) and discreteness of \(\mathcal{M}\), we have that \(\forall d\colon\mathcal{M}.\,\neg\neg\,\mathsf{Dec}(\,\overline{\cdot}\,\mid\,d)\) implies \(\mathcal{M}\cong\mathbb{N}\)._
### Variants of the Theorem
We now investigate two further variants of the theorem, going back to McCarty [10, 11] and Makholm [14] respectively. They both make use of the fact that from the inseparable formulas we have used so far, one can produce inseparable formulas which are disjoint on the object level. This stronger property allows us to easily establish that their satisfaction in any model is undecidable.
**Definition 7.12**.: A pair of unary formulas \(\alpha(x),\beta(x)\) is called _\(\mathsf{HA}\)-inseparable_ if they are disjoint in the sense of \(\mathsf{HA}\vdash\neg\exists\,x.\,\alpha(x)\wedge\beta(x)\) and if any \(D\) with \(\mathsf{Q}\vdash\alpha(\overline{\cdot})\subseteq D\subseteq\neg\mathsf{Q} \vdash\beta(\overline{\cdot})\) is undecidable.
**Lemma 7.13**.: _If \(\alpha,\beta\) are \(\mathsf{HA}\)-inseparable, then \(\mathcal{M}\vDash\alpha(\overline{\cdot})\) and \(\mathcal{M}\vDash\beta(\overline{\cdot})\) are undecidable._
Proof.: Using soundness and the \(\mathsf{HA}\)-disjointness of \(\alpha\) and \(\beta\), we get
\[\mathsf{Q}\vdash\alpha(\overline{n})\ \stackrel{{\mathrm{sound}}}{{ \Longrightarrow}}\ \mathcal{M}\vDash\alpha(\overline{n})\ \stackrel{{\mathsf{HA} \text{-disj.}}}{{\Longrightarrow}}\ \neg\mathcal{M}\vDash\beta(\overline{n})\ \stackrel{{ \mathrm{sound}}}{{\Longrightarrow}}\ \neg\mathsf{Q}\vdash\beta(\overline{n}).\]
Undecidability of \(\mathcal{M}\vDash\alpha(\overline{\cdot})\) then follows from inseparability of the given formulas, and the same argument also shows the undecidability of \(\mathcal{M}\vDash\beta(\overline{\cdot})\).
According to McCarty [10], the existence of \(\mathsf{HA}\)-inseparable formulas can be established by taking the construction of inseparable formulas as seen in Lemma 7.7, and internalizing the given proof within \(\mathsf{HA}\). However, as pointed out in [11] (Fact 6.1), Rosser's trick can be used to construct the desired \(\mathsf{HA}\)-inseparable formulas from the inseparable formulas given in Section 7.2. We mechanized the latter of the two proofs and have also added it to Appendix B for completeness.
**Lemma 7.14** (\(\mathsf{HA}\)-inseparable formulas).: _There are unary \(\mathsf{HA}\)-inseparable formulas. _
McCarty [10, 11] considers Tennenbaum's theorem with constructive semantics. Instead of models placed in classical set theory, he works in an intuitionistic theory (e.g. \(\mathsf{IZF}\)), making the interpretation of the object-level disjunction much stronger. By furthermore assuming \(\mathsf{MP}\), he is then able to show that all models of \(\mathsf{HA}\) in this constructive setting are standard. To achieve the constructive rendering of disjunctions, we will locally make use of the following choice principle:
**Definition 7.15**.: By \(\mathsf{AUC}\) we denote the principle of unique choice:
\[\forall X\,Y\,R.\,(\forall x\,\exists!y.\,Rxy)\to\exists f\colon X\to Y.\, \forall x.\,Rx(fx)\]
Note that generally, Church's thesis and unique choice principles combined prove the negation of \(\mathsf{LEM}\)9, which will make the results that use \(\mathsf{AUC}\) (deliberately) anti-classical.
Footnote 9: See the discussion in [11].
**Lemma 7.16**.: _Assuming \(\mathsf{AUC}\) and given \(e\colon\mathcal{M}>\mathbb{N}\), we have \(\neg\neg\,\mathsf{Dec}(\mathcal{M}\vDash\alpha(\overline{\cdot}))\) for any unary formula \(\alpha\)._
Proof.: Since single instances of the law of excluded middle are provable under double negation, induction on \(n\) can be used to prove \(\mathcal{M}\vDash\forall\,n.\neg\neg\forall\,x<n.\,\alpha(x)\vee\neg\alpha(x)\). Choosing the non-standard number \(e\) for the bound \(n\) above, we get \(\mathcal{M}\vDash\neg\neg\forall\,x<e.\,\alpha(x)\vee\neg\alpha(x)\) and therefore in particular \(\neg\neg\,\forall n:\mathbb{N}.\,\mathcal{M}\vDash\alpha(\overline{n})\vee \neg\alpha(\overline{n})\), meaning \(\mathcal{M}\vDash\alpha(\overline{\cdot})\) is potentially definite. Since \(\mathsf{AUC}\) implies that definite predicates on \(\mathbb{N}\) are decidable, the claim follows.
**Corollary 7.17** (McCarty).: _Given \(\mathsf{AUC}\) and \(\mathsf{MP}\), \(\mathsf{HA}\) is categorical._
Proof.: Given that \(\mathsf{HA}\vdash\forall xy.\,x=y\vee\neg\,x=y\), \(\mathsf{AUC}\) entails the discreteness of every model \(\mathcal{M}\vDash\mathsf{HA}\). Using \(\mathsf{MP}\) and Lemma 7.3, this entails stability of \(\mathsf{std}\) and hence of \(\mathcal{M}\cong\mathbb{N}\), giving us
\[\mathcal{M}\cong\mathbb{N}\iff\neg\neg\mathcal{M}\cong\mathbb{N}\iff\neg \mathcal{M}>\mathbb{N}\]
where we can now prove the rightmost statement to finish: Assume we had \(\mathcal{M}>\mathbb{N}\), then by Lemma 7.13 and Lemma 7.16 we immediately get a contradiction.
Now turning to Makholm [10], we will no longer require \(\mathsf{AUC}\), but will instead make use of the fact that the coding result established in Corollary 5.2 can be derived in \(\mathsf{HA}\). We did not mechanize the proof of this statement, so we make its assumption explicit here.
**Hypothesis 7.18** (\(\mathsf{HA}\)-Coding).: _For any unary formula \(\alpha(x)\), \(\mathsf{HA}\) can internally prove the coding lemma: \(\mathsf{HA}\vDash\forall\,n\neg\neg\exists\,c\,\forall\,u<n.\,\alpha(u) \leftrightarrow\Pi(u)\mid c\).10_
Footnote 10: In the conference paper [11], the hypothesis stated that \(\mathsf{HA}\vDash\forall\,n\,\exists\,c\,\forall\,u<n.\,\alpha(u)\leftrightarrow \Pi(u)\mid c\). Contrary to what was claimed, this is not provable for arbitrary formulas \(\alpha\). The new version solves this mistake through the addition of the double negation.
**Theorem 7.19** (Makholm).: _If \(\forall d:\mathcal{M}.\,\neg\neg\mathsf{Dec}(\,\overline{\cdot}\mid d)\) then \(\neg\mathcal{M}>\mathbb{N}\)._
Proof.: Assuming \(\mathcal{M}>\mathbb{N}\) we aim to derive a contradiction. By Lemma 7.14 there are \(\mathsf{HA}\)-inseparable unary formulas \(\alpha\) and \(\beta\). Using soundness on \(\mathsf{HA}\)-coding we get that \(\alpha\) can be coded up to any bound \(n\,{:}\mathcal{M}\)
\[\mathcal{M}\vDash\forall n\,\neg\neg\exists\,c\,\forall\,u<n.\,\alpha(u) \leftrightarrow\Pi(u)\mid c.\]
By assumption, we possess a non-standard element \(e:\mathcal{M}>\mathbb{N}\), and picking this for the bound \(n\), we get a code \(c\,{:}\mathcal{M}\) satisfying
\[\mathcal{M}\vDash\forall\,u<e.\,\alpha(u)\leftrightarrow\Pi(u)\mid c.\]
Since the above equivalence holds for all standard numbers \(u:\mathcal{M}\), the potential decidability of \(\,\overline{\cdot}\mid c\) entails the potential decidability of \(\mathcal{M}\vDash\alpha(\overline{\cdot})\), contradicting however its undecidability, which follows from Lemma 7.13.
Note the quite remarkable fact that in contrast to Corollary 7.11, we do not need to assume \(\mathsf{MP}\) or discreteness of the model in order to establish Theorem 7.19.
### Unearthing the Roots of Tennenbaum's Theorem
The proofs by McCarty and Makholm have a very clear structure. In both of them:
* \(\mathsf{HA}\)-inseparable formulas are used to derive the existence of an undecidable \(\mathcal{M}\vDash\alpha(\overline{\cdot})\).
* An assumption which stipulates computability of some operation of the model is used to show that in a non-standard model all predicates of the form \(\mathcal{M}\vDash\alpha(\overline{\cdot})\) are after all (potentially) decidable.
In Makholm's proof the latter point was achieved by usage of the coding result Hypothesis 7.18, which establishes a connection between satisfaction of a formula and divisibility with respect to its code number. Close inspection of the proof of Corollary 5.2 reveals that only two properties of divisibility are needed for it:
\[\forall x.\,\neg\,x\mid 0,\qquad\quad\forall n\,c\,\exists c^{\prime}\,\forall x.\,\,\pi_{x}\mid c^{\prime}\leftrightarrow x=n\vee\pi_{x}\mid c.\]
We can abstract away from divisibility and formulate the result as:
**Lemma 7.20**.: _Given a binary predicate \(\in\,\colon\mathbb{N}\to\mathbb{N}\to\mathbb{P}\) satisfying the conditions_
\[\exists e\,\forall x.\,\neg\,x\in e,\qquad\quad\forall n\,c\,\exists c^{\prime }\,\forall x.\,x\in c^{\prime}\leftrightarrow x=n\lor x\in c,\]
_we can think of these conditions as axiomatizing a weak notion of sets. For any predicate \(p\,\colon\mathbb{N}\to\mathbb{P}\) we then have \(\forall n\,\neg\neg\,\exists c\,\forall u<n.\,p\,u\leftrightarrow u\in c\)._
Proof.: By induction on \(n\). In the case \(n=0\), we use the first condition, giving us an empty set \(e\), which we use as the code \(c\). In the inductive case, we inspect the cases coming from \(\neg\neg(p\,n\vee\neg p\,n)\). If \(\neg p\,n\), we simply use the code \(c\) given by the inductive hypothesis. If \(p\,n\), we use the second condition to enlarge \(c\) by the element \(n\), and let the bigger set be the new code.
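To make the induction concrete, consider a small worked instance; the choice of predicate and the indexing of the primes (assuming \(\Pi(0)=2\), \(\Pi(1)=3\), \(\Pi(2)=5\)) are only illustrative. Take \(p\,u:\equiv u\text{ is even}\) and \(n=3\). Starting from the empty code, the induction adjoins \(0\), skips \(1\), and adjoins \(2\); in the divisibility instance this yields the code \(c=\Pi(0)\cdot\Pi(2)=2\cdot 5=10\), and indeed
\[\Pi(0)=2\mid 10\leftrightarrow p\,0,\qquad\Pi(1)=3\nmid 10\leftrightarrow\neg\,p\,1,\qquad\Pi(2)=5\mid 10\leftrightarrow p\,2,\]
so \(\forall u<3.\,p\,u\leftrightarrow\Pi(u)\mid c\), exactly as the induction constructs it.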
If we have a binary formula \(\varphi_{\in}(x,y)\) satisfying the same conditions inside of \(\mathsf{HA}\), we can give a derivation of \(\mathsf{HA}\vdash\forall n\,\neg\neg\,\exists c\,\forall u<n.\,\alpha(u)\leftrightarrow\varphi_{\in}(u,c)\) and run Makholm's proof verbatim. Using \(\overline{\cdot}\in d\) as shorthand for \(\lambda n.\,\mathcal{M}\vDash\varphi_{\in}(\overline{n},d)\) we then get:
**Theorem 7.21**.: _If \(\forall d\,\colon\mathcal{M}.\,\neg\neg\mathsf{Dec}(\overline{\cdot}\in d)\) then \(\neg\mathcal{M}>\mathbb{N}\)._
This highlights that the statement of Theorem 7.19 is not inherently tied to divisibility. It rather seems tied, more generally, to relations that allow the implementation of finite sets or sequences inside of \(\mathsf{HA}\). Visser [20] analyzes several axiomatizations of pairs, sets and sequences in first-order theories, and the conditions we have listed above appear as the axioms of the weak set theory \(\mathsf{WS}\), which can interpret \(\mathsf{Q}\) and is therefore essentially undecidable. This latter point raises an interesting question: one may wonder whether it is the computability of \(\overline{\cdot}\in d\) or \(\overline{\cdot}\mid d\), respectively, combined with the essential undecidability of \(\mathsf{WS}\) and \(\mathsf{Q}\), that rules out the model being non-standard.
## 8. Discussion
### General Remarks
In Section 7, we presented several proofs of Tennenbaum's theorem, which we summarize in the table below, listing their assumptions11 on the left and the conclusion on the right.
Footnote 11: We do not list the global assumption \(\mathsf{CT}_{\mathsf{Q}}\). \(\mathsf{HA}\)-coding (Hypothesis 7.18) was not mechanized in Coq but is provable, which is why we leave it out of the table.
\begin{tabular}{c|c|c||c|c} \hline \(\mathsf{MP}\) & \(\mathsf{AUC}\) & discrete & Theorem & From/Technique \\ \hline \(\bullet\) & & \(\bullet\) & \(\mathcal{M}\) enumerable \(\;\to\;\mathcal{M}\cong\mathbb{N}\) & Diagonalization \\ \hline \(\bullet\) & & \(\bullet\) & \(\forall d\,\colon\mathcal{M}.\,\neg\neg\mathsf{Dec}(\overline{\cdot}\mid d)\;\to\;\mathcal{M}\cong\mathbb{N}\) & Inseparability \\ \hline & & & \(\forall d\,\colon\mathcal{M}.\,\neg\neg\mathsf{Dec}(\overline{\cdot}\mid d)\;\to\;\neg\,\mathcal{M}>\mathbb{N}\) & Makholm \\ \hline \(\bullet\) & \(\bullet\) & & \(\mathcal{M}\cong\mathbb{N}\) & McCarty \\ \hline \end{tabular}
First note that in the first two results we can clearly show the reverse implications. Therefore, given \(\mathsf{MP}\) and discreteness of \(\mathcal{M}\) we have the equivalences
\[\mathcal{M}\text{ enumerable}\iff\mathcal{M}\cong\mathbb{N}\ \stackrel{\mathsf{MP}}{\iff}\ \neg\,\mathcal{M}>\mathbb{N}\iff\forall d.\,\neg\neg\mathsf{Dec}(\,\overline{\cdot}\mid d).\]
Comparing the first three entries, we see that Makholm's result is a strengthening of the second one, in the sense that it no longer requires \(\mathsf{MP}\) and the discreteness assumption, but once we do assume them, it gives us the same result as the inseparability proof. His result becomes possible once we make use of \(\mathsf{HA}\)-inseparable formulas, overcoming the need for Overspill, which turns out to be the root of these additional assumptions. In general, we can observe that the results become progressively stronger and less reliant on further assumptions as more and more intermediary results are proven on the object level of \(\mathsf{HA}\). For example, instead of using Overspill to establish infinite coding, we later use that there is an internal proof of the coding lemma. Likewise, using \(\mathsf{HA}\)-inseparable formulas instead of inseparable formulas contributed to another strengthening. This might not be the end of the possible strengthenings. Makholm's result can equivalently be written as \(\mathcal{M}>\mathbb{N}\to\neg\neg\exists d.\neg\mathsf{Dec}(\,\overline{\cdot}\mid d)\), leaving open whether the constructively stronger \(\mathcal{M}>\mathbb{N}\to\exists d.\neg\mathsf{Dec}(\,\overline{\cdot}\mid d)\) can also be shown.
As was pointed out by McCarty in [10], a weaker version \(\mathsf{WCT}\) of \(\mathsf{CT}\) suffices for his proof, where the code representing a given function is hidden behind a double negation. He mentions in [10] that \(\mathsf{WCT}\) is still consistent with the Fan theorem, while \(\mathsf{CT}\) is not. Analogously, the following weakening of \(\mathsf{CT}_{\mathsf{Q}}\) suffices for all of the proofs that we have presented:
**Definition 8.1** (\(\mathsf{WCT}_{\mathsf{Q}}\)).: For every function \(f:\mathbb{N}\to\mathbb{N}\) there _potentially_ is a binary \(\Sigma_{1}\) formula \(\varphi_{f}(x,y)\) such that for every \(n\!:\!\mathbb{N}\) we have \(\mathsf{Q}\vdash\forall y.\,\varphi_{f}(\overline{n},y)\leftrightarrow \overline{fn}=y\).
This requires only a few changes to the presented proofs.12 An advantage of \(\mathsf{WCT}_{\mathsf{Q}}\) over \(\mathsf{CT}_{\mathsf{Q}}\) is that the former follows from the double negation of the latter and is therefore negative, ensuring that its assumption does not block reduction [11].
Footnote 12: We could have presented all of the results with respect to \(\mathsf{WCT}_{\mathsf{Q}}\). We opted against this in favor of \(\mathsf{CT}_{\mathsf{Q}}\), to avoid additional handling of double negations and to keep the proofs more readable.
Depending on the fragment of first-order logic one can give constructive proofs of the model existence theorem [14], producing a countable syntactic model with computable functions for every consistent theory. By the argument given in the introduction, model existence would yield a countable and computable non-standard model of \(\mathsf{PA}\), which at first glance seems to contradict the statement of Tennenbaum's theorem. For any countable non-standard model of \(\mathsf{PA}\) however, Theorem 7.19 and Lemma 7.2 entail that neither equality nor apartness can be decidable. This is similar in spirit to the results in [13], showing that even if the functions of the model are computable, non-computable behavior still emerges, but in relation to equality.
### Coq Mechanization
The Coq development is axiom-free, and the usage of the crucial but constructively justified axioms \(\mathsf{CT}_{\mathsf{Q}}\), \(\mathsf{MP}\) and \(\mathsf{AUC}\) is localized in the relevant sections. Apart from these, there is Hypothesis 7.18, which is taken as an additional assumption in the relevant sections. We have given details as to how this hypothesis can be proven, but since we did not yet mechanize the proof, we wanted to make its assumption on the level of the mechanization very explicit, by labeling it as a hypothesis in the accompanying text.
The development depends on a Coq library for first-order logic. Restricting to the files for this project, the line count is roughly 4000 lines of code. From those, 2000 loc on basic results about PA models were reused from earlier work [11]. Notably, the formalization of the various coding lemmas from Section 5 took 580 loc and all variants of Tennenbaum's theorem amount to a total of only 800 lines.
### Related Work
Classical proofs of Tennenbaum's theorem can be found in [1, 10, 11]. There are also refinements of the theorem which show that computability of either operation suffices [12] or which reduce the argument to a weaker induction scheme [13, 14]. Constructive accounts were given by McCarty [13, 14] and Plisko [15], and a relatively recent investigation into Tennenbaum phenomena was conducted by Godziszewski and Hamkins [10].
For an account of CT as an axiom in constructive mathematics we refer to Kreisel [11] and Troelstra [12]. Investigations into CT and its connections to other axioms of synthetic computability based on constructive type theory were done by Forster [15, 16]. While there is no proof for the consistency of CT in CIC, there are consistency proofs for very similar systems [10, 14, 17].
Compared to the previous conference paper [11], this extended journal version relies on the slightly different definition of \(\Sigma_{1}\) and \(\Delta_{1}\) formulas used by Kirst and Peters [13]. Moreover, they give a derivation of our formulation of CT from a more conventional formulation of Church's thesis, illustrating that CT is a convenient axiom for sidestepping much first-order encoding overhead, while the work needed to formally capture computation within Q remains feasible to mechanize.
Presentations of first-order logic in the context of proof-checking have already been discussed and used, among others, by Shankar [11], Paulson [12], O'Connor [18], as well as Han and van Doorn [13]. We make use of a Coq library for first-order logic [10], which has developed from several previous projects [11, 12, 13, 14] and depends on the Coq library of undecidability proofs [10].
Synthetic computability theory was introduced by Richman and Bauer [15, 16] and initially applied to constructive type theory by Forster, Kirst, and Smolka [11]. Their synthetic approach to undecidability results has been used in several other projects, all merged into the Coq library of undecidability proofs [10].
### Future Work
By relying on the synthetic approach, our treatment of Tennenbaum's theorem does not explicitly mention the computability of addition or multiplication of the model. To make these assumptions explicit again, and to also free our development from the necessity to adopt this viewpoint, we could assume an abstract version of CT which makes reference to a \(T\) predicate [15, 16] and is then used to axiomatize \(T\)-computable functions. We can then assume a version of CT that stipulates the computability of every \(T\)-computable function. This would allow us to specifically assume \(T\)-computability for either addition or multiplication and to formalize the result that \(T\)-computability of either operation leads to the model being standard [12].
In this present paper, we mechanized one of the two hypotheses that were left unmechanized in [11], and we plan to also eliminate the remaining one, namely the object level coding lemma (Hypothesis 7.18). This will require the proof of Corollary 5.2 to be turned into a derivation, potentially needing sizeable syntactic derivations inside of HA, and very
likely many "boilerplate" results about prime numbers. Just as with the mechanization of Lemma 7.14, these proofs will benefit from the proof mode developed in [1].
There are interesting parallels when comparing the proofs of Tennenbaum's theorem and proofs of the incompleteness results. In particular, we saw that the usage of HA-inseparable sets, and therefore the usage of Rosser's trick, leads to an improvement of the constructive Tennenbaum result. Connections between the two theorems are well-known [11, 12], but it should be interesting to combine the presented work with work like [13], to study this connection in a constructive framework. As we hope to have illustrated with the present work, this can be a worthwhile project, as it can shed new light on the content of old proofs.
A more satisfying rendering of McCarty's result will be achieved by changing the semantics (Definition 2.10), and putting the interpretations of formulas on the (proof-relevant) type level instead of the propositional level, therefore removing the need to assume AUC to break the barrier from the propositional to the type level.
Following usual practice in textbooks, we consider the first-order equality symbol as a syntactic primitive and only regard models interpreting it as actual equality in Coq. When treated as axiomatized relation instead, we could consider the (slightly harder to work with) setoid models and obtain the more general result that no computable non-standard setoid model exists.
|
2306.01005 | AbODE: Ab Initio Antibody Design using Conjoined ODEs | Antibodies are Y-shaped proteins that neutralize pathogens and constitute the
core of our adaptive immune system. De novo generation of new antibodies that
target specific antigens holds the key to accelerating vaccine discovery.
However, this co-design of the amino acid sequence and the 3D structure
subsumes and accentuates some central challenges from multiple tasks, including
protein folding (sequence to structure), inverse folding (structure to
sequence), and docking (binding). We strive to surmount these challenges with a
new generative model AbODE that extends graph PDEs to accommodate both
contextual information and external interactions. Unlike existing approaches,
AbODE uses a single round of full-shot decoding and elicits continuous
differential attention that encapsulates and evolves with latent interactions
within the antibody as well as those involving the antigen. We unravel
fundamental connections between AbODE and temporal networks as well as
graph-matching networks. The proposed model significantly outperforms existing
methods on standard metrics across benchmarks. | Yogesh Verma, Markus Heinonen, Vikas Garg | 2023-05-31T14:40:47Z | http://arxiv.org/abs/2306.01005v1 | # AbODE: Ab Initio Antibody Design using Conjoined ODEs
###### Abstract
Antibodies are Y-shaped proteins that neutralize pathogens and constitute the core of our adaptive immune system. _De novo_ generation of new antibodies that target specific _antigens_ holds the key to accelerating vaccine discovery. However, this co-design of the amino acid sequence and the 3D structure subsumes and accentuates some central challenges from multiple tasks, including protein folding (sequence to structure), inverse folding (structure to sequence), and docking (binding).
We strive to surmount these challenges with a new generative model AbODE that extends graph PDEs to accommodate both contextual information and external interactions. Unlike existing approaches, AbODE uses a single round of full-shot decoding, and elicits continuous differential attention that encapsulates, and evolves with, latent interactions within the antibody as well as those involving the antigen. We unravel fundamental connections between AbODE and temporal networks as well as graph-matching networks. The proposed model significantly outperforms existing methods on standard metrics across benchmarks.
The autoregressive scheme (one residue at a time) adopted by (Jin et al., 2022a;b) is susceptible to issues such as vanishing or exploding gradients during training, as well as slow generation and accumulation of errors during inference. Kong et al. (2023) advocate multiple _full-shot_ rounds to address this issue; however, segregating context (intra-antibody) from external interactions (antibody-antigen) precludes joint optimization, and may result in sub-optimality.
We circumvent these issues with a novel viewpoint that models the antibody-antigen complex as a joint 3D graph with heterogeneous edges. Different from all prior works, this perspective allows us to formulate a coupled neural ODE system over the nodes pertaining to the antibody, while simultaneously accounting for the antigen. Specifically, we associate local densities (one per antibody node) that are progressively refined toward globally aligned densities based on simultaneous feedback from the antigen as well as the (other) antibody nodes. The 3D coordinates and the node labels for the antibody can then be sampled after a few rounds in _one-shot_, i.e., all at once. Thus, the entire procedure is efficient and end-to-end trainable.
We show how invariance can be readily built into the proposed method AbODE toward representations that account for rotations and other symmetries. AbODE establishes a new state-of-the-art (SOTA) for antibody design across standard metrics on several benchmarks. Interestingly, it turns out that it shares connections with two recent methods for equivariant molecular generation and docking, namely, ModFlow and IEGMN. While ModFlow can be recovered as a special case of the AbODE formulation, IEGMN may be interpreted as a discrete analog of AbODE. On the one hand, these similarities reaffirm the kinship of different computational drug design tasks; on the other, they suggest the broader applicability of neural PDEs as effective tools for these tasks. Our experiments further reinforce this phenomenon: AbODE is already competitive with the SOTA methods on a task it is not tailored for, namely, fixed backbone protein sequence design.
### Contributions
In summary, we make the following contributions.
* We propose AbODE, a generative model that extends graph PDEs by jointly modeling the internal context and interactions with external objects (e.g., antigens).
* AbODE co-designs the antibody sequence and structure, using a single round of full-shot decoding.
* Empirically, AbODE registers SOTA performance on various sequence design and structure prediction tasks.
## 2 Related Work
**Antibody/protein design** Early approaches for computational antibody design optimize hand-crafted energy functions (Pantazes and Maranas, 2010; Li et al., 2014; Lapidoth et al., 2015; Adolf-Bryfogle et al., 2018). These methods require costly simulations and are prone to defects due to complex interactions between chains that cannot be captured by force fields or statistical functions (Graves et al., 2020). Recently, deep generative models have been utilized for 1D sequence prediction in proteins (O'Connell et al., 2018; Ingraham et al., 2019; Strokach et al., 2019; Karimi et al., 2020; Cao et al., 2021; Dauparas et al., 2022) and antibodies (Alley et al., 2019; Shin et al., 2021; Saka et al., 2021; Akbar et al., 2022), conditioned on the backbone 3D structure. Jin et al. (2022b) proposed to co-design the sequence and structure via an autoregressive refinement technique, while Kong et al. (2023) advocated multiple rounds of full-shot decoding together with an encoder for intra-antibody context, and a separate encoder for external interactions.
Different from all these works, we formulate a single full-shot method that extends graph PDEs (Chamberlain et al., 2021; Iakovlev et al., 2020) to accommodate and condition on spatial and context-based information of the antigen, tailored to antibody sequence and structure generation.

Figure 1: Schematic showing the structure of a residue (amino acid), where the backbone atoms we use are \(N\), \(C_{\alpha}\) and \(C\) (**right**) and the structure of the antibody (**left**), which is Y-shaped, showing the VH/VL sequences and binding to the antigen; we focus on CDRs of the variable domain in the heavy chain (VH).
**Generative models for graphs** Our work is related to continuous time models for graph generation (Verma et al., 2022; Avelar et al., 2019) that incorporate dynamic interactions (Chen et al., 2018; Grathwohl et al., 2018; Iakovlev et al., 2020; Eliasof et al., 2021) over graphs. Methods have also been developed for protein structure generation, e.g., Folding Diffusion (Wu et al., 2022), Anand & Huang (2018), AlphaDesign (Gao et al., 2022), etc. Most of these methods lack the flexibility to be directly applied to antibody sequence and structure design, due to their inability to capture effective inductive biases, conditional information, and higher-order features. In contrast, we can combine conditional information and evolve the structure and sequence via latent co-interacting trajectories.
**3D structure prediction** Our method is also closely related to docking (Ganea et al., 2021; Stark et al., 2022) and protein folding (Ingraham et al., 2019;c,d; Baek et al., 2021; Jumper et al., 2021; Ingraham et al., 2022). Methods like DiffDock (Corso et al., 2023) and EquiBind (Stark et al., 2022) predict only the structure of the molecule given a protein binding site but lack any generative component related to sequence design. AlphaFold (Jumper et al., 2021) requires holistic information like protein sequence, multi-sequence alignment (MSA), and template features. These models cannot be directly applied for antibody design, where MSA is not specified in advance and one needs to predict the structure of an incomplete sequence. In contrast, we learn to co-model the 3D structure and sequence for incomplete graphs and interleave structure modeling with sequence prediction.
## 3 Antibody sequence and structure co-design
An antibody (Ab) is a Y-shaped protein (Fig. 1) that identifies antigens of a foreign object (e.g., a virus) and stimulates an immunological response. An antibody consists of a constant domain, and a symmetric variable region divided into heavy (H) and light (L) chains (Kuroda et al., 2012). The surface of the antibody contains three complementarity-determining regions (CDRs), which act as the main binding determinant. CDR-H3 makes up the majority of the binding affinity (Fischman & Ofran, 2018). The non-CDR regions are highly preserved (Kuroda et al., 2012); thus, it is common to formulate antibody design as a CDR design problem (Shin et al., 2021).
We view the antibody-antigen complex as a joint graph with interactions between nodes across the binding. We co-model both the sequence and the 3D conformation of the CDR regions with a graph PDE and apply our method to antigen-specific and unconditional antibody design tasks.
We seek a representation that is invariant to translations and rotations due to its locality along the backbone. Moreover, we would like the edge features to be sufficiently informative such that the relative neighborhoods can be reconstructed up to rigid body motion (Ingraham et al., 2019). We describe next a representation that satisfies these desiderata.
### The antibody-antigen graph
We define the antigen-antibody complex as a 3D graph \(G=(V,E,X)\), with antibody (Ab) and antigen (Ag) vertices \(V=(V_{\texttt{Ab}},V_{\texttt{Ag}})\), coordinates \(X=(X_{\texttt{Ab}},X_{\texttt{Ag}})\) and edges \(E=(E_{\texttt{Ab}},E_{\texttt{Ab-Ag}})\) within the antibody as well as between the antibody and the antigen. Each vertex \(v\in\mathcal{A}=\{\texttt{Arg},\texttt{His},\ldots\}\) is one of 20 amino acids. We treat the labels with a Categorical distribution, such that the label features \(\mathbf{a}_{i}\in\mathbb{R}^{20}\) represent the unnormalized amino acid probabilities. We also represent each residue by the Cartesian 3D coordinates of its three backbone atoms \(\{N,C_{\alpha},C\}\) (see Fig. 1). For the \(i^{th}\) residue \(\mathbf{x}_{i}\), we compute its spatial features \(\mathbf{s}_{i}=(r_{i},\alpha_{i},\gamma_{i})\) via Eqs. 1-3, where \(r_{i}\) denotes the distance between consecutive residues \(\mathbf{x}_{i}\) and \(\mathbf{x}_{i+1}\), \(\alpha_{i}\) is the co-angle of residue \(i\) with respect to the previous and next residues, \(\gamma_{i}\) is the azimuthal angle of \(i\)'s local plane, and \(\mathbf{n}_{i}\) is the normal vector. The full residue state \(\mathbf{z}_{i}=[\mathbf{a}_{i},\mathbf{s}_{i}]\) concatenates the label features \(\mathbf{a}_{i}\) and the spatial features \(\mathbf{s}_{i}\).

Figure 2: A demonstration of AbODE. The initial structure and amino acid labels evolve in time under \(f_{\psi}\) and are subsequently transformed into a final structure and amino acid labels.
\[r_{i} =\left\|\mathbf{u}_{i}\right\|,\quad\mathbf{u}_{i}=\mathbf{x}_{i+1} -\mathbf{x}_{i} \tag{1}\] \[\alpha_{i} =\cos^{-1}\left(\frac{\left\langle\mathbf{u}_{i},\mathbf{u}_{i-1} \right\rangle}{\left\|\mathbf{u}_{i}\right\|\cdot\left\|\mathbf{u}_{i-1} \right\|}\right)\] (2) \[\gamma_{i} =\cos^{-1}\left(\frac{\left\langle\mathbf{u}_{i},\mathbf{n}_{i} \right\rangle}{\left\|\mathbf{u}_{i}\right\|\cdot\left\|\mathbf{n}_{i}\right\|} \right),\quad\mathbf{n}_{i}=\mathbf{u}_{i}\times\mathbf{u}_{i-1}. \tag{3}\]
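As a small illustration of how the spatial features of Eqs. (1)-(2) can be computed from backbone coordinates, here is a minimal NumPy sketch; the function name and the restriction to a single atom type are our assumptions rather than the authors' released code, and the azimuthal angle \(\gamma_{i}\) of Eq. (3) would be derived analogously from the returned normals \(\mathbf{n}_{i}\).

```python
import numpy as np

def spatial_features(x):
    """x: (M, 3) array of consecutive residue coordinates for one backbone atom type.
    Returns the distances r_i (Eq. 1), co-angles alpha_i (Eq. 2), and the
    normals n_i = u_i x u_{i-1} that enter the azimuthal angle of Eq. (3)."""
    u = x[1:] - x[:-1]                              # u_i = x_{i+1} - x_i
    r = np.linalg.norm(u, axis=-1)                  # Eq. (1)
    u_prev, u_curr = u[:-1], u[1:]
    cos_a = np.einsum("ij,ij->i", u_curr, u_prev) / (
        np.linalg.norm(u_curr, axis=-1) * np.linalg.norm(u_prev, axis=-1))
    alpha = np.arccos(np.clip(cos_a, -1.0, 1.0))    # Eq. (2)
    n = np.cross(u_curr, u_prev)                    # n_i = u_i x u_{i-1}
    return r, alpha, n
```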
**Interactions** To capture the interactions pertaining to the complex, we define edges \(E_{\mathrm{Ab}}\) between all antibody residues and edges \(E_{\mathrm{Ab-Ag}}\) between all antibody and antigen residues (see Figure 3). We also define edge features between nodes \(i\) and \(j\),
\[\mathbf{e}_{ij}=\Big(\Delta\mathbf{z}_{ij},\;i-j,\;\mathrm{RBF}\left(\left\|\mathbf{s}_{i}-\mathbf{s}_{j}\right\|\right),\;\mathcal{O}_{i}^{\top}\frac{s_{i,\alpha}-s_{j,\alpha}}{\left\|s_{i,\alpha}-s_{j,\alpha}\right\|},\;\mathcal{O}_{i}^{\top}\mathcal{O}_{j},\;k_{ij}\Big). \tag{4}\]
These include state differences \(\Delta\mathbf{z}_{ij}=\{\Delta\mathbf{a}_{ij},\Delta\mathbf{s}_{ij}\}\) over label features \(\Delta\mathbf{a}_{ij}=\mathbf{a}_{j}-\mathbf{a}_{i}\) and spatial features \(\Delta\mathbf{s}_{ij}=\{(\Delta r_{ij},\Delta\alpha_{ij},\Delta\gamma_{ij})_{p}\mid p\in\{N,C_{\alpha},C\}\}\), the backbone distance \(i-j\), and the spatial distance \(\mathrm{RBF}(\left\|\mathbf{s}_{i}-\mathbf{s}_{j}\right\|)\) (here, RBF is the standard radial basis function kernel). The fourth term encodes a directional embedding, i.e., the relative direction of \(j\) in the local coordinate frame \(\mathcal{O}_{i}\) of node \(i\) (Ingraham et al., 2019), and \(\mathcal{O}_{i}^{T}\mathcal{O}_{j}\) encodes the orientation of node \(j\) relative to node \(i\) (see Appendix A.1 for details). Finally, we encode within-antibody edges with \(k=1\) and antibody-antigen edges with \(k=2\).
**Task formulation** Given a three-dimensional antibody or antibody-antigen graph, we aim to learn a PDE in order to generate an amino acid sequence and the corresponding 3D conformation jointly.
### Conjoined system of ODEs
We propose to model the distribution of antibody-antigen complexes by a differential graph flow \(\mathbf{z}(t)\) over time \(t\in\mathbb{R}_{+}\). We initialize the state \(\mathbf{z}(0)\) with uniform categorical label features, similar to mask initialization (Jin et al., 2022; Kong et al., 2023). The coordinates are initialized by distributing the CDR residues evenly between the residue right before the CDR and the one right after it, following Kong et al. (2023). We then learn a differential \(\frac{d\mathbf{z}(t)}{dt}\) that maps \(\mathbf{z}(0)\) to an end state \(\mathbf{z}(T)\) matching the data.
We begin by assuming an ODE system \(\{\mathbf{z}_{i}(t)\}\) over time \(t\in\mathbb{R}_{+}\), where the time evolution of node \(i\) is given by the ODE
\[\dot{\mathbf{z}}_{i}(t)=\frac{\partial\mathbf{z}_{i}(t)}{\partial t}=f_{\psi}(t,\mathbf{z}_{i}(t),\mathbf{z}_{N(i)}(t),\{\mathbf{e}_{ij}(t)\}_{j}) \tag{6}\]
where \(N(i)=\{j\::(i,j)\ \in\ E\}\) indexes the neighbors of node \(i\), and the function \(f\) parameterized by \(\psi\) is our main learning goal. The differentials form a coupled ODE system
\[\dot{\mathbf{z}}(t)=\begin{pmatrix}\dot{\mathbf{z}}_{1}(t)\\ \vdots\\ \dot{\mathbf{z}}_{M}(t)\end{pmatrix} \tag{7}\] \[=\begin{pmatrix}f_{\psi}(t,\mathbf{z}_{1}(t),\mathbf{z}_{N(1)}(t),\{\mathbf{e}_{1j}(t)\}_{j})\\ \vdots\\ f_{\psi}(t,\mathbf{z}_{M}(t),\mathbf{z}_{N(M)}(t),\{\mathbf{e}_{Mj}(t)\}_{j})\end{pmatrix} \tag{8}\] \[\mathbf{z}(T)=\mathbf{z}(0)+\int_{0}^{T}\dot{\mathbf{z}}(t)\,dt\;. \tag{9}\]
where \(M\) is the number of nodes. The above ODE system corresponds to a graph PDE (Iakovlev et al., 2020; Verma et al., 2022), whose forward and backward passes can be computed efficiently with standard ODE solvers.
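A minimal sketch of how such a coupled system can be integrated in practice is given below; it assumes the torchdiffeq package and a generic differential module `f_psi`, and is only illustrative of Eqs. (6)-(9), not the authors' implementation.

```python
import torch
from torchdiffeq import odeint  # assumed ODE-solver backend

class GraphODE(torch.nn.Module):
    """Coupled node ODE system of Eqs. (6)-(9) over a fixed antibody-antigen graph."""
    def __init__(self, f_psi, edge_index, edge_attr):
        super().__init__()
        self.f_psi = f_psi              # learnable differential f_psi
        self.edge_index = edge_index    # (2, E) within- and across-graph edges
        self.edge_attr = edge_attr      # (E, d_e) edge features of Eq. (4)

    def forward(self, t, z):
        # z: (M, d) stacked residue states [a_i, s_i]; returns dz/dt for all nodes.
        return self.f_psi(t, z, self.edge_index, self.edge_attr)

def integrate(graph_ode, z0, T=1.0, steps=2):
    """Solve z(T) = z(0) + integral of dz/dt over [0, T] (Eq. 9)."""
    t = torch.linspace(0.0, T, steps)
    return odeint(graph_ode, z0, t)[-1]
```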
Interestingly, it turns out that the PDE underlying a recently proposed method for molecular generation can be recovered as a particular case of Eq. 7 when all the edges are set to be of the same type.
**Proposition 1**: _ModFlow (Verma et al., 2022) can be seen as a special case of AbODE in an unconditional setting. This can be achieved by setting \(k_{ij}=1\) for every \(e_{ij}\)._
### Attention-based differential
We capture the interactions between the antigen and antibody residues with graph attention (Shi et al., 2020)
\[\alpha_{ij} =\mathrm{softmax}\left(\frac{\left(\mathbf{W}_{3}\mathbf{z}_{i} \right)^{\top}\left(\mathbf{W}_{4}\mathbf{z}_{j}+\mathbf{W}_{6}\mathbf{e}_{ ij}\right)}{\sqrt{d}}\right) \tag{10}\] \[\mathbf{z}_{i}^{\prime} =\mathbf{W}_{1}\mathbf{z}_{i}+\sum_{j\in N(i)}\alpha_{ij}\left( \mathbf{W}_{2}\mathbf{z}_{j}+\mathbf{W}_{6}\mathbf{e}_{ij}\right) \tag{11}\]
where \(\mathbf{W}_{1},\ldots,\mathbf{W}_{6}\) are weight parameters and \(d\) is the head size. The \(\alpha\)'s are the attention coefficients corresponding to within- and across-edges, which are used to update the node feature \(\mathbf{z}_{i}\). Interestingly, our method also shares similarities with the Independent E(3)-Equivariant Graph Matching Networks (IEGMNs) for docking (Ganea et al., 2021).

Figure 3: Schematic graph construction for the antigen-antibody complex with internal edges \(E_{\mathrm{Ab}}\) and external edges \(E_{\mathrm{Ab-Ag}}\). In the unconditional setting (i.e., the antigen is not specified), this reduces to an antibody graph.
**Proposition 2**: _AbODE can be cast as an Independent E(3)-Equivariant Graph Matching Network (IEGMN) (Ganea et al., 2021). The operations are listed in Table 1 (see Appendix A.2 for more details)._
In this sense, our extended graph PDE unifies molecular generation and docking with protein/antibody design.
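As a concrete but hedged sketch of the attention-based differential in Eqs. (10)-(11), one could instantiate \(f_{\psi}\) with torch_geometric's TransformerConv (Shi et al., 2020); the dimensions and the output projection below are our assumptions for illustration, not the reported architecture.

```python
import torch
from torch_geometric.nn import TransformerConv  # Shi et al. (2020) attention layer

class AttentionDifferential(torch.nn.Module):
    """Maps node states z and heterogeneous edges to dz/dt via graph attention."""
    def __init__(self, d_node=29, d_edge=16, d_hidden=128):
        super().__init__()
        # Attention over both within-antibody and antibody-antigen edges,
        # with the edge features of Eq. (4) injected into keys and messages.
        self.conv = TransformerConv(d_node, d_hidden, heads=1, edge_dim=d_edge)
        self.out = torch.nn.Linear(d_hidden, d_node)  # project back to the state size

    def forward(self, t, z, edge_index, edge_attr):
        h = self.conv(z, edge_index, edge_attr)
        return self.out(torch.relu(h))
```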
We now describe our training objective.
### Training Objective
We optimize for the data fit of the generated states z(T) given by the differential function \(f_{\psi}\). The loss consists of two components: one for the sequence and another for the structure
\[\mathcal{L}=\mathcal{L}_{\texttt{seq}}+\mathcal{L}_{\texttt{ structure}} \tag{12}\]
The sequence loss is quantified in terms of the cross-entropy between the true label \(\mathbf{a}_{ni}^{\texttt{true}}\) and the label distribution \(\mathbf{a}_{ni}\) predicted by the model, i.e.,
\[\mathcal{L}_{\texttt{seq}}=\frac{1}{N}\sum_{n=1}^{N}\frac{1}{M_{n}}\sum_{i=1}^{M_{n}}\mathrm{CE}\left(\mathbf{a}_{ni}^{\texttt{true}},\mathbf{a}_{ni}\right) \tag{13}\]

where \(n\) indexes the \(N\) datapoints and \(i\) indexes the \(M_{n}\) residues of datapoint \(n\). The structure loss is computed based on the fit to the data sample in terms of the angles and radii:

\[\mathcal{L}_{\texttt{structure}}=\frac{1}{N}\sum_{n=1}^{N}\frac{1}{M_{n}}\sum_{i=1}^{M_{n}}\lambda\left(\mathcal{L}_{\texttt{angle}}^{ni}+\mathcal{L}_{\texttt{radius}}^{ni}\right). \tag{14}\]
For each residue angle pair \((\alpha,\gamma)\) we compute the negative log of the von-Mises likelihood
\[\mathcal{L}_{\texttt{angle}}^{ni}=-\sum_{k}^{\{\mathrm{C}_{\alpha},\mathrm{C},\mathrm{N}\}}\sum_{\theta\in\{\alpha,\gamma\}}\log\mathcal{M}\left(\theta_{ik}^{n}\mid\theta_{ik}^{n,\texttt{true}},\kappa\right) \tag{15}\]
where \(\kappa\) is a scale (concentration) parameter and \(k\) indexes the backbone atoms. The von Mises distribution can be interpreted as a Gaussian distribution over the domain of angles. The radius loss, on the other hand, is the negative log of a Gaussian likelihood over the radii:
\[\mathcal{L}_{\texttt{radius}}^{ni}=-\sum_{k}^{\{\mathrm{C}_{\alpha},\mathrm{C},\mathrm{N}\}}\log\mathcal{N}\left(r_{ik}^{n}\mid r_{ik}^{n,\texttt{true}},\sigma_{r}^{2}\right) \tag{16}\]
where \(\sigma_{r}^{2}\) is the radius variance. Note that our method predicts the sidechain spatial coordinates, also used to calculate the total loss. Here \(\lambda\) is the polar loss weight, set to \(\lambda=0.8\). We set \(\kappa=10\), \(\sigma_{r}^{2}=0.1\) to prefer narrow likelihoods for accurate structure prediction.
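A compact sketch of this objective, under the stated assumption that the predicted and true angles and radii are given as flat tensors, could look as follows; it mirrors Eqs. (12)-(16) but is not the authors' training code.

```python
import torch
import torch.nn.functional as F
from torch.distributions import Normal, VonMises

def abode_loss(a_logits, a_true, theta_pred, theta_true, r_pred, r_true,
               lam=0.8, kappa=10.0, sigma_r2=0.1):
    """a_logits: (M, 20) label features, a_true: (M,) integer labels,
    theta_*: angle tensors (alpha, gamma per backbone atom), r_*: radii."""
    l_seq = F.cross_entropy(a_logits, a_true)                                # Eq. (13)
    kappa_t = torch.as_tensor(kappa)
    l_angle = -VonMises(theta_true, kappa_t).log_prob(theta_pred).mean()     # Eq. (15)
    scale = torch.as_tensor(sigma_r2).sqrt()                                 # variance -> std
    l_radius = -Normal(r_true, scale).log_prob(r_pred).mean()                # Eq. (16)
    return l_seq + lam * (l_angle + l_radius)                                # Eqs. (12), (14)
```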
We next describe the generation step.
### Sequence and structure prediction
Given the antibody or antigen-antibody complex, we generate an antibody sequence and the corresponding structure by solving the system of ODEs described in Section 3.2 up to time \(T\) to obtain \(\mathbf{z}(T)=[\mathbf{a}(T),\mathbf{s}(T)]\). Using the softmax operator, we transform the label features \(\mathbf{a}(T)\) into Categorical amino acid probabilities \(\mathbf{p}\) and pick the most probable amino acid per node. A schematic representation is shown in Fig. 2.
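The decoding step itself is a one-shot read-out of the integrated state; a minimal sketch, assuming a 20-dimensional label block at the front of the state vector, is:

```python
import torch

def decode(z_T, n_labels=20):
    """Split the end state z(T) into label and spatial features and pick the
    most probable amino acid per residue (Section 3.5)."""
    a_T, s_T = z_T[:, :n_labels], z_T[:, n_labels:]
    p = torch.softmax(a_T, dim=-1)        # Categorical amino-acid probabilities
    return p.argmax(dim=-1), s_T          # residue indices + predicted structure
```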
\begin{table}
Table 1: AbODE as a variant of the Independent E(3)-Equivariant Graph Matching Network (IEGMN), applied to interactions among two graphs \(G_{1}=(V_{1},E_{1})\) and \(G_{2}=(V_{2},E_{2})\). The table compares, row by row, the IEGMN layer operations with the corresponding AbODE operations: edge messages, intra- and inter-graph connections, node embedding updates, and coordinate embedding updates. Here, \(e_{ij}\in E_{1}\cup E_{2}\); \(n\in V_{1}\cup V_{2}\); \(\texttt{RBF}(\mathbf{x}_{i},\mathbf{x}_{j};\sigma)=\exp(-||\mathbf{x}_{i}^{(l)}-\mathbf{x}_{j}^{(l)}||^{2}/\sigma)\); \(h_{n}\) and \(x_{n}\) denote, respectively, the node embedding and the spatial embedding; \(a_{ij}\) are attention-based coefficients; \(\phi^{x}\) is a real-valued (scalar) parametric function; \(\phi^{h,e}\) are parametric functions (MLPs); \(\boldsymbol{t}_{ij},\boldsymbol{t}_{i}\) are the original edge and node features; \(\beta,\eta\) are scaling parameters and \(\mathbf{W}\) is a learnable matrix. For AbODE, \(\alpha_{i,j}\) are the attention coefficients of Eq. (10); \(\mathbf{W}_{1},\ldots,\mathbf{W}_{6}\) are learnable weight parameters; and \(d\) is the hidden size of each head.
\end{table}
## 4 Experiments
**Tasks** We benchmark AbODE on a series of challenging tasks: (i) we evaluate the model on unconditional antibody sequence and structure generation against ground truth structures in the Structural Antibody Database SAbDab (Dunbar et al., 2014) in section 4.1, (ii) we benchmark our method in terms of its ability to generate antigen-conditioned antibody sequences and structures from SAbDab in section 4.2, (iii) we evaluate our model on the task of designing CDR-H3 over 60 manually selected diverse complexes (Adolf-Bryfogle et al., 2018) in section 4.3, (iv) we extend our model to incorporate information about the constant region of the antibody in section 4.4, and finally, (v) we extend AbODE to de novo protein sequence design with a fixed backbone in section 4.5.
**Baselines** We compare AbODE with the state-of-the-art baseline methods. On the unconditioned generation task, we compare against the sequence-only **LSTM** (Saka et al., 2021; Akbar et al., 2022), an autoregressive graph network AR-GNN (You et al., 2018) tailored for antibodies, and an autoregressive method RefineGNN (Jin et al., 2022), which considers the 3D geometry and co-models the sequence and the structure.
On the antigen-conditioned sequence and structure generation task, we again compare against LSTM and RefineGNN. We also consider their variants C-LSTM and C-RefineGNN proposed in Kong et al. (2023), which adapt these methods to condition on the entire context of the antibody-antigen complex. We additionally consider **MEAN** (Kong et al., 2023), which uses progressive full-shot decoding to generate the CDRs by encoding the 1D/3D information of the external antigen context. Finally, we also compare against RosettaAD (Adolf-Bryfogle et al., 2018), a physics-based simulator.
**Implementation** AbODE is implemented in PyTorch (Paszke et al., 2019). We used three layers of a Transformer Convolutional Network (Shi et al., 2020) with embedding dimensions of \(128-256-64\). Our models were trained with the Adam optimizer for 5000 epochs using batch size 300. For details, we refer the reader to Appendix A.3.
### Unconditioned Sequence and Structure Modeling
**Data** We obtained the antibody sequences and structure from the Structural Antibody Database (SAbDab) (Dunbar et al., 2014) and removed any incomplete or redundant complexes. We followed a similar strategy to Jin et al. (2022), where we focus on generating heavy chain CDRs, and curated the dataset by clustering the CDR sequences via MMseq2 (Steinegger and Soding, 2017) with \(40\%\) sequence identity. We then randomly split the clusters into training, validation, and test sets with an 8:1:1 ratio.
**Metrics** We evaluate our method on perplexity (PPL) and root mean square deviation (RMSD) between the predicted structures and the ground truth structures on the test data. We report the results for all the CDR-H regions. We calculate the RMSD by the Kabsch algorithm (Kabsch, 1976) based on \(C_{\alpha}\) spatial features of the CDR residues.
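For reference, a standard Kabsch-alignment RMSD between predicted and ground-truth \(C_{\alpha}\) coordinates can be computed as in the following generic NumPy sketch (not the authors' evaluation script):

```python
import numpy as np

def kabsch_rmsd(P, Q):
    """P, Q: (N, 3) predicted / ground-truth coordinates of corresponding atoms."""
    P = P - P.mean(axis=0)                     # center both point clouds
    Q = Q - Q.mean(axis=0)
    U, _, Vt = np.linalg.svd(P.T @ Q)          # SVD of the covariance matrix
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # correct for reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T    # optimal rotation
    diff = P @ R.T - Q
    return float(np.sqrt((diff ** 2).sum(axis=1).mean()))
```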
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{2}{c}{CDR-H1} & \multicolumn{2}{c}{CDR-H2} & \multicolumn{2}{c}{CDR-H3} \\ \cline{2-7} Method & PPL (\(\downarrow\)) & RMSD (\(\downarrow\)) & PPL (\(\downarrow\)) & RMSD (\(\downarrow\)) & PPL (\(\downarrow\)) & RMSD (\(\downarrow\)) \\ \hline LSTM & \(6.79\) & (N/A) & \(7.21\) & (N/A) & \(9.70\) & (N/A) \\ AR-GNN & \(6.44\) & \(2.97\) & \(6.86\) & \(2.27\) & \(9.44\) & \(3.63\) \\ RefineGNN & \(6.09\) & \(1.18\) & \(6.58\) & \(0.87\) & \(8.38\) & \(2.50\) \\ AbODE & \(4.25\pm 0.46\) & \(0.73\pm 0.15\) & \(4.32\pm 0.32\) & \(0.63\pm 0.19\) & \(6.35\pm 0.29\) & \(2.01\pm 0.13\) \\ \hline \hline \end{tabular}
\begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{2}{c}{CDR-H1} & \multicolumn{2}{c}{CDR-H2} & \multicolumn{2}{c}{CDR-H3} \\ \cline{2-7} Method & AAR \% (\uparrow) & RMSD (\(\downarrow\)) & AAR \% (\uparrow) & RMSD (\(\downarrow\)) & AAR \% (\uparrow) & RMSD (\(\downarrow\)) \\ \hline LSTM & \(40.98\pm 5.20\) & (N/A) & \(28.50\pm 1.55\) & (N/A) & \(15.69\pm 0.91\) & (N/A) \\ C-LSTM & \(40.93\pm 5.41\) & (N/A) & \(29.24\pm 1.08\) & (N/A) & \(15.48\pm 1.17\) & (N/A) \\ RefineGNN & \(39.40\pm 5.56\) & \(3.22\pm 0.29\) & \(37.06\pm 3.09\) & \(3.64\pm 0.40\) & \(21.13\pm 1.59\) & \(6.00\pm 0.55\) \\ C-RefineGNN & \(33.19\pm 2.99\) & \(3.25\pm 0.40\) & \(33.53\pm 3.23\) & \(3.69\pm 0.56\) & \(18.88\pm 1.37\) & \(6.22\pm 0.59\) \\ MEAN & \(58.29\pm 7.27\) & \(0.98\pm 0.16\) & \(47.15\pm 3.09\) & \(0.95\pm 0.05\) & \(36.38\pm 3.08\) & \(2.21\pm 0.16\) \\ AbODE & \(\mathbf{70.5}\pm 1.14\) & \(\mathbf{0.65}\pm 0.1\) & \(\mathbf{55.7}\pm 1.45\) & \(\mathbf{0.73}\pm 0.14\) & \(\mathbf{39.8}\pm 1.17\) & \(\mathbf{1.73}\pm 0.11\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: **Top**: Unconditional sequence and structure benchmark. We report perplexity (PPL) and root mean square deviation (RMSD) for each CDR in the heavy chain. Baselines are from Jin et al. (2022). **Bottom**: Antigen-conditional sequence and structure benchmark on SAbDab (Dunbar et al., 2014). We report amino acid recovery (AAR) and root mean square deviation (RMSD) for each CDR in the heavy chain. Baselines are from Kong et al. (2023).
**Results** The LSTM baselines do not involve structure prediction, so we only report the RMSD for the graph-based methods. Table 2 reports the performance of AbODE on unconditioned generation, where AbODE outperforms all the baselines on both metrics. Notably, AbODE significantly reduces the PPL in all CDR regions and typically predicts a structure close to the ground truth structure. We also evaluate the biological functionality of the generated antibodies, shown in Fig. 4. Specifically, we considered the following properties:
* **GRAVY**: The GRAVY (grand average of hydropathy) value is calculated by adding the hydropathy value of each residue and dividing by the length of the sequence (Kyte & Doolittle, 1982).
* **Instability**: The Instability index is calculated using the approach of Guruprasad et al. (1990), which predicts regional instability of dipeptides that occur more frequently in unstable proteins when compared to stable proteins.
* **Aromaticity**: It calculates the aromaticity value of a protein according to Lobry & Gautier (1994). It is simply the relative frequency of Phe+Trp+Tyr.
As our plots demonstrate, AbODE can essentially replicate the behavior of the data in terms of instability and GRAVY. However, there is some discrepancy in terms of spread concerning aromaticity.
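These three descriptors can be computed, for instance, with Biopython's ProteinAnalysis; the sequence below is a made-up placeholder, not a sample from the dataset.

```python
from Bio.SeqUtils.ProtParam import ProteinAnalysis

seq = "EVQLVESGGGLVQPGGSLRLS"    # hypothetical heavy-chain fragment
pa = ProteinAnalysis(seq)
print(pa.gravy())                # Kyte & Doolittle hydropathy average (GRAVY)
print(pa.instability_index())    # Guruprasad et al. (1990) instability index
print(pa.aromaticity())          # relative frequency of Phe + Trp + Tyr
```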
### Antigen Conditioned Sequence and Structure Modeling
**Data** We took the antigen-antibody complexes from the Structural Antibody Database (Dunbar et al., 2014), removed invalid data points, and renumbered the remaining complexes according to the IMGT scheme (Lefranc et al., 2003). We follow the data preparation strategy of Kong et al. (2023); Jin et al. (2022b) by splitting the dataset into training, validation, and test sets. We accomplish this by clustering the sequences via MMseq2 (Steinegger & Soding, 2017) with \(40\%\) sequence identity. Then we split all clusters into training, validation, and test sets in the proportion 8:1:1.
**Metrics** We employ Amino Acid Recovery (AAR) and RMSD for quantitative evaluation. AAR is defined as the overlapping rate between the predicted 1D sequences and the ground truth. RMSD is calculated via the Kabsch algorithm (Kabsch, 1976) based on \(C_{\alpha}\) spatial features of the CDR residues.
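As a minimal sketch of the AAR metric as defined above (assuming equal-length predicted and ground-truth CDR sequences):

```python
def amino_acid_recovery(pred: str, true: str) -> float:
    """Fraction of positions where the predicted residue matches the ground truth."""
    assert len(pred) == len(true)
    return sum(p == t for p, t in zip(pred, true)) / len(true)

# e.g. amino_acid_recovery("ARDY", "ARDW") == 0.75
```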
**Results** Table 2 shows the performance of AbODE compared to the baseline methods. AbODE performs better than the other competing methods in terms of both structure and sequence prediction. AbODE improves over the SOTA by directly combining the antibody context with the information about the antigen via the attention network, thereby demonstrating the benefits of joint modeling. As a result, AbODE is able to learn the underlying distribution of the complexes effectively.
### Antigen-Binding CDR-H3 Design
In order to further evaluate our model, we design CDR-H3 regions that bind to a given antigen. We use AAR and RMSD as our scoring metrics. We include RosettaAD (Adolf-Bryfogle et al., 2018), a conventional physics-based baseline, for comparison. We benchmark our method on the 60 diverse complexes selected by Adolf-Bryfogle et al. (2018).
Note, however, that the training is still conducted on the SAbDab dataset as described in section 4.2, where we eliminate the antibodies that overlap with those in RabD to avoid any data leakage.
**Results** The performance of AbODE, and its comparison with the baselines, is reported in Table 5. AbODE can improve upon the best-performing baseline MEAN while significantly outperforming all the other baselines in terms of both the AAR and the RMSD. In particular, the higher amino acid recovery rate (AAR) of AbODE relative to the other methods demonstrates the ability of the proposed
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{2}{c}{CDR-H1} & \multicolumn{2}{c}{CDR-H2} & \multicolumn{2}{c}{CDR-H3} \\ \cline{2-7} AbODE & PPL (\(\downarrow\)) & RMSD (\(\downarrow\)) & PPL (\(\downarrow\)) & RMSD (\(\downarrow\)) & PPL (\(\downarrow\)) & RMSD (\(\downarrow\)) \\ \hline \(-\) Constant Region & 4.25 \(\pm\) 0.46 & 0.73 \(\pm\) 0.15 & 4.32 \(\pm\) 0.32 & 0.63 \(\pm\) 0.19 & 6.35 \(\pm\) 0.29 & 2.01 \(\pm\) 0.13 \\ \(+\) Constant Region (\(z_{<i}\)) & 4.31 \(\pm\) 0.31 & 0.69 \(\pm\) 0.21 & 4.17 \(\pm\) 0.29 & 0.59 \(\pm\) 0.21 & 6.41 \(\pm\) 0.37 & 1.94 \(\pm\) 0.17 \\ \hline \hline \end{tabular}
\begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{2}{c}{CDR-H1} & \multicolumn{2}{c}{CDR-H2} & \multicolumn{2}{c}{CDR-H3} \\ \cline{2-7} AbODE & AAR \(\%\) (\(\uparrow\)) & RMSD (\(\downarrow\)) & AAR \(\%\) (\(\uparrow\)) & RMSD (\(\downarrow\)) & AAR \(\%\) (\(\uparrow\)) & RMSD (\(\downarrow\)) \\ \hline \(-\) Constant Region & 70.5 \(\pm\) 1.14 & 0.65 \(\pm\) 0.1 & 55.7 \(\pm\) 1.45 & 0.73 \(\pm\) 0.14 & 39.8 \(\pm\) 1.17 & 1.73 \(\pm\) 0.11 \\ \(+\) Constant Region (\(z_{<i}\)) & 71.9 \(\pm\) 1.87 & 0.71 \(\pm\) 0.23 & 56.8 \(\pm\) 1.97 & 0.70 \(\pm\) 0.14 & 36.7 \(\pm\) 1.5 & 1.88 \(\pm\) 0.11 \\ \hline \hline \end{tabular}
\end{table}
Table 3: **Top**: Adding constant region information for unconditioned sequence and structure modeling task. **Bottom**: Adding constant region information for antigen-conditioned antibody sequence and structure modeling task.
method to learn the underlying distribution of residues for sequence design.
### Conditional Generation given Framework Region
We next extend the proposed method by incorporating the sequence and structural information beyond the CDR regions (i.e., the constant region). We encode the sequence and structure information of the residues before CDR-H1, H2, and H3. Specifically, we define a k-nearest neighbor graph over the spatial domain for residues and use the sequence \(\mathbf{z}_{<i}\), where \(i\) is the location of the first CDR-H1 (or H2/H3, as the case may be), to obtain an encoding
\[h_{<i}=\phi^{enc}(\mathbf{z}_{<i},\mathbf{z}_{\mathcal{N}_{<i}},\{e_{ij}\}_{j\in\mathcal{N}_{<i}})\] \[h=\texttt{Agg}(h_{<i})\]
where \(\mathbf{z}_{<i}=[\mathbf{a}_{<i},\mathbf{s}_{<i}]\), \(\mathcal{N}_{<i}\) denotes the neighbours of the residues, and \(e_{ij}\) are the edge features. We parameterize \(\phi^{enc}\) as a 2-layer Transformer Convolutional Network (Shi et al., 2020), setting the encoding dimension to 16. The encoded features \(h_{<i}\) are then aggregated to provide a single summarized representation \(h\) per antibody, which is then used in dynamics
\[\dot{\mathbf{z}}_{i}(t)=\frac{\partial\mathbf{z}_{i}(t)}{\partial t}=f_{\psi }(t,\mathbf{z}_{i}(t),\mathbf{z}_{N(i)}(t),\{\mathbf{e}_{ij}(t)\}_{j},h)\]
Consequently, in this case, our method has access to extra information from the rest of the antibody sequence, leading to more nuanced dynamics. Further details are provided in Appendix A.4.
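For illustration, the sketch below mimics the idea of encoding the constant region with a k-nearest-neighbor graph over residue positions and aggregating the per-residue encodings into a single context vector \(h\). It uses a generic message-passing layer rather than the 2-layer Transformer Convolutional Network actually employed, and all names and dimensions are illustrative:

```python
import torch
import torch.nn as nn

class ConstantRegionEncoder(nn.Module):
    """Summarize the residues preceding the CDR into a single context vector h."""
    def __init__(self, feat_dim, hidden_dim=16, k=8):
        super().__init__()
        self.k = k
        self.msg = nn.Sequential(nn.Linear(2 * feat_dim, hidden_dim), nn.ReLU())
        self.upd = nn.Sequential(nn.Linear(feat_dim + hidden_dim, hidden_dim), nn.ReLU())

    def forward(self, z, coords):
        # z: (N, feat_dim) features of residues before the CDR; coords: (N, 3) C-alpha positions
        dist = torch.cdist(coords, coords)                        # pairwise distances
        k = min(self.k + 1, z.size(0))
        knn = dist.topk(k, largest=False).indices[:, 1:]          # nearest neighbours, self dropped
        nbr = z[knn]                                              # (N, k-1, feat_dim)
        msgs = self.msg(torch.cat([z.unsqueeze(1).expand_as(nbr), nbr], -1)).mean(dim=1)
        h_i = self.upd(torch.cat([z, msgs], dim=-1))              # per-residue encodings h_{<i}
        return h_i.mean(dim=0)                                    # aggregated context h
```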
**Results.** We evaluate this variant of our method on both unconditioned antibody sequence-structure design and antigen-conditioned antibody sequence-structure co-design, as described in sections 4.1 and 4.2. The performance of AbODE is reported in Table 3. We observe that including the constant region increases performance for some CDR regions.
### Fixed Backbone Sequence Design
We finally extend the evaluation of our method to design protein sequences that can fold into a given backbone structure. This task is commonly known as the fixed backbone structure design.
We utilized the protein dihedral angles and other spatial features described in Eq 4 and Jing et al. (2020). These features can be derived solely from backbone coordinates (Ingraham et al., 2019), as the protein structures are fixed from the beginning. We use the CATH 4.2 dataset curated by Ingraham et al. (2019) and follow the same experimental setting as used in previous works for a fair comparison. We compare AbODE with state-of-the-art baselines for fixed backbone design, including Structured GNN (Ingraham et al., 2019), GVP-GNN (Jing et al., 2020), GVP-Transformer (Hsu et al., 2022) and ProtSeed (Shi et al., 2023). We evaluate the performance of all methods using PPL and AAR as introduced in previous sections. Additional details can be found in A.5.
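As an example of such backbone-only features, the \(\phi/\psi\) dihedral angles can be computed directly from the N, \(C_{\alpha}\), and C coordinates; the sketch below uses the standard signed-dihedral formula and is not the authors' code:

```python
import numpy as np

def dihedral(p0, p1, p2, p3):
    """Signed dihedral angle (radians) defined by four 3D points."""
    b0, b1, b2 = p1 - p0, p2 - p1, p3 - p2
    b1 = b1 / np.linalg.norm(b1)
    v = b0 - np.dot(b0, b1) * b1          # projections onto the plane perpendicular to b1
    w = b2 - np.dot(b2, b1) * b1
    return np.arctan2(np.dot(np.cross(b1, v), w), np.dot(v, w))

def backbone_phi_psi(N, CA, C):
    """phi (residues 1..L-1) and psi (residues 0..L-2) from (L, 3) arrays of N, CA, C coordinates."""
    phi = [dihedral(C[i - 1], N[i], CA[i], C[i]) for i in range(1, len(N))]
    psi = [dihedral(N[i], CA[i], C[i], N[i + 1]) for i in range(len(N) - 1)]
    return np.array(phi), np.array(psi)
```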
**Results.** The comparison of AbODE with the other baselines is shown in Table 4. We note that AbODE performs comparably to the other methods when the evaluation is performed on the test splits of the CATH 4.2 test set. These include the short chains that have at most 100 residues and the single-chain protein sequences. Our results establish the promise of AbODE as a protein sequence design method (conditioned on desired backbone structures), and suggest that AbODE may be generalizable to related tasks beyond antibody design.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{3}{c}{PPL (\(\downarrow\))} & \multicolumn{3}{c}{AAR \(\%\) (\(\uparrow\))} \\ \cline{2-7} Method & Short & Single-chain & All & Short & Single-chain & All \\ \hline GVP-Transformer & \(8.94\) & \(8.67\) & \(6.70\) & \(27.3\) & \(28.3\) & \(36.5\) \\ Structured GNN & \(8.31\) & \(8.88\) & \(6.55\) & \(28.4\) & \(28.1\) & \(37.3\) \\ GVP-GNN & \(7.10\) & \(7.44\) & \(5.29\) & \(32.1\) & \(32.0\) & \(40.2\) \\ ProtSeed & \(7.32\) & \(7.38\) & \(5.60\) & \(34.8\) & \(34.1\) & \(43.8\) \\ AbODE & \(7.19\pm 0.34\) & \(7.33\pm 0.25\) & \(5.85\pm 0.45\) & \(34.4\pm 1.7\) & \(34.7\pm 1.2\) & \(42.7\pm 1.9\) \\ \hline \hline \end{tabular}
\end{table}
Table 4: Perplexity (PPL) and Amino Acid Recovery (AAR) for different methods on fixed backbone sequence design task. Baselines are from Shi et al. (2023).
\begin{table}
\begin{tabular}{l c c} \hline \hline Method & AAR \(\%\) (\(\uparrow\)) & RMSD (\(\downarrow\)) \\ \hline RosettaAD & 22.50 & 5.52 \\ LSTM & 22.36 & (N/A) \\ C-LSTM & 22.18 & (N/A) \\ RefineGNN & 29.79 & 7.55 \\ C-RefineGNN & 28.90 & 7.21 \\ MEAN & 36.77 & 1.81 \\ AbODE & **39.95**\(\pm\) 1.3 & **1.54**\(\pm\) 0.24 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Results on RAbD benchmark. We report Amino acid recovery (AAR) and RMSD for CDR-H3 design. Baselines are from Kong et al. (2023).
## 5 Ablation Studies
### Masked-Antigen Conditioned Sequence and Structure Modeling
We evaluate the performance of our method when data is missing. We investigate this scenario by masking \(10\%\) of the antigen's amino acids, with a minimum of one amino acid masked (note that masking 10% becomes especially critical when the antigen is a peptide with only 5-9 amino acids), for antigen-conditioned antibody sequence and structure generation. Table 6 shows the empirical results of the proposed method (AbODE) on antigen-conditioned antibody sequence and structure generation as described in section 4.2. Compared to the original, unmasked setting (Table 2), we observe some dip in performance, as expected.
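A minimal sketch of the masking procedure just described (illustrative only; the mask token and random-number handling are our assumptions):

```python
import numpy as np

def mask_antigen(antigen_seq, mask_frac=0.10, mask_token="X", rng=None):
    """Randomly mask a fraction of antigen residues, with at least one residue masked."""
    rng = rng or np.random.default_rng()
    n = len(antigen_seq)
    n_mask = max(1, int(round(mask_frac * n)))        # minimum of one masked residue
    idx = rng.choice(n, size=n_mask, replace=False)
    masked = list(antigen_seq)
    for i in idx:
        masked[i] = mask_token
    return "".join(masked), idx
```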
### Time hyperparameter for ODE
We also evaluate the effect of different choices of the number of time steps \(t\) used to solve our ODE system. Table 7 demonstrates the effect of changing the number of time steps on the downstream performance for CDR-H1 data on antigen-conditioned sequence and structure modeling. We note that increasing the number of time steps for solving the ODE increases performance and that training with fewer time steps leads to unstable training.
## 6 Conclusion
We introduced a new generative model AbODE, which models the antibody-antigen complex as a joint graph and performs information propagation using a graph PDE that reduces to a system of coupled residue-specific ODEs. AbODE can accurately co-model the sequence and structure of the antigen-antibody complex. In particular, the model can generate a binding antibody sequence and structure with state-of-the-art accuracy for a given antigen.
## Acknowledgements
The calculations were performed using resources made available by the Aalto University Science-IT project. This work has been supported by the Academy of Finland under the _HEALED_ project (grant 13342077).
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline & \multicolumn{2}{c}{CDR-H1} & \multicolumn{2}{c}{CDR-H2} & \multicolumn{2}{c}{CDR-H3} \\ \cline{2-7} AbODE & AAR \% (\(\uparrow\)) & RMSD (\(\downarrow\)) & AAR \% (\(\uparrow\)) & RMSD (\(\downarrow\)) & AAR \% (\(\uparrow\)) & RMSD (\(\downarrow\)) \\ \hline Mask = \(10\%\) & 63.7 & 0.87 & 49.7 & 0.88 & 33.1 & 1.99 \\ Mask = \(0\%\) & 70.5 & 0.65 & 55.7 & 0.73 & 39.8 & 1.73 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Amino acid recovery (AAR) and root mean square deviation (RMSD) for masking a certain part of antigen in antigen-conditioned antibody sequence and structure generation.
\begin{table}
\begin{tabular}{l c c} \hline \hline AbODE & PPL (\(\downarrow\)) & RMSD (\(\downarrow\)) \\ \hline \(t=10\) & 7.38 & 1.44 \\ \(t=50\) & 7.18 & 1.87 \\ \(t=200\) & 5.18 & 1.01 \\ \hline \hline \end{tabular}
\end{table}
Table 7: Hyperparameter effect of the number of time steps for solving the ODE for CDR-H1 data
Figure 4: Functional evaluation of generated antibodies vs. data for CDR-H1 unconditional antibody sequence and structure design |
2307.00032 | Uncertainty Informed Optimal Resource Allocation with Gaussian Process
based Bayesian Inference | We focus on the problem of uncertainty informed allocation of medical
resources (vaccines) to heterogeneous populations for managing epidemic spread.
We tackle two related questions: (1) For a compartmental ordinary differential
equation (ODE) model of epidemic spread, how can we estimate and integrate
parameter uncertainty into resource allocation decisions? (2) How can we
computationally handle both nonlinear ODE constraints and parameter
uncertainties for a generic stochastic optimization problem for resource
allocation? To the best of our knowledge current literature does not fully
resolve these questions. Here, we develop a data-driven approach to represent
parameter uncertainty accurately and tractably in a novel stochastic
optimization problem formulation. We first generate a tractable scenario set by
estimating the distribution on ODE model parameters using Bayesian inference
with Gaussian processes. Next, we develop a parallelized solution algorithm
that accounts for scenario-dependent nonlinear ODE constraints. Our
scenario-set generation procedure and solution approach are flexible in that
they can handle any compartmental epidemiological ODE model. Our computational
experiments on two different non-linear ODE models (SEIR and SEPIHR) indicate
that accounting for uncertainty in key epidemiological parameters can improve
the efficacy of time-critical allocation decisions by 4-8%. This improvement
can be attributed to data-driven and optimal (strategic) nature of vaccine
allocations, especially in the early stages of the epidemic when the allocation
strategy can crucially impact the long-term trajectory of the disease. | Samarth Gupta, Saurabh Amin | 2023-06-30T03:49:52Z | http://arxiv.org/abs/2307.00032v1 | # Uncertainty Informed Optimal Resource Allocation with Gaussian Process based Bayesian Inference
###### Abstract
We focus on the problem of uncertainty informed allocation of medical resources (vaccines) to heterogeneous populations for managing epidemic spread. We tackle two related questions: (1) For a compartmental ordinary differential equation (ODE) model of epidemic spread, how can we estimate and integrate parameter uncertainty into resource allocation decisions? (2) How can we computationally handle both nonlinear ODE constraints and parameter uncertainties for a generic stochastic optimization problem for resource allocation? To the best of our knowledge current literature does not fully resolve these questions. Here, we develop a data-driven approach to represent parameter uncertainty accurately and tractably in a novel stochastic optimization problem formulation. We first generate a tractable scenario set by estimating the distribution on ODE model parameters using Bayesian inference with Gaussian processes. Next, we develop a parallelized solution algorithm that accounts for scenario-dependent nonlinear ODE constraints. Our scenario-set generation procedure and solution approach are flexible in that they can handle any compartmental epidemiological ODE model. Our computational experiments on two different non-linear ODE models (SEIR and SEPIHR) indicate that accounting for uncertainty in key epidemiological parameters can improve the efficacy of time-critical allocation decisions by 4-8%. This improvement can be attributed to data-driven and optimal (strategic) nature of vaccine allocations, especially in the early stages of the epidemic when the allocation strategy can crucially impact the long-term trajectory of the disease.
## 1 Introduction
In this paper we study the problem of _uncertainty informed_ optimal resource allocation to control the spread of an infectious disease such as Covid-19. We develop a _data-driven, scalable_ and ODE _model agnostic_ approach while _accounting for uncertainty_ for the vaccine allocation problem. Our approach is flexible in that it can be easily adapted to other control strategies such as imposing lock-downs [8] and allocation of other resources such as medical personnel, supplies, testing facilities & etc.
The vaccine allocation problem has been well studied in the literature, ranging from earlier works like [11; 6] to more recent optimization-based methods like [7; 23]. Researchers have also studied ways to incorporate uncertainty through stochastic epidemiological modelling [17; 23] or stochastic optimization with uncertain parameters [52; 60]. However, prior works have two major limitations:
1. Most papers such as [52; 60; 61; 17] which claim to account for uncertainty, do not provide a principled _data-driven_ method to model (and estimate) uncertainty. They simply model the allocation problem as a stochastic program under the assumption that a scenario-set exists without outlining a principled procedure on how to generate or estimate this scenario-set from _data_. Clearly, this does not effectively solve the problem of uncertainty informed vaccine allocation.
2. The presence of a product term between the susceptible (S) and infected (I) populations is a key characteristic of most compartmentalized epidemiological ODE models [7]. Due to this non-linearity, the resource allocation problem with the discretized ODEs results in a non-convex quadratic program. This is difficult to solve even in the nominal case, i.e. without accounting for uncertainty, let alone in the uncertainty-informed case. To avoid the product term, previous papers [52, 60, 61, 62] resort to using simple (linear) epidemiological models so that the discretized ODEs result in a linear program which is easy to solve. Such linear models are limited in their ability to capture the true underlying non-linear dynamics of disease transmission; hence the resulting allocation strategies are not globally optimal.
In this work, we address both of the above limitations by making following novel contributions:
1. We make progress in resolving the issue of incorporating parameter uncertainty in the resource (vaccine) allocation problem in a _data-driven manner_. We do this by making connections with the ODE parameter estimation literature with Bayesian inference using GPs with gradient matching methods. We show that the posterior-distributions can be used to represent uncertainty through a tractable scenario-set by formulating the scenario-reduction problem as an optimal-transport problem. This optimal transport problem can be easily solved using k-means clustering [48].
2. We provide a novel formulation for the uncertainty-informed vaccine allocation problem as a stochastic optimization problem. We develop technical results for the feasibility and decomposability of this stochastic program. Using these technical results, we develop a _parallelized_, scalable iterative solution algorithm to solve the stochastic program while retaining the original _non-linear_, continuous-time ODE model constraints. Due to this ODE _model agnostic_ nature of our approach, we are also able to account for different levels of mobility within different sub-populations and the temporal variations in the onset of the pandemic in each of these sub-populations.
3. We provide extensive empirical results on two different ODE models (i.e. the SEIR and the SEPIHR models) in sections 3.1, 4, 6 and Supplementary Information (SI). Our results demonstrate that with optimal vaccine allocation, peak infections can be reduced by around 35%. More importantly, a further gain of around **4** to **8**% can be achieved when incorporating uncertainty.
## 2 Epidemiological Modelling and Pitfalls of Classical Parameter estimation
Mathematical modelling of pandemics (including epidemics) has an extensive literature going back to the 1960s [11, 6]. A fairly recent and concise overview can be found in [10]. Throughout the literature, modelling the spread of different diseases using a compartmentalized model through a set of time-dependent ordinary differential equations (ODEs) is common and widely used [9]. Following the recent literature on covid-19 [35, 1, 16, 19] we also adopt the compartmentalized modelling approach.
A popular epidemiological model which we use is the SEIR model, shown in fig. 1. In this model the entire population (of size N) is divided into four states: Susceptible (S), Exposed (E), Infected (I) and Recovered (R). The evolution of each state or the system dynamics is governed by equations in (1).
\[\left.\begin{array}{l}\dfrac{dS(t)}{dt}=-\alpha\,\dfrac{S(t)\,I(t)}{N}\\[6pt] \dfrac{dE(t)}{dt}=\alpha\,\dfrac{S(t)\,I(t)}{N}-\beta\,E(t)\\[6pt] \dfrac{dI(t)}{dt}=\beta\,E(t)-\gamma\,I(t)\\[6pt] \dfrac{dR(t)}{dt}=\gamma\,I(t)\end{array}\right\} \tag{1}\]
In the remainder of this section, we describe the commonly used non-linear least squares approach for ODE parameter estimation and its associated pitfalls, thus providing the motivation for adopting the Bayesian viewpoint.
### Classical Parameter Estimation: Non-linear Least Squares (NLLS)
Before describing the NLLS approach, we briefly describe the _initial-value problem_ (IVP) in the context of ODEs. For a given (or fixed) set of parameter values and initial conditions (denoted \(\mathbf{x}_{0}\)), a system of ODEs can be numerically solved using an off-the-shelf ODE solver such as ode45 in MATLAB or odeint in Python. The solved system (also referred to as a simulation) provides the values (or estimates) of the different states at the specified time-stamps.
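As a concrete example, the IVP for the SEIR model can be solved with SciPy's odeint; the sketch below assumes the parameter roles of the standard SEIR model in (1) (\(\alpha\): transmission, \(\beta\): incubation, \(\gamma\): recovery) and uses illustrative initial conditions:

```python
import numpy as np
from scipy.integrate import odeint

def seir_rhs(x, t, alpha, beta, gamma, N):
    """Right-hand side of the SEIR ODEs in equation (1)."""
    S, E, I, R = x
    dS = -alpha * S * I / N
    dE = alpha * S * I / N - beta * E
    dI = beta * E - gamma * I
    dR = gamma * I
    return [dS, dE, dI, dR]

# Solve the initial-value problem for given parameters and initial conditions x0.
N = 1_000_000
x0 = [N - 10, 0, 10, 0]                       # S(0), E(0), I(0), R(0)
t = np.arange(0, 121)                         # daily time stamps
states = odeint(seir_rhs, x0, t, args=(0.9, 0.08, 0.1, N))
```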
For a given set of parameters, using the estimated state values obtained by solving the IVP and the time-series data, the discrepancy or least-squares error can be computed. This can be turned into an optimization problem where we want to find those values of the model parameters for which the least-squares error is minimized. L-BFGS is commonly used to solve such problems [35]. Mathematically, for the SEIR model, the NLLS problem can be written as follows:
\[\min_{\alpha,\beta,\gamma}\ \sum_{t=1}^{N}\Bigl{(}(y_{R}^{t}-R(t))^{2}+(y_{I}^{t}-I(t))^{2}\Bigr{)}\qquad\text{s.t.}\;\;\text{ODEs in (1)}\;\;\forall\;t\in\{1,\ldots,N\}\ \text{ and }\ [S(0),E(0),I(0),R(0)]=\mathbf{x}_{0}\]
where \(y_{R}^{t}\) and \(y_{I}^{t}\) denote the count data for removed and infected individuals at time \(t\), respectively. The optimal parameters obtained after solving the NLLS problem can then be used to re-solve the ODE system to make predictions for future times as well.
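Continuing the SEIR sketch above, the NLLS problem can be solved with L-BFGS-B as follows (a sketch; `y_I` and `y_R` are hypothetical arrays of observed counts, and the initial guess and bounds are illustrative):

```python
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import minimize

def nlls_loss(theta, t, y_I, y_R, x0, N):
    """Least-squares discrepancy between simulated and observed I and R counts."""
    sim = odeint(seir_rhs, x0, t, args=(*theta, N))       # reuses seir_rhs from the sketch above
    I_sim, R_sim = sim[:, 2], sim[:, 3]
    return np.sum((y_I - I_sim) ** 2) + np.sum((y_R - R_sim) ** 2)

# y_I, y_R: hypothetical arrays of observed infected / removed counts at the times in t.
theta0 = [0.5, 0.1, 0.1]                                   # illustrative initial guess
res = minimize(nlls_loss, theta0, args=(t, y_I, y_R, x0, N),
               method="L-BFGS-B", bounds=[(1e-3, 3.0)] * 3)
alpha_hat, beta_hat, gamma_hat = res.x
```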
**Why account for Uncertainty?** The NLLS approach discussed above can provide sufficiently reliable _point estimates_ of the parameter values and predictions of new cases into the future, provided the _time-series data is accurate_. The resource (vaccine) allocation problem is then solved using these point estimates. The efficacy of the overall allocation solution in the real world is highly dependent on the accuracy of the predicted point estimates, which are only as good as the data from which these estimates are generated. For Covid-19, the data reported by various private organizations and government agencies can be severely biased, under-reported [33] and erroneous due to numerous reasons [4]. Reliance on these point estimates can result in severe region-wide inefficiencies. To address these issues and also account for potential modelling errors, we incorporate _uncertainty_ through Bayesian inference to estimate the _joint-distribution_ of ODE model parameters from data, which we discuss next.
## 3 Bayesian Parameter Estimation
Bayesian inference for estimating ODE parameters has been well studied in the literature [44]; however, the absence of a closed-form posterior and the requirement of solving the ODE system in each sampling iteration make inference difficult. To overcome this limitation, [14] proposed the use of Gaussian Processes (GPs) to model the evolution of a state over time, exploiting the fact that the derivative of a Gaussian process is also a Gaussian process. This significantly helps in achieving tractability and allows Bayesian inference to be computationally feasible. Following [14], numerous other related works like [22; 5; 38; 41; 26; 59; 58] have been proposed which also employ GPs to efficiently estimate the parameters of a non-linear ODE system (e.g., the SEIR model). We discuss some of these works, in particular the approach of [59], which is useful to our problem setting.
Consider a set of \(K\) time-dependent states denoted as \(\mathbf{x}(t)=[x_{1}(t),\ldots,x_{K}(t)]^{T}\). The evolution of each of these \(K\) state over time is defined by a set of \(K\) time-dependent arbitrary differential equations denoted as follows:
\[\dot{x}_{i}(t)=\frac{dx_{i}(t)}{dt}=\text{f}_{i}(x(t),\theta,t)\quad\forall \quad i\in\{1,\ldots,K\} \tag{3}\]
where the functional form of \(\text{f}_{i}\) is known (for eg. SEIR model). Noisy observations (i.e. the time-series data) of each of the \(K\) states (denoted \(\mathbf{y}(t)=[y_{1}(t),\ldots,y_{K}(t)]^{T}\)) at \(N\) different time points where \(t_{1}<\cdots<t_{N}\) are available, i.e.
\[\left.\begin{array}{c}y_{1}(t)=x_{1}(t)+\epsilon_{1}(t)\\ \vdots\\ y_{K}(t)=x_{K}(t)+\epsilon_{K}(t)\end{array}\right\}\;,\;\;\text{where }\epsilon_{i}(t)\sim\mathcal{N}(0,\sigma_{i}^{2}).\]
Let \(\boldsymbol{\epsilon}(t)=[\epsilon_{1}(t),\ldots,\epsilon_{K}(t)]^{T}\), then in vector notation we have \(\mathbf{y}(t)=\mathbf{x}(t)+\boldsymbol{\epsilon}(t)\). As there are \(N\) observations for each of the \(K\) states, for a clear exposition we introduce matrices of size \(K\times N\) as
follows: \(\mathbf{X}=[\mathbf{x}(t_{1}),\ldots,\mathbf{x}(t_{N})]\) and \(\mathbf{Y}=[\mathbf{y}(t_{1}),\ldots,\mathbf{y}(t_{N})]\). We can then write :
\[P(\mathbf{Y}|\mathbf{X},\sigma)=\prod_{k}\prod_{t}P(y_{k}(t)|x_{k}(t),\sigma)= \prod_{k}\prod_{t}\mathcal{N}(y_{k}(t)|x_{k}(t),\sigma^{2}) \tag{4}\]
[14] proposed placing a Gaussian process prior on \(\mathbf{x}_{k}\). Let \(\boldsymbol{\mu}_{k}\) and \(\boldsymbol{\phi}_{k}\) be the hyper-parameters of this Gaussian process, we can then write:
\[p(\mathbf{x}_{k}|\boldsymbol{\mu}_{k},\boldsymbol{\phi}_{k})=\mathcal{N}( \mathbf{x}_{k}|\boldsymbol{\mu}_{k},\mathbf{C}_{\phi_{k}}) \tag{5}\]
In (5), \(\mathbf{C}_{\phi_{k}}\) denotes the kernel (or covariance) matrix for a predefined kernel function with hyper-parameters \(\boldsymbol{\phi}_{k}\). As differentiation is a linear operator, the derivative of a Gaussian process is also a Gaussian process (see ch. 9 in [45] and [50]). Therefore, a Gaussian process is closed under differentiation, and the joint distribution of the state variables \(\mathbf{x}_{k}\) and their derivatives \(\dot{\mathbf{x}}_{k}\) is a multi-variate Gaussian distribution as follows:
\[\begin{bmatrix}\mathbf{x}_{k}\\ \dot{\mathbf{x}}_{k}\end{bmatrix}\sim\mathcal{N}\bigg{(}\begin{bmatrix}\boldsymbol{\mu}_{k}\\ \mathbf{0}\end{bmatrix},\begin{bmatrix}\mathbf{C}_{\phi_{k}}&\mathbf{C}^{\prime}_{\phi_{k}}\\ {}^{\prime}\mathbf{C}_{\phi_{k}}&\mathbf{C}^{\prime\prime}_{\phi_{k}}\end{bmatrix}\bigg{)} \tag{6}\]
where \(\mathbf{C}_{\phi_{k}}\) and \(\mathbf{C}_{\phi_{k}}^{\prime\prime}\) are the kernel matrices for the state \(\mathbf{x}_{k}\) and its derivative \(\dot{\mathbf{x}}_{k}\) respectively, while \(\mathbf{{}^{\prime}C}_{\phi_{k}}\) and \(\mathbf{C}_{\phi_{k}}^{\prime}\) are the cross-covariance kernel matrices between the states and their derivatives. The functional form of the entries of \(\mathbf{C}_{\phi_{k}},\mathbf{C}_{\phi_{k}}^{\prime\prime},\mathbf{{}^{\prime}C}_{\phi_{k}}\) and \(\mathbf{C}_{\phi_{k}}^{\prime}\) is provided in the SI. Importantly, this implies that using the Gaussian process defined on the state variables \(\mathbf{x}_{k}\), we can also make predictions about their derivatives \(\dot{\mathbf{x}}_{k}\). From (6), we can compute the conditional distribution of the state derivatives as:
\[p(\dot{\mathbf{x}}_{k}|\mathbf{x}_{k},\boldsymbol{\mu}_{k},\boldsymbol{\phi} _{k})=\mathcal{N}(\dot{\mathbf{x}}_{k}|\mathbf{m}_{k},\mathbf{A}_{k}) \tag{7}\]
where \(\mathbf{m}_{k}={}^{\prime}\mathbf{C}_{\phi_{k}}\mathbf{C}_{\phi_{k}}^{-1}(\mathbf{x}_{k}-\boldsymbol{\mu}_{k})\) and \(\mathbf{A}_{k}=\mathbf{C}_{\phi_{k}}^{\prime\prime}-{}^{\prime}\mathbf{C}_{\phi_{k}}\mathbf{C}_{\phi_{k}}^{-1}\mathbf{C}_{\phi_{k}}^{\prime}\). Note that \(p(\dot{\mathbf{x}}_{k}|\mathbf{x}_{k},\boldsymbol{\mu}_{k},\boldsymbol{\phi}_{k})\) corresponds to the second, i.e. GP, part of the graphical model in fig. 3.
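For illustration, the sketch below computes the blocks of (6) and the conditional moments \(\mathbf{m}_{k}\), \(\mathbf{A}_{k}\) of (7) for an RBF kernel; the kernel choice and hyper-parameter values are assumptions on our part, since the paper defers the exact functional forms to the SI:

```python
import numpy as np

def rbf_blocks(t, sf2=1.0, ell=1.0):
    """Covariance blocks between GP states and their time derivatives for an RBF kernel."""
    d = t[:, None] - t[None, :]                    # pairwise time differences
    K = sf2 * np.exp(-d**2 / (2 * ell**2))         # C_phi:   cov(x(t), x(t'))
    dK = -(d / ell**2) * K                         # 'C_phi:  cov(dx(t), x(t'))
    Kd = (d / ell**2) * K                          # C'_phi:  cov(x(t), dx(t'))
    ddK = (1.0 / ell**2 - d**2 / ell**4) * K       # C''_phi: cov(dx(t), dx(t'))
    return K, dK, Kd, ddK

def derivative_conditional(t, x, mu, sf2=1.0, ell=1.0, jitter=1e-6):
    """Mean m and covariance A of p(dx | x) as in equation (7)."""
    K, dK, Kd, ddK = rbf_blocks(t, sf2, ell)
    Kinv = np.linalg.inv(K + jitter * np.eye(len(t)))
    m = dK @ Kinv @ (x - mu)
    A = ddK - dK @ Kinv @ Kd
    return m, A
```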
Using the functional form of the ODE system in (3) and with state specific Gaussian additive noise \(\lambda_{k}\), we can write
\[p(\dot{\mathbf{x}}_{k}|\mathbf{X},\boldsymbol{\theta},\lambda_{k})=\mathcal{N }(\dot{\mathbf{x}}_{k}|\mathbf{f}_{k}(\mathbf{X},\boldsymbol{\theta}),\lambda _{k}\mathbf{I}) \tag{8}\]
where \(\mathbf{f}_{k}(\mathbf{X},\boldsymbol{\theta})=[\mathbf{f}_{k}(\mathbf{x}(t_{1 }),\boldsymbol{\theta}),\ldots,\mathbf{f}_{k}(\mathbf{x}(t_{N}),\boldsymbol{ \theta})]^{T}\). Note that (8) corresponds to the ODE part of the graphical model in the fig. 3.
The two models \(p(\dot{\mathbf{x}}_{k}|\mathbf{x}_{k},\boldsymbol{\mu}_{k},\boldsymbol{\phi} _{k})\) in (7) and \(p(\dot{\mathbf{x}}_{k}|\mathbf{X},\boldsymbol{\theta},\lambda_{k})\) in (8) are combined through two new random variables \(\mathbf{F}_{1}\) and \(\mathbf{F}_{2}\), resulting in the graphical model shown in fig. 4[59]. Considering a single state (for notational simplicity), for given values of \(\mathbf{x}\) and \(\boldsymbol{\theta}\), \(\mathbf{F}_{1}\) in fig. 4, represents the deterministic output of the ODEs, i.e. \(\mathbf{F}_{1}=\mathbf{f}(\boldsymbol{\theta},\mathbf{x})\). The value of \(p(\mathbf{F}_{1}|\boldsymbol{\theta},\mathbf{x})\) can be written using the Dirac-delta function (denoted \(\delta(\cdot)\)) as following:
\[p(\mathbf{F}_{1}|\boldsymbol{\theta},\mathbf{x})=\delta(\mathbf{F}_{1}- \mathbf{f}(\boldsymbol{\theta},\mathbf{x})) \tag{9}\]
If the GP model were able to capture both the true states and their derivatives perfectly, this would imply that \(\mathbf{F}_{1}\) is the same as \(\dot{\mathbf{x}}\), i.e. \(\mathbf{F}_{1}=\dot{\mathbf{x}}\). Clearly, this assumption is unlikely to hold; therefore, to account for any possible mismatch and small errors in the GP states and GP derivatives, this condition is relaxed so that:
\[\mathbf{F}_{1}=\dot{\mathbf{x}}+\epsilon=:\mathbf{F}_{2},\quad\text{ where}\quad\epsilon\sim\mathcal{N}(\mathbf{0},\lambda\mathbf{I}) \tag{10}\]
The above argument regarding the error in the states and derivatives of the GP model is captured in the graphical model (fig. 4) through the use of the random variable \(\mathbf{F}_{2}\). From a given state-derivative \(\dot{\mathbf{x}}\) obtained from the GP model, \(\mathbf{F}_{2}\) is obtained after the addition of Gaussian noise with standard deviation \(\lambda\). The probability density of \(\mathbf{F}_{2}\) can then be written as
\[p(\mathbf{F}_{2}|\dot{\mathbf{x}},\lambda)=\mathcal{N}(\mathbf{F}_{2}|\dot{ \mathbf{x}},\lambda\mathbf{I})) \tag{11}\]
Note that the equality constraint in (10) is encoded in the graphical model using an un-directed edge between \(\mathbf{F}_{1}\) and \(\mathbf{F}_{2}\). For the purpose of inference, this equality constraint is incorporated in the joint density via the Dirac-delta function, i.e. \(\delta(\mathbf{F}_{1}-\mathbf{F}_{2})\). The joint-density of the whole graphical model (fig. 4) is given as:
\[p(\mathbf{x},\dot{\mathbf{x}},\mathbf{y},\mathbf{F}_{1},\mathbf{F}_{2}, \boldsymbol{\theta}|\phi,\sigma,\lambda)= p(\boldsymbol{\theta})p(\mathbf{x}|\phi)p(\dot{\mathbf{x}}| \mathbf{x},\phi)p(\mathbf{y}|\mathbf{x},\sigma)p(\mathbf{F}_{1}|\boldsymbol{ \theta},\mathbf{x})p(\mathbf{F}_{2}|\dot{\mathbf{x}},\lambda\mathbf{I})\delta( \mathbf{F}_{1}-\mathbf{F}_{2}) \tag{12}\]
Finally, the marginal distribution of \(\mathbf{x},\boldsymbol{\theta}\) takes the following form:
\[p(\mathbf{x},\boldsymbol{\theta}|\mathbf{y},\boldsymbol{\phi},\sigma,\lambda)=p( \boldsymbol{\theta})\times\mathcal{N}(\mathbf{x}|\boldsymbol{\mu},\mathbf{C}_{ \phi})\times\mathcal{N}(\mathbf{y}|\mathbf{x},\sigma^{2}\mathbf{I})\times \mathcal{N}(\mathbf{f}(\mathbf{x},\boldsymbol{\theta})|\mathbf{m},\mathbf{A}+ \lambda\mathbf{I}) \tag{13}\]
Figure 4: Combined model.
### Empirical Sampling Results
We now provide sampling results on the two disease-transmission ODE models: 1) the SEIR model (fig. 2) and 2) the SEPIHR model (fig. 2).
**SEIR:** Using \(\alpha=0.9,\beta=0.08\) and \(\gamma=0.1\) (as true parameter values) we simulate the SEIR model (eq. (1)) to get the state values. We add zero-mean Gaussian noise with \(\sigma=0.1\) to each of the simulated state values to generate our dataset. Using the data only for the first 15 days (\(T=\{1,\dots,15\}\)), we estimate the GP hyper-parameters for the states using maximum likelihood (see SI for details). We then run the Metropolis-Hastings MCMC sampling procedure using the density from eq. (13) to get our empirical posterior joint-distribution on \(\alpha,\beta\) and \(\gamma\). After removing the burn-in samples, for the remaining \(3\times 10^{5}\) samples, we plot the marginal distributions along with their mean and mode in fig. 5.
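A minimal sketch of the random-walk Metropolis-Hastings loop used here, assuming a `log_density` callable that evaluates the logarithm of (13); the proposal step sizes are illustrative:

```python
import numpy as np

def metropolis_hastings(log_density, theta0, x0, n_samples=300_000,
                        step_theta=0.02, step_x=0.05, rng=None):
    """Random-walk MH over (theta, x) targeting log p(x, theta | y, phi, sigma, lambda)."""
    rng = rng or np.random.default_rng()
    theta, x = np.array(theta0, float), np.array(x0, float)
    logp = log_density(x, theta)
    samples = []
    for _ in range(n_samples):
        theta_new = theta + step_theta * rng.standard_normal(theta.shape)
        x_new = x + step_x * rng.standard_normal(x.shape)
        logp_new = log_density(x_new, theta_new)
        if np.log(rng.uniform()) < logp_new - logp:       # accept / reject
            theta, x, logp = theta_new, x_new, logp_new
        samples.append(theta.copy())
    return np.array(samples)                              # posterior samples of the ODE parameters
```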
**SEPIHR:** This model has 5 parameters, i.e. \(\alpha,\beta,\delta_{1},\gamma_{1}\) and \(\gamma_{2}\), for which the joint-distribution is to be estimated from data (see SI for model details). We use \(\alpha=1.1,\beta=0.08,\delta_{1}=0.01,\delta_{2}=0.002,\delta_{3}=0.002,\gamma_{1}=0.1,\gamma_{2}=0.1\) and \(\gamma_{3}=0.06\) as the true parameter values. Using the data for only the first 15 days, we follow the same sampling procedure as described previously for the SEIR model. The marginal distributions along with their mean and mode are shown in fig. 6.
We note that the mode is very close to the true values in both models, thus validating the capability of the sampling procedure to correctly estimate the parameter values.
**Related literature:** Before concluding this section, we briefly mention related literature. Variational inference (VI) based approach of [26] provides improvements over [22], however due to modelling assumptions is not suited for our work. The optimization based gradient matching approaches of [44, 36, 25, 41, 58] and others like [27] only provide point-estimates. The generative modelling approach of [5] suffers from identifiablity issues as explained by [38]. Approaches with different sampling methods would include [34, 43, 30, 12, 13] and approximation based methods would include [55, 2, 20]. Other VI based methods would include [46, 24]. Probabilistic numerics [29] based methods include [56] and [32]. [53] showed the use of probabilistic integrators for ODEs in parameter estimation. [15, 57, 31, 28] are other useful references.
## 4 Tractable Scenario-set Construction
In section 3.1, we obtained an empirical joint distribution on the SEIR parameters \(\alpha,\beta,\gamma\) in the form of \(3\times 10^{5}\) samples. Each of these samples represents a real-world scenario and will be used to represent uncertainty through the scenario-set in the vaccine allocation stochastic optimization formulation (discussed in the next section). However, working with such a large number of samples is computationally prohibitive; therefore, we first discuss how to reduce the number of these samples (or scenarios) while still correctly representing the joint-distribution. We introduce some mathematical preliminaries:
Let \(\mathbb{P}=\sum_{i\in I}p_{i}\delta_{\xi_{i}}\), where \(\xi_{i}\in\mathbb{R}^{d}\) represents the location and \(p_{i}\in[0,1]\) represents the probability of the \(i\)-th scenario in \(\mathbb{P}\), where \(i\in I=\{1,\dots,n\}\). Let \(\mathbb{Q}\) represents the target distribution. \(\mathbb{Q}=\sum_{j\in J}q_{j}\delta_{\zeta_{j}}\), where \(\zeta_{j}\in\mathbb{R}^{d}\) represents the location and \(q_{j}\in[0,1]\) represents the probability of the \(j\)-th scenario in \(\mathbb{Q}\), where \(j\in J=\{1,\dots,m\}\).
Figure 6: SEPIHR: Empirical distribution after \(3\times 10^{5}\) samples from the MCMC sampling procedure.
We can now define the type-I Wasserstein distance between original \(\mathbb{P}\) and target \(\mathbb{Q}\) as following:
\[d_{l}(\mathbb{P},\mathbb{Q})=\min_{\pi\in\mathbb{R}_{+}^{n\times m}}\Big{(}\sum_{i\in I}\sum_{j\in J}\pi_{ij}||\xi_{i}-\zeta_{j}||^{l}\Big{)}^{1/l}\qquad\text{s.t.}\;\;\sum_{j\in J}\pi_{ij}=p_{i}\;\;\forall\,i\in I\ ;\ \sum_{i\in I}\pi_{ij}=q_{j}\;\;\forall\,j\in J \tag{14}\]
where \(l\geq 1\). The linear program (14) corresponds to the min-cost transportation problem, where \(\pi_{ij}\) represents the amount of mass moved from location \(\xi_{i}\) to \(\zeta_{j}\) and \(||\xi_{i}-\zeta_{j}||^{l}\) represents the associated cost incurred in moving unit mass.
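For small instances, (14) can be solved directly as a linear program, e.g. with SciPy's `linprog`; the sketch below is illustrative only (in practice, the k-means equivalence discussed next is used for tractability):

```python
import numpy as np
from scipy.optimize import linprog

def wasserstein_lp(xi, p, zeta, q, l=2):
    """Type-l Wasserstein distance between two discrete distributions via the transport LP.

    xi: (n, d) source locations with weights p; zeta: (m, d) target locations with weights q.
    """
    n, m = len(xi), len(zeta)
    cost = np.linalg.norm(xi[:, None, :] - zeta[None, :, :], axis=-1) ** l   # (n, m) unit costs
    A_eq = np.zeros((n + m, n * m))
    for i in range(n):
        A_eq[i, i * m:(i + 1) * m] = 1.0          # mass leaving source i equals p_i
    for j in range(m):
        A_eq[n + j, j::m] = 1.0                   # mass arriving at target j equals q_j
    b_eq = np.concatenate([p, q])
    res = linprog(cost.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
    return res.fun ** (1.0 / l)
```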
Let \(\mathcal{P}_{u}(\mathcal{R},n)\) denote the set of all _uniform_ discrete distributions with exactly \(n\) scenarios on any given space \(\mathcal{R}\) where \(\mathcal{R}\subseteq\mathbb{R}^{d}\). Similarly, let \(\mathcal{P}(\mathcal{R},m)\) denote the set of all discrete distributions (not necessarily uniform) with _at-most_\(m\) scenarios.
Let the discrete probability distribution over parameters (for eg: SEIR \(\alpha,\beta,\gamma\)) obtained using sampling be denoted as \(\hat{\mathbb{P}}\), where \(\hat{\mathbb{P}}\) belongs to the set \(\mathcal{P}_{u}(\mathcal{R},n)\), i.e. \(\hat{\mathbb{P}}\in\mathcal{P}_{u}(\mathcal{R},n)\), where \(\mathcal{R}\) is determined by the upper and lower bounds on \(\alpha,\beta,\gamma\) and \(n=3\times 10^{5}\).
The scenario reduction problem can now be defined as:
\[\mathcal{CSR}(\hat{\mathbb{P}},m)=\min_{\mathbb{Q}\in\mathcal{P}(\mathcal{R},m )}d_{l}(\hat{\mathbb{P}},\mathbb{Q}) \tag{15}\]
Solving \(\mathcal{CSR}\) defined in (15) is not easy for an arbitrary \(l\). Fortunately, for our purposes, the \(\mathcal{CSR}\) problem is the same as the k-means clustering problem with \(m=\text{k}\) and \(l=2\)[48]. Clustering distributes the set of \(n\) points in \(I\) with locations \(\xi_{i}\) into k mutually exclusive subsets \(I_{1},\ldots,I_{k}\). The \(\zeta_{j}\)'s (also known as centroids) and their associated probabilities \(q_{j}\) of the target distribution \(\mathbb{Q}\) are given by:
\[\zeta_{j}=\frac{1}{|I_{j}|}\sum_{i\in I_{j}}\xi_{i}\ ,\qquad q_{j}=\sum_{i\in I_{j}}p_{i}=\frac{|I_{j}|}{n}\qquad\forall\ j\in\{1,\ldots,\text{k}\} \tag{16}\]
Although k-means clustering is theoretically known to be NP-hard [39, 3], empirically a high-quality solution can easily be obtained using Lloyd's algorithm [37], also commonly known as the k-means clustering algorithm. We run k-means clustering on the original \(3\times 10^{5}\) samples with \(\text{k}=3\times 10^{4}\) to get a 10x reduction. For the SEIR model, the marginal distributions of the resulting distribution (denoted \(\hat{\mathbb{Q}}\)) are shown in fig. 7. We observe that the reduced target distribution is very close to the original distribution. Using \(\hat{\mathbb{Q}}\), we can now construct our scenario-set (denoted \(\Omega\)) as:
\[\Omega=\big{\{}(\alpha^{j},\beta^{j},\gamma^{j},p_{j})\ \forall\ j\in\{1,\ldots, \text{k}\}\big{\}} \tag{17}\]
where \((\alpha^{j},\beta^{j},\gamma^{j})=\hat{\zeta}_{j}\) and \(p_{j}=\hat{q}_{j}\). As \(\text{k}=3\times 10^{4}\), \(\Omega\) also has \(3\times 10^{4}\) scenarios, i.e. \(|\Omega|=3\times 10^{4}\). We will henceforth use this \(\Omega\) as our scenario set. The mode of \(\hat{\mathbb{Q}}\) corresponds to the nominal estimate of \(\alpha,\beta\) and \(\gamma\). Similarly, the result for the SEPIHR model (fig. 6) is shown in fig. 8. Experiments with values of k other than \(\text{k}=3\times 10^{4}\), for both models, are provided in the SI.
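A minimal sketch of this scenario-reduction step with scikit-learn (for such a large k, `MiniBatchKMeans` may be preferable in practice; all parameter values are illustrative):

```python
import numpy as np
from sklearn.cluster import KMeans

def reduce_scenarios(samples, k=30_000, seed=0):
    """Cluster MCMC samples (n, d) into k scenarios with locations zeta_j and weights q_j."""
    km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(samples)
    zeta = km.cluster_centers_                              # scenario locations
    counts = np.bincount(km.labels_, minlength=k)
    q = counts / counts.sum()                               # scenario probabilities |I_j| / n
    return [tuple(zeta[j]) + (q[j],) for j in range(k)]     # (alpha_j, beta_j, gamma_j, p_j), as in (17)
```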
## 5 Optimal Vaccine Allocation Formulation and Solution Algorithm
We now work towards formulating our optimization problem for vaccine allocation. Our goal is to allocate vaccines (on a daily basis) to a set of \(\mathbb{K}\) sub-populations, such that the maximum number of total infections is minimized. This objective ensures that the peak of the pandemic is minimised as much as possible in order to reduce the burden on the healthcare services particularly medical
Figure 8: SEPIHR: MCMC+k-means denotes the empirical distribution with \(3\times 10^{4}\) samples obtained after doing k-means clustering on the original \(3\times 10^{5}\) MCMC samples shown in fig. 6.
Figure 7: SEIR: MCMC+k-means denotes the empirical distribution with \(3\times 10^{4}\) samples obtained after doing k-means clustering on the original \(3\times 10^{5}\) MCMC samples.
personnel at the height of the pandemic. The \(\mathbb{K}\) sub-populations correspond to different geographical regions such as nearby cities in a state. Let \(\mathcal{K}=\{1,\ldots,\mathbb{K}\}\).
The spread of disease in each sub-population is modeled using a separate SEIR model. To account for the vaccinated individuals, the SEIR model in fig. 1 is updated with a new compartment (denoted by M) to represent the immune population, and the updated model (fig. 9) is denoted by SEIRM. Let \(V_{k}(t)\) represent the number of people vaccinated at time \(t\) in the \(k\)-th sub-population and \(\eta\) be the efficacy of the vaccine; then the ODEs corresponding to the SEIRM model of the \(k\)-th sub-population are given by eq. (18).
(18)
We also consider two important features of disease transmission. First, due to mobility, there is contact between infected individuals of one sub-population and susceptible individuals of another sub-population. Second, due to different levels of mobility between sub-populations, the onset of the pandemic generally varies across sub-populations. Both of these are accounted for in the updated states \(S_{k}\) and \(E_{k}\) in eq. (18), where \(\lambda_{r}^{k}\) denotes the mobility level from sub-population \(r\) to sub-population \(k\) and \(u_{k}(t)\) corresponds to a sigmoid function, \(u_{k}(t)\coloneqq 1/(1+e^{-c_{1}^{k}(t-c_{2}^{k})})\), with parameters \(c_{1}^{k}\) and \(c_{2}^{k}\). In particular, \(c_{2}^{k}\) controls the onset of the pandemic in the \(k\)-th sub-population; therefore, we also account for uncertainty in \(c_{2}^{k}\)\(\ \forall\ k\in\mathcal{K}\) by appropriately extending the scenario set \(\Omega\) (for details see SI).
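Since equation (18) is not reproduced here, the following is only a plausible sketch of the \(k\)-th sub-population's SEIRM right-hand side under the stated modelling choices (mobility-weighted cross-infection via \(\lambda_{r}^{k}\), onset gating via \(u_{k}(t)\), and vaccination moving \(\eta V_{k}(t)\) individuals from S to M); the exact coupling and normalization used in the paper may differ:

```python
import numpy as np

def seirm_rhs_k(x_k, t, k, I_all, V, params, lam, c1, c2, N_k, eta):
    """Plausible SEIRM dynamics for sub-population k (a sketch, not the paper's exact eq. (18)).

    I_all: current infected counts of all sub-populations (would come from the joint state).
    V[k]:  callable returning the vaccines allocated to sub-population k at time t.
    """
    S, E, I, R, M = x_k
    alpha, beta, gamma = params
    u_k = 1.0 / (1.0 + np.exp(-c1[k] * (t - c2[k])))            # onset of the pandemic in k
    # Effective infectious pressure: own infected plus mobility-weighted infected of other groups.
    I_eff = I + sum(lam[r][k] * I_all[r] for r in range(len(I_all)) if r != k)
    new_exposed = alpha * u_k * S * I_eff / N_k
    vaccinated = eta * V[k](t)                                  # effective vaccinations at time t
    dS = -new_exposed - vaccinated
    dE = new_exposed - beta * E
    dI = beta * E - gamma * I
    dR = gamma * I
    dM = vaccinated
    return [dS, dE, dI, dR, dM]
```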
Let \(\mathcal{T}=\{1,\ldots,T\}\), denote the simulation time period, \(\mathcal{T}_{v}=\{t_{s},\ldots,t_{l}\}\) denote the vaccination time-period where \(t_{s}\) and \(t_{l}\) are the first and last vaccination days, such that \(\mathcal{T}_{v}\subseteq\mathcal{T}\). We can now write the _nominal (or non-stochastic)_ optimization problem (denoted \(\mathcal{NF}\)) for vaccine allocation as (19), where \(B_{t}\) in (19e) denotes the total daily vaccine budget for all \(\mathbb{K}\) sub-populations and \(U_{t}^{k}\) in (19f) denotes the vaccine budget for \(k\)-th sub-population. Equations (19b) represent the ODE constraints, (19c) & (19d) together computes the maximum (or peak) infection of the total population (denoted \(\mathcal{I}\)) and (19a) minimizes the peak infection.
We now provide the **uncertainty-informed**, i.e. stochastic, counterpart (denoted \(\mathcal{SF}\)) of the nominal problem \(\mathcal{NF}\) in (20), where \(\Omega\) denotes the scenario-set (recall (17)). Each state S, E, I, R, M in the ODE constraints in (20b) now has an associated superscript \(\omega\) corresponding to that scenario, \(\mathcal{I}_{\omega}\) denotes the peak infection for scenario \(\omega\), and (20a) computes the expected peak infection over all scenarios. The vaccine budget constraints in (20c) remain the same as in \(\mathcal{NF}\).
\[\mathcal{NF}:\ \min_{V}\ \ \mathcal{I}\qquad\text{(Nominal)}\tag{19a}\]
\[\text{s.t.}\quad\text{SEIRM ODE constraints (18)}\ \ \forall\ k\in\mathcal{K},\ t\in\mathcal{T}\tag{19b}\]
\[I(t)=\sum_{k\in\mathcal{K}}I_{k}(t)\ \ \forall\ t\in\mathcal{T}\tag{19c}\]
\[\mathcal{I}\geq I(t)\ \ \forall\ t\in\mathcal{T}\tag{19d}\]
\[\sum_{k\in\mathcal{K}}V_{k}(t)\leq B_{t}\ \ \forall\ t\in\mathcal{T}_{v}\tag{19e}\]
\[0\leq V_{k}(t)\leq U_{t}^{k}\ \ \forall\ k\in\mathcal{K},\ t\in\mathcal{T}_{v}\tag{19f}\]
\[\mathcal{SF}:\ \min_{V}\ \ \sum_{\omega\in\Omega}p_{\omega}\,\mathcal{I}_{\omega}\qquad\text{(Stochastic)}\tag{20a}\]
\[\text{s.t.}\quad\text{scenario-wise SEIRM ODE constraints, with }\mathcal{I}_{\omega}\geq\sum_{k\in\mathcal{K}}I_{k}^{\omega}(t)\ \ \forall\ t\in\mathcal{T},\ \omega\in\Omega\tag{20b}\]
\[\text{vaccine budget constraints as in (19e)--(19f)}\tag{20c}\]
Theorem 5.2 holds because, due to the budget constraints (20c), \(V_{k}(t)\) is non-negative and finite. Thus, the existence and uniqueness of a solution to the ODEs in (20b) is guaranteed and can be shown analytically using the Picard-Lindelof theorem with appropriate initial conditions [18, 49, 51].
**Lemma 5.3**.: _Decomposability w.r.t \(\Omega\): For a given (fixed) vaccine policy \(\mathcal{V}\), the ODE constraints in (20b) become decomposable, i.e. the set of ODE constraints in scenario \(\omega_{i}\) can be solved independently of the set of ODE constraints in scenario \(\omega_{j}\ \forall j\in\Omega\setminus i\)._
Lemma (5.3) follows from the fact that for a given scenario (say \(\omega_{i}\)), the constraints in (20b) require only the parameters corresponding to scenario \(\omega_{i}\). This has major computational implications, as it allows for parallel evaluation of the scenarios in \(\Omega\). Due to the additive nature of the objective function (20a) w.r.t. \(\Omega\), we can compute the objective function value after the parallel computation of the scenarios. Therefore, we can efficiently solve \(\mathcal{SF}\) using the iterative heuristic-based optimization procedure described in Algorithm 1. Details on the heuristics are provided in the SI.
```
1:Randomly sample a batch of Vaccine policies of size \(B\), i.e. \(\bar{\mathcal{V}}_{0}=\{\mathcal{V}_{1},\ldots,\mathcal{V}_{B}\}\) and set \(i=0\).
2:while\(i\leq N_{\text{opt}}\)do
3:for\(k\gets 1\) to \(B\)do
4: Evaluate constraint violation (denoted \(C_{k}\)) of \(\bar{\mathcal{V}}_{i}[k]\) using (20c).
5: In parallel, simulate all \(|\Omega|\) scenarios for \(\bar{\mathcal{V}}_{i}[k]\) using an ODE solver to compute \(\mathcal{I}_{\omega}\).
6: Compute \(f^{k}_{obj}\)(20a): \(f^{k}_{obj}\leftarrow\sum_{\omega\in\Omega}p_{\omega}\mathcal{I}_{\omega}\)
7:endfor
8: Update the batch of vaccine policies \(\bar{\mathcal{V}}_{i}\) with heuristic rules using \(\{f^{1}_{obj},\ldots,f^{B}_{obj}\}\) and \(\{C_{1},\ldots,C_{B}\}\) to generate next batch of vaccine policies \(\bar{\mathcal{V}}_{i+1}\).
9:\(i\gets i+1\)
10:endwhile
11:return feasible vaccine policy \(\mathcal{V}\) with lowest \(f_{obj}\).
```
**Algorithm 1** Optimization procedure to solve \(\mathcal{NF}\) or \(\mathcal{SF}\)
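A sketch of steps 4-6 of Algorithm 1, exploiting Lemma 5.3 to evaluate the scenarios in parallel; `simulate_scenario` is an assumed, picklable top-level function that integrates the ODEs of (20b) for one scenario and returns its peak infection \(\mathcal{I}_{\omega}\):

```python
from multiprocessing import Pool
import numpy as np

def evaluate_policy(policy, scenarios, simulate_scenario, budget_B, budget_U, n_workers=8):
    """Steps 4-6 of Algorithm 1 for one candidate vaccine policy of shape (K, |T_v|)."""
    # Budget violation from (20c): daily totals and per-sub-population caps.
    violation = np.maximum(policy.sum(axis=0) - budget_B, 0).sum() \
              + np.maximum(policy - budget_U, 0).sum()
    # By Lemma 5.3 the scenarios decouple for a fixed policy, so simulate them in parallel.
    with Pool(n_workers) as pool:
        peaks = pool.starmap(simulate_scenario, [(policy, omega) for omega in scenarios])
    probs = np.array([omega[-1] for omega in scenarios])    # p_omega from the scenario set (17)
    f_obj = float(np.dot(probs, np.array(peaks)))           # expected peak infection (20a)
    return f_obj, violation
```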
## 6 Experimental (Simulation) Results
We show the efficacy of our proposed approach on two different disease transmission models, i.e. the SEIR and the SEPIHR models. For all experiments we report the average of 5 runs. In addition to the experiments in this section, various other numerical experiments under different setups are provided in the SI.
Recall that in section 3.1, we have already discussed the details and sampling results for both the SEIR and SEPIHR models, including figures 5 & 6 respectively. Also, in section 4 (including fig. 7 & 8), we have discussed how to obtain a tractable scenario set \(\Omega\) using k-means to account for uncertainty in the vaccine allocation. Therefore, our main goal in this section is to show the benefit of incorporating uncertainty by comparing the vaccine allocation policy (denoted \(\mathcal{V}_{\mathcal{N}}\)) obtained from solving the nominal formulation \(\mathcal{NF}\) against the vaccine allocation policy (denoted \(\mathcal{V}_{\mathcal{S}}\)) obtained from solving the stochastic formulation \(\mathcal{SF}\). We also benchmark against a zero or no-vaccination policy (denoted \(\mathcal{V}_{\phi}\)), where \(\mathcal{V}_{\phi}=\{V_{k}(t)=0\ \forall\ t\in\mathcal{T},k\in\mathcal{K}\}\).
**SEIR model**: We use a total simulation time horizon of \(T=120\) days, vaccination period of 25 days starting on \(t_{s}=16\) and ending on \(t_{l}=40\) with daily available vaccine budgets \(B_{t}=24\times 10^{3}\) and \(U^{k}_{t}=10^{4}\). Importantly, note that in section 3.1 for parameter estimation, we used data only for first 15 days, i.e. \(T=\{1,\ldots,15\}\), thus maintaining consistency for real-world applicability. We perform experiments in two different settings, in the **first setting** we work with \(\mathbb{K}=3\) i.e. three sub-populations of sizes \(7.5\times 10^{5},5\times 10^{5}\) and \(10^{6}\) respectively and in the **second setting** we increase \(\mathbb{K}\) to \(\mathbb{K}=4\), with an additional sub-population of size \(6\times 10^{5}\). Numerical values of other parameters like \(\lambda^{k}_{r},c^{k}_{1},c^{k}_{2},\eta\) and additional experiments to evaluate their effect are provided in SI.
For each setting i.e. \(\mathbb{K}=3\) and \(\mathbb{K}=4\), using algorithm 1 and the nominal estimates of \(\alpha,\beta\) and \(\gamma\), we solve the \(\mathcal{NF}\) to get the nominal vaccine policy \(\mathcal{V}_{\mathcal{N}}\). Using the scenario-set \(\Omega\) (generated from the discrete-parameter distribution) we solve \(\mathcal{SF}\) to get the uncertainty-informed vaccine policy \(\mathcal{V}_{\mathcal{S}}\). We next evaluate the efficacy of all the three vaccine policies i.e. \(\mathcal{V}_{\phi},\mathcal{V}_{\mathcal{N}}\) and \(\mathcal{V}_{\mathcal{S}}\). For each of these policies, we simulate all the scenarios in the scenario-set \(\Omega\) and compute the expected values of all the states i.e. S,E,I,R and M over the time horizon \(T\).
The evolution of the infected state (I) of the _total population_ and the infected (I) and immuned (M) states of each _sub-population_ are shown in fig. 9(a) and 9(b) for \(\mathbb{K}=3\) and \(4\) respectively. We note that
for \(\mathbb{K}=3\) (fig 9(a)), the expected peak infection is reduced from around 501k (with no-vaccination i.e. \(\mathcal{V}_{\phi}\)) to 324k with nominal vaccination policy \(\mathcal{V}_{\mathcal{N}}\). This reduction of peak infection by 35.3% is expected due to vaccination. More importantly, we observe that with the stochastic vaccination policy \(\mathcal{V}_{\mathcal{S}}\) the peak infection is further reduced to around 308k, which is an improvement of around **4.9%** over \(\mathcal{V}_{\mathcal{N}}\) and 38.56% over \(\mathcal{V}_{\phi}\). This improvement of \(\mathcal{V}_{\mathcal{S}}\) over \(\mathcal{V}_{\mathcal{N}}\) by **4.9%** is also referred to as the _value of stochastic solution_ (VSS) or equivalently the benefit of accounting for uncertainty.
For \(\mathbb{K}=4\) (fig 9(b)), we observe that the peak infection with no-vaccine policy \(\mathcal{V}_{\phi}\) is around 653k, and is reduced to 393k with \(\mathcal{V}_{\mathcal{N}}\) and is further reduced to 361k with \(\mathcal{V}_{\mathcal{S}}\), i.e. \(\mathcal{V}_{\mathcal{S}}\) provides a reduction of around **8%** over \(\mathcal{V}_{\mathcal{N}}\). This higher VSS of **8%** for \(\mathbb{K}=4\) compared to **4.9%** for \(\mathbb{K}=3\) is due to the fact that the size of scenario set \(|\Omega|\) is directly proportional to the number of sub-populations \(\mathbb{K}\). Recall that we also account for the uncertainty in the onset of the pandemic in each sub-population through the parameter \(c_{2}^{k}\).
Note that since the immuned sub-population size is directly proportional to the vaccines allocated to that sub-population, the third figure in 9(a) and 9(b) also shows how many vaccines are allocated to each sub-population relative to each other. We observe that there is a clear difference between the nominal and the stochastic allocations. This significant difference in the nature of the vaccine policies explains the reductions of **4.9%** and **8%**, respectively, providing validity to our results in the sense that the reductions obtained are not simply due to minor numerical changes in solution values. We further discuss the differences between the two policies (\(\mathcal{V}_{\mathcal{N}}\) vs \(\mathcal{V}_{\mathcal{S}}\)) in the SI.
**SEPIHR model:** We next evaluate our approach on the SEPIHR model with additional states P (for protective quarantine) and H (for hospitalised quarantine). Importantly, as the number of hospitalisations (H) is modeled explicitly here, we minimize the peak (maximum) hospitalisations. For this model, the corresponding vaccine allocation optimization formulations (i.e. nominal and stochastic) are provided in the SI.
In fig. 11, we show the evolution of the infected (I) and hospitalised (H) states of the total population, along with the I, H and immuned (M) states for each of the 4 sub-populations. We note that the peak infections (I) for the three policies (i.e. \(\mathcal{V}_{\phi}\), \(\mathcal{V}_{\mathcal{N}}\) and \(\mathcal{V}_{\mathcal{S}}\)) are around 539k, 280k and 262k and the peak hospitalisations are around 16k, 9.4k and 9k respectively. Therefore, \(\mathcal{V}_{\mathcal{S}}\) provides a reduction of **6.3%** in peak infections (I) over \(\mathcal{V}_{\mathcal{N}}\) and a **4.4%** reduction in peak hospitalisations (H). Interestingly, from the fifth plot in fig. 11, we note that despite its largest size and the earliest onset of the pandemic, the red population is allocated the least vaccines. This can be explained by the fact that we aim to minimize the peak of the total population. In the SI, we provide more such sub-population level discussions on the differences in the nature of the optimal policies, including \(\mathcal{V}_{\mathcal{N}}\) vs \(\mathcal{V}_{\mathcal{S}}\).
The above results on SEIR and SEPIHR models clearly demonstrate the benefit of uncertainty-informed vaccine allocation using Bayesian inference over using nominal estimates. Our improvements of **4-8%** are either consistent with prior works in literature such as [60] or much better [54].
In SI, we provide more experiments under different setups for both the models. We also discuss the possible societal impact of our work.
Figure 11: **SEPIHR:** Evaluation of different policies: \(\mathcal{V}_{\phi}\), \(\mathcal{V}_{\mathcal{N}}\) and \(\mathcal{V}_{\mathcal{S}}\) with \(\mathbb{K}=4\) sub-populations.
Figure 10: **SEIR:** Evaluation of different vaccine policies i.e. no-vaccine \(\mathcal{V}_{\phi}\), nominal \(\mathcal{V}_{\mathcal{N}}\) and stochastic \(\mathcal{V}_{\mathcal{S}}\).
## 7 Concluding Remarks and Future Work
In this paper, we proposed an uncertainty informed vaccine allocation problem as a stochastic optimization problem, for which the tractable scenario-set is constructed in a novel data-driven manner using Bayesian inference for ODEs with GPs. We also proposed a scalable solution algorithm to solve the stochastic program and showed that a significant gain can be achieved by accounting for uncertainty. For future work, a natural extension would be to systematically investigate equity and fairness of allocation through additional constraints and different objective functions.
|
2303.17907 | Predictive Context-Awareness for Full-Immersive Multiuser Virtual
Reality with Redirected Walking | The advancement of Virtual Reality (VR) technology is focused on improving
its immersiveness, supporting multiuser Virtual Experiences (VEs), and enabling
users to move freely within their VEs while remaining confined to specialized
VR setups through Redirected Walking (RDW). To meet their extreme data-rate and
latency requirements, future VR systems will require supporting wireless
networking infrastructures operating in millimeter Wave (mmWave) frequencies
that leverage highly directional communication in both transmission and
reception through beamforming and beamsteering. We propose the use of
predictive context-awareness to optimize transmitter and receiver-side
beamforming and beamsteering. By predicting users' short-term lateral movements
in multiuser VR setups with Redirected Walking (RDW), transmitter-side
beamforming and beamsteering can be optimized through Line-of-Sight (LoS)
"tracking" in the users' directions. At the same time, predictions of
short-term orientational movements can be utilized for receiver-side
beamforming for coverage flexibility enhancements. We target two open problems
in predicting these two context information instances: i) predicting lateral
movements in multiuser VR settings with RDW, and ii) generating synthetic head
rotation datasets for training orientational movements predictors. Our
experimental results demonstrate that Long Short-Term Memory (LSTM) networks
feature promising accuracy in predicting lateral movements, and
context-awareness stemming from VEs further enhances this accuracy.
Additionally, we show that a TimeGAN-based approach for orientational data
generation can create synthetic samples that closely match experimentally
obtained ones. | Filip Lemic, Jakob Struye, Thomas Van Onsem, Jeroen Famaey, Xavier Costa Perez | 2023-03-31T09:09:17Z | http://arxiv.org/abs/2303.17907v4 | # Predictive Context-Awareness for Full-Immersive Multiuser Virtual Reality with Redirected Walking
###### Abstract
The advancement of Virtual Reality (VR) technology is focused on improving its immersiveness, supporting multiuser Virtual Experiences (VEs), and enabling users to move freely within their VEs while remaining confined to specialized VR setups through Redirected Walking (RDW). To meet their extreme data-rate and latency requirements, future VR systems will require supporting wireless networking infrastructures operating in millimeter Wave (mmWave) frequencies that leverage highly directional communication in both transmission and reception through beamforming and beamsteering. We propose the use of predictive context-awareness to optimize transmitter and receiver-side beamforming and beamsteering. By predicting users' short-term lateral movements in multiuser VR setups with Redirected Walking (RDW), transmitter-side beamforming and beamsteering can be optimized through Line-of-Sight (LoS) "tracking" in the users' directions. At the same time, predictions of short-term orientational movements can be utilized for receiver-side beamforming for coverage flexibility enhancements. We target two open problems in predicting these two context information instances: i) predicting lateral movements in multiuser VR settings with RDW, and ii) generating synthetic head rotation datasets for training orientational movements predictors. Our experimental results demonstrate that Long Short-Term Memory (LSTM) networks feature promising accuracy in predicting lateral movements, and context-awareness stemming from VEs further enhances this accuracy. Additionally, we show that a TimeGAN-based approach for orientational data generation can create synthetic samples that closely match experimentally obtained ones.
Full-immersive multiuser Virtual Reality, predictive context-awareness, Recurrent Neural Network, Generative Adversarial Network, redirected walking
## I Introduction
The utilization of Virtual Reality (VR) technology is transforming digital experiences and interactions of various communities [1]. To improve the immersiveness of Virtual Experiences (VEs), VR setups and content are continually being upgraded. Research efforts are primarily focused on enhancing the quality of VEs provided to the users [2], and facilitating its wireless delivery without mobility constraints, also known as "cutting the wire" [3]. Additionally, enabling multiuser experiences that allow the users to collaborate and have their actions affect the VEs of others is an important goal [4].
In the future, VR systems will have the capability to accommodate multiple users who can fully engage in immersive VEs without being limited by mobility. This advanced functionality will be made possible by high-frequency wireless networks primarily operating in the millimeter Wave (mmWave) band, ranging from 30 to 300 GHz [5]. To provide mobile VR users with high-quality content in real-time, the wireless communication that supports these systems will need to be highly directional in both transmission and reception [6].
Directional mmWave beams will follow users' movements during transmission to maintain Line-of-Sight (LoS) connectivity with them. Meanwhile, Redirected Walking (RDW) will be utilized to prevent physical collisions between the users and VR setup boundaries or other users [4]. RDW enables the users to explore VEs freely while subtly redirecting their physical movements for collision avoidance, thus enhancing immersion. Short-term lateral movement prediction of the users can be used to support continuous LoS connectivity, as directional mmWave beams must provide coverage for both current and near-future user locations. This requirement highlights the importance of short-term lateral movement prediction in full-immersive multiuser VR setups with RDW.
Predicting short-term movements in natural human walking is an established research topic, with Long Short-Term Memory (LSTM) networks from the family of Recurrent Neural Networks (RNNs) being particularly effective (e.g., [7]). Although these methods have proven useful for predicting natural walks, neither imperceptible nor perceptible resteering accurately mimics natural walking movements. This indicates a need to assess the suitability of RNNs in predicting VR users' lateral mobility under the constraints of RDW. Despite this, the topic has received relatively little attention in the community. Nevertheless, recent work by [8] has demonstrated that RNNs, particularly LSTM networks, can be applied for this purpose and feature promising levels of accuracy in a single-user setup.
Our work builds upon previous research by incorporating context information from VEs into the prediction. Specifically, we utilize the users' movement trajectory in a VE as an input feature, which differs from existing methods that solely rely on physical movement trajectories. Our experiments evaluate the impact of different numbers of coexisting users and types of VEs on the predictive accuracy. Our results demonstrate that incorporating virtual movement trajectory as an input context significantly enhances the accuracy of the prediction model.
It is envisioned that the users' real-world movements should be accurately reflected in VEs, allowing for seamless changes in gaze direction. To support this requirement, flexible coverage is highly advantageous for receiver-side beamforming on a Head-Mounted Device (HMD). Even a slight beam misalignment can significantly affect the Signal-to-Noise Ratio (SNR) [9], which is why a flexible beam stretching in the head
rotation direction can provide the HMD with the consistently high gain necessary for uninterrupted content delivery. This approach ensures that user motion is reflected on-screen within the motion-to-photon latency of 20 ms for avoiding nausea [5]. We argue that accurate prediction of head rotations is needed to proactively form such beams. Existing approaches for such predictions are already highly accurate, as seen in e.g., [10].
Typically, such prediction algorithms rely on Deep Learning (DL) components for transforming the users' orientation data into valuable outputs. However, training, testing, and evaluating these algorithms necessitates collecting vast amounts of orientation data. While these algorithms are the primary consumers of orientation data, other situations also require large orientation data sources, such as receiver-side beamforming and enabling RDW. Collecting these datasets is expensive and laborious, making it challenging to scale. Instead, a more effective approach would be to use synthetic data generation to supplement existing datasets with new samples, adding only minor variations to the overall dataset distribution.
To the best of our knowledge, there is currently only a single study on generating synthetic orientational data, exploring the use of Fast Fourier Transform (FFT) for the generation [11]. In this approach, the input orientation time series is treated as a signal and converted to power spectral densities, and the mean power spectral density is modeled. This method then generates synthetic time series by converting perturbed versions of the power spectral densities back to orientational series. This results in synthetic series that closely resemble the mean of the input ones. Our approach, on the other hand, uses a TimeGAN model from the family of Generative Adversarial Networks (GANs) for generating synthetic orientational data. We demonstrate experimentally the model's beyond state-of-the-art capabilities.
In more general terms, this work advocates the utilization of predictive context-awareness to enhance the performance of mmWave networks that support full-immersive multiuser VR with RDW. Specifically, we demonstrate that predictive context information, such as the users' lateral mobility under the constraints of RDW and their orientational mobility, can be precisely forecasted for a brief period. Furthermore, we explore how these predictive context information instances can be utilized to optimize the performance of supporting mmWave networks. This optimization can occur along two dimensions: transmitter-side beamforming and beamsteering toward the VR users' HMDs, and receiver-side beamforming for coverage enhancements.
## II System Overview
In Figure 1, we showcase a full-immersive multiuser VR setup. Our focus lies on the deployment of this setup within a physically constrained environment that prioritizes user safety while engaging with VEs. This safe perimeter limits potential collision hazards for the users to the environmental boundaries and other users.
RDW is employed to steer the users and ensure collision avoidance between them and the environmental boundaries, as well as among themselves. Its objective is to facilitate user immersion by enabling them to freely explore VEs without constraints, while seamlessly redirecting their movements in the physical space, (ideally) without causing any noticeable disruptions.
The three ways of achieving imperceptible resteering in VEs are: i) curvature gains, which involve VE rotations, ii) translational gains, which modify the users' linear movements to change their travel distances in VEs, and iii) rotational gains, which introduce additional rotations to the already rotating users. A promising algorithm for achieving this is the Artificial Potential Field (APF) [4], which generates a force vector that guides the users away from obstacles and scales inversely with the distance from each obstacle, including other users. APF respects empirically determined RDW noticeability thresholds [12], resulting in imperceptible resteering. In case of an imminent collision, the resetting algorithm called Artificial Potential Field Resetting (APF-R) [4] is triggered. The APF-R algorithm calculates the total force vector for determining the angle the users should physically turn toward, followed by instructing a 2:1 turn. During a 2:1 turn, the users' rotational speed increases, allowing them to turn 360\({}^{\circ}\) in the VE while turning a smaller computed angle in reality. Our study uses APF for imperceptible resteering and APF-R for resetting in situations where a collision is imminent.
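To make the force computation concrete, the following minimal Python sketch computes an APF-style steering force over 2D positions, with each obstacle (boundary sample or other user) contributing a repulsive vector whose magnitude scales inversely with the distance to it. The function name, the unit gain, and the example coordinates are illustrative assumptions and are not taken from the APF implementation of [4].

```python
import numpy as np

def apf_steering_force(user_pos, obstacle_points, eps=1e-6):
    """Sum of repulsive vectors pushing a user away from obstacles.

    Each obstacle (boundary sample or another user) contributes a vector
    pointing from the obstacle toward the user, with magnitude inversely
    proportional to their distance, as in APF-based redirected walking.
    """
    user_pos = np.asarray(user_pos, dtype=float)
    total = np.zeros(2)
    for obs in obstacle_points:
        diff = user_pos - np.asarray(obs, dtype=float)
        dist = np.linalg.norm(diff)
        total += diff / (dist ** 2 + eps)   # unit direction scaled by 1/dist
    return total

# Example: a user close to the right wall of a square physical area,
# with one wall sample and one other user acting as obstacles
force = apf_steering_force([14.0, 7.5], [[15.0, 7.5], [6.0, 7.0]])
```

The resulting vector can then be used to derive the curvature gain that steers the user away from the nearby obstacles.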
To ensure optimal Quality of Experience (QoE), VR content is delivered to the users via highly directional mmWave communication. Specifically, an Access Point (AP) transmits focused beams that track the users' movements, thereby continuously maintaining LoS connectivity with each of them. This approach maximizes the link quality and enhances the users' QoE. The process involves the VR headset reporting its location to the AP, which is then used to facilitate both RDW and beamsteering. It is worth noting that modern VR headsets, such as the Oculus Quest 2 and Vive Cosmos, already possess the capability to generate and share the physical locations of the HMDs via their built-in sensors and inside-out tracking.
The AP aims at creating beams that cover the current and near-future locations of the users, facilitating LoS maintenance at present and in the upcoming period. Several techniques, including [13], have been proposed in the literature to achieve this goal for transmitter-side beamforming and beamsteering. The beamforming on the receiver side is also expected to adapt to the users' head rotations using the HMDs' built-in sensors that provide accurate orientation estimates. For those interested in this system aspect, in [14] we have introduced coVRage, a receiver beamforming technique that anticipates changes in the Angle of Arrival (AoA) from the AP using past and present orientations as references. Subsequently, the HMD adjusts the beam dynamically to encompass the AoA trajectory.
It should be noted that any possible interruptions in the LoS connectivity between the AP and its users caused by obstructions from other users are beyond the scope of our considerations. One solution to this issue could be to implement an AP handover when there is a prediction that a user's movement will obstruct the LoS path of another user. This emphasizes the need for accurately predicting VR users' short-term movements. Additionally, solutions that utilize Intelligent Reflective Surfaces (IRSs) have also been proposed, e.g., [6].
## III Toward Predictive Context-Awareness
### _Short-Term Lateral Movement Prediction_
The preceding discussion suggests that short-term prediction of users' horizontal movements will serve as a tool for RDW, transmitter-side beamsteering, and LoS obstruction avoidance. It is intuitive that the accuracy of such prediction will directly impact the effectiveness of other system components, making optimization crucial for improving the users' QoE.
RNNs have been identified as the most appropriate choice for predicting lateral movements, as mentioned earlier. RNNs are a class of artificial neural networks that consist of multiple neurons of the same kind, each of which passes a message to a succeeding one. This enables RNNs to display dynamic behavior over time, making them well-suited for tasks such as speech recognition, handwriting, and time-series prediction. We consider LSTMs as one of the most promising RNNs for predicting the trajectory of lateral movements within the RDW constraints, owing to their potential in predicting natural walking. Figure 2 illustrates a neuron of an LSTM network, as well as its key components. In LSTM, a forget gate, which is a _sigmoid_ layer, is used to determine whether to retain the previous cell state. A _sigmoid_ layer called the input gate, together with a _tanh_ layer, is used to update the cell state. The output is then generated by filtering the cell state through a _sigmoid_ layer to determine which part of the cell state should be outputted, and normalizing it using a _tanh_ layer.
Most LSTM-based lateral movement predictors rely on past physical locations and historical movement patterns to predict users' movements, taking inspiration from predictions of natural walking trajectories. However, in full-immersive VR setups with RDW, users' movements are not entirely natural. RDW techniques steer the users to avoid collisions by directing the delivery of VR content towards collision-free physical locations. Therefore, the redirections that are expected to happen in the VE to prevent collisions are a crucial aspect of VR users' mobility.
We argue that the RDW-related inputs from the VEs can be a valuable source of information for optimizing near-future movement trajectory predictions for full-immersive VR users. To incorporate this information, we introduce the concept of virtual locations, which represent the users' locations in the VEs. In addition to historical physical locations, we use a stream of historical virtual locations as input features to the proposed LSTM network. Depending on the type of input information used, we distinguish between the "baseline" and "virtual" versions of our approach. It is worth noting that the virtual coordinates of the users are assumed to be known one time step ahead of their physical coordinates, as depicted in Figure 2. We consider this assumption to be natural, since the RDW-derived virtual coordinates for the next time instance are based on the physical coordinates at current time.
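To make the arrangement of input features concrete, the following PyTorch sketch shows an LSTM regressor whose per-step input concatenates the physical (x, y) coordinates with the virtual (x, y) coordinates and which outputs the predicted physical location one step ahead; with `input_size=2` the same model corresponds to the baseline variant. The layer sizes, window length, and training signal are illustrative assumptions rather than the exact hyperparameters of our implementation.

```python
import torch
import torch.nn as nn

class LateralMovementLSTM(nn.Module):
    """Predict the next physical (x, y) location from a history window.

    The "virtual" variant uses 4 features per time step (physical + virtual
    coordinates); the "baseline" variant would use input_size=2 instead.
    """
    def __init__(self, input_size=4, hidden_size=64, num_layers=2):
        super().__init__()
        self.lstm = nn.LSTM(input_size, hidden_size, num_layers, batch_first=True)
        self.head = nn.Linear(hidden_size, 2)      # predicted physical (x, y)

    def forward(self, window):                     # window: (batch, steps, features)
        out, _ = self.lstm(window)
        return self.head(out[:, -1, :])            # regress from the last hidden state

model = LateralMovementLSTM()
history = torch.randn(8, 20, 4)                    # 8 windows of 20 time steps each
pred_xy = model(history)                           # (8, 2) predicted next locations
target = torch.zeros(8, 2)                         # placeholder ground-truth locations
se = ((pred_xy - target) ** 2).sum(dim=1)          # per-sample squared error, as in Fig. 4
```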
### _Short-term Orientational Movement Prediction_
To predict orientational movements, a sufficient amount of experimentally obtained input data is required (i.e., distributions of yaw, pitch, and roll movements). To make existing predictors more effective, synthetic data needs to be generated using the limited experimental data available. Current methods of generating synthetic data are mostly model based, which necessitates expert input and significant modifications to the original design to transfer the generator across sources. Ideally, a model-free approach with higher generalizability would be preferable. GANs are a type of a general DL agent design that involves two sub-systems interacting adversarially to generate samples that can be integrated into the original dataset without significantly altering its overall distribution. As a result, GANs are extremely useful for developing model-free synthetic data generation techniques.
The GAN-based system starts with the generator utilizing random noise to create synthetic samples. Meanwhile, the discriminator acts as a supervised learning-based classifier, identifying whether a presented sample is genuine (coming
Fig. 1: Considered Scenario for Full-Immersive Multiuser Virtual Reality with Redirected Walking
from the source dataset) or fake (coming from the generator). The generator cannot access the source samples and it only has access to the discriminator's loss function. Therefore, the generator focuses on optimizing the loss to improve its output. This iterative process drives the generator to generate increasingly realistic synthetic samples that resemble the original distribution, unveiling more nuanced differences between real and synthetic samples.
There is a need to maintain correlation between the samples as they represent a time series. We suggest using TimeGANs to create synthetic orientational datasets, as they are particularly well-suited to this task due to their ability to maintain time-dependencies. To achieve this, our proposed TimeGAN training process consists of a discriminator and generator, both of which use Gated Recurrent Units (GRUs) to handle time-dependencies. Additionally, we introduce an embedder and recoverer subsystem for data encoding into a lower-dimensional latent space, followed by decoding back to the original dimension. We first train these two subsystems before the generator produces samples in the latent space, which are then converted to time series through the recovery process. To generate complete latent representations of incomplete series from the source dataset, we use supervised learning. The generator is encouraged to capture time-correlation within the series through a loss function that measures the distance between synthetically generated data and the actual source data at similar time steps. To achieve this, we alternate between adversarial and supervised learning, using cross-entropy and Mean-Square Error (MSE) losses, respectively. Please refer to Figure 3 for a visual representation of our proposed design.
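The PyTorch sketch below illustrates the component structure described above: GRU-based embedder, recovery, generator, and discriminator networks, the latent-space supervised (MSE) loss, and the adversarial (cross-entropy) losses between which training alternates. It is a simplified, illustrative skeleton under several assumptions (for example, TimeGAN's separate supervisor network and its staged training schedule are folded into single loss evaluations, and all dimensions are placeholder values); it is not our training code.

```python
import torch
import torch.nn as nn

class GRUBlock(nn.Module):
    """One-layer GRU followed by a per-step linear projection."""
    def __init__(self, in_dim, out_dim, hidden=24):
        super().__init__()
        self.gru = nn.GRU(in_dim, hidden, batch_first=True)
        self.proj = nn.Linear(hidden, out_dim)

    def forward(self, x):
        h, _ = self.gru(x)
        return self.proj(h)

feat, latent, steps = 3, 8, 25               # yaw/pitch/roll windows of 25 samples
embedder  = GRUBlock(feat, latent)           # data   -> latent space
recovery  = GRUBlock(latent, feat)           # latent -> data space
generator = GRUBlock(latent, latent)         # noise  -> synthetic latent sequence
discrim   = GRUBlock(latent, 1)              # latent -> real/fake logit per step

real = torch.randn(16, steps, feat)          # stand-in for real orientation windows
h_real = embedder(real)

# 1) Reconstruction loss (embedder/recovery pre-training), MSE in data space
rec_loss = nn.functional.mse_loss(recovery(h_real), real)

# 2) Supervised loss in latent space: predict the next latent step
sup_loss = nn.functional.mse_loss(generator(h_real)[:, :-1], h_real[:, 1:])

# 3) Adversarial losses (cross-entropy on per-step logits)
bce = nn.functional.binary_cross_entropy_with_logits
h_fake = generator(torch.randn(16, steps, latent))
logits_real = discrim(h_real)
logits_fake = discrim(h_fake.detach())
d_loss = bce(logits_real, torch.ones_like(logits_real)) + \
         bce(logits_fake, torch.zeros_like(logits_fake))
g_loss = bce(discrim(h_fake), torch.ones_like(logits_fake))

synthetic = recovery(h_fake)                 # decoded synthetic orientation windows
```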
## IV Evaluation Setup and Results
We assess the performance of LSTM-based short-term lateral movements prediction in potentially multiuser VR setups with RDW. This is followed by the performance assessment of TimeGAN for generating synthetic orientational movement datasets, envisaged as a primer for predicting orientational movements of fully immersed VR users.
### _Short-term Lateral Mobility Prediction_
The deployment environment used in the experiments was a square with 15 m sides. These dimensions were determined through experimentation to find the optimal size that maintains an acceptable level of noticeability, while still being practical for future deployment (e.g., in residential settings). To create this environment and prevent the testers from colliding with obstacles, a 106\(\times\)65 m\({}^{2}\) outdoor field near the University of Antwerp was utilized. The server running the RDW algorithm (i.e., APF) was a Windows 10-based MSI GS66 laptop with an Intel i7 processor, 16 GB of RAM, and WiFi 6. The HMDs were Android-based Oculus Quest 2 devices with a Qualcomm Snapdragon XR chipset, a 120 Hz refresh rate, and 6 GB of RAM. Connectivity between the server and the HMDs was provided by a wireless hotspot using a Samsung Galaxy S8.
Two VEs were designed in Unity to evaluate the prediction accuracy. In the "straight path" experience, the testers were instructed to follow a straight path throughout the VE, representing the worst-case scenario for the noticeability and performance of the RDW algorithm. In the "random path"
Fig. 3: Illustration of the TimeGAN training process
Fig. 2: Input features of the considered RNN approaches
experience, the testers were encouraged to follow a randomly curved path in an open environment. The expectation was that the curved path introduced by the VE would be less noticeable and benefit the RDW algorithm compared to the straight path.
The experiments involved the testers walking in an unbounded VE while being physically confined to a restricted environment. The positional data from the HMDs was sent to the server, where the RDW algorithm guided the testers within the physical boundaries to avoid collisions with other testers and environmental borders. Each experiment involved up to three testers coexisting in the environment and fully immersed in the VE. At the beginning of each experiment, the testers were instructed to follow a predefined path. They were informed that a reset might occur based on recommendations from the RDW algorithm (i.e., APF-R). If a reset occurred during the testing process, the testers encountered a stop sign, followed by the VE rotating to provide guidance in the suggested direction. To maintain engagement, the duration of each experiment was limited to 5 minutes, since the testers tended to lose interest afterwards as there were no distractions or interactions within the VEs.
Figure 4 illustrates the performance of the LSTM-based prediction using Squared Errors (SEs) as the metric of interest. Comparing the baseline with the version that incorporates virtual coordinates alongside physical ones, it is evident that the latter generally outperforms the former. In a single-user system and for both VEs, the average per-user SE of the prediction is reduced from around 0.001 m\({}^{2}\) in the baseline to less than 0.0005 m\({}^{2}\) when utilizing additional context from the VE, resulting in a twofold increase in prediction accuracy.
The improvements become more significant with an increasing number of users. For instance, in a three-user system, utilizing context instances from VEs leads to improvements of up to 75% in the worst case, as depicted in the figure. These findings demonstrate the potential of leveraging specific context information from VEs, such as virtual coordinates of the users, to enhance the performance of LSTM-based predictors of near-term lateral movements. Moreover, the introduction of a second user does not notably affect prediction accuracy in the considered VEs, indicating the usefulness of short-term lateral movement prediction in _multiuser_ VR systems with RDW. However, the accuracy decreases considerably when a third tester is introduced, regardless of the VE type. This observation highlights the importance of appropriately sizing the physical environment based on the number of users and their mobility patterns. Notably, the prediction accuracy in the straight path VE, which is the worst-case scenario for RDW, is better than that in the random path VE. This is likely because the straight path VE introduces less curvature in the testers' movements, and these more linear movements can be more accurately predicted.
### _Training of Orientational Mobility Predictors_
During six two-minute-long sessions in an immersive VE from [15], three testers were free to navigate and have their full poses, including lateral and orientational movements, sampled at a frequency of 250 Hz. The orientational traces from this dataset were used to evaluate our approach for generating synthetic head rotation datasets, with a focus on
Fig. 4: Squared errors (SEs) achieved by different versions of the LSTM approach
the Probability Density Functions (PDFs) of the yaw, pitch, and roll movements. We quantized the PDFs into 10\({}^{\circ}\)-wide buckets centered around 0\({}^{\circ}\). The yaw motion PDF was found to be substantially more complex than the other two types of movements, presumably due to the points of interest being spread across the testers' horizontal plane. As the VE was placed indoors, the users' gaze was primarily directed toward one of the walls, which explains the local maxima observed in Figure 5 around -90, 0, and 90\({}^{\circ}\).
The normal distributions of the pitch and roll samples are easily reproduced by neural networks. However, the distribution of yaw samples is more complex due to the multiple peaks and discontinuities that arise over time, since these samples are limited to the range between -180 and 180\({}^{\circ}\). To overcome this challenge, we employ a quantile transformer to non-linearly transform the data and address its non-normality. In addition, we shift the remainder of a time series by 360\({}^{\circ}\) to avoid discontinuities. These transformations mean that the data range cannot be predetermined anymore, but the samples still fall within practically useful boundaries. Furthermore, these transformations are reversible, enabling synthetic data to be backtransformed to their original representations.
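The following sketch illustrates these two reversible transformations using NumPy's unwrap for the 360\({}^{\circ}\) shifts that remove discontinuities (the period argument requires NumPy 1.21 or newer) and scikit-learn's QuantileTransformer for the non-linear mapping toward a normal distribution; the specific transformer settings are illustrative assumptions.

```python
import numpy as np
from sklearn.preprocessing import QuantileTransformer

def preprocess_yaw(yaw_deg):
    """Unwrap yaw angles and map them to an approximately normal distribution.

    Unwrapping shifts later parts of the series by multiples of 360 degrees so
    that jumps across the -180/180 degree boundary disappear; the quantile
    transform then handles the multi-modal, non-normal shape of the yaw
    distribution. Both steps are invertible, so synthetic samples can be
    mapped back to their original representation.
    """
    unwrapped = np.unwrap(np.asarray(yaw_deg, dtype=float), period=360.0)
    qt = QuantileTransformer(output_distribution="normal",
                             n_quantiles=min(1000, len(unwrapped)))
    transformed = qt.fit_transform(unwrapped.reshape(-1, 1)).ravel()
    return transformed, qt

def postprocess_yaw(samples, qt):
    """Map (synthetic) transformed samples back to unwrapped yaw degrees."""
    return qt.inverse_transform(np.asarray(samples).reshape(-1, 1)).ravel()
```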
The time and complexity of GAN training increase in proportion to the input data size. To accommodate the demand for a large number of distinct samples in GAN-based DL, we divided each time series into instances of 25 samples using a sliding window. We also applied data downsampling, which did not significantly impact its utility. Based on the law of inertia, humans can perform a limited number of distinct head rotations within a brief timeframe, resulting in relatively smooth motions that can be accurately reconstructed using simple interpolation. With most of the energy concentrated in the lowest frequencies, over 90% below 5 Hz, the Shannon-Nyquist sampling theorem confirms that this information can be preserved by downsampling to 10 Hz.
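A compact way to express the decimation and sliding-window segmentation is sketched below, assuming the 250 Hz source rate, the 10 Hz target rate, and 25-sample windows discussed above; the stride is an illustrative parameter.

```python
import numpy as np

def downsample_and_window(series, src_hz=250, dst_hz=10, window=25, stride=1):
    """Decimate a (time, channels) orientation series and cut it into windows.

    Simple decimation (keeping every src_hz/dst_hz-th sample) suffices here
    because head rotations concentrate their energy below 5 Hz, so a 10 Hz
    rate still satisfies the Shannon-Nyquist criterion.
    """
    series = np.asarray(series, dtype=float)
    step = int(round(src_hz / dst_hz))            # 25 for 250 Hz -> 10 Hz
    low_rate = series[::step]
    windows = [low_rate[i:i + window]
               for i in range(0, len(low_rate) - window + 1, stride)]
    return np.stack(windows)                      # (num_windows, window, channels)

# Example: 2 minutes of yaw/pitch/roll at 250 Hz, cut into 25-sample windows
trace = np.random.randn(2 * 60 * 250, 3)
samples = downsample_and_window(trace)
```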
The dataset is, therefore, divided into 23,700 samples of 25 points, each sample being 1.5 s long. The data was provided to TimeGAN for fitting the quantile transformer, followed by generating a synthetic dataset 10 times larger than the original one, once every 10 epochs. This was done because GANs are challenging to train and known to significantly degrade when overtrained. The procedure yielded a well-distributed synthetic dataset, despite the fact that we did not optimize the epoch hyperparameter nor repeat the resource intensive training procedure. The generated dataset is the final result of the system, therefore the system will not be presented with inputs of slightly varied distributions later on. This is because the input comes from samples of uniformly distributed noise, implying that data overfitting can be excluded. Finally, we generated a baseline synthetic dataset using the FFT approach from [11]. The result is a 30,000 steps-long series, which we further downsample and divide into shorter samples using the above-discussed sliding window approach.
Figure 5 presents the distributions of yaw and pitch values. The roll distribution has been omitted because it is range-constrained as it captures uncomfortable head tilting. Hence, both the FFT- and TimeGAN-generated synthetic datasets closely match the roll distribution. However, the pitch distribution is slightly wider, and the FFT approach already fails to match it accurately, in contrast to TimeGAN. This trend is further pronounced in the yaw distribution, where TimeGAN closely matches its three local maxima, while FFT does so to a lesser extent. For example, the Kullback-Leibler divergence of the target yaw distribution compared to the TimeGAN-generated one is 0.00235, while the same compared to the FFT-resulting distribution is 0.0447 (i.e., almost 19 times higher). It is also worth noting that only FFT was hand-crafted to match this distribution. Based on these results, we argue that utilizing TimeGANs for generating synthetic head rotation datasets shows great promise.
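For completeness, the sketch below shows one way the reported divergence can be computed: both the target and the synthetic yaw samples are histogrammed into the same 10\({}^{\circ}\)-wide buckets and compared with the Kullback-Leibler divergence via SciPy's two-argument entropy function. The smoothing constant and the stand-in data are illustrative assumptions; the actual evaluation uses the measured and generated traces.

```python
import numpy as np
from scipy.stats import entropy

def yaw_kl_divergence(target_yaw, synthetic_yaw, bucket_deg=10.0, smooth=1e-9):
    """KL divergence D_KL(target || synthetic) between quantized yaw PDFs."""
    # 10 degree-wide buckets centered around 0 degrees, spanning roughly -180..180
    edges = np.arange(-180.0 - bucket_deg / 2, 180.0 + bucket_deg, bucket_deg)
    p, _ = np.histogram(target_yaw, bins=edges, density=True)
    q, _ = np.histogram(synthetic_yaw, bins=edges, density=True)
    # entropy(p, q) normalizes both arguments and returns the KL divergence
    return entropy(p + smooth, q + smooth)

# Example with stand-in data wrapped to [-180, 180) degrees
rng = np.random.default_rng(0)
target = (rng.normal(0, 60, 10_000) + 180) % 360 - 180
synth = (rng.normal(0, 65, 10_000) + 180) % 360 - 180
print(yaw_kl_divergence(target, synth))
```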
## V Conclusion
Our research has demonstrated that Long Short-Term Memory (LSTM)-based Recurrent Neural Networks (RNNs) hold promise for accurately predicting lateral movements in multiuser full-immersive Virtual Reality (VR) environments with Redirected Walking (RDW). Additionally, we have showcased the advantages of incorporating virtual context, such as the movement trajectory within a Virtual Experience (VE), as an input feature for enhancing the prediction accuracy. Moreover, we have proposed a TimeGAN-based approach for generating synthetic head rotation data, envisioned to serve as a primer for training orientational movement predictors in full-immersive VR setups. On a high level, we advocate for the utilization of predictive context-awareness in optimizing the connectivity in next generation VR setups. We expect this approach to be beneficial in applications ranging from dynamic multimedia encoding to millimeter Wave (mmWave) beamforming.
Fig. 5: Distribution of Yaw and Pitch values across all time steps of all samples
Note that we did not optimize the duration of the prediction window, but this duration should intuitively depend on the transmitter-side beamforming and beamsteering operating in the 100 ms timeframe considered in this work. Deriving optimal hyperparameters of the presented approaches was also not in scope, as the goal was to demonstrate the feasibility of predictive context-awareness. We consider addressing these limitations as a part of our future efforts. We argue that other context instances, for example the users' full 3-dimensional (3D) pose estimates, might be of interest in future VR systems, e.g., for enabling touch-like feedback or mobility-wise unconstrained portrayal of users in VEs.
## Acknowledgments
This work was supported by the MCIN / AEI / 10.13039 / 501100011033 / FEDER / UE HoloMit 2.0 (nr. PID2021-126551OB-C21) / UNICO-5G I+D Open6G (TSI-063000-2021-6). The work was also funded by the Research Foundation - Flanders (nr. G034322N and 1SB0719N).
|
2309.03901 | Consistent actions for massive particles interacting with
electromagnetism and gravity | Consistent interactions with electromagnetism and gravity for mass $m$
particles of any spin are obtained. This is done by finding interactions which
preserve the covariantized massive gauge symmetry present in recently
constructed massive particle actions. This gauge principle is sufficient for
finding consistent completions of minimal as well as non-minimal couplings of
any type. For spins $s\geq 3/2$, consistency requires infinitely many
interaction terms in the action, including arbitrarily high order derivatives
of electromagnetic and gravitational curvatures, with correspondingly high
powers of $1/m$. These interactions may be formally resummed and expressed in
terms of non-local operators. Finally, although the interactions appear
non-local, evidence is presented for the existence of a field redefinition
which makes the interacting action local. This work provides the first explicit
realization of an exactly gauge invariant formulation of massive particles
interacting with electromagnetism and gravity. | Lukas W. Lindwasser | 2023-09-07T17:59:08Z | http://arxiv.org/abs/2309.03901v3 | # Consistent actions for massive particles interacting with electromagnetism and gravity
###### Abstract
Consistent interactions with electromagnetism and gravity for mass \(m\) particles of any spin are obtained. This is done by finding interactions which preserve the covariantized massive gauge symmetry present in recently constructed massive particle actions. This gauge principle is sufficient for finding consistent completions of minimal as well as non-minimal couplings of any type. For spins \(s\geq 3/2\), consistency requires infinitely many interaction terms in the action, including arbitrarily high order derivatives of electromagnetic and gravitational curvatures, with correspondingly high powers of \(1/m\). These interactions may be formally resummed and expressed in terms of non-local operators. The inherent non-locality is a manifestation of the known causality problems present in interacting massive particles with spin \(s\geq 3/2\).
## 1 Introduction
There exist in nature composite massive particles with spins higher than 1, the highest spin particle observed at the moment being the \(\Delta(2950)\) baryon with spin 15/2 [1]. The basic problem of modelling such particles interacting with the most relevant forces they experience during their lifetime, electromagnetism and gravity, has a long history, first considered by Fierz and Pauli in 1939 [2]. In that work, they noted that introducing interactions generically does not preserve the massive higher spin particle's degrees of freedom, making the model inconsistent with unitarity.
Another challenge to the consistent modelling of interacting massive higher spin particles is that interactions violate causality. For instance, it was realized by Velo and Zwanziger in 1969 [3] that minimally coupling spin 3/2 particles to an electromagnetic background allows for superluminal propagation. Subsequently, arguments [4; 5] were made that any theory with a finite collection of interacting massive particles of spin higher than 2 cannot avoid causality violations at sufficiently high energies. For spin 3/2 particles, a solution to this and the degree of freedom problem was found by realizing the particle within the framework of supergravity [6]. For higher spins, the only known solution to both is to realize them within string theory [7], which has an infinite collection of higher spin particle excitations.
Barring these special solutions, these results are usually taken to mean that massive higher spin particles cannot be elementary [5; 8]. This is related to the fact that tree level scattering amplitudes involving massive higher spin particles have bad high energy behavior which violates unitarity bounds. It is known that there are unique non-minimal couplings with electromagnetism and gravity that improve the bad high energy behavior in some, but importantly not all, scattering processes [9; 10]. Furthermore, loop diagrams involving particles with spin 3/2 and higher result in nonrenormalizable divergences. At energies where unitarity starts to break down, the particle description must be replaced with some yet more fundamental description. Indeed, all massive particles/resonances with spin 3/2 and higher we have observed are understood to be composites of low spin particles. For these reasons, modelling interacting massive higher spin particles directly is firmly within the regime of effective field theory.
At the moment, there is an active program of research modelling the inspiral of two spinning black holes or neutron stars using effective field theory techniques, treating them as point particles with very high spin [11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22], in order to explain recently detected gravitational waves [23; 24; 25]. This has reinvigorated research in the consistent modelling of interacting massive higher spin particles.
A low energy effective field theory with an energy cutoff \(\Lambda\) never sees the aforementioned bad high energy behavior. Causality violations meanwhile are suppressed but still detectable if a superluminal signal with speed difference \(\Delta v=v-c>0\) propagates for a sufficiently long time \(T\) such that \(\Delta vT\gtrsim 1/\Lambda\), or if \(\Lambda\) is parametrically larger than the higher spin particle's mass \(m\). The original degree of freedom problem on the other hand persists in an effective field theory and is still of fundamental importance. The generic
solution to this problem is understood to be one which has been employed successfully for spin \(1,3/2,2\) particles, i.e. gauge invariance. For free particles, gauge symmetry is an invaluable tool for ensuring that the field theory description used has the correct physical degrees of freedom while keeping Lorentz invariance manifest. There is a reliable perturbative procedure for introducing interactions consistent with the gauge symmetry, while also possibly deforming the gauge transformations, described for instance in [26] and references therein. In the case of spin 1 particles, gauge invariance in interacting theories implies generalized Ward identities which guarantee that all Feynman diagrams with external legs with unphysical polarizations vanish [27; 28]. Thus whenever perturbation theory is valid, the gauge invariant interacting theory can continue describing a spin 1 particle at the level of the \(S\)-matrix. A natural expectation is that this general result will continue to hold for higher spins, although an independent proof of this is necessary. One complication of such a proof is that when interactions are introduced, one generally needs to correspondingly deform the gauge symmetry, as is what happens for instance in Yang-Mills theory and general relativity. The exact form of the generalized Ward identities in turn depends on the gauge symmetry. One should therefore first find a consistent set of interactions and corresponding deformations of the gauge symmetry.
Progress in this direction has been made in the past, including work at the level of equations of motion [7; 29; 30; 31], and at the Lagrangian level for constrained external backgrounds [32; 33; 34; 35; 36; 37; 38]. An important challenge is to find consistent actions of massive higher spin fields interacting in an arbitrary electromagnetic and gravitational field, so that they too can be treated as dynamical. The general expectation is that an exactly gauge invariant theory of massive higher spins with electromagnetism and gravity will require infinitely many interactions, including all orders of derivatives of curvatures, and correspondingly all orders in \(1/m\). In this paper, a prescription for writing down all such interactions is found for any spin.
To facilitate this, it was important to find an action principle for massive higher spins, recently detailed in a previous paper [39], which is simple enough to determine the set of consistent interactions. The original Singh-Hagen actions [40; 41] are not suited for this task for at least two reasons. First, although they successfully describe a free massive spin \(s\) particle with the correct degrees of freedom, they achieve this without an underlying gauge symmetry, and so there is no guiding principle for adding interactions which preserve the degrees of freedom. Second, these actions are only consistent in four spacetime dimensions, making it unclear how to dimensionally regulate. Actions for massive higher spins which address both of these points were constructed in [42; 43]. Introducing consistent interactions to these actions is possible, but technically challenging.
In contrast, the actions \(S_{n}\) and \(S_{n+1/2}\) constructed in [39] are built from gauge invariant field strengths \(\mathcal{F}_{n-i}\), \(i=0,1,2,3\) for integer spins and \(\mathcal{S}_{n-i}\), \(i=0,1,2\) for half integers, respectively, which satisfy massive Bianchi identities (15), (16), (23), enabling a simple prescription for finding exactly gauge invariant interactions. While the
general procedure for adding consistent interactions [26] entails also deforming the free theory's gauge transformations, possibly including terms proportional to arbitrary powers of electromagnetic and gravitational curvatures \(F_{\mu\nu}\) and \(R_{\omega\sigma\mu\nu}\), we find it sufficient to minimally deform the gauge transformations by simply replacing spacetime derivatives present in them with covariant derivatives, so that it is at least consistent with the \(U(1)\) electromagnetic gauge and general coordinate invariance. Given this gauge principle, the gauge variation of the actions \(\delta S_{n}\) and \(\delta S_{n+1/2}\) depends only on the aforementioned properties of the field strengths.
## 2 Free theory
In this section, we briefly review the formulation of free massive spinning particles [39] that will be used as our starting point for incorporating interactions.
Particles with spin are covariantly described by totally symmetric tensors \(\phi_{\mu_{1}\cdots\mu_{n}}(x)\) for spin \(n\), or spinor tensors \(\psi_{\mu_{1}\cdots\mu_{n}}(x)\) for spin \(n+1/2\). The formulation of these particles dramatically simplifies after the introduction of an auxiliary vector coordinate \(s^{\mu}\), and instead writing the theory in terms of a "hyperfield" \(\Phi_{n}(X,s)=\frac{1}{n!}i^{-n/2}\phi_{\mu_{1}\cdots\mu_{n}}(X)s^{\mu_{1}} \cdots s^{\mu_{n}}\), where the factor \(i^{-n/2}\) is added for later convenience. All of the basic operations needed for constructing the covariant actions for massive spinning particles can be performed at the level of the hyperfield. For instance, one often has to take traces of the fields \(\phi^{\lambda}{}_{\lambda\mu_{3}\cdots\mu_{n}}(X)\). This can be achieved at the level of \(\Phi_{n}(X,s)\) by taking the Laplacian with respect to \(s^{\mu}\)
\[i\,\partial_{s}^{2}\Phi_{n}(X,s)=\frac{1}{(n-2)!}i^{-(n-2)/2}\phi^{\lambda}{}_ {\lambda\mu_{1}\cdots\mu_{n-2}}(X)s^{\mu_{1}}\cdots s^{\mu_{n-2}} \tag{1}\]
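As an illustrative check of (1), not part of the original construction, consider the rank-2 case \(n=2\), where \(\Phi_{2}(X,s)=\frac{1}{2}i^{-1}\phi_{\mu\nu}(X)s^{\mu}s^{\nu}\). Using \(\partial_{s}^{2}(s^{\mu}s^{\nu})=2\eta^{\mu\nu}\) one finds

\[i\,\partial_{s}^{2}\Phi_{2}(X,s)=i\cdot\tfrac{1}{2}i^{-1}\phi_{\mu\nu}(X)\cdot 2\eta^{\mu\nu}=\phi^{\lambda}{}_{\lambda}(X),\]

which is the trace of the component field, in agreement with the right hand side of (1) for \(n=2\).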
The divergence \(\partial^{\lambda}\phi_{\lambda\mu_{2}\cdots\mu_{n}}(X)\) may also be written in terms of \(\Phi_{n}(X,s)\)
\[i^{1/2}\,\partial_{s}\cdot\partial_{X}\Phi_{n}(X,s)=\frac{1}{(n-1)!}i^{-(n-1) /2}\partial^{\lambda}\phi_{\lambda\mu_{1}\cdots\mu_{n-1}}(X)s^{\mu_{1}}\cdots s ^{\mu_{n-1}} \tag{2}\]
The symmetric derivative \(\partial_{(\mu_{1}}\phi_{\mu_{2}\cdots\mu_{n+1})}(X)\) is written in terms of \(\Phi_{n}(X,s)\) via
\[i^{-1/2}s\cdot\partial_{X}\Phi_{n}(X,s)=\frac{1}{n!}i^{-(n+1)/2}\partial_{(\mu _{1}}\phi_{\mu_{2}\cdots\mu_{n+1})}(X)s^{\mu_{1}}\cdots s^{\mu_{n+1}} \tag{3}\]
One can also contract indices between two equal rank \(n\) hyperfields \(A_{n}(X,s)\) and \(B_{n}(X,s)\)
\[\int\frac{d^{d}sd^{d}s^{\prime}}{(2\pi)^{d}}e^{is\cdot s^{\prime}}A_{n}(X,s)B_ {n}(X,s^{\prime})=\frac{1}{n!}a_{\mu_{1}\cdots\mu_{n}}b^{\mu_{1}\cdots\mu_{n}} \tag{4}\]
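As an explicit example (an illustrative check rather than part of the original text), for rank-1 hyperfields \(A_{1}(X,s)=i^{-1/2}a_{\mu}(X)s^{\mu}\) and \(B_{1}(X,s^{\prime})=i^{-1/2}b_{\nu}(X)s^{\prime\nu}\), the elementary integral \(\int\frac{d^{d}s\,d^{d}s^{\prime}}{(2\pi)^{d}}e^{is\cdot s^{\prime}}s^{\mu}s^{\prime\nu}=i\,\eta^{\mu\nu}\) gives

\[\int\frac{d^{d}s\,d^{d}s^{\prime}}{(2\pi)^{d}}e^{is\cdot s^{\prime}}A_{1}(X,s)B_{1}(X,s^{\prime})=i^{-1}a_{\mu}(X)b_{\nu}(X)\,i\,\eta^{\mu\nu}=a_{\mu}b^{\mu},\]

consistent with (4) for \(n=1\).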
These four operations are sufficient for the formulation of integer spin fields. For half integer spin fields, contraction with gamma matrices \(\gamma^{\mu}\) will also be necessary. For a Dirac hyperfield \(\Psi_{n}(X,s)\), we have
\[i^{1/2}\partial\!\!\!/_{s}\Psi_{n}(X,s)=\frac{1}{(n-1)!}i^{-(n-1)/2}\gamma^{\lambda}\psi_{\lambda\mu_{1}\cdots\mu_{n-1}}(X)s^{\mu_{1}}\cdots s^{\mu_{n-1}} \tag{5}\]
The formula (4) is particularly interesting, as it suggests introducing a (pseudo) inner product on the space of hyperfields
\[(A_{n},B_{n})=\int d^{d}X\frac{d^{d}sd^{d}s^{\prime}}{(2\pi)^{d }}e^{is\cdot s^{\prime}}\tilde{A}_{n}(X,s)B_{n}(X,s^{\prime})=\int d^{d}X \frac{1}{n!}a^{*}_{\mu_{1}\cdots\mu_{n}}b^{\mu_{1}\cdots\mu_{n}} \tag{6}\] \[\text{where }\tilde{A}_{n}(X,s)=\frac{1}{n!}i^{-n/2}a^{*}_{\mu_{1} \cdots\mu_{n}}(X)s^{\mu_{1}}\cdots s^{\mu_{n}} \tag{7}\]
and we may consider the (pseudo) Hilbert space of hyperfields \(\Phi(X,s)\) with finite (pseudo) norm \((\Phi,\Phi)<\infty\)[44]. Note that \((i^{-1/2}s^{\mu}A,B)=(A,i^{1/2}\partial_{s}^{\mu}B)\), and so for instance, the divergence and symmetric derivative are anti-Hermitian adjoints of each other \((i^{1/2}\partial_{s}\cdot\partial_{X})^{\dagger}=-i^{-1/2}s\cdot\partial_{X}\) in this space.
### 2.1 Integer spins
As detailed in [39], the covariant formalism of a complex massive spin \(n\) particle we work with not only uses a rank \(n\) hyperfield \(\Phi_{n}(X,s)\), but also three auxiliary hyperfields \(\Phi_{n-1}(X,s)\), \(\Phi_{n-2}(X,s)\), and \(\Phi_{n-3}(X,s)\). These auxiliary hyperfields couple to \(\Phi_{n}(X,s)\) in such a way so that after imposing the equations of motion, the auxiliary hyperfields may be set to zero, and \(\Phi_{n}(X,s)\) is a Fierz-Pauli system [2] in hyperspace
\[(\partial_{X}^{2}-m^{2})\Phi_{n} =0\] \[\partial_{s}\cdot\partial_{X}\Phi_{n} =0\] \[\partial_{s}^{2}\Phi_{n} =0 \tag{8}\]
\(\Phi_{n}(X,s)\) therefore has the correct degrees of freedom for the description of a massive spin \(n\) particle, with mass \(m\). This special coupling is facilitated by demanding that the theory be invariant under the gauge transformations
\[\delta\Phi_{n}=i^{-1/2}s\cdot\partial_{X}\epsilon_{n-1} \delta\Phi_{n-1} =i^{-1/2}s\cdot\partial_{X}\epsilon_{n-2}+im\,\epsilon_{n-1}\] \[\delta\Phi_{n-2}=-i^{1/2}s\cdot\partial_{X}\partial_{s}^{2} \epsilon_{n-1}+2im\,\epsilon_{n-2} \delta\Phi_{n-3} =-i^{1/2}s\cdot\partial_{X}\partial_{s}^{2}\epsilon_{n-2}+3m \partial_{s}^{2}\epsilon_{n-1} \tag{9}\]
where the gauge parameters \(\epsilon_{n-1}(X,s)\) and \(\epsilon_{n-2}(X,s)\) are arbitrary rank \(n-1\) and rank \(n-2\) hyperfields, respectively.
This theory includes gauge invariant field strengths \(\mathcal{F}_{n}\), \(\mathcal{F}_{n-1}\), \(\mathcal{F}_{n-2}\), and \(\mathcal{F}_{n-3}\), which are linear in the \(\Phi_{n-i}\)'s and quadratic in derivatives
\[\mathcal{F}_{n}= \big{(}\partial_{X}^{2}-m^{2}-s\cdot\partial_{X}\partial_{s} \cdot\partial_{X}+\frac{1}{2}(s\cdot\partial_{X})^{2}\partial_{s}^{2}\big{)} \Phi_{n}-\frac{i}{2}(s\cdot\partial_{X})^{2}\Phi_{n-2}-i^{1/2}m\,s\cdot \partial_{X}\Phi_{n-1} \tag{10}\] \[\mathcal{F}_{n-1}= \big{(}\partial_{X}^{2}-s\cdot\partial_{X}\partial_{s}\cdot \partial_{X}+\frac{1}{2}(s\cdot\partial_{X})^{2}\partial_{s}^{2}\big{)}\Phi_{ n-1}-\frac{i}{2}(s\cdot\partial_{X})^{2}\Phi_{n-3}\] \[+i^{-1/2}m\partial_{s}\cdot\partial_{X}\Phi_{n}-i^{-1/2}m\,s \cdot\partial_{X}\partial_{s}^{2}\Phi_{n}\] (11) \[\mathcal{F}_{n-2}= \big{(}\partial_{X}^{2}-s\cdot\partial_{X}\partial_{s}\cdot \partial_{X}-\frac{1}{2}(s\cdot\partial_{X})^{2}\partial_{s}^{2}\big{)}\Phi_{ n-2}-\frac{i}{2}(s\cdot\partial_{X})^{2}\partial_{s}^{4}\Phi_{n}-im^{2} \partial_{s}^{2}\Phi_{n}\] \[+2i^{-1/2}m\partial_{s}\cdot\partial_{X}\Phi_{n-1}-2i^{-1/2}m\,s \cdot\partial_{X}\partial_{s}^{2}\Phi_{n-1}+i^{1/2}m\,s\cdot\partial_{X} \Phi_{n-3}\] (12) \[\mathcal{F}_{n-3}= \big{(}\partial_{X}^{2}-m^{2}-s\cdot\partial_{X}\partial_{s} \cdot\partial_{X}-\frac{1}{2}(s\cdot\partial_{X})^{2}\partial_{s}^{2}\big{)} \Phi_{n-3}-\frac{i}{2}(s\cdot\partial_{X})^{2}\partial_{s}^{4}\Phi_{n-1}-3im^{ 2}\partial_{s}^{2}\Phi_{n-1}\] \[+2i^{1/2}m\,s\cdot\partial_{X}\partial_{s}^{4}\Phi_{n}+3i^{-1/2}m \partial_{s}\cdot\partial_{X}\Phi_{n-2}+i^{-1/2}m\,s\cdot\partial_{X}\partial_{ s}^{2}\Phi_{n-2} \tag{13}\]
The equations of motion are equivalent to setting all four field strengths to zero \(\mathcal{F}_{n-i}=0\). In terms of these ingredients, the gauge invariant free action for a massive spin \(n\) particle
may be written as
\[S_{n} =\frac{1}{2}n!\int d^{d}X\frac{d^{d}sd^{d}s^{\prime}}{(2\pi)^{d}}e^{ is.s^{\prime}}\times\] \[\Bigg{\{}\sum_{k=0}^{\lfloor n/2\rfloor}\frac{(-1)^{k}}{(2k)!} \Bigg{(}\Big{(}1-\frac{3k}{2}\Big{)}\partial_{s}^{2k}\tilde{\Phi}_{n}\partial_{ s^{\prime}}^{2k}\mathcal{F}_{n}+i\frac{k}{2}\partial_{s}^{2k}\tilde{\Phi}_{n} \partial_{s^{\prime}}^{2(k-1)}\mathcal{F}_{n-2}\] \[\qquad\qquad\qquad\qquad+i\frac{k}{2}\partial_{s}^{2(k-1)}\tilde {\Phi}_{n-2}\partial_{s^{\prime}}^{2k}\mathcal{F}_{n}-\frac{k}{2}\partial_{s} ^{2(k-1)}\tilde{\Phi}_{n-2}\partial_{s^{\prime}}^{2(k-1)}\mathcal{F}_{n-2} \Bigg{)}\] \[+\sum_{k=0}^{\lfloor(n-1)/2\rfloor}\frac{(-1)^{k}}{(2k+1)!} \Bigg{(}\Big{(}1-\frac{5k}{2}\Big{)}\partial_{s}^{2k}\tilde{\Phi}_{n-1} \partial_{s^{\prime}}^{2k}\mathcal{F}_{n-1}+i\frac{3k}{2}\partial_{s}^{2k} \tilde{\Phi}_{n-1}\partial_{s^{\prime}}^{2(k-1)}\mathcal{F}_{n-3}\] \[\qquad\qquad\qquad\qquad\qquad+i\frac{3k}{2}\partial_{s}^{2(k-1) }\tilde{\Phi}_{n-3}\partial_{s^{\prime}}^{2k}\mathcal{F}_{n-1}+\frac{k}{2} \partial_{s}^{2(k-1)}\tilde{\Phi}_{n-3}\partial_{s^{\prime}}^{2(k-1)}\mathcal{ F}_{n-3}\Bigg{)}\Bigg{\}}\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad+ \text{c.c.} \tag{14}\]
That this action is gauge invariant is guaranteed by the fact that the field strengths are themselves gauge invariant \(\delta\mathcal{F}_{n-i}=0\), and that they satisfy what we call massive Bianchi identities
\[\partial_{s}\cdot\partial_{X}\mathcal{F}_{n}-\frac{1}{2}s\cdot \partial_{X}\partial_{s}^{2}\mathcal{F}_{n}+\frac{i}{2}s\cdot\partial_{X} \mathcal{F}_{n-2}+i^{1/2}m\mathcal{F}_{n-1}=0 \tag{15}\] \[\partial_{s}\cdot\partial_{X}\mathcal{F}_{n-1}-\frac{1}{2}s\cdot \partial_{X}\partial_{s}^{2}\mathcal{F}_{n-1}+\frac{i}{2}s\cdot\partial_{X} \mathcal{F}_{n-3}+\frac{1}{2}i^{-1/2}m\partial_{s}^{2}\mathcal{F}_{n}+\frac{1}{ 2}i^{1/2}m\mathcal{F}_{n-2}=0 \tag{16}\]
Indeed, the gauge variation of the action \(\delta S_{n}\) is a straightforward linear combination of terms proportional to the left hand sides of (15), (16), and the \(\delta\mathcal{F}_{n-i}\)'s, and their complex conjugates. This fact will be very important for us when considering interactions in section 3.
### 2.2 Half integer spins
The covariant formalism for Dirac spin \(n+1/2\) particles [39] is very analogous to the integer spin case. It requires the use of not only a rank \(n\) Dirac spinor hyperfield \(\Psi_{n}(X,s)\), but also two auxiliary Dirac spinor hyperfields \(\Psi_{n-1}(X,s)\) and \(\Psi_{n-2}(X,s)\). As before, these auxiliary hyperfields couple to \(\Psi_{n}(X,s)\) in such a way so that after imposing the equations of motion, the auxiliary hyperfields may be set to zero, and \(\Psi_{n}(X,s)\) is a Rarita-Schwinger system [45] in hyperspace
\[(\partial\!\!\!/_{X}+m)\Psi_{n} =0\] \[\partial\!\!\!/_{s}\Psi_{n} =0 \tag{17}\]
\(\Psi_{n}(X,s)\) therefore has the correct degrees of freedom for the description of a massive spin \(n+1/2\) particle, with mass \(m\). This special coupling is facilitated by demanding that the
theory be invariant under the gauge transformations
\[\delta\Psi_{n}=i^{-1/2}s\cdot\partial_{X}\epsilon_{n-1}\] \[\delta\Psi_{n-1}=-is\cdot\partial_{X}\partial_{s}\epsilon_{n-1}+im \epsilon_{n-1}\] \[\delta\Psi_{n-2}=-i^{1/2}s\cdot\partial_{X}\partial_{s}^{2} \epsilon_{n-1}+2i^{1/2}m\partial_{s}\epsilon_{n-1} \tag{18}\]
where the gauge parameter \(\epsilon_{n-1}(X,s)\) is an arbitrary rank \(n-1\) hyperfield.
This theory includes gauge invariant field strengths \(\mathcal{S}_{n}\), \(\mathcal{S}_{n-1}\), and \(\mathcal{S}_{n-2}\), which are linear in the \(\Psi_{n-i}\)'s and linear in derivatives
\[\mathcal{S}_{n} =(\partial\!\!\!/_{X}+m-s\cdot\partial_{X}\partial\!\!\!/_{s})\Psi _{n}+i^{1/2}s\cdot\partial_{X}\Psi_{n-1} \tag{19}\] \[\mathcal{S}_{n-1} =(\partial\!\!\!/_{X}-s\cdot\partial_{X}\partial\!\!\!/_{s})\Psi _{n-1}+i^{1/2}s\cdot\partial_{X}\Psi_{n-2}+i^{-1/2}m\partial\!\!\!/_{s}\Psi_{n}\] (20) \[\mathcal{S}_{n-2} =(\partial\!\!\!/_{X}-m)\Psi_{n-2}+is\cdot\partial_{X}\partial_{ s}^{2}\Psi_{n}+i^{-1/2}s\cdot\partial_{X}\partial_{s}^{2}\Psi_{n-1}+2i^{-1/2}m \partial\!\!\!/_{s}\Psi_{n-1} \tag{21}\]
The equations of motion are equivalent to setting all three field strengths to zero \(\mathcal{S}_{n-i}=0\). In terms of these ingredients, the gauge invariant free action for a massive spin \(n+1/2\) particle in the Dirac representation may be written as
\[S_{n+1/2} =-n!\int d^{d}X\frac{d^{d}sd^{d}s^{\prime}}{(2\pi)^{d}}e^{is\cdot s ^{\prime}}\times\] \[\Bigg{\{}\sum_{k=0}^{\lfloor n/2\rfloor}\frac{(-1)^{k}}{(2k)!} \Bigg{(}\Big{(}1-\frac{3k}{2}\Big{)}\partial_{s}^{2k}\overline{\Psi}_{n} \partial_{s^{\prime}}^{2k}\mathcal{S}_{n}+i\frac{k}{2}\partial_{s}^{2k} \overline{\Psi}_{n}\partial_{s^{\prime}}^{2(k-1)}\mathcal{S}_{n-2}\] \[\qquad\qquad\qquad+i\frac{k}{2}\partial_{s}^{2(k-1)}\overline{ \Psi}_{n-2}\partial_{s^{\prime}}^{2k}\mathcal{S}_{n}+\frac{k}{2}\partial_{s} ^{2(k-1)}\overline{\Psi}_{n-2}\partial_{s^{\prime}}^{2(k-1)}\mathcal{S}_{n-2}\] \[\qquad\qquad\qquad\qquad-i^{-1/2}k\partial_{s}^{2(k-1)}\overline {\Psi}_{n-1}\overleftarrow{\partial\!\!\!/_{s}}\partial_{s}^{2(k-1)}\mathcal{ S}_{n-2}+i^{-1/2}k\partial_{s}^{2(k-1)}\overline{\Psi}_{n-2}\partial_{s^{ \prime}}\partial_{s^{\prime}}^{2(k-1)}\mathcal{S}_{n-1}\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad+ik\partial_{s}^{2(k-1)}\overline{\Psi}_{n-1}\overleftarrow{\partial\!\! \!/_{s}}\partial_{s}\partial_{s^{\prime}}\partial_{s^{\prime}}^{2(k-1)} \mathcal{S}_{n-1}\Bigg{)}\] \[+\sum_{k=0}^{\lfloor(n-1)/2\rfloor}\frac{(-1)^{k}}{(2k+1)!} \Bigg{(}\!\!-i\Big{(}\frac{1}{2}+\frac{3k}{2}\Big{)}\partial_{s}^{2k} \overline{\Psi}_{n}\overleftarrow{\partial\!\!\!/_{s}}\partial_{s^{\prime}} \partial_{s^{\prime}}^{2k}\mathcal{S}_{n}-\frac{k}{2}\partial_{s}^{2k} \overline{\Psi}_{n}\overleftarrow{\partial\!\!\!/_{s}}\partial_{s}\partial_{s^{ \prime}}^{2(k-1)}\mathcal{S}_{n-2}\] \[\qquad\qquad\qquad\qquad\qquad\qquad-\frac{k}{2}\partial_{s}^{2(k -1)}\overline{\Psi}_{n-2}\overleftarrow{\partial\!\!\!/_{s}}\partial_{s^{ \prime}}\partial_{s^{\prime}}^{2k}\mathcal{S}_{n}+i\frac{k}{2}\partial_{s}^{2(k -1)}\overline{\Psi}_{n-2}\overleftarrow{\partial\!\!\!/_{s}}\partial_{s^{ \prime}}\partial_{s^{\prime}}^{2(k-1)}\mathcal{S}_{n-2}\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad+ \frac{1}{2}i^{-1/2}\partial_{s}^{2k}\overline{\Psi}_{n}\overleftarrow{\partial \!\!\!/_{s}}\partial_{s}^{2k}\mathcal{S}_{n-1}-\frac{1}{2}i^{-1/2}\partial_{s}^ {2k}\overline{\Psi}_{n-1}\partial_{s^{\prime}}\partial_{s^{\prime}}^{2k} \mathcal{S}_{n}\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad+ \Big{(}\frac{1}{2}-k\Big{)}\partial_{s}^{2k}\overline{\Psi}_{n-1}\partial_{s^{ \prime}}^{2k}\mathcal{S}_{n-1}\Bigg{)}\Bigg{\}}\]
That this action is gauge invariant is guaranteed by the fact that the field strengths themselves are gauge invariant \(\delta\mathcal{S}_{n-i}=0\), and that they satisfy a fermionic massive Bianchi identity
\[\partial\!\!\!/_{s}(\partial\!\!\!/_{X}-m)\mathcal{S}_{n}-s\cdot\partial_{X} \partial_{s}^{2}\mathcal{S}_{n}-i^{1/2}(\partial\!\!\!/_{X}-m)\mathcal{S}_{n-1}+ i\,s\cdot\partial_{X}\mathcal{S}_{n-2}=0 \tag{23}\]
As before, the gauge variation of the action \(\delta S_{n+1/2}\) is a straightforward linear combination of the left hand side of (23) and the \(\delta\mathcal{S}_{n-i}\)'s.
## 3 Interactions
In this section, we show how to add electromagnetic and gravitational interactions to the actions (14) and (22), in a way that preserves the massive gauge symmetry. We begin first with the integer spin case, and then repeat the analysis for half integer spins.
### 3.1 Integer spins
Naively, when modelling a massive integer spin particle interacting with electromagnetism or gravity, one might try what works for spin 0, i.e. replace every spacetime derivative \(\partial_{X\mu}\) in the action with a covariant derivative \(\nabla_{\mu}\), as well as replace every \(\eta_{\mu\nu}\) with \(g_{\mu\nu}(X)\) and \(d^{d}X\) with \(d^{d}X\sqrt{-g}\) in the case of gravitational interactions. This is often referred to as minimal coupling. Notice however that even this procedure is ambiguous for spins \(n\geq 1\), because one can always re-order the derivatives and write \(\partial_{X\mu}\partial_{X\nu}=\partial_{X\mu}\partial_{X\nu}+a[\partial_{X \mu},\partial_{X\nu}]\) before converting them to covariant derivatives for any \(a\) without changing the free action. Spin 0 is special in this respect because the only appearance of derivatives in the action is in \(\eta^{\mu\nu}\partial_{X\mu}\partial_{X\nu}\). Nevertheless, for the sake of the discussion we will refer to minimal coupling as replacing derivatives with covariant ones in the particular order they appear in (14).
A few words must be said about lifting the hyperspace measure (6) to a curved spacetime manifold \(M\). Apart from the spacetime integration, this measure is just a formal way to implement index contraction in terms of hyperfields. Within the integration, one should think of the auxiliary vectors as living on the tangent space \(s,s^{\prime}\in T_{X}M\). One simply integrates over all components of the tangent vectors \(s,\,s^{\prime}\) at the point \(X\), weighted by \(e^{is\cdot s^{\prime}}\), where \(s\cdot s^{\prime}=g(s,s^{\prime})=g_{\mu\nu}s^{\mu}s^{\prime\nu}\).
The covariant derivative \(\nabla_{\mu}\) on a charge 1 integer spin \(n\) hyperfield \(\Phi_{n}(X,s)\), including both a \(U(1)\) gauge field \(A_{\mu}\), and a Levi-Civita connection \(\Gamma^{\lambda}_{\ \mu\nu}\), which implements the standard covariant derivative on its component field \(\phi_{\mu_{1}\cdots\mu_{n}}(X)\) is
\[\nabla_{\mu}=\partial_{X\mu}-iA_{\mu}-s^{\nu}\Gamma^{\lambda}_{\ \mu\nu}\partial_{s\lambda} \tag{17}\]
Covariant derivatives no longer commute, but instead their commutator when acting on \(\Phi_{n}(X,s)\) equals
\[[\nabla_{\mu},\nabla_{\nu}]=-iF_{\mu\nu}-s^{\sigma}R^{\omega}_{\ \sigma\mu\nu}\partial_{s\omega} \tag{18}\]
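For orientation, and purely as an illustration of the conventions implicit above, acting with this commutator on a rank-1 hyperfield \(\Phi_{1}=i^{-1/2}\phi_{\sigma}(X)s^{\sigma}\) and reading off the coefficient of \(s^{\sigma}\) gives the familiar component statement

\[\left([\nabla_{\mu},\nabla_{\nu}]\phi\right)_{\sigma}=-iF_{\mu\nu}\,\phi_{\sigma}-R^{\omega}{}_{\sigma\mu\nu}\,\phi_{\omega},\]

showing that the electromagnetic field strength and the Riemann tensor measure the failure of covariant derivatives to commute on a charge 1, spin 1 component field.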
As part of this minimal coupling procedure, we will also deform the massive gauge symmetry (9) by replacing spacetime derivatives with covariant derivatives, and \(\eta^{\mu\nu}\) with \(g^{\mu\nu}\), so that it is consistent with \(U(1)\) electromagnetic gauge invariance and general coordinate invariance
\[\delta\Phi_{n}=i^{-1/2}s\cdot\nabla\epsilon_{n-1} \delta\Phi_{n-1}=i^{-1/2}s\cdot\nabla\epsilon_{n-2}+im\,\epsilon_ {n-1}\] \[\delta\Phi_{n-2}=-i^{1/2}s\cdot\nabla\partial_{s}^{2}\epsilon_{n- 1}+2im\,\epsilon_{n-2} \delta\Phi_{n-3}=-i^{1/2}s\cdot\nabla\partial_{s}^{2}\epsilon_{n-2}+3 m\partial_{s}^{2}\epsilon_{n-1} \tag{19}\]
It is straightforward to show that the action (14) is not invariant under the gauge symmetry (13) after minimal coupling, and hence the minimally coupled theory does not consistently describe a massive spin \(n\) particle. Indeed, the gauge variation of the minimally coupled action \(\delta S_{n}\) is a linear combination of terms proportional to \(\delta\mathcal{F}_{n-i}\), and the covariantized version of the massive Bianchi identities (15), (16). This is an important fact for the subsequent analysis, which follows from the massive action being derived from the corresponding massless action \(S_{n,0}\) in \(d+1\) dimensions built from a rank \(n\) double traceless hyperfield \(\Phi_{n,0}(X,s)\) and massless field strength \(\mathcal{F}_{n,0}(X,s)\), described in [39]. Indeed, under the gauge transformation \(\delta\Phi_{n,0}(X,s)=i^{-1/2}s\cdot\partial_{X}\epsilon_{n-1}(X,s)\), for \(\epsilon_{n-1}(X,s)\) a rank \(n-1\) traceless hyperfield, the gauge variation of \(S_{n,0}\) is
\[\delta S_{n,0}=\frac{1}{2}n!\int d^{d+1}X\frac{d^{d+1}s^{dd+1}s^{ \prime}}{(2\pi)^{d+1}}e^{is\cdot s^{\prime}}\times\\ \Big{(}\Phi_{n,0}(X,s)\big{(}1-\frac{1}{4}s^{\prime 2}\partial_{s^ {\prime}}^{2}\big{)}\delta\mathcal{F}_{n,0}(X,s^{\prime})-i^{1/2}\epsilon_{n -1}(X,s)\big{(}\partial_{s^{\prime}}\cdot\partial_{X}\mathcal{F}_{n,0}-\frac{ 1}{2}s^{\prime}\cdot\partial_{X}\partial_{s^{\prime}}^{2}\mathcal{F}_{n,0} \big{)}\Big{)} \tag{16}\]
After dimensional reduction, the gauge variation of the massless \(d+1\) dimensional field strength \(\delta\mathcal{F}_{n,0}\) decomposes into the massive \(d\) dimensional variations \(\delta\mathcal{F}_{n-i}\), and the expression \(\partial_{s}\cdot\partial_{X}\mathcal{F}_{n,0}-\frac{1}{2}s\cdot\partial_{X} \partial_{s}^{2}\mathcal{F}_{n,0}\) decomposes into the two massive Bianchi identities (15), (16). These are no longer zero after minimal coupling, but instead equal
\[\delta\mathcal{F}_{n} =i^{-1/2}(\nabla^{2}s\cdot\nabla-s\cdot\nabla[\partial_{s}\cdot \nabla,s\cdot\nabla])\epsilon_{n-1} \tag{17}\] \[\delta\mathcal{F}_{n-1} =i^{-1/2}(\nabla^{2}s\cdot\nabla-s\cdot\nabla[\partial_{s}\cdot \nabla,s\cdot\nabla])\epsilon_{n-2}+im(\nabla^{2}-[\partial_{s}\cdot\nabla,s \cdot\nabla])\epsilon_{n-1}\] (18) \[\delta\mathcal{F}_{n-2} =-i^{1/2}(\nabla^{2}s\cdot\nabla-s\cdot\nabla[\partial_{s}\cdot \nabla,s\cdot\nabla])\partial_{s}^{2}\epsilon_{n-1}+2im(\nabla^{2}-[\partial_ {s}\cdot\nabla,s\cdot\nabla])\epsilon_{n-2}\] (19) \[\delta\mathcal{F}_{n-3} =-i^{1/2}(\nabla^{2}s\cdot\nabla-s\cdot\nabla[\partial_{s}\cdot \nabla,s\cdot\nabla])\partial_{s}^{2}\epsilon_{n-2}+3m(\nabla^{2}-[\partial_{s }\cdot\nabla,s\cdot\nabla])\partial_{s}^{2}\epsilon_{n-1} \tag{20}\]
\[\partial_{s}\cdot\nabla\mathcal{F}_{n}-\frac{1}{2}s\cdot\nabla \partial_{s}^{2}\mathcal{F}_{n}+\frac{i}{2}s\cdot\nabla\mathcal{F}_{n-2}+i^{1 /2}m\mathcal{F}_{n-1}=\\ (\partial_{s}\cdot\nabla-\frac{1}{2}s\cdot\nabla\partial_{s}^{2} )(\nabla^{2}-[\partial_{s}\cdot\nabla,s\cdot\nabla])\Phi_{n}+[\partial_{s} \cdot\nabla-\frac{1}{2}s\cdot\nabla\partial_{s}^{2},[\partial_{s}\cdot\nabla,s \cdot\nabla]]\Phi_{n}\\ +\frac{i}{2}s\cdot\nabla(\nabla^{2}-[\partial_{s}\cdot\nabla,s \cdot\nabla])\Phi_{n-2}+\frac{i}{2}[s\cdot\nabla,[\partial_{s}\cdot\nabla,s \cdot\nabla]]\Phi_{n-2}\\ +i^{1/2}m(\nabla^{2}-[\partial_{s}\cdot\nabla,s\cdot\nabla])\Phi_ {n-1} \tag{21}\]
\[\partial_{s}\cdot\nabla\mathcal{F}_{n-1}-\frac{1}{2}s\cdot\nabla \partial_{s}^{2}\mathcal{F}_{n-1}+\frac{i}{2}s\cdot\nabla\mathcal{F}_{n-3}+ \frac{1}{2}i^{-1/2}m\partial_{s}^{2}\mathcal{F}_{n}+\frac{1}{2}i^{1/2}m\mathcal{ F}_{n-2}=\\ (\partial_{s}\cdot\nabla-\frac{1}{2}s\cdot\nabla\partial_{s}^{2} )(\nabla^{2}-[\partial_{s}\cdot\nabla,s\cdot\nabla])\Phi_{n-1}+[\partial_{s} \cdot\nabla-\frac{1}{2}s\cdot\nabla\partial_{s}^{2},[\partial_{s}\cdot\nabla,s \cdot\nabla]]\Phi_{n-1}\\ +\frac{i}{2}s\cdot\nabla(\nabla^{2}-[\partial_{s}\cdot\nabla,s \cdot\nabla])\Phi_{n-3}+\frac{i}{2}[s\cdot\nabla,[\partial_{s}\cdot\nabla,s \cdot\nabla]]\Phi_{n-3}\\ +\frac{1}{2}i^{-1/2}m(\nabla^{2}-[\partial_{s}\cdot\nabla,s\cdot \nabla])\partial_{s}^{2}\Phi_{n}+\frac{1}{2}i^{1/2}m(\nabla^{2}-[\partial_{s} \cdot\nabla,s\cdot\nabla])\Phi_{n-2} \tag{22}\]
One can check that these expressions are proportional to electromagnetic and gravitational curvatures \(F_{\mu\nu}\), \(R_{\omega\sigma\mu\nu}\).
#### 3.1.1 Restoring gauge symmetry
In order to restore the massive gauge symmetry, additional interactions with electromagnetism and gravity must be added to the action. Because the gauge violation of the action \(\delta S_{n}\neq 0\) depends on the properties of the field strengths \({\cal F}_{n-i}\), we may consider restoring the gauge symmetry by making appropriate modifications of the field strengths \({\cal F}_{n-i}\rightarrow{\cal F}_{n-i}+\Delta{\cal F}_{n-i}\), so that they are again gauge invariant and satisfy the massive Bianchi identities.
For instance, one could choose \(\Delta{\cal F}^{(0)}_{n-i}=([\partial_{s}\cdot\nabla,s\cdot\nabla]-\nabla^{2}) \Phi_{n-i}\), which cancels all gauge violations in (3.1)-(3.10) which are \({\cal O}(m)\). The new field strengths \({\cal F}^{(0)}_{n-i}\equiv{\cal F}_{n-i}+\Delta{\cal F}^{(0)}_{n-i}\) still have \({\cal O}(1)\) in \(m\) gauge violations
\[\delta{\cal F}^{(0)}_{n} =i^{-1/2}[[\partial_{s}\cdot\nabla,s\cdot\nabla],s\cdot\nabla] \epsilon_{n-1} \tag{3.11}\] \[\delta{\cal F}^{(0)}_{n-1} =i^{-1/2}[[\partial_{s}\cdot\nabla,s\cdot\nabla],s\cdot\nabla] \epsilon_{n-2}\] (3.12) \[\delta{\cal F}^{(0)}_{n-2} =-i^{1/2}[[\partial_{s}\cdot\nabla,s\cdot\nabla],s\cdot\nabla] \partial_{s}^{2}\epsilon_{n-1}\] (3.13) \[\delta{\cal F}^{(0)}_{n-3} =-i^{1/2}[[\partial_{s}\cdot\nabla,s\cdot\nabla],s\cdot\nabla] \partial_{s}^{2}\epsilon_{n-2} \tag{3.14}\]
\[\partial_{s}\cdot\nabla{\cal F}^{(0)}_{n}-\frac{1}{2}s\cdot\nabla \partial_{s}^{2}{\cal F}^{(0)}_{n}+\frac{i}{2}s\cdot\nabla{\cal F}^{(0)}_{n-2} +i^{1/2}m{\cal F}^{(0)}_{n-1}=\] \[[\partial_{s}\cdot\nabla-\frac{1}{2}s\cdot\nabla\partial_{s}^{2},[\partial_{s}\cdot\nabla,s\cdot\nabla]]\Phi_{n}+\frac{i}{2}[s\cdot\nabla,[ \partial_{s}\cdot\nabla,s\cdot\nabla]]\Phi_{n-2} \tag{3.15}\]
\[\partial_{s}\cdot\nabla{\cal F}^{(0)}_{n-1}-\frac{1}{2}s\cdot \nabla\partial_{s}^{2}{\cal F}^{(0)}_{n-1}+\frac{i}{2}s\cdot\nabla{\cal F}^{(0 )}_{n-3}+\frac{1}{2}i^{-1/2}m\partial_{s}^{2}{\cal F}^{(0)}_{n}+\frac{1}{2}i^ {1/2}m{\cal F}^{(0)}_{n-2}=\] \[[\partial_{s}\cdot\nabla-\frac{1}{2}s\cdot\nabla\partial_{s}^{2},[\partial_{s}\cdot\nabla,s\cdot\nabla]]\Phi_{n-1}+\frac{i}{2}[s\cdot\nabla,[ \partial_{s}\cdot\nabla,s\cdot\nabla]]\Phi_{n-3} \tag{3.16}\]
Because of the appearance of \(m\) in the gauge transformations (3.1) and massive Bianchi identities, one can further suppress the gauge violations to \({\cal O}(1/m)\), cancelling the \({\cal O}(1)\) violations, by adding a \(\Delta{\cal F}^{(1)}_{n-i}\) which is proportional to \(1/m\). This can be done indefinitely, generating an infinite series of \(\Delta{\cal F}^{(N)}_{n-i}\), each proportional to \(1/m^{N}\), for all integers \(N\geq 0\), until the action is exactly gauge invariant.
To prove this, we proceed by induction. Suppose that we have found modified field strengths \({\cal F}^{(2k)}_{n-i}\) which suppress all gauge violations to be \({\cal O}(1/m^{2k})\), which we write as
\[\delta{\cal F}^{(2k)}_{n} =\frac{1}{m^{2k}}\hat{\cal O}^{(2k)}_{n}\epsilon_{n-1} \tag{3.17}\] \[\delta{\cal F}^{(2k)}_{n-1} =\frac{1}{m^{2k}}\hat{\cal O}^{(2k)}_{n-1}\epsilon_{n-2}\] (3.18) \[\delta{\cal F}^{(2k)}_{n-2} =\frac{1}{m^{2k}}\hat{\cal O}^{(2k)}_{n-2}\epsilon_{n-1}\] (3.19) \[\delta{\cal F}^{(2k)}_{n-3} =\frac{1}{m^{2k}}\hat{\cal O}^{(2k)}_{n-3}\epsilon_{n-2} \tag{3.20}\]
\[\partial_{s}\cdot\nabla{\cal F}^{(2k)}_{n}-\frac{1}{2}s\cdot\nabla\partial_{s}^{2}{\cal F}^{(2k)}_{n}+\frac{i}{2}s\cdot\nabla{\cal F}^{(2k)}_{n-2}+i^{1/2}m{\cal F}^{(2k)}_{n-1}=\frac{1}{m^{2k}}\hat{\cal U}^{(2k)}_{n}\Phi_{n}+\frac{1}{m^{2k}}\hat{\cal U}^{(2k)}_{n-2}\Phi_{n-2} \tag{3.21}\] \[\partial_{s}\cdot\nabla{\cal F}^{(2k)}_{n-1}-\frac{1}{2}s\cdot\nabla\partial_{s}^{2}{\cal F}^{(2k)}_{n-1}+\frac{i}{2}s\cdot\nabla{\cal F}^{(2k)}_{n-3}+\frac{1}{2}i^{-1/2}m\partial_{s}^{2}{\cal F}^{(2k)}_{n}+\frac{1}{2}i^{1/2}m{\cal F}^{(2k)}_{n-2}=\\ \frac{1}{m^{2k}}\hat{\cal U}^{(2k)}_{n-1}\Phi_{n-1}+\frac{1}{m^{2k}}\hat{\cal U}^{(2k)}_{n-3}\Phi_{n-3} \tag{3.22}\]
for some operators \(\hat{\cal O}^{(2k)}_{n-i}\), \(\hat{\cal U}^{(2k)}_{n-i}\), where the subscript \(n-i\) does not denote the rank, but the hyperfields \({\cal F}^{(2k)}_{n-i}\), \(\Phi_{n-i}\) they are associated with, respectively. We first show that if \(\hat{\cal O}^{(2k)}_{n-i}\) and \(\hat{\cal U}^{(2k)}_{n-i}\) satisfy conditions (to be determined) ensuring that \(\Delta{\cal F}^{(2k+1)}_{n-i}\) and \(\Delta{\cal F}^{(2k+2)}_{n-i}\) can be constructed so that \({\cal F}^{(2k+2)}_{n-i}\equiv{\cal F}^{(2k)}_{n-i}+\Delta{\cal F}^{(2k+1)}_{n -i}+\Delta{\cal F}^{(2k+2)}_{n-i}\) suppresses the gauge violations to \({\cal O}(1/m^{2k+2})\), then the corresponding operators at the next order \(\hat{\cal O}^{(2k+2)}_{n-i}\) and \(\hat{\cal U}^{(2k+2)}_{n-i}\) will also satisfy those conditions.
To cancel the \({\cal O}(1/m^{2k})\) gauge violations (3.17), (3.20), (3.21) and (3.22), one may choose \(\Delta{\cal F}^{(2k+1)}_{n-i}\) to be
\[\Delta{\cal F}^{(2k+1)}_{n} =\frac{i}{m^{2k+1}}\hat{\cal O}^{(2k)}_{n}\Phi_{n-1} \tag{3.23}\] \[\Delta{\cal F}^{(2k+1)}_{n-1} =-\frac{i^{-1/2}}{m^{2k+1}}\Big{(}\hat{\cal U}^{(2k)}_{n}\Phi_{n} +\hat{\cal U}^{(2k)}_{n-2}\Phi_{n-2}\Big{)}\] (3.24) \[\Delta{\cal F}^{(2k+1)}_{n-2} =-\frac{i^{-1/2}}{m^{2k+1}}\Big{(}(i^{1/2}\partial_{s}^{2}\hat{ \cal O}^{(2k)}_{n}+2\,\hat{\cal U}^{(2k)}_{n-1})\Phi_{n-1}+2\,\hat{\cal U}^{(2 k)}_{n-3}\Phi_{n-3}\Big{)}\] (3.25) \[\Delta{\cal F}^{(2k+1)}_{n-3} =\frac{i}{2m^{2k+1}}\hat{\cal O}^{(2k)}_{n-3}\Phi_{n-2} \tag{3.26}\]
The remaining \({\cal O}(1/m^{2k})\) gauge violations (3.18) and (3.19) must then be cancelled by \(\delta(\Delta{\cal F}^{(2k+1)}_{n-1})\) and \(\delta(\Delta{\cal F}^{(2k+1)}_{n-2})\), respectively. This in turn is only possible if the following conditions hold
\[\hat{\cal C}^{(2k)}_{1} \equiv\hat{\cal O}^{(2k)}_{n-1}-2i^{1/2}\hat{\cal U}^{(2k)}_{n-2}=0 \tag{3.27}\] \[\hat{\cal C}^{(2k)}_{2} \equiv\hat{\cal O}^{(2k)}_{n-2}-i\partial_{s}^{2}\hat{\cal O}^{(2k )}_{n}-2i^{1/2}\hat{\cal U}^{(2k)}_{n-1}-6i^{-1/2}\hat{\cal U}^{(2k)}_{n-3} \partial_{s}^{2}=0 \tag{3.28}\]
Let us assume this is true and proceed with constructing \(\Delta{\cal F}^{(2k+2)}_{n-i}\). The gauge violations associated with \({\cal F}^{(2k+1)}_{n-i}={\cal F}^{(2k)}_{n-i}+\Delta{\cal F}^{(2k+1)}_{n-i}\) are
\[\delta{\cal F}^{(2k+1)}_{n} =\frac{i^{1/2}}{m^{2k+1}}\hat{\cal O}^{(2k)}_{n}s\cdot\nabla\epsilon _{n-2} \tag{3.29}\] \[\delta{\cal F}^{(2k+1)}_{n-1} =\frac{1}{m^{2k+1}}\Big{(}i\,\hat{\cal U}^{(2k)}_{n}s\cdot\nabla+ \hat{\cal U}^{(2k)}_{n-2}s\cdot\nabla\partial_{s}^{2}\Big{)}\epsilon_{n-1}\] (3.30) \[\delta{\cal F}^{(2k+1)}_{n-2} =\frac{1}{m^{2k+1}}\Big{(}-i^{-1/2}\partial_{s}^{2}\hat{\cal O}^{ (2k)}_{n}s\cdot\nabla+2i\,\hat{\cal U}^{(2k)}_{n-1}s\cdot\nabla+2\,\hat{\cal U} ^{(2k)}_{n-3}s\cdot\nabla\partial_{s}^{2}\Big{)}\epsilon_{n-2}\] (3.31) \[\delta{\cal F}^{(2k+1)}_{n-3} =\frac{i^{-1/2}}{2m^{2k+1}}\hat{\cal O}^{(2k)}_{n-3}s\cdot\nabla \partial_{s}^{2}\epsilon_{n-1} \tag{3.32}\]
\[\partial_{s}\cdot\nabla\mathcal{F}_{n}^{(2k+1)}-\frac{1}{2}s\cdot\nabla\partial_{s}^{2}\mathcal{F}_{n}^{(2k+1)}+\frac{i}{2}s\cdot\nabla\mathcal{F}_{n-2}^{(2k+1)}+i^{1/2}m\mathcal{F}_{n-1}^{(2k+1)}=\] \[\qquad\frac{1}{m^{2k+1}}\Big{(}i(\partial_{s}\cdot\nabla-s\cdot\nabla\partial_{s}^{2})\hat{\mathcal{O}}_{n}^{(2k)}-i^{1/2}s\cdot\nabla\,\hat{\mathcal{U}}_{n-1}^{(2k)}\Big{)}\Phi_{n-1}-\frac{i^{1/2}}{m^{2k+1}}s\cdot\nabla\,\hat{\mathcal{U}}_{n-3}^{(2k)}\Phi_{n-3} \tag{3.33}\]
\[\partial_{s}\cdot\nabla\mathcal{F}_{n-1}^{(2k+1)}-\frac{1}{2}s \cdot\nabla\partial_{s}^{2}\mathcal{F}_{n-1}^{(2k+1)}+\frac{i}{2}s\cdot \nabla\mathcal{F}_{n-3}^{(2k+1)}+\frac{1}{2}i^{-1/2}m\partial_{s}^{2} \mathcal{F}_{n}^{(2k+1)}+\frac{1}{2}i^{1/2}m\mathcal{F}_{n-2}^{(2k+1)}=\] \[\qquad-\frac{i^{-1/2}}{m^{2k+1}}(\partial_{s}\cdot\nabla-\frac{1 }{2}s\cdot\nabla\partial_{s}^{2})(\hat{\mathcal{U}}_{n}^{(2k)}\Phi_{n}+\hat{ \mathcal{U}}_{n-2}^{(2k)}\Phi_{n-2})-\frac{1}{4m^{2k+1}}s\cdot\nabla\hat{ \mathcal{O}}_{n-3}^{(2k)}\Phi_{n-2} \tag{3.34}\]
Again, it is straightforward to find modifications \(\Delta\mathcal{F}_{n-i}^{(2k+2)}\) which cancel the \(\mathcal{O}(1/m^{2k+1})\) gauge violations in (3.29), (3.32), (3.33) and (3.34)
\[\Delta\mathcal{F}_{n}^{(2k+2)}= -\frac{i^{-1/2}}{2m^{2k+2}}\hat{\mathcal{O}}_{n}^{(2k)}s\cdot \nabla\Phi_{n-2} \tag{3.35}\] \[\Delta\mathcal{F}_{n-1}^{(2k+2)}= -\frac{i^{1/2}}{m^{2k+2}}\Big{(}(\partial_{s}\cdot\nabla-s\cdot \nabla\partial_{s}^{2})\hat{\mathcal{O}}_{n}^{(2k)}-i^{-1/2}s\cdot\nabla\hat {\mathcal{U}}_{n-1}^{(2k)}\Big{)}\Phi_{n-1}\] \[+\frac{1}{m^{2k+2}}s\cdot\nabla\hat{\mathcal{U}}_{n-3}^{(2k)} \Phi_{n-3}\] (3.36) \[\Delta\mathcal{F}_{n-2}^{(2k+2)}= -2\frac{i}{m^{2k+2}}(\partial_{s}\cdot\nabla-\frac{1}{2}s\cdot \nabla\partial_{s}^{2})(\hat{\mathcal{U}}_{n}^{(2k)}\Phi_{n}+\hat{\mathcal{U}}_ {n-2}^{(2k)}\Phi_{n-2})\] \[+\frac{i^{-1/2}}{2m^{2k+2}}(s\cdot\nabla\hat{\mathcal{O}}_{n-3}^{ (2k)}-i\partial_{s}^{2}\hat{\mathcal{O}}_{n}^{(2k)}s\cdot\nabla)\Phi_{n-2}\] (3.37) \[\Delta\mathcal{F}_{n-3}^{(2k+2)}= \frac{i^{1/2}}{2m^{2k+2}}\hat{\mathcal{O}}_{n-3}^{(2k)}s\cdot \nabla\partial_{s}^{2}\Phi_{n-1} \tag{3.38}\]
Demanding that \(\delta(\Delta\mathcal{F}_{n-1}^{(2k+2)})\) and \(\delta(\Delta\mathcal{F}_{n-2}^{(2k+2)})\) also cancel the \(\mathcal{O}(1/m^{2k+1})\) gauge violations in (3.30) and (3.31) imposes further conditions on \(\hat{\mathcal{O}}_{n-i}^{(2k)}\) and \(\hat{\mathcal{U}}_{n-i}^{(2k)}\)
\[\hat{\mathcal{C}}_{3}^{(2k)}\equiv \,i^{-1/2}s\cdot\nabla\hat{\mathcal{O}}_{n-3}^{(2k)}+2\,\hat{ \mathcal{U}}_{n-1}^{(2k)}s\cdot\nabla\] \[-4i(\partial_{s}\cdot\nabla-\frac{1}{2}s\cdot\nabla\partial_{s}^{ 2})\hat{\mathcal{U}}_{n-2}^{(2k)}-2i\,\hat{\mathcal{U}}_{n-3}^{(2k)}s\cdot \nabla\partial_{s}^{2}=0 \tag{3.39}\] \[\hat{\mathcal{C}}_{4}^{(2k)}\equiv \,i^{1/2}(\partial_{s}\cdot\nabla-s\cdot\nabla\partial_{s}^{2}) \hat{\mathcal{O}}_{n}^{(2k)}-\hat{\mathcal{U}}_{n}^{(2k)}s\cdot\nabla\] \[-s\cdot\nabla\,\hat{\mathcal{U}}_{n-1}^{(2k)}+i\,\hat{\mathcal{U }}_{n-2}^{(2k)}s\cdot\nabla\partial_{s}^{2}+3is\cdot\nabla\,\hat{\mathcal{U}}_ {n-3}^{(2k)}\partial_{s}^{2}=0 \tag{3.40}\]
Assuming that these conditions are satisfied, we may use \(\mathcal{F}_{n-i}^{(2k+2)}=\mathcal{F}_{n-i}^{(2k+1)}+\Delta\mathcal{F}_{n-i}^{(2 k+2)}\) to find the corresponding operators at the next order \(\hat{\mathcal{O}}_{n-i}^{(2k+2)}\) and \(\hat{\mathcal{U}}_{n-i}^{(2k+2)}\) associated with
\({\cal O}(1/m^{2k+2})\) gauge violations
\[\hat{\cal O}_{n}^{(2k+2)}= \frac{1}{2}\hat{\cal O}_{n}^{(2k)}(s\cdot\nabla)^{2}\partial_{s}^{2} \tag{3.41}\] \[\hat{\cal O}_{n-1}^{(2k+2)}= -(\partial_{s}\cdot\nabla-s\cdot\nabla\partial_{s}^{2})\hat{\cal O}_{n}^{(2k)}s\cdot\nabla+i^{-1/2}s\cdot\nabla\,\hat{\cal U}_{n-1}^{(2k)}s\cdot\nabla-i^{1/2}s\cdot\nabla\,\hat{\cal U}_{n-3}^{(2k)}s\cdot\nabla\partial_{s}^{2} \tag{3.42}\] \[\hat{\cal O}_{n-2}^{(2k+2)}= -2i^{1/2}(\partial_{s}\cdot\nabla-\frac{1}{2}s\cdot\nabla\partial_{s}^{2})(\hat{\cal U}_{n}^{(2k)}s\cdot\nabla-i\,\hat{\cal U}_{n-2}^{(2k)}s\cdot\nabla\partial_{s}^{2})+\frac{i}{2}\partial_{s}^{2}\hat{\cal O}_{n}^{(2k)}(s\cdot\nabla)^{2}\partial_{s}^{2}-\frac{1}{2}s\cdot\nabla\hat{\cal O}_{n-3}^{(2k)}s\cdot\nabla\partial_{s}^{2} \tag{3.43}\] \[\hat{\cal O}_{n-3}^{(2k+2)}= \frac{1}{2}\hat{\cal O}_{n-3}^{(2k)}s\cdot\nabla\partial_{s}^{2}s\cdot\nabla \tag{3.44}\] \[\hat{\cal U}_{n}^{(2k+2)}= s\cdot\nabla(\partial_{s}\cdot\nabla-\frac{1}{2}s\cdot\nabla\partial_{s}^{2})\hat{\cal U}_{n}^{(2k)} \tag{3.45}\] \[\hat{\cal U}_{n-1}^{(2k+2)}= -i^{1/2}(\partial_{s}\cdot\nabla-\frac{1}{2}s\cdot\nabla\partial_{s}^{2})(\partial_{s}\cdot\nabla-s\cdot\nabla\partial_{s}^{2})\hat{\cal O}_{n}^{(2k)}+(\partial_{s}\cdot\nabla-\frac{1}{2}s\cdot\nabla\partial_{s}^{2})s\cdot\nabla\hat{\cal U}_{n-1}^{(2k)}-\frac{1}{4}i^{-1/2}s\cdot\nabla\hat{\cal O}_{n-3}^{(2k)}s\cdot\nabla\partial_{s}^{2} \tag{3.46}\] \[\hat{\cal U}_{n-2}^{(2k+2)}= s\cdot\nabla(\partial_{s}\cdot\nabla-\frac{1}{2}s\cdot\nabla\partial_{s}^{2})\hat{\cal U}_{n-2}^{(2k)}+\frac{1}{4}i^{1/2}(s\cdot\nabla)^{2}\hat{\cal O}_{n-3}^{(2k)}-\frac{1}{2}i^{-1/2}(\partial_{s}\cdot\nabla-s\cdot\nabla\partial_{s}^{2})\hat{\cal O}_{n}^{(2k)}s\cdot\nabla \tag{3.47}\] \[\hat{\cal U}_{n-3}^{(2k+2)}= (\partial_{s}\cdot\nabla-\frac{1}{2}s\cdot\nabla\partial_{s}^{2})s\cdot\nabla\,\hat{\cal U}_{n-3}^{(2k)} \tag{3.48}\]
To recapitulate, given the existence of field strengths \({\cal F}_{n-i}^{(2k)}\) that satisfy (3.17) - (3.22), we have found four conditions (3.27), (3.28), (3.39), and (3.40) on the operators \(\hat{\cal O}_{n-i}^{(2k)}\) and \(\hat{\cal U}_{n-i}^{(2k)}\), which if satisfied enables the construction of \(\Delta{\cal F}_{n-i}^{(2k+1)}\) and \(\Delta{\cal F}_{n-i}^{(2k+2)}\), resulting in improved field strengths \({\cal F}_{n-i}^{(2k+2)}={\cal F}_{n-i}^{(2k)}+\Delta{\cal F}_{n-i}^{(2k+1)}+\Delta{\cal F}_{n-i}^{(2k+2)}\) with \({\cal O}(1/m^{2k+2})\) gauge violations. It is worth mentioning at this stage that this procedure is not unique. For instance, one could add a term proportional to \(\Phi_{n}\) to \(\Delta{\cal F}_{n-3}^{(2k+1)}\). This would add to the action a term linear in \(\Phi_{n}\), and so we ignore this possibility.
The next step is to show that if \(\hat{\cal O}_{n-i}^{(2k)}\) and \(\hat{\cal U}_{n-i}^{(2k)}\) satisfy \(\hat{\cal C}_{j}^{(2k)}=0\) for \(j=1,2,3,4\), then \(\hat{\cal O}_{n-i}^{(2k+2)}\) and \(\hat{\cal U}_{n-i}^{(2k+2)}\) satisfy \(\hat{\cal C}_{j}^{(2k+2)}=0\). This follows after writing \(\hat{\cal O}_{n-i}^{(2k+2)}\) and \(\hat{\cal U}_{n-i}^{(2k+2)}\) in terms of \(\hat{\cal O}_{n-i}^{(2k)}\) and \(\hat{\cal U}_{n-i}^{(2k)}\)
\[\hat{\cal C}_{1}^{(2k+2)}= \,\frac{1}{2}i^{-1/2}s\cdot\nabla\hat{\cal C}_{3}^{(2k)}=0 \tag{3.49}\] \[\hat{\cal C}_{2}^{(2k+2)}= \,2i^{1/2}(\partial_{s}\cdot\nabla-\frac{1}{2}s\cdot\nabla\partial_{s}^{2})\hat{\cal C}_{4}^{(2k)}=0 \tag{3.50}\] \[\hat{\cal C}_{3}^{(2k+2)}= \,(\partial_{s}\cdot\nabla-\frac{1}{2}s\cdot\nabla\partial_{s}^{2})s\cdot\nabla\hat{\cal C}_{3}^{(2k)}=0 \tag{3.51}\] \[\hat{\cal C}_{4}^{(2k+2)}= \,s\cdot\nabla(\partial_{s}\cdot\nabla-\frac{1}{2}s\cdot\nabla\partial_{s}^{2})\hat{\cal C}_{4}^{(2k)}=0 \tag{3.52}\]
The final step is to verify that the operators \(\hat{\mathcal{O}}^{(0)}_{n-i}\) and \(\hat{\mathcal{U}}^{(0)}_{n-i}\) from (3.11) - (3.16) satisfy \(\hat{\mathcal{C}}^{(0)}_{j}=0\). The operators \(\hat{\mathcal{O}}^{(0)}_{n-i}\) and \(\hat{\mathcal{U}}^{(0)}_{n-i}\) associated with minimal coupling are
\[\hat{\mathcal{O}}^{(0)}_{n}=i^{-1/2}[[\partial_{s}\cdot\nabla,s \cdot\nabla],s\cdot\nabla] \qquad\qquad\hat{\mathcal{U}}^{(0)}_{n}=[\partial_{s}\cdot\nabla- \frac{1}{2}s\cdot\nabla\partial_{s}^{2},[\partial_{s}\cdot\nabla,s\cdot\nabla]] \tag{3.53}\] \[\hat{\mathcal{O}}^{(0)}_{n-1}=i^{-1/2}[[\partial_{s}\cdot\nabla,s \cdot\nabla],s\cdot\nabla] \qquad\qquad\hat{\mathcal{U}}^{(0)}_{n-1}=[\partial_{s}\cdot\nabla- \frac{1}{2}s\cdot\nabla\partial_{s}^{2},[\partial_{s}\cdot\nabla,s\cdot\nabla]]\] (3.54) \[\hat{\mathcal{O}}^{(0)}_{n-2}=-i^{1/2}[[\partial_{s}\cdot\nabla,s \cdot\nabla],s\cdot\nabla]\partial_{s}^{2}\quad\hat{\mathcal{U}}^{(0)}_{n-2}= \frac{i}{2}[s\cdot\nabla,[\partial_{s}\cdot\nabla,s\cdot\nabla]]\] (3.55) \[\hat{\mathcal{O}}^{(0)}_{n-3}=-i^{1/2}[[\partial_{s}\cdot\nabla,s \cdot\nabla],s\cdot\nabla]\partial_{s}^{2}\quad\hat{\mathcal{U}}^{(0)}_{n-3}= \frac{i}{2}[s\cdot\nabla,[\partial_{s}\cdot\nabla,s\cdot\nabla]] \tag{3.56}\]
That these satisfy \(\hat{\mathcal{C}}^{(0)}_{j}=0\) is straightforward to verify, thus providing the base case for the induction argument to go through. The modifications \(\Delta\mathcal{F}^{(N)}_{n-i}\) (3.23) - (3.26) and (3.35) - (3.38) provide a gauge invariant completion of minimal coupling.
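As a quick cross-check, the first of these conditions can also be verified with elementary computer algebra. The sketch below is not part of the derivation above: it treats \(\partial_{s}\cdot\nabla\) and \(s\cdot\nabla\) as abstract non-commuting symbols, so it only probes the part of \(\hat{\mathcal{C}}^{(0)}_{1}=0\) that follows from commutator antisymmetry; the remaining three conditions also use the explicit algebra of \(\partial_{s}^{2}\) with \(s\cdot\nabla\) and are not reproduced here.

```python
import sympy as sp

# Check hat{C}_1^(0) = O^(0)_{n-1} - 2 i^{1/2} U^(0)_{n-2} = 0 (eq. 3.27)
# for the minimal-coupling data (3.53)-(3.56).  Writing D = d_s.nabla and
# S = s.nabla as abstract non-commuting symbols and multiplying by i^{1/2}
# (which removes the half-integer powers of i), the condition becomes
#   [[D,S],S] + [S,[D,S]] = 0,
# an identity that holds in the free algebra.
D, S = sp.symbols('D S', commutative=False)
comm = lambda a, b: a*b - b*a

# i^{1/2} O^(0)_{n-1} = [[D,S],S];  2 i^{1/2} * i^{1/2} U^(0)_{n-2} = -[S,[D,S]]
C1_rescaled = comm(comm(D, S), S) + comm(S, comm(D, S))
print(sp.expand(C1_rescaled))   # -> 0
```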
#### 3.1.2 Non-minimal gauge invariant interactions
As we saw in the previous section, minimally coupling \(\Phi_{n-i}\) to electromagnetism and gravity does not preserve the massive gauge symmetry (3.3), but there exist modifications \(\Delta{\cal F}^{(N)}_{n-i}\) to the field strengths \({\cal F}_{n-i}\rightarrow{\cal F}_{n-i}+\sum_{N=0}^{\infty}\Delta{\cal F}^{(N)}_{n-i}\) which restore the gauge symmetry exactly.
Apart from minimal coupling, which gives \(\Phi_{n}\) a charge/mass, there are non-minimal couplings one might consider adding to the theory. For instance, one might further consider adding a term \(\Delta\mathcal{F}^{(0)}_{n}=g_{0}([\partial_{s}\cdot\nabla,s\cdot\nabla]-\nabla^{2})\Phi_{n}\) to \(\mathcal{F}_{n}\), which is related to the magnetic dipole and gravitational quadrupole moment couplings, because
\[[\partial_{s}\cdot\nabla,s\cdot\nabla]-\nabla^{2}=is^{\mu}F_{\mu \nu}\partial_{s}^{\nu}+s^{\mu}R_{\mu\nu}\partial_{s}^{\nu}-s^{\sigma}s^{\nu}R _{\omega\sigma\mu\nu}\partial_{s}^{\mu}\partial_{s}^{\nu} \tag{3.57}\]
and, as explained in [29], this is precisely how such a coupling occurs at the level of the equations of motion. Adding such terms will, like in the case of minimal coupling, break massive gauge invariance. Luckily, the analysis of subsubsection 3.1.1 provides a simple prescription for adding further interactions which restores the gauge symmetry. Indeed, we may consider an _arbitrary_ non-minimal coupling operator \(\hat{\mathcal{M}}\) with mass dimension \(2k+2\) which is invariant with respect to the massive gauge symmetry, and add
\[\Delta\mathcal{F}^{(2k)}_{n}=\frac{1}{m^{2k}}\hat{\mathcal{M}} \Phi_{n}, \Delta\mathcal{F}^{(2k)}_{n-2}=\frac{i}{m^{2k}}\partial_{s}^{2}\hat{ \mathcal{M}}\Phi_{n} \tag{3.58}\]
This introduces \(\mathcal{O}(1/m^{2k})\) gauge violations, with associated operators \(\hat{\mathcal{O}}^{(2k)}_{n-i}\) and \(\hat{\mathcal{U}}^{(2k)}_{n-i}\) equal to
\[\hat{\mathcal{O}}^{(2k)}_{n} =i^{-1/2}\hat{\mathcal{M}}s\cdot\nabla \tag{3.59}\] \[\hat{\mathcal{O}}^{(2k)}_{n-2} =i^{1/2}\partial_{s}^{2}\hat{\mathcal{M}}s\cdot\nabla\] (3.60) \[\hat{\mathcal{U}}^{(2k)}_{n} =(\partial_{s}\cdot\nabla-s\cdot\nabla\partial_{s}^{2})\hat{ \mathcal{M}} \tag{3.61}\]
while the rest are zero. These operators trivially satisfy the four conditions \(\hat{\mathcal{C}}^{(2k)}_{j}=0\) for any \(\hat{\mathcal{M}}\). Therefore, the gauge symmetry can be restored in exactly the same manner described in subsubsection 3.1.1. Multipole and tidal couplings are obviously accommodated by this class of gauge invariant interactions.
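The electromagnetic part of the curvature decomposition (3.57) can be checked directly. The following sketch works in flat space with a \(U(1)\) connection only, so that \(\nabla_{\mu}=\partial_{X\mu}-iA_{\mu}\) and the Riemann terms are absent; it is an illustration of that identity, not part of the construction above, and the dimension \(d=2\) is chosen only for brevity.

```python
import sympy as sp

# Flat space, U(1) only, d = 2, eta = diag(+1, -1): check that
#   ([d_s.nabla, s.nabla] - nabla^2) f = i s^mu F_{mu nu} d_s^nu f
# on a generic hyperfield f(X, s), with nabla_mu = d/dX^mu - i A_mu(X).
x = sp.symbols('x0 x1')
s = sp.symbols('s0 s1')
eta = sp.diag(1, -1)
A = [sp.Function(f'A{m}')(*x) for m in range(2)]
f = sp.Function('f')(*x, *s)

def nabla(expr, mu):                  # charge-1 covariant derivative
    return sp.diff(expr, x[mu]) - sp.I*A[mu]*expr

def s_dot_nabla(expr):                # s^mu nabla_mu
    return sum(s[m]*nabla(expr, m) for m in range(2))

def ds_dot_nabla(expr):               # eta^{mu nu} d/ds^mu nabla_nu
    return sum(eta[m, n]*sp.diff(nabla(expr, n), s[m])
               for m in range(2) for n in range(2))

def box(expr):                        # eta^{mu nu} nabla_mu nabla_nu
    return sum(eta[m, n]*nabla(nabla(expr, n), m)
               for m in range(2) for n in range(2))

lhs = ds_dot_nabla(s_dot_nabla(f)) - s_dot_nabla(ds_dot_nabla(f)) - box(f)

F = [[sp.diff(A[n], x[m]) - sp.diff(A[m], x[n]) for n in range(2)]
     for m in range(2)]               # F_{mu nu}
rhs = sp.I*sum(s[m]*F[m][n]*eta[n, p]*sp.diff(f, s[p])
               for m in range(2) for n in range(2) for p in range(2))

print(sp.simplify(sp.expand(lhs - rhs)))   # -> 0
```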
#### 3.1.3 Resummed non-local field strengths
Gauge invariance requires infinitely many terms in the action which are arbitrarily high order in derivatives, making interactions inherently non-local in nature. To make this manifest, we may formally resum the modifications \(\Delta\mathcal{F}^{(N)}_{n-i}\) and write the interactions in terms of non-local operators.
Suppose we have field strengths \(\mathcal{F}^{(2k)}_{n-i}\) with \(\mathcal{O}(1/m^{2k})\) gauge violations that can be cancelled via modifications \(\Delta\mathcal{F}^{(N)}_{n-i}\) for all \(N>2k\) in the way described in subsubsection 3.1.1. The gauge invariant field strengths \(\mathcal{F}_{n-i}=\mathcal{F}^{(2k)}_{n-i}+\sum_{N=2k+1}^{\infty}\Delta \mathcal{F}^{(N)}_{n-i}\) may then be written as
\[\mathcal{F}_{n}=\mathcal{F}^{(2k)}_{n}+\frac{i}{m^{2k+1}}\hat{ \mathcal{O}}^{(2k)}_{n}\frac{1}{1-\frac{1}{2m^{2}}(s\cdot\nabla)^{2}\partial_ {s}^{2}}X_{n-1} \tag{3.62}\] \[\mathcal{F}_{n-1}=\mathcal{F}^{(2k)}_{n-1}-\frac{i^{-1/2}}{m^{2k+ 1}}\frac{1}{1-\frac{1}{m^{2}}s\cdot\nabla(\partial_{s}\cdot\nabla-\frac{1}{2}s \cdot\nabla\partial_{s}^{2})}\Big{(}\hat{\mathcal{U}}^{(2k)}_{n}\Phi_{n}+\hat {\mathcal{U}}^{(2k)}_{n-2}\Phi_{n-2}\Big{)}\] \[+\frac{1}{m^{2k+2}}\frac{1}{1-\frac{1}{m^{2}}s\cdot\nabla( \partial_{s}\cdot\nabla-\frac{1}{2}s\cdot\nabla\partial_{s}^{2})}s\cdot \nabla\Big{(}\hat{\mathcal{U}}^{(2k)}_{n-1}\Phi_{n-1}+\hat{\mathcal{U}}^{(2k) }_{n-3}\Phi_{n-3}\Big{)}\] \[-\frac{i^{1/2}}{m^{2k+2}}\frac{1}{1-\frac{1}{m^{2}}s\cdot\nabla( \partial_{s}\cdot\nabla-\frac{1}{2}s\cdot\nabla\partial_{s}^{2})}(\partial_{s }\cdot\nabla-s\cdot\nabla\partial_{s}^{2})\hat{\mathcal{O}}^{(2k)}_{n}\frac{1} {1-\frac{1}{2m^{2}}(s\cdot\nabla)^{2}\partial_{s}^{2}}X_{n-1}\] \[-\frac{1}{4m^{2k+3}}\frac{1}{1-\frac{1}{m^{2}}s\cdot\nabla( \partial_{s}\cdot\nabla-\frac{1}{2}s\cdot\nabla\partial_{s}^{2})}(s\cdot \nabla)^{2}\hat{\mathcal{O}}^{(2k)}_{n-3}\frac{1}{1-\frac{1}{2m^{2}}s\cdot \nabla\partial_{s}^{2}s\cdot\nabla}X_{n-2}\] (3.63) \[\mathcal{F}_{n-2}=\mathcal{F}^{(2k)}_{n-2}-2\frac{i^{-1/2}}{m^{2k+ 1}}\frac{1}{1-\frac{1}{m^{2}}(\partial_{s}\cdot\nabla-\frac{1}{2}s\cdot \nabla\partial_{s}^{2})s\cdot\nabla}\Big{(}\hat{\mathcal{U}}^{(2k)}_{n-1}\Phi_ {n-1}+\hat{\mathcal{U}}^{(2k)}_{n-3}\Phi_{n-3}\Big{)}\] \[-\frac{1}{m^{2k+1}}\frac{1}{1-\frac{1}{m^{2}}(\partial_{s}\cdot \nabla-\frac{1}{2}s\cdot\nabla\partial_{s}^{2})s\cdot\nabla}\partial_{s}^{2} \hat{\mathcal{O}}^{(2k)}_{n}\frac{1}{1-\frac{1}{2m^{2}}(s\cdot\nabla)^{2} \partial_{s}^{2}}X_{n-1}\] \[-2\frac{i}{m^{2k+2}}\frac{1}{1-\frac{1}{m^{2}}(\partial_{s}\cdot \nabla-\frac{1}{2}s\cdot\nabla\partial_{s}^{2})s\cdot\nabla}(\partial_{s} \cdot\nabla-\frac{1}{2}s\cdot\nabla\partial_{s}^{2})\Big{(}\hat{\mathcal{U}} ^{(2k)}_{n}\Phi_{n}+\hat{\mathcal{U}}^{(2k)}_{n-2}\Phi_{n-2}\Big{)}\] \[+\frac{i^{-1/2}}{2m^{2k+2}}\frac{1}{1-\frac{1}{m^{2}}(\partial_{s} \cdot\nabla-\frac{1}{2}s\cdot\nabla\partial_{s}^{2})s\cdot\nabla}s\cdot\nabla \hat{\mathcal{O}}^{(2k)}_{n-3}\frac{1}{1-\frac{1}{2m^{2}}s\cdot\nabla\partial _{s}^{2}s\cdot\nabla}X_{n-2}\] \[+\frac{2}{m^{2k+3}}\frac{1}{1-\frac{1}{m^{2}}(\partial_{s}\cdot \nabla-\frac{1}{2}s\cdot\nabla\partial_{s}^{2})s\cdot\nabla}(\partial_{s}\cdot \nabla-\frac{1}{2}s\cdot\nabla\partial_{s}^{2})^{2}\hat{\mathcal{O}}^{(2k)}_{n} \frac{1}{1-\frac{1}{2m^{2}}(s\cdot\nabla)^{2}\partial_{s}^{2}}X_{n-1}\] (3.64) \[\mathcal{F}_{n-3}=\mathcal{F}^{(2k)}_{n-3}+\frac{i}{2m^{2k+1}} \hat{\mathcal{O}}^{(2k)}_{n-3}\frac{1}{1-\frac{1}{2m^{2}}s\cdot\nabla\partial _{s}^{2}s\cdot\nabla}X_{n-2} \tag{3.65}\]
where we have defined \(X_{n-1}\equiv\Phi_{n-1}+\frac{i^{1/2}}{2m}s\cdot\nabla\Phi_{n-2}\) and \(X_{n-2}\equiv\Phi_{n-2}+\frac{i^{-1/2}}{m}s\cdot\nabla\partial_{s}^{2}\Phi_{n -1}\). One can check that these expressions are gauge invariant and satisfy the covariantized
massive Bianchi identities. To simplify these expressions, we are free to gauge fix, by for instance choosing \(\Phi_{n-1},\Phi_{n-2}=0\)
\[\mathcal{F}_{n} =\mathcal{F}_{n}^{(2k)} \tag{3.66}\] \[\mathcal{F}_{n-1} =\mathcal{F}_{n-1}^{(2k)}-\frac{i^{-1/2}}{m^{2k+1}}\frac{1}{1- \frac{1}{m^{2}}s\cdot\nabla(\partial_{s}\cdot\nabla-\frac{1}{2}s\cdot\nabla \partial_{s}^{2})}\hat{\mathcal{U}}_{n}^{(2k)}\Phi_{n}\] \[\quad+\frac{1}{m^{2k+2}}\frac{1}{1-\frac{1}{m^{2}}s\cdot\nabla( \partial_{s}\cdot\nabla-\frac{1}{2}s\cdot\nabla\partial_{s}^{2})}s\cdot\nabla \hat{\mathcal{U}}_{n-3}^{(2k)}\Phi_{n-3}\] (3.67) \[\mathcal{F}_{n-2} =\mathcal{F}_{n-2}^{(2k)}-2\frac{i^{-1/2}}{m^{2k+1}}\frac{1}{1- \frac{1}{m^{2}}(\partial_{s}\cdot\nabla-\frac{1}{2}s\cdot\nabla\partial_{s}^{ 2})s\cdot\nabla}\hat{\mathcal{U}}_{n-3}^{(2k)}\Phi_{n-3}\] \[\quad-2\frac{i}{m^{2k+2}}\frac{1}{1-\frac{1}{m^{2}}(\partial_{s} \cdot\nabla-\frac{1}{2}s\cdot\nabla\partial_{s}^{2})s\cdot\nabla}(\partial_{s }\cdot\nabla-\frac{1}{2}s\cdot\nabla\partial_{s}^{2})\hat{\mathcal{U}}_{n}^{( 2k)}\Phi_{n}\] (3.68) \[\mathcal{F}_{n-3} =\mathcal{F}_{n-3}^{(2k)} \tag{3.69}\]
For spins \(n=0,1\), the non-local gauge invariant completions truncate to a local operator, so that a local field theory is possible. For \(n\geq 2\) on the other hand, non-locality appears to be unavoidable.
#### 3.1.4 Interactions linear in spin \(n\) fields
If the massive spin \(n\) particle has \(U(1)\) charge \(0\), there is another class of interactions, i.e. those which are only linear in the matter fields \(\Phi_{n-i}\) in the action, which can be made gauge invariant. To get a charge \(0\) particle, it is sufficient to impose the reality conditions on the fields \(\tilde{\Phi}_{n}=\Phi_{n}\), \(\tilde{\Phi}_{n-1}=-\Phi_{n-1}\), \(\tilde{\Phi}_{n-2}=\Phi_{n-2}\), and \(\tilde{\Phi}_{n-3}=-\Phi_{n-3}\). The gauge parameters in turn are constrained \(\tilde{\epsilon}_{n-1}=\epsilon_{n-1}\), \(\tilde{\epsilon}_{n-2}=-\epsilon_{n-2}\). If we rewrite everything in terms of real fields by performing the field redefinitions \(\Phi_{n-1}\to i\Phi_{n-1}\), \(\Phi_{n-3}\to i\Phi_{n-3}\), and \(\epsilon_{n-2}\to i\epsilon_{n-2}\), the covariantized massive gauge transformations become
\[\begin{split}\delta\Phi_{n}=i^{-1/2}s\cdot\nabla\epsilon_{n-1}& \delta\Phi_{n-1}=i^{-1/2}s\cdot\nabla\epsilon_{n-2}+m\,\epsilon_{n-1}\\ \delta\Phi_{n-2}=-i^{1/2}s\cdot\nabla\partial_{s}^{2}\epsilon_{n- 1}-2m\,\epsilon_{n-2}&\delta\Phi_{n-3}=-i^{1/2}s\cdot\nabla \partial_{s}^{2}\epsilon_{n-2}-3im\partial_{s}^{2}\epsilon_{n-1}\end{split} \tag{3.70}\]
where \(\nabla_{\mu}=\partial_{X\mu}-s^{\nu}\Gamma^{\lambda}{}_{\mu\nu}\partial_{s\lambda}\).
Coupling \(\Phi_{n-i}\) to hyperfields \(\mathcal{J}_{n-i}\) which are gauge invariant functions of the gravitational and possibly electromagnetic fields by adding to the action
\[\Delta S_{n}=n!\int d^{d}X\frac{d^{d}sd^{d}s^{\prime}}{(2\pi)^{d}}e^{is\cdot s^ {\prime}}\Phi_{n-i}(X,s)\mathcal{J}_{n-i}(X,s^{\prime}) \tag{3.71}\]
where the sum over \(i=0,\ldots,3\) is implied, is gauge invariant provided that the \(\mathcal{J}_{n-i}\) satisfy
\[\partial_{s}\cdot\nabla\mathcal{J}_{n}+is^{2}\partial_{s}\cdot \nabla\mathcal{J}_{n-2}-i^{-1/2}m\mathcal{J}_{n-1}-3i^{1/2}ms^{2}\mathcal{J}_{n -3}=0 \tag{3.72}\] \[\partial_{s}\cdot\nabla\mathcal{J}_{n-1}+is^{2}\partial_{s}\cdot \nabla\mathcal{J}_{n-3}+2i^{-1/2}m\mathcal{J}_{n-2}=0 \tag{3.73}\]
To construct \(\mathcal{J}_{n-i}\) which satisfy (3.72) and (3.73), we may proceed in a similar fashion to subsubsection 3.1.1, by starting with \(\mathcal{J}_{n-i}\) which do not satisfy (3.72) and (3.73), and
finding improvement terms inductively that make the interactions exactly gauge invariant. Instead, we may more simply search for gauge invariant linear combinations of the fields \(\Phi_{n-i}\). There are two gauge invariant linear combinations
\[A_{n}\equiv\Phi_{n}-\frac{i^{-1/2}}{m}s\cdot\nabla\frac{1}{1-\frac {1}{2m^{2}}(s\cdot\nabla)^{2}\partial_{s}^{2}}X_{n-1} \tag{3.74}\] \[A_{n-3}\equiv\Phi_{n-3}+3i\partial_{s}^{2}\frac{1}{1-\frac{1}{2 m^{2}}(s\cdot\nabla)^{2}\partial_{s}^{2}}X_{n-1}-\frac{i^{1/2}}{2m}s\cdot\nabla \partial_{s}^{2}\frac{1}{1-\frac{1}{2m^{2}}s\cdot\nabla\partial_{s}^{2}s\cdot \nabla}X_{n-2} \tag{3.75}\]
where now \(X_{n-1}=\Phi_{n-1}+\frac{i^{-1/2}}{2m}s\cdot\nabla\Phi_{n-2}\) and \(X_{n-2}=\Phi_{n-2}+\frac{i^{1/2}}{m}s\cdot\nabla\partial_{s}^{2}\Phi_{n-1}\). Note that these gauge invariant combinations are non-local for \(n\geq 3\). Using \(A_{n}\) and \(A_{n-3}\), we may construct gauge invariant interactions linear in \(\Phi_{n-i}\) of any type, including the ones classified in [46]
\[\Delta S_{n}=n!\int d^{d}X\frac{d^{d}sd^{d}s^{\prime}}{(2\pi)^{d}}e^{is\cdot s ^{\prime}}\Big{(}A_{n}(X,s)\mathcal{J}_{n}(X,s^{\prime})+A_{n-3}(X,s)\mathcal{ J}_{n-3}(X,s^{\prime})\Big{)} \tag{3.76}\]
for any \(\mathcal{J}_{n}\) and \(\mathcal{J}_{n-3}\). Rewriting (3.76) in the form (3.71) through integration by parts will give \(\mathcal{J}_{n-1}\) and \(\mathcal{J}_{n-2}\) that are functions of \(\mathcal{J}_{n}\) and \(\mathcal{J}_{n-3}\), which altogether satisfy (3.72) and (3.73).
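The gauge invariance of \(A_{n}\) can be traced to a simple operator identity: under (3.70), \(X_{n-1}\) transforms into \(m\big{(}1-\frac{1}{2m^{2}}(s\cdot\nabla)^{2}\partial_{s}^{2}\big{)}\epsilon_{n-1}\), which is cancelled exactly by the non-local factor in (3.74), giving \(\delta A_{n}=0\). A minimal symbolic check of this intermediate step is sketched below; it treats \(s\cdot\nabla\) and \(\partial_{s}^{2}\) as abstract non-commuting operators and does not attempt to expand the resummed inverse, nor does it cover \(A_{n-3}\).

```python
import sympy as sp

# Check that, with the transformation rules (3.70) and the definition
# X_{n-1} = Phi_{n-1} + (i^{-1/2}/2m) s.nabla Phi_{n-2}, one gets
#   delta X_{n-1} = m (1 - (s.nabla)^2 d_s^2 / 2m^2) eps_{n-1},
# so the inverse operator in A_n (3.74) cancels it and delta A_n = 0.
# Here S = s.nabla and P = d_s^2 are abstract non-commuting symbols.
m = sp.symbols('m', positive=True)
S, P, e1, e2 = sp.symbols('S P eps1 eps2', commutative=False)
isq = sp.sqrt(sp.I)                       # i^{1/2}

dPhi_nm1 = isq**(-1)*S*e2 + m*e1          # delta Phi_{n-1}, eq. (3.70)
dPhi_nm2 = -isq*S*P*e1 - 2*m*e2           # delta Phi_{n-2}, eq. (3.70)

dX = dPhi_nm1 + isq**(-1)/(2*m)*S*dPhi_nm2
target = m*e1 - sp.Rational(1, 2)/m*S*S*P*e1
print(sp.expand(dX - target))             # -> 0
```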
### Half integer spins
In this section, we repeat the analysis of subsection 3.1 for massive half integer spin particles described by the free action (2.22). We will find gauge invariant interactions through a completely analogous argument.
We first begin by minimally coupling our theory to electromagnetism and gravity. This amounts to replacing every spacetime derivative \(\partial_{X\mu}\) in the action with a covariant derivative \(\nabla_{\mu}\), as well as again replacing every \(\eta_{\mu\nu}\) with \(g_{\mu\nu}(X)\), and \(d^{d}X\) with \(d^{d}X\sqrt{-g}\). In the case of half integer spin fields, we also use Dirac matrices \(\gamma_{\mu}(X)\) satisfying \(\{\gamma_{\mu},\gamma_{\nu}\}=2g_{\mu\nu}\). This is made possible by introducing a vielbein \(e_{\mu}^{\,a}(X)\) satisfying \(e_{\mu}^{\,a}e_{\nu}^{\,b}\eta_{ab}=g_{\mu\nu}\) and \(e_{\mu}^{\,a}e_{\nu}^{\,b}g^{\mu\nu}=\eta^{ab}\), and writing \(\gamma_{\mu}(X)=e_{\mu}^{\,a}(X)\gamma_{a}\), where \(\gamma_{a}\) are constant Dirac matrices satisfying \(\{\gamma_{a},\gamma_{b}\}=2\eta_{ab}\). Here Latin indices \(a,b=0,\dots,d-1\) are raised and lowered with the flat metric \(\eta^{ab}\) and \(\eta_{ab}\), respectively.
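As a concrete illustration (a sketch only; the particular Dirac representation and the diagonal vielbein below are choices made for the example, not assumptions of the text), both Clifford relations can be verified explicitly:

```python
import sympy as sp

# Flat Dirac matrices in d = 4 (Dirac representation), eta = diag(+,-,-,-).
I2, Z2 = sp.eye(2), sp.zeros(2)
sx = sp.Matrix([[0, 1], [1, 0]])
sy = sp.Matrix([[0, -sp.I], [sp.I, 0]])
sz = sp.Matrix([[1, 0], [0, -1]])
blk = lambda a, b, c, d: sp.Matrix.vstack(sp.Matrix.hstack(a, b),
                                          sp.Matrix.hstack(c, d))
gamma = [blk(I2, Z2, Z2, -I2)] + [blk(Z2, p, -p, Z2) for p in (sx, sy, sz)]
eta = sp.diag(1, -1, -1, -1)

# {gamma_a, gamma_b} = 2 eta_{ab}
assert all((gamma[a]*gamma[b] + gamma[b]*gamma[a]
            - 2*eta[a, b]*sp.eye(4)).is_zero_matrix
           for a in range(4) for b in range(4))

# Curved gamma_mu(X) = e_mu^a(X) gamma_a for a sample diagonal vielbein:
# then {gamma_mu, gamma_nu} = 2 g_{mu nu} with g = e eta e^T.
X = sp.symbols('t x y z')
e = sp.diag(*[sp.Function(f'e{m}')(*X) for m in range(4)])   # e_mu^a
g = e*eta*e.T
gamma_c = [sum((e[mu, a]*gamma[a] for a in range(4)), sp.zeros(4))
           for mu in range(4)]
assert all((gamma_c[mu]*gamma_c[nu] + gamma_c[nu]*gamma_c[mu]
            - 2*g[mu, nu]*sp.eye(4)).expand().is_zero_matrix
           for mu in range(4) for nu in range(4))
print("Clifford relations verified")
```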
The covariant derivative \(\nabla_{\mu}\) on a charge 1 half integer spin \(n+1/2\) hyperfield \(\Psi_{n}(X,s)\), including a \(U(1)\) gauge field \(A_{\mu}\), a spin connection \(\omega_{\mu}^{\,ab}\) and Levi-Civita connection \(\Gamma^{\lambda}_{\,\,\,\mu\nu}\) related via \(\Gamma^{\lambda}_{\,\,\,\mu\nu}=e^{\lambda}_{\,\,a}(\partial_{\mu}e_{\nu}^{\,a }+\omega_{\mu}^{\,\,ab}e_{\nu b})\), which implements the standard covariant derivative on its component field \(\psi_{\mu_{1}\cdots\mu_{n}}(X)\) is
\[\nabla_{\mu}=\partial_{X\mu}-iA_{\mu}+\frac{1}{4}\omega_{\mu}^{\,\,ab}\gamma_{ ab}-s^{\nu}\Gamma^{\lambda}_{\,\,\,\mu\nu}\partial_{s\lambda} \tag{3.77}\]
Covariant derivatives no longer commute, but instead their commutator when acting on \(\Psi_{n}(X,s)\) equals
\[[\nabla_{\mu},\nabla_{\nu}]=-iF_{\mu\nu}+\frac{1}{4}R_{\omega\sigma\mu\nu} \gamma^{\omega\sigma}-s^{\sigma}R^{\omega}_{\,\,\,\sigma\mu\nu}\partial_{s\omega} \tag{3.78}\]
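For orientation, the abelian part of this commutator is easy to verify explicitly. The sketch below sets the spin and Levi-Civita connections to zero (flat space, \(U(1)\) only), so it only checks the \(-iF_{\mu\nu}\) term of (3.78):

```python
import sympy as sp

# Flat space, U(1) only: nabla_mu = d_mu - i A_mu acting on a charge-1
# field psi(X); then [nabla_mu, nabla_nu] psi = -i F_{mu nu} psi, i.e. the
# first term of (3.78).  The curvature terms require the spin and
# Levi-Civita connections and are not included in this sketch.
x = sp.symbols('x0 x1 x2 x3')
A = [sp.Function(f'A{m}')(*x) for m in range(4)]
psi = sp.Function('psi')(*x)

nabla = lambda expr, m: sp.diff(expr, x[m]) - sp.I*A[m]*expr
F = lambda m, n: sp.diff(A[n], x[m]) - sp.diff(A[m], x[n])

ok = all(sp.expand(nabla(nabla(psi, n), m) - nabla(nabla(psi, m), n)
                   + sp.I*F(m, n)*psi) == 0
         for m in range(4) for n in range(4))
print(ok)   # True
```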
As part of minimal coupling, we will as before deform the massive gauge symmetry (2.18) by replacing spacetime derivatives with covariant derivatives, \(\eta^{\mu\nu}\) with \(g^{\mu\nu}\), and \(\gamma_{\mu}\) with \(e_{\mu}^{\,a}\gamma_{a}\), so that it is consistent with \(U(1)\) electromagnetic gauge invariance and general coordinate invariance
\[\delta\Psi_{n}=i^{-1/2}s\cdot\nabla\epsilon_{n-1}\] \[\delta\Psi_{n-1}=-is\cdot\nabla\not{\partial}_{s}\epsilon_{n-1}+im\epsilon_{n-1}\] \[\delta\Psi_{n-2}=-i^{1/2}s\cdot\nabla\partial_{s}^{2}\epsilon_{n-1}+2i^{1/2}m\not{\partial}_{s}\epsilon_{n-1} \tag{3.79}\]
The action (2.22) is not invariant under the gauge symmetry (3.79) after minimal coupling, and hence does not consistently describe a massive spin \(n+1/2\) particle. Indeed, the gauge variation of the minimally coupled action \(\delta S_{n+1/2}\) is a linear combination of terms proportional to \(\delta{\cal S}_{n-i}\), and the covariantized version of the fermionic massive Bianchi identity (2.23). This follows from the fact that the massive action is derived from the corresponding massless action \(S_{n+1/2,0}\) in \(d+1\) dimensions built from a rank \(n\) triple \(\gamma\) traceless hyperfield \(\Psi_{n,0}(X,s)\) and massless field strength \({\cal S}_{n,0}(X,s)\), described in [39]. Under the gauge transformation \(\delta\Psi_{n,0}(X,s)=i^{-1/2}s\cdot\partial_{X}\epsilon_{n-1}(X,s)\), for \(\epsilon_{n-1}(X,s)\) a rank \(n-1\)\(\gamma\) traceless hyperfield, the gauge variation of \(S_{n+1/2,0}\) is
\[\delta S_{n+1/2,0}=-n!\int d^{d+1}X\frac{d^{d+1}s\,d^{d+1}s^{\prime}}{(2\pi)^{d+1}}e^{is\cdot s^{\prime}}\times\] \[\Big{(}\overline{\Psi}_{n,0}(X,s)\big{(}1-\frac{1}{2}\not{s}^{\prime}\not{\partial}_{s^{\prime}}-\frac{1}{4}s^{\prime 2}\partial_{s^{\prime}}^{2}\big{)}\delta{\cal S}_{n,0}(X,s^{\prime})-\frac{1}{2}i^{1/2}\bar{\epsilon}_{n-1}(X,s)\big{(}\not{\partial}_{s^{\prime}}\not{\partial}_{X}-s^{\prime}\cdot\partial_{X}\partial_{s^{\prime}}^{2}\big{)}{\cal S}_{n,0}\Big{)} \tag{3.80}\]
After dimensional reduction, the gauge variation of the massless \(d+1\) dimensional field strength \(\delta{\cal S}_{n,0}\) decomposes into the massive \(d\) dimensional variations \(\delta{\cal S}_{n-i}\), and the expression \(\not{\partial}_{s}\not{\partial}_{X}{\cal S}_{n,0}-s\cdot\partial_{X}\partial_{s}^{2}{\cal S}_{n,0}\) decomposes into the fermionic massive Bianchi identity (2.23). These are no longer zero after minimal coupling, but instead equal
\[\delta{\cal S}_{n}=i^{-1/2}[\not{\nabla},s\cdot\nabla]\epsilon_{n-1} \tag{3.81}\] \[\delta{\cal S}_{n-1}=-i[\not{\nabla},s\cdot\nabla]\not{\partial}_{s}\epsilon_{n-1} \tag{3.82}\] \[\delta{\cal S}_{n-2}=-i^{1/2}[\not{\nabla},s\cdot\nabla]\partial_{s}^{2}\epsilon_{n-1} \tag{3.83}\]
\[\not{\partial}_{s}(\not{\nabla}-m){\cal S}_{n}-s\cdot\nabla\partial_{s}^{2}{\cal S}_{n}-i^{1/2}(\not{\nabla}-m){\cal S}_{n-1}+i\,s\cdot\nabla{\cal S}_{n-2}=\\ \big{(}[\not{\partial}_{s},(\not{\nabla})^{2}]-\not{\partial}_{s}[\not{\nabla},s\cdot\nabla]\not{\partial}_{s}\big{)}\Psi_{n}+i^{1/2}\{\not{\partial}_{s},[\not{\nabla},s\cdot\nabla]\}\Psi_{n-1}-i[\not{\nabla},s\cdot\nabla]\Psi_{n-2} \tag{3.84}\]
#### 3.2.1 Restoring gauge symmetry
We now show how to restore the gauge symmetry after minimal coupling by adding further interactions with electromagnetism and gravity in the form of making appropriate modifications of the field strengths \({\cal S}_{n-i}\to{\cal S}_{n-i}+\Delta{\cal S}_{n-i}\), so that they are again gauge invariant and satisfy the massive Bianchi identity, in a way exactly analogous to the integer spin case in subsubsection 3.1.1.
In particular, first note that the gauge violations in (3.81) - (3.84), are \({\cal O}(1)\) in \(m\). Because of the appearance of \(m\) in the gauge transformations (3.79) and massive Bianchi identity, it is possible to suppress the gauge violations to \({\cal O}(1/m)\), cancelling the \({\cal O}(1)\) violations, by adding a \(\Delta{\cal S}^{(1)}_{n-i}\) which is proportional to \(1/m\). We will prove by induction that this process can continue indefinitely, generating an infinite series of \(\Delta{\cal S}^{(N)}_{n-i}\), each proportional to \(1/m^{N}\), for all integers \(N\geq 0\), until the action is exactly gauge invariant.
Suppose we have found modified field strengths \({\cal S}^{(2k)}_{n-i}\) which suppress all gauge violations to be \({\cal O}(1/m^{2k})\), which we write as
\[\delta{\cal S}^{(2k)}_{n}= \frac{1}{m^{2k}}\hat{\cal O}^{(2k)}_{n}\epsilon_{n-1} \tag{3.85}\] \[\delta{\cal S}^{(2k)}_{n-1}= \frac{1}{m^{2k}}\hat{\cal O}^{(2k)}_{n-1}\epsilon_{n-1}\] (3.86) \[\delta{\cal S}^{(2k)}_{n-2}= \frac{1}{m^{2k}}\hat{\cal O}^{(2k)}_{n-2}\epsilon_{n-1} \tag{3.87}\]
\[\partial\!\!\!/_{s}(\nabla\!\!\!/-m){\cal S}^{(2k)}_{n}-s\cdot \nabla\partial^{2}_{s}{\cal S}^{(2k)}_{n}-i^{1/2}(\nabla\!\!\!/-m){\cal S}^{(2 k)}_{n-1}+i\,s\cdot\nabla{\cal S}^{(2k)}_{n-2}=\] \[\frac{1}{m^{2k}}\hat{\cal U}^{(2k)}_{n}\Psi_{n}+\frac{1}{m^{2k}} \hat{\cal U}^{(2k)}_{n-1}\Psi_{n-1}+\frac{1}{m^{2k}}\hat{\cal U}^{(2k)}_{n-2} \Psi_{n-2} \tag{3.88}\]
for some operators \(\hat{\cal O}^{(2k)}_{n-i}\), \(\hat{\cal U}^{(2k)}_{n-i}\). We first show that if \(\hat{\cal O}^{(2k)}_{n-i}\) and \(\hat{\cal U}^{(2k)}_{n-i}\) satisfy conditions (to be determined) ensuring that \(\Delta{\cal S}^{(2k+1)}_{n-i}\) and \(\Delta{\cal S}^{(2k+2)}_{n-i}\) can be constructed so that \({\cal S}^{(2k+2)}_{n-i}\equiv{\cal S}^{(2k)}_{n-i}+\Delta{\cal S}^{(2k+1)}_{ n-i}+\Delta{\cal S}^{(2k+2)}_{n-i}\) suppresses the gauge violations to \({\cal O}(1/m^{2k+2})\), then the corresponding operators at the next order \(\hat{\cal O}^{(2k+2)}_{n-i}\) and \(\hat{\cal U}^{(2k+2)}_{n-i}\) will also satisfy those conditions.
To cancel the \({\cal O}(1/m^{2k})\) gauge violations (3.85), (3.87), and (3.88), one may choose \(\Delta{\cal S}^{(2k+1)}_{n-i}\) to be
\[\Delta{\cal S}^{(2k+1)}_{n} =\frac{i}{m^{2k+1}}\hat{\cal O}^{(2k)}_{n}\Psi_{n-1} \tag{3.89}\] \[\Delta{\cal S}^{(2k+1)}_{n-1} =-\frac{i^{-1/2}}{m^{2k+1}}\Big{(}\hat{\cal U}^{(2k)}_{n}\Psi_{n} +(-i\partial\!\!\!/_{s}\hat{\cal O}^{(2k)}_{n}+\hat{\cal U}^{(2k)}_{n-1})\Psi_ {n-1}+\hat{\cal U}^{(2k)}_{n-2}\Psi_{n-2}\Big{)}\] (3.90) \[\Delta{\cal S}^{(2k+1)}_{n-2} =\frac{i}{m^{2k+1}}\hat{\cal O}^{(2k)}_{n-2}\Psi_{n-1} \tag{3.91}\]
The remaining \({\cal O}(1/m^{2k})\) gauge violation (3.86) must then be cancelled by \(\delta(\Delta{\cal S}^{(2k+1)}_{n-1})\). This in turn is only possible if the following condition holds
\[\hat{\cal C}^{(2k)}_{1}\equiv\hat{\cal O}^{(2k)}_{n-1}-i^{-1/2}\partial\!\!\!/ _{s}\hat{\cal O}^{(2k)}_{n}-i^{1/2}\hat{\cal U}^{(2k)}_{n-1}-2\,\hat{\cal U}^{ (2k)}_{n-2}\partial\!\!\!/_{s}=0 \tag{3.92}\]
Let us assume this is true and proceed with constructing \(\Delta{\cal S}^{(2k+2)}_{n-i}\). The gauge violations
associated with \(\mathcal{S}^{(2k+1)}_{n-i}=\mathcal{S}^{(2k)}_{n-i}+\Delta\mathcal{S}^{(2k+1)}_{n-i}\) are
\[\delta\mathcal{S}^{(2k+1)}_{n} =\frac{1}{m^{2k+1}}\hat{\mathcal{O}}^{(2k)}_{n}s\cdot\nabla\not{ \partial}_{s}\epsilon_{n-1} \tag{3.93}\] \[\delta\mathcal{S}^{(2k+1)}_{n-1} =\frac{1}{m^{2k+1}}\Big{(}i^{1/2}(-i\not{\partial}_{s}\hat{ \mathcal{O}}^{(2k)}_{n}+\hat{\mathcal{U}}^{(2k)}_{n-1})s\cdot\nabla\not{ \partial}_{s}+i\,\hat{\mathcal{U}}^{(2k)}_{n}s\cdot\nabla+\hat{\mathcal{U}}^{( 2k)}_{n-2}s\cdot\nabla\partial_{s}^{2}\Big{)}\epsilon_{n-1}\] (3.94) \[\delta\mathcal{S}^{(2k+1)}_{n-2} =\frac{1}{m^{2k+1}}\hat{\mathcal{O}}^{(2k)}_{n-2}s\cdot\nabla \not{\partial}_{s}\epsilon_{n-1}\] (3.95) \[\not{\partial}_{s}(\!\nabla-\!m)\mathcal{S}^{(2k+1)}_{n}-s\cdot \nabla\partial_{s}^{2}\mathcal{S}^{(2k+1)}_{n}-i^{1/2}(\!\nabla-m)\mathcal{S} ^{(2k+1)}_{n-1}+i\cdot s\cdot\nabla\mathcal{S}^{(2k+1)}_{n-2}=\] \[\qquad\qquad\frac{1}{m^{2k+1}}\Big{(}\!\nabla\!\hat{\mathcal{U}}^ {(2k)}_{n-1}+i(\![\!\not{\partial}_{s},\!\nabla]\!]-s\cdot\nabla\partial_{s}^{ 2})\hat{\mathcal{O}}^{(2k)}_{n}-s\cdot\nabla\hat{\mathcal{O}}^{(2k)}_{n-2} \Big{)}\Psi_{n-1}\] \[\qquad\qquad\qquad+\frac{1}{m^{2k+1}}\!\nabla\!\hat{\mathcal{U}}^ {(2k)}_{n}\Psi_{n}+\frac{1}{m^{2k+1}}\!\nabla\!\hat{\mathcal{U}}^{(2k)}_{n-2} \Psi_{n-2} \tag{3.96}\]
It is straightforward to find \(\Delta\mathcal{S}^{(2k+2)}_{n-i}\) which cancel the \(\mathcal{O}(1/m^{2k+1})\) gauge violations in (3.93), (3.95), and (3.96)
\[\Delta\mathcal{S}^{(2k+2)}_{n}=\frac{i}{m^{2k+2}}\hat{\mathcal{O} }^{(2k)}_{n}s\cdot\nabla\not{\partial}_{s}\Psi_{n-1} \tag{3.97}\] \[\Delta\mathcal{S}^{(2k+2)}_{n-1}=-\frac{i^{-1/2}}{m^{2k+2}}\Big{(} \!\nabla\!\hat{\mathcal{U}}^{(2k)}_{n}\Psi_{n}+\!\nabla\!\hat{\mathcal{U}}^ {(2k)}_{n-2}\Psi_{n-2}\Big{)}\] \[-\frac{i^{-1/2}}{m^{2k+2}}\Big{(}\!\nabla\!\hat{\mathcal{U}}^{(2k )}_{n-1}-i\!\not{\partial}_{s}\hat{\mathcal{O}}^{(2k)}_{n}s\cdot\nabla\!\not {\partial}_{s}+i([\!\!\not{\partial}_{s},\!\nabla]\!]-s\cdot\nabla\partial_{s} ^{2})\hat{\mathcal{O}}^{(2k)}_{n}-s\cdot\nabla\hat{\mathcal{O}}^{(2k)}_{n-2} \Big{)}\Psi_{n-1}\] (3.98) \[\Delta\mathcal{S}^{(2k+2)}_{n-2}=\frac{i}{m^{2k+2}}\hat{\mathcal{ O}}^{(2k)}_{n-2}s\cdot\nabla\!\not{\partial}_{s}\Psi_{n-1} \tag{3.99}\]
Demanding that \(\delta(\Delta\mathcal{S}^{(2k+2)}_{n-1})\) also cancels the remaining \(\mathcal{O}(1/m^{2k+1})\) gauge violation in (3.94) imposes one final condition on \(\hat{\mathcal{O}}^{(2k)}_{n-i}\) and \(\hat{\mathcal{U}}^{(2k)}_{n-i}\)
\[\hat{\mathcal{C}}^{(2k)}_{2}\equiv i^{1/2}([\!\not{\partial}_{s},\!\nabla\!]-s\cdot\nabla\partial_{s}^{2}) \hat{\mathcal{O}}^{(2k)}_{n}-i^{-1/2}s\cdot\nabla\hat{\mathcal{O}}^{(2k)}_{n-2 }-\hat{\mathcal{U}}^{(2k)}_{n}s\cdot\nabla\] \[+i^{-1/2}\!\nabla\!\hat{\mathcal{U}}^{(2k)}_{n-1}-i^{-1/2}\hat{ \mathcal{U}}^{(2k)}_{n-1}s\cdot\nabla\!\not{\partial}_{s}-2i\!\nabla\!\hat{ \mathcal{U}}^{(2k)}_{n-2}\!\not{\partial}_{s}+i\hat{\mathcal{U}}^{(2k)}_{n-2} s\cdot\nabla\partial_{s}^{2}=0 \tag{3.100}\]
Assuming that this condition is satisfied, we may use \(\mathcal{S}^{(2k+2)}_{n-i}=\mathcal{S}^{(2k+1)}_{n-i}+\Delta\mathcal{S}^{(2k+2)}_ {n-i}\) to find the corresponding operators at the next order \(\hat{\mathcal{O}}^{(2k+2)}_{n-i}\) and \(\hat{\mathcal{U}}^{(2k+2)}_{n-i}\) associated with \(\mathcal{O}(1/m^{2k+2})\) gauge violations
\[\hat{\mathcal{O}}^{(2k+2)}_{n}= \hat{\mathcal{O}}^{(2k)}_{n}(s\cdot\nabla\!\not{\partial}_{s})^{2} \tag{3.101}\] \[\hat{\mathcal{O}}^{(2k+2)}_{n-1}= i^{-1/2}\!\not{\partial}_{s}\hat{\mathcal{O}}^{(2k)}_{n}(s\cdot \nabla\!\not{\partial}_{s})^{2}-i^{-1/2}([\!\not{\partial}_{s},\!\nabla\!]-s \cdot\nabla\partial_{s}^{2})\hat{\mathcal{O}}^{(2k)}_{n}s\cdot\nabla\!\not{ \partial}_{s}+i\!\nabla\!\hat{\mathcal{U}}^{(2k)}_{n}s\cdot\nabla\] \[+i^{1/2}\!\nabla\!\hat{\mathcal{U}}^{(2k)}_{n-1}s\cdot\nabla\!\not {\partial}_{s}+\!\nabla\!\hat{\mathcal{U}}^{(2k)}_{n-2}s\cdot\nabla\partial_{s}^{2} -i^{1/2}s\cdot\nabla\hat{\mathcal{O}}^{(2k)}_{n-2}s\cdot\nabla\!\not{\partial}_{s}\] (3.102) \[\hat{\mathcal{O}}^{(2k+2)}_{n-2}= \hat{\mathcal{O}}^{(2k)}_{n-2}(s\cdot\nabla\!\not{\partial}_{s})^{2}\] (3.103) \[\hat{\mathcal{U}}^{(2k+2)}_{n}= (\!\nabla)^{2}\hat{\mathcal{U}}^{(2k)}_{n}\] (3.104) \[\hat{\mathcal{U}}^{(2k+2)}_{n-1}= i([\!\not{\partial}_{s},\!\nabla]-s\cdot\nabla\partial_{s}^{2})\hat{ \mathcal{O}}^{(2k)}_{n}s\cdot\nabla\!\not{\partial}_{s}+i\!\nabla\!([\!\not{ \partial}_{s},\!\nabla]-s\cdot\nabla\partial_{s}^{2})\hat{\mathcal{O}}^{(2k)}_{n}\] \[+(\!\nabla)^{2}\hat{\mathcal{U}}^{(2k)}_{n-1}-\!\nabla\!s\cdot \nabla\hat{\mathcal{O}}^{(2k)}_{n-2}-s\cdot\nabla\hat{\mathcal{O}}^{(2k)}_{n-2 }s\cdot\nabla\!\not{\partial}_{s}\] (3.105) \[\hat{\mathcal{U}}^{(2k+2)}_{n-2}= (\!\nabla)^{2}\hat{\mathcal{U}}^{(2k)}_{n-2} \tag{3.106}\]
The next step is to show that if \(\hat{\mathcal{O}}^{(2k)}_{n-i}\) and \(\hat{\mathcal{U}}^{(2k)}_{n-i}\) satisfy \(\hat{\mathcal{C}}^{(2k)}_{j}=0\) for \(j=1,2\), then \(\hat{\mathcal{O}}^{(2k+2)}_{n-i}\) and \(\hat{\mathcal{U}}^{(2k+2)}_{n-i}\) satisfy \(\hat{\mathcal{C}}^{(2k+2)}_{j}=0\). This follows after writing \(\hat{\mathcal{O}}^{(2k+2)}_{n-i}\) and \(\hat{\mathcal{U}}^{(2k+2)}_{n-i}\) in terms of \(\hat{\mathcal{O}}^{(2k)}_{n-i}\) and \(\hat{\mathcal{U}}^{(2k)}_{n-i}\)
\[\hat{\mathcal{C}}^{(2k+2)}_{1} =-i\not{\nabla}\hat{\mathcal{C}}^{(2k)}_{2}=0 \tag{3.107}\] \[\hat{\mathcal{C}}^{(2k+2)}_{2} =(\not{\nabla})^{2}\hat{\mathcal{C}}^{(2k)}_{2}=0 \tag{3.108}\]
Finally, we must verify that the operators \(\hat{\mathcal{O}}^{(0)}_{n-i}\) and \(\hat{\mathcal{U}}^{(0)}_{n-i}\) associated with minimal coupling (3.81) - (3.84) satisfy \(\hat{\mathcal{C}}^{(0)}_{j}=0\). \(\hat{\mathcal{O}}^{(0)}_{n-i}\) and \(\hat{\mathcal{U}}^{(0)}_{n-i}\) are equal to
\[\hat{\mathcal{O}}^{(0)}_{n} =i^{-1/2}[\not{\nabla},s\cdot\nabla], \hat{\mathcal{U}}^{(0)}_{n} =[\not{\partial}_{s},(\not{\nabla})^{2}]-\not{\partial}_{s}[\not{ \nabla},s\cdot\nabla]\not{\partial}_{s} \tag{3.109}\] \[\hat{\mathcal{O}}^{(0)}_{n-1} =-i[\not{\nabla},s\cdot\nabla]\not{\partial}_{s}, \hat{\mathcal{U}}^{(0)}_{n-1} =i^{1/2}\{\not{\partial}_{s},[\not{\nabla},s\cdot\nabla]\}\] (3.110) \[\hat{\mathcal{O}}^{(0)}_{n-2} =-i^{1/2}[\not{\nabla},s\cdot\nabla]\partial_{s}^{2}, \hat{\mathcal{U}}^{(0)}_{n-2} =-i[\not{\nabla},s\cdot\nabla] \tag{3.111}\]
That these satisfy \(\hat{\mathcal{C}}^{(0)}_{j}=0\) is easily verified, thus providing the base case for the induction argument to go through. The modifications \(\Delta\mathcal{S}^{(N)}_{n-i}\) (3.89) - (3.91) and (3.97) - (3.99) provide a gauge invariant completion of minimal coupling.
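As in the integer spin case, the base case can be checked with elementary computer algebra. The sketch below verifies only the first condition (3.92) and treats \([\not{\nabla},s\cdot\nabla]\) and \(\not{\partial}_{s}\) as abstract non-commuting symbols; the second condition (3.100) also involves the explicit Clifford algebra and is not reproduced here.

```python
import sympy as sp

# Check hat{C}_1^(0) = 0 (eq. 3.92) for the minimal-coupling data
# (3.109)-(3.111).  With K = [slash-nabla, s.nabla] and P = slash-d_s as
# abstract non-commuting symbols, the four terms of (3.92) become
#   O^(0)_{n-1}          = -i K P
#   i^{-1/2} P O^(0)_n   = i^{-1/2} P (i^{-1/2} K) = -i P K
#   i^{1/2}  U^(0)_{n-1} = i^{1/2} (i^{1/2} {P,K}) =  i (P K + K P)
#   2 U^(0)_{n-2} P      = 2 (-i K) P              = -2 i K P
K, P = sp.symbols('K P', commutative=False)
C1 = (-sp.I*K*P) - (-sp.I*P*K) - sp.I*(P*K + K*P) - (-2*sp.I*K*P)
print(sp.expand(C1))   # -> 0
```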
#### 3.2.2 Non-minimal gauge invariant interactions
As we saw in the previous section, minimally coupling \(\Psi_{n-i}\) to electromagnetism and gravity does not preserve the massive gauge symmetry (3.79), but there exist modifications \(\Delta\mathcal{S}^{(N)}_{n-i}\) to the field strengths \(\mathcal{S}_{n-i}\to\mathcal{S}_{n-i}+\sum_{N=0}^{\infty}\Delta\mathcal{S}^{(N)}_{n-i}\) which restore the gauge symmetry exactly.
We would also like to know how to add non-minimal couplings to electromagnetism and gravity in a gauge invariant way. For instance, one might consider adding a term \(\Delta\mathcal{S}^{(1)}_{n}=\frac{1}{m}g_{1}([\not{\partial}_{s}\not{\nabla}, s\cdot\nabla]-(\not{\nabla})^{2})\Psi_{n}\) to \(\mathcal{S}_{n}\), which is related to the magnetic dipole and gravitational quadrupole moment couplings, because it is linear in curvatures with no spacetime derivatives
\[[\not{\partial}_{s}\not{\nabla},s\cdot\nabla]-(\not{\nabla})^{2} =\not{\partial}_{s}[\not{\nabla},s\cdot\nabla]=is^{\mu}F_{\mu\nu} \partial^{\nu}_{s}+i\gamma^{\mu\nu}F_{\mu\nu}-is^{\mu}F_{\mu\nu}\gamma^{\nu \lambda}\partial_{s\lambda}\] \[+s^{\mu}R_{\mu\nu}\partial^{\nu}_{s}-s^{\sigma}s^{\nu}R_{\omega \sigma\mu\nu}\partial^{\mu}_{s}\partial^{\nu}_{s}\] \[-\gamma^{\mu\nu}s^{\omega}R_{\omega\sigma\mu\nu}\partial^{\sigma} _{s}-\gamma^{\sigma\mu}s^{\nu}R_{\omega\sigma\mu\nu}\partial^{\omega}_{s}+s^{ \sigma}s^{\nu}R_{\omega\sigma\mu\nu}\gamma^{\mu\lambda}\partial_{s\lambda} \partial^{\omega}_{s} \tag{3.112}\]
Adding such a term will break massive gauge invariance. Luckily there is, again, a prescription for adding further interactions which restores the gauge invariance in exactly the same manner described in subsubsection 3.2.1. We show two classes of non-minimal interactions which can be made gauge invariant. The first is seeded by the following non-minimal coupling modifications, associated with an arbitrary operator \(\hat{\mathcal{M}}\) with mass dimension \(2k+2\)
which is invariant with respect to the massive gauge symmetry
\[\Delta{\cal S}_{n}^{(2k+1)} =\frac{1}{m^{2k+1}}\hat{\cal M}\Psi_{n} \tag{3.113}\] \[\Delta{\cal S}_{n-1}^{(2k+1)} =\frac{i^{-1/2}}{m^{2k+1}}\partial\!\!\!/_{s}\hat{\cal M}\Psi_{n} \tag{3.114}\] \[\Delta{\cal S}_{n}^{(2k+2)} =\frac{i^{1/2}}{m^{2k+2}}\hat{\cal M}s\cdot\nabla\Psi_{n-1}, \tag{3.115}\] \[\Delta{\cal S}_{n-1}^{(2k+2)} =-\frac{i^{-1/2}}{m^{2k+2}}\Big{(}([\partial\!\!\!/_{s},\not{\nabla}]-s\cdot\nabla\partial_{s}^{2})\hat{\cal M}\Psi_{n}-i^{1/2}\partial\!\!\!/_{s}\hat{\cal M}s\cdot\nabla\Psi_{n-1}\Big{)} \tag{3.116}\]
These modifications introduce \({\cal O}(1/m^{2k+2})\) gauge violations, described by the operators \(\hat{\cal O}_{n-i}^{(2k+2)}\) and \(\hat{\cal U}_{n-i}^{(2k+2)}\) equal to
\[\hat{\cal O}_{n}^{(2k+2)} =i^{-1/2}\hat{\cal M}(s\cdot\nabla)^{2}\partial\!\!\!/_{s} \tag{3.117}\] \[\hat{\cal O}_{n-1}^{(2k+2)} =i([\partial\!\!\!/_{s},\not{\nabla}]-s\cdot\nabla\partial_{s}^{2})\hat{\cal M}s\cdot\nabla-i\partial\!\!\!/_{s}\hat{\cal M}(s\cdot\nabla)^{2}\partial\!\!\!/_{s} \tag{3.118}\] \[\hat{\cal U}_{n}^{(2k+2)} =\not{\nabla}([\partial\!\!\!/_{s},\not{\nabla}]-s\cdot\nabla\partial_{s}^{2})\hat{\cal M} \tag{3.119}\] \[\hat{\cal U}_{n-1}^{(2k+2)} =i^{1/2}([\partial\!\!\!/_{s},\not{\nabla}]-s\cdot\nabla\partial_{s}^{2})\hat{\cal M}s\cdot\nabla \tag{3.120}\]
while the rest are zero. These satisfy the two conditions \(\hat{\cal C}_{j}^{(2k+2)}=0\) for any \(\hat{\cal M}\). The second class is seeded by the modifications
\[\Delta{\cal S}_{n}^{(2k)} =\frac{1}{m^{2k}}\hat{\cal M}\Psi_{n}, \Delta{\cal S}_{n-1}^{(2k)} =\frac{i^{-1/2}}{m^{2k}}\partial\!\!\!/_{s}\hat{\cal M}\Psi_{n} \tag{3.121}\]
associated with a gauge invariant operator \(\hat{\cal M}\) with mass dimension \(2k+1\). This introduces \({\cal O}(1/m^{2k})\) gauge violations, with associated operators \(\hat{\cal O}_{n-i}^{(2k)}\) and \(\hat{\cal U}_{n-i}^{(2k)}\) equal to
\[\hat{\cal O}_{n}^{(2k)} =i^{-1/2}\hat{\cal M}s\cdot\nabla \tag{3.122}\] \[\hat{\cal O}_{n-1}^{(2k)} =-i\partial\!\!\!/_{s}\hat{\cal M}s\cdot\nabla \tag{3.123}\] \[\hat{\cal U}_{n}^{(2k)} =([\partial\!\!\!/_{s},\not{\nabla}]-s\cdot\nabla\partial_{s}^{2})\hat{\cal M} \tag{3.124}\]
while the rest are zero. Again, these satisfy the two conditions \(\hat{\cal C}_{j}^{(2k)}=0\) for any \(\hat{\cal M}\). Together, these two classes of interactions can accommodate many types of non-minimal interactions, including multipole and tidal couplings.
#### 3.2.3 Resummed non-local field strengths
We now would like to formally resum the modifications \(\Delta{\cal S}_{n-i}^{(N)}\) and write the interactions in terms of non-local operators.
Suppose we have field strengths \({\cal S}_{n-i}^{(2k)}\) with \({\cal O}(1/m^{2k})\) gauge violations that can be cancelled via modifications \(\Delta{\cal S}_{n-i}^{(N)}\) for all \(N>2k\) in the way described in subsubsection 3.2.1. The gauge invariant field strengths \({\cal S}_{n-i}={\cal S}_{n-i}^{(2k)}+\sum_{N=2k+1}^{\infty}\Delta{\cal S}_{n-i} ^{(N)}\) may then be written
as
\[\mathcal{S}_{n} =\mathcal{S}_{n}^{(2k)}+\frac{i}{m^{2k+1}}\hat{\mathcal{O}}_{n}^{(2k)}\frac{1}{1-\frac{1}{m}s\cdot\nabla\not{\partial}_{s}}\Psi_{n-1} \tag{3.125}\] \[\mathcal{S}_{n-1} =\mathcal{S}_{n-1}^{(2k)}-\frac{i^{-1/2}}{m^{2k+1}}\frac{1}{1-\frac{1}{m}\not{\nabla}}\Big{(}\hat{\mathcal{U}}_{n}^{(2k)}\Psi_{n}+\hat{\mathcal{U}}_{n-1}^{(2k)}\Psi_{n-1}+\hat{\mathcal{U}}_{n-2}^{(2k)}\Psi_{n-2}\Big{)}\] \[-\frac{i^{1/2}}{m^{2k+2}}\frac{1}{1-\frac{1}{m}\not{\nabla}}\Big{(}\big{(}\not{\partial}_{s}(\not{\nabla}-m)-s\cdot\nabla\partial_{s}^{2}\big{)}\hat{\mathcal{O}}_{n}^{(2k)}+i\,s\cdot\nabla\hat{\mathcal{O}}_{n-2}^{(2k)}\Big{)}\frac{1}{1-\frac{1}{m}s\cdot\nabla\not{\partial}_{s}}\Psi_{n-1} \tag{3.126}\] \[\mathcal{S}_{n-2} =\mathcal{S}_{n-2}^{(2k)}+\frac{i}{m^{2k+1}}\hat{\mathcal{O}}_{n-2}^{(2k)}\frac{1}{1-\frac{1}{m}s\cdot\nabla\not{\partial}_{s}}\Psi_{n-1} \tag{3.127}\]
One can check that these expressions are gauge invariant and satisfy the covariantized fermionic massive Bianchi identity. To simplify these expressions, we are free to gauge fix, by for instance choosing \(\Psi_{n-1}=0\)
\[\mathcal{S}_{n} =\mathcal{S}_{n}^{(2k)} \tag{3.128}\] \[\mathcal{S}_{n-1} =\mathcal{S}_{n-1}^{(2k)}-\frac{i^{-1/2}}{m^{2k+1}}\frac{1}{1-\frac{1}{m}\not{\nabla}}\Big{(}\hat{\mathcal{U}}_{n}^{(2k)}\Psi_{n}+\hat{\mathcal{U}}_{n-2}^{(2k)}\Psi_{n-2}\Big{)} \tag{3.129}\] \[\mathcal{S}_{n-2} =\mathcal{S}_{n-2}^{(2k)} \tag{3.130}\]
In this case, it is only for spin \(n+1/2=1/2\) that the non-local gauge invariant completions vanish. Altogether then, non-local interactions appear to be unavoidable in this formulation for massive particles of spin \(s\geq 3/2\).
## 4 Discussion
In this paper, we successfully constructed gauge invariant interactions between massive particles of any spin and electromagnetism and gravity. A non-abelian gauge field extension of these results is straightforward. A conceivable possibility would have been that the massive gauge symmetry predicted unique interactions, perhaps predicting the interactions compatible with tree level unitarity [9; 10]. Instead, we find complete freedom in choosing multipole moments and electromagnetic/gravitational susceptibilities, via the interactions discussed in subsubsection 3.1.2 and subsubsection 3.2.2, as well as freedom in choosing all types of interactions linear in \(U(1)\) charge 0 integer spin matter fields discussed in subsubsection 3.1.4. We have by no means exhausted all possible types of gauge invariant interactions, but have only outlined the ones with the clearest physical interpretation.
The advantage of this construction is that it works for any spin \(s=n,n+1/2\), and any spacetime dimension \(d\). When modelling spinning black holes or neutron stars in this way, one must account for classical values of spin, which roughly speaking involves taking the \(n\to\infty\) limit while keeping \(\hbar n\) finite. Together with the propagators found in [39], it is possible to compute observables as a function of \(n\) so that this limit can be taken, while using dimensional regularization. One challenge to doing this is in taking into account the infinitely many interactions necessary for gauge invariance.
As mentioned in the introduction, the connection between gauge invariance and the preservation of physical degrees of freedom is more subtle in an interacting theory. Gauge symmetry implies generalized Ward identities of Feynman diagrams to all orders in perturbation theory, ensuring that no unphysical polarizations propagate in intermediate states, but it does not guarantee the stability/existence of the particle in question. This is relevant in the context of confining gauge theories, in which the massless spin 1 gluons that would be excitations of the elementary gauge field \(A_{\mu}^{a}\) are not observable as asymptotic states. Now that an explicit example of an exactly gauge invariant interacting theory of massive higher spin particles is known, an important next step is to study the consequences of the generalized Ward identities within this theory. This problem we hope to take up in future work.
The interactions constructed here are singular in the massless limit \(m\to 0\). This is a signal that the theory breaks down at energies \(E\gg m\), consistent with our understanding that massive higher spin particles cannot be elementary. Indeed, at high energies the particle will look massless, and at least for spins higher than 2, such massless particles cannot interact in asymptotically flat space in accordance with the equivalence principle [47; 48; 49]. In an asymptotically (anti)-de Sitter space, it may be possible to replicate the analysis done here for massless higher spin particles, replacing \(m\) with some curvature parameter \(\Lambda\), perhaps making contact with Vasiliev gravity [50].
For spins \(s\geq 3/2\), the interactions are inherently non-local, with non-local length scale \(L\sim 1/m\). Non-local effects then cannot be ignored at energies \(E\sim m\). A low energy effective field theory with energy cutoff \(\Lambda\) above the higher spins mass \(m\) (a necessary requirement to access the physics of said higher spin particle) will experience causality violations. This is consistent with general \(S\)-matrix arguments regarding spins \(s>2\)[5]. There are of course theories with massive spin \(s=3/2\) particles which are causal e.g. [6], but these theories have quite special interactions dictated by supersymmetry. String theory offers a causal theory of interacting massive particles with spins \(s\geq 2\), but this is only because of finely tuned interactions with other massive particles with a slightly higher mass \(m^{\prime}\gtrsim m\) which pushes the acausal energy scale higher \(E\sim m^{\prime}\). The rough idea is this continues indefinitely, until causality is restored [4; 5]. In contrast, the theories considered here involve only a single massive particle without self interactions, interacting with electromagnetism and gravity in a generic manner.
The prospect of constructing self interactions, or interactions between a collection of massive particles of varying types which suppresses causality violations, is a very interesting one. Self interactions can be constructed as functions of the gauge invariant linear combinations \(A_{n}\) and \(A_{n-3}\) defined in subsubsection 3.1.4, but these will always be non-local when \(n\geq 3\).3 Interactions between massive particles of varying types would require a different gauge principle than the ones presented here. As mentioned above, there is already an explicit example of causal interactions, i.e. the interactions present between
massive excitations in string theory. It would be illuminating to reproduce the specific interactions present in string theory from first principles, starting with a free theory and introducing consistent interactions in much the same way it has been done for Yang-Mills theory and general relativity [26].
Recent progress from bootstrapping scattering amplitudes has shown that there are physically sensible tree level scattering amplitudes involving an infinite collection of massive excitations other than those known to arise from string theory [51; 52; 53; 54]. A physical realization of these amplitudes is as of yet unknown. The generic nature of these scattering amplitudes offers a clue that the construction of consistent interactions other than the ones present in string theory at the Lagrangian level is possible, providing a physical realization. An analysis of possible interactions would therefore shed light on the uniqueness properties of string theory.
To explore these possibilities, it is important to gain more control over theories with an infinite collection of massive particles of all representations, not just the totally symmetric spin representations to which we have so far restricted ourselves. This can for instance be accommodated via a hyperfield \(\Phi(X,\{s_{i}\})\) which is a general function of \(N\) auxiliary vector coordinates \(s_{i}^{\mu}\). A formal Taylor expansion in \(s_{i}^{\mu}\) will generate fields \(\phi_{\mu_{1}\cdots\mu_{n}}^{i_{1}\cdots i_{n}}(X)\) which span all Young tableaux if \(N>d\). To proceed, one should first construct free actions for massive particles of arbitrary representations which exhibit a gauge symmetry, so that there is a procedure for adding interactions consistent with said gauge symmetry. This will be further expanded on in another work.
## Acknowledgements
The author would like to thank Lucile Cangemi, Henrik Johansson, Paolo Pichini, Massimo Porrati, Trevor Scheopner and E.T. Tomboulis for invaluable discussions during the preparation of this work. The author would also like to thank Nordita for their hospitality during the workshop "Amplifying Gravity at All Scales" as part of this work was carried out. Finally, the author thanks Marcus Spradlin for pointing out the possibility of interactions linear in matter fields. This research is supported in part by the Mani L. Bhaumik Institute for Theoretical Physics.
|
2307.16678 | Holonomic representation of biadjoint scalar amplitudes | We study tree-level biadjoint scalar amplitudes in the language of
$D$-modules. We construct left ideals in the Weyl algebra $D$ that allow a
holonomic representation of $n$-point amplitudes in terms of the linear partial
differential equations they satisfy. The resulting representation encodes the
simple pole and recursive properties of the amplitude. | Leonardo de la Cruz | 2023-07-31T13:54:43Z | http://arxiv.org/abs/2307.16678v1 | # Holonomic representation of biadjoint scalar amplitudes
###### Abstract
We study tree-level biadjoint scalar amplitudes in the language of \(D\)-modules. We construct left ideals in the Weyl algebra \(D\) that allow a holonomic representation of \(n\)-point amplitudes in terms of the linear partial differential equations they satisfy. The resulting representation encodes the simple pole and recursive properties of the amplitude.
## 1 Introduction
Given a system of \(q\) linear partial differential equations for a function \(f(x)=f(x_{1},\ldots,x_{N})\)
\[P_{i}(x_{1},\ldots,x_{N},\partial_{x_{1}},\ldots,\partial_{x_{N}})f(x)=0,\quad i =1,\ldots,q \tag{1}\]
the study of the system and \(f\) itself can be translated into the study of their _annihilating_ operators \(P_{1},\ldots,P_{q}\). The variables \(x_{1},\ldots,x_{N}\), the differential operators \(\partial_{x_{1}},\ldots,\partial_{x_{N}}\), and the commutation rules \([x_{i},x_{j}]=[\partial_{x_{i}},\partial_{x_{j}}]=0\), \([\partial_{x_{i}},x_{j}]=\delta_{ij}\) generate a non-commutative ring known as the _Weyl algebra_ \(D_{N}\), usually denoted simply by \(D\). Thus, the operators \(P_{i}\) are generators of a (left) ideal \(I\) in \(D_{N}\) and the system of differential equations can be seen as a module over \(D_{N}\) (a \(D\)-module). When the dimension of this module is minimal [1], namely \(N\), the function \(f\) is said to be holonomic. Among other properties, holonomic functions can be constructed from other holonomic functions by performing operations such as addition, multiplication, and integration. A related concept is the so-called _canonical holonomic representation_ of \(f\), introduced by Zeilberger [2], which consists of a holonomic ideal together with a certain set of initial conditions that determine \(f\) uniquely. The relation between \(D\)-modules and holonomic functions has been comprehensively reviewed in Ref. [3]. On the computational side, \(D\)-module methods1 have been implemented for various Computer Algebra Systems (CAS), incorporating the algorithms described in Refs. [4, 7], in particular the construction of annihilator ideals.
Footnote 1: The main concepts of the theory can be consulted in math textbooks Refs. [4, 5] or in Ref. [6] for a more physicist-oriented literature.
The theory of \(D\)-modules has been applied to the study of differential equations of Feynman integrals and their relation with hypergeometric functions [8]. \(D\)-modules are at the core of the identification of Feynman integrals as A-hypergeometric functions [9, 10], a perspective that has been further developed in Refs. [10, 11, 12, 13, 14, 15, 16, 17] (see also Refs. [18, 19, 20, 21, 22, 23, 24]). The ideals associated with Feynman integrals are holonomic and thus Feynman integrals are holonomic functions [25]. Because holonomicity is preserved under integration, it can be used to derive relations between Feynman integrals in parametric representation and establish other interesting properties [26] (see also Refs. [27, 28]). The point of view of \(D\)-modules has also appeared in the construction of annihilators for amplitudes in Yang-Mills and gravity [29] motivated by conformal symmetries in tree-level graviton amplitudes [30].
In this paper, we will apply basic aspects of the \(D\)-modules to tree-level biadjoint scalar amplitudes. Amplitudes are rational functions so they are holonomic functions. This means that we can construct their canonical holonomic representations by deriving holonomic left ideals (the annihilator \(D\)-ideal) and set boundary conditions that determine them completely.
The remainder of this paper is organized as follows. In Section 2 we review the recursive definition of biadjoint scalar amplitudes and basic notions of \(D\)-modules. In Section 3 we compute their annihilators and construct their holonomic representations. Our conclusions are presented in Section 4.
## 2 Review
### Notation
Amplitudes for \(n\) massless particles depend on kinematic (Mandelstam) invariants defined by
\[s_{ijk\dots}:=(p_{i}+p_{j}+p_{k}+\dots)^{2}, \tag{2}\]
where the outgoing massless momenta are subject to momentum conservation \(\sum_{i=1}^{n}p_{i}^{\mu}=0\) and on-shell conditions \(p_{i}^{2}=0\). Momentum conservation and on-shell conditions set the number of independent variables as
\[N:=\frac{1}{2}n(n-3)\,. \tag{3}\]
We will omit the momentum dependence in double-ordered scalar amplitudes and write \(m_{n}(w|\tilde{w}):=m_{n}(p,w,\tilde{w})\). For \(\tilde{w}=w\) we define \(m_{n}(w):=m_{n}(w|w)\) and for the standard ordering \(w=123\dots n\) we omit the ordering and simply write \(m_{n}\). We utilize the notation \(m_{n}(S)\) to emphasize that we view the amplitude as a function of \(N\) Mandelstam invariants. We write \(\partial_{x_{i}}:=\partial/\partial x_{i}\) and for Euler operators \(\theta_{x_{i}}:=x_{i}\partial_{x_{i}}\).
### Biadjoint scalar amplitudes
The basic description of biadjoint scalar amplitudes starts with a Lagrangian density
\[\mathcal{L}=\frac{1}{2}\partial_{\mu}\varphi_{a\alpha}\partial^{\mu}\varphi_{ a\alpha}-\frac{\lambda}{3!}f^{abc}\tilde{f}^{\alpha\beta\gamma}\varphi_{a \alpha}\varphi_{b\beta}\varphi_{c\gamma}\,, \tag{4}\]
where the structure constants \(f^{abc}\) and \(\tilde{f}^{\alpha\beta\gamma}\) are associated with the gauge groups \(U(\mathrm{N})\) and \(U(\mathrm{\tilde{N}})\), respectively2. Full \(n\)-point amplitudes have a double-color decomposition into traces depending on the generators and tree-level double-ordered primitive (partial) amplitudes \(m_{n}(\sigma|\tilde{\sigma})\) given by
Footnote 2: Generators are normalized according to \([T^{a},T^{b}]=\mathrm{i}f^{abc}T^{c}\) and \([\tilde{T}^{\alpha},\tilde{T}^{\beta}]=\mathrm{i}\tilde{f}^{\alpha\beta \gamma}\tilde{T}^{\gamma}\).
\[\mathcal{M}_{n}(p)=\lambda^{n-2}\sum_{\sigma\in S_{n}/\mathbb{Z}_{n}}\sum_{ \tilde{\sigma}\in S_{n}/\mathbb{Z}_{n}}\mathrm{Tr}(T^{a_{\sigma(1)}}\cdots T^ {a_{\sigma(n)}})\mathrm{Tr}(\tilde{T}^{\alpha_{\tilde{\sigma}(1)}}\cdots\tilde {T}^{\alpha_{\tilde{\sigma}(n)}})m_{n}(\sigma|\tilde{\sigma})\,, \tag{5}\]
where \(\sigma\) and \(\tilde{\sigma}\) denote cyclic orderings. We will focus on the partial amplitudes \(m_{n}(\sigma|\tilde{\sigma})\) for the remainder of this work. There are various equivalent ways of representing them, including the Cachazo-He-Yuan representation [31, 32], canonical forms [33], and intersection numbers [34, 35]. However, their basic definition is in terms of Feynman diagrams. Let \(G\) be a diagram. We denote by \(E(G)\) the set of internal edges and by \(s_{e}\) the Lorentz invariant corresponding to the internal edge \(e\). Partial amplitudes \(m_{n}(\sigma|\tilde{\sigma})\) are then given by
\[m_{n}(\sigma|\tilde{\sigma})=(-1)^{n-3+n_{\text{flip}}(\sigma,\tilde{\sigma})} \sum_{G\in\mathcal{T}_{n}(\sigma)\cap\mathcal{T}_{n}(\tilde{\sigma})}\prod_{e \in E(G)}\frac{1}{s_{e}}\,, \tag{6}\]
where \(\mathcal{T}_{n}(\sigma)\cap\mathcal{T}_{n}(\tilde{\sigma})\) denotes the set of all trivalent graphs compatible with the external orderings \(\sigma\) and \(\tilde{\sigma}\), and \(n_{\text{flip}}(\sigma,\tilde{\sigma})\) is the number of flips needed to transform any diagram from \(\mathcal{T}_{n}(\sigma)\cap\mathcal{T}_{n}(\tilde{\sigma})\) with the external ordering \(\sigma\) into another with external ordering \(\tilde{\sigma}\), see Refs. [32, 36] for more details.
A more convenient definition for our purposes is through the Berends-Giele recursion [37] proposed in Ref. [38]. Let us briefly review it in order to introduce some notation. External cyclic orderings can be identified with ordered sequences of letters (words) in an alphabet \(\mathbb{A}_{n}=\{\,1,\ldots,n\,\}\). Words of length \(n\) are sequences of letters \(l_{i}\in\mathbb{A}_{n}\) of the form \(w=l_{1}\cdots l_{n}\). The empty word is denoted by \(e\) and the Mandelstam invariant of a word \(w\) of length \(|w|\) is defined by \(s_{w}:=s_{l_{1}l_{2}\cdots l_{|w|}}\). Let \(w\) be a word in the alphabet \(\mathbb{A}_{n}\) and let \(\sum_{xy=w}\) be the sum over all possible ways to deconcatenate the word \(w\) into two non-empty words \(x\) and \(y\). The recursion for the amplitude is constructed from
\[\phi_{w_{1},w_{2}}=\frac{1}{s_{w_{1}}}\sum_{xy=w_{1}}\sum_{ab=w_{2}}\left[\phi _{x,a}\phi_{y,b}-(x\leftrightarrow y)\right],\quad\phi_{w_{1},w_{2}}\equiv 0, \quad\text{if}\quad w_{1}\setminus w_{2}\neq e \tag{7}\]
with the start of the recursion defined as \(\phi_{i,j}=\delta_{ij}\). The \(n\)-point amplitude is
\[m_{n}(w_{1}n|w_{2}n)=(-1)^{(n-3)}s_{w_{1}}\phi_{w_{1},w_{2}}\,, \tag{8}\]
where the double cyclic invariance has been used to set the orderings to \(w=w_{1}n\) and \(\tilde{w}=w_{2}n\). Choosing the canonical ordering \(w=123\ldots n\), amplitudes up to \(n=5\) are given by
\[m_{3}=1,\quad m_{4}=-\frac{1}{s_{12}}-\frac{1}{s_{23}},\quad m_{5}=\frac{1}{s_ {12}s_{123}}+\frac{1}{s_{12}s_{34}}+\frac{1}{s_{123}s_{23}}+\frac{1}{s_{23}s_ {234}}+\frac{1}{s_{234}s_{34}}\,. \tag{9}\]
Higher-point amplitudes can be obtained recursively from Eqs.(7)-(8). A Mathematica implementation of this recursion is given in Appendix A.
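As a quick usage sketch (ours, assuming the Appendix A definitions below have been evaluated), the recursion reproduces the amplitudes of Eq.(9); in the output, \(s[1,2]\) stands for \(s_{12}\), and the common last leg is left implicit as in Eq.(8):

```mathematica
(* Illustrative session; requires the Appendix A definitions. *)
mAB[{1, 2, 3}, {1, 2, 3}]
(* -1/s[1, 2] - 1/s[2, 3] *)
mAB[{1, 2, 3, 4}, {1, 2, 3, 4}]
(* 1/(s[1, 2] s[1, 2, 3]) + 1/(s[1, 2] s[3, 4]) + 1/(s[1, 2, 3] s[2, 3]) +
   1/(s[2, 3] s[2, 3, 4]) + 1/(s[2, 3, 4] s[3, 4]) *)
```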
The functional form of the amplitude is independent of the labels so we will use Eq.(8) to define (sub)-amplitudes when letters \(l_{i}\) are replaced by sub-words \(w_{i}\). Let \(w=w_{1}w_{2}\ldots w_{n-1}w_{n}\), then the sub-amplitude for \(w\) is given by the evaluation of Eq.(8) with the replacements \(l_{i}\to w_{i},1\leq i\leq n-1\) and \(w_{n}=e\). For instance, let us take \(w=w_{1}w_{2}w_{3}w_{4}\), then the corresponding 4-point sub-amplitude reads
\[m_{4}(w_{1},w_{2},w_{3}|w_{1},w_{2},w_{3})=-\frac{1}{s_{w_{1}w_{2}}}-\frac{1} {s_{w_{2}w_{3}}}\,, \tag{10}\]
where we have set \(w_{4}=e\). The sub-amplitudes we are considering also appear in the algorithm of Ref. [32] to compute biadjoint scalar amplitudes by drawing polygons.
### D-modules and holonomic functions
We will keep this section short and refer the reader to Chapter 6 of [39] and Ref. [3] for details. The \(N\)-th Weyl algebra with complex coefficients is the ring of differential operators \(\partial_{x_{1}},\ldots,\partial_{x_{N}}\) with coefficients in the polynomial ring \(\mathbb{C}[x_{1},\ldots,x_{N}]\)
\[D_{N}:=\mathbb{C}[x_{1},\ldots,x_{N}]\left\langle\partial_{x_{1}},\ldots, \partial_{x_{N}}\right\rangle, \tag{11}\]
which is a non-commutative algebra generated by \(x_{1},\ldots,x_{N}\), \(\partial_{x_{1}},\ldots,\partial_{x_{N}}\) modulo the commutation relations \([x_{i},x_{j}]=[\partial_{x_{i}},\partial_{x_{j}}]=0\) and \([\partial_{x_{i}},x_{j}]=\delta_{ij}\). Using the commutation relations, any element of \(D_{N}\) can be expressed uniquely in a basis of normal-ordered _monomials_
\[x^{\alpha}\partial^{\beta}=x_{1}^{\alpha_{1}}\cdots x_{N}^{\alpha_{N}}\partial _{x_{1}}^{\beta_{1}}\cdots\partial_{x_{N}}^{\beta_{N}}\,, \tag{12}\]
where the differential operators appear at the rightmost position. Here \(\alpha,\beta\in\mathbb{N}^{N}\) are exponent vectors, where the length is given by \(|\alpha|=\alpha_{1}+\cdots+\alpha_{N}\) and similarly for \(\beta\). We will also consider left ideals where the coefficients of the operators are rational expressions in \(N\) variables. The latter can be expressed in general as \(f/g\), \(g\neq 0\), where \(f,g\) are polynomials in \(N\) variables with complex coefficients. The ring of differential operators with rational function coefficients is defined by
\[R_{N}:=\mathbb{C}(x_{1},\ldots,x_{N})\,\langle\partial_{x_{1}},\ldots, \partial_{x_{N}}\rangle\,, \tag{13}\]
where \(\mathbb{C}(x_{1},\ldots,x_{N})\) is the field of rational expressions in \(N\) variables. In addition to the commutation rules for \(D_{N}\), in \(R_{N}\) the multiplication of the operator \(\partial_{x_{i}}\) and \(a(x)\in\mathbb{C}(x)\) is defined by
\[\partial_{x_{i}}\,a(x):=a(x)\,\partial_{x_{i}}+\frac{\partial a(x)}{\partial x_{i}}\,. \tag{14}\]
Any element of \(R_{N}\) can be expressed in terms of a basis of the form \(a^{\alpha}(x)\partial^{\beta}\), where as in Eq.(12) the operators \(\partial_{i}\) are at the rightmost position. Any element of \(R_{N}\) acts on a function as
\[a(x)\partial^{\alpha}\bullet f(x)=a(x)\frac{\partial^{|\alpha|}f(x)}{\partial x _{1}^{\alpha_{1}}\cdots\partial x_{N}^{\alpha_{N}}}\,, \tag{15}\]
where the symbol \(\bullet\) is used to distinguish it from the multiplication in \(R_{N}\). For \(p,q\in R_{N}\), we have
\[(pq)\bullet f=p\bullet(q\bullet f)\,. \tag{16}\]
Now we will give some definitions and a proposition whose proof can be found in Ref. [2] and Chapter 20 in [5].
**Definitions**
_Holonomic function_. A left ideal is called holonomic if its (Bernstein) dimension is the smallest possible, namely \(N\). Let \(f\) be a function and consider all elements in \(D_{N}\) that annihilate \(f\)
\[\mathrm{Ann}_{D_{N}}(f)=\{\,P\in D_{N}:P\bullet f=0\,\}. \tag{17}\]
A function \(f\) is called holonomic if its annihilator \(\mathrm{Ann}_{D_{N}}(f)\) is a holonomic ideal. _Canonical holonomic representation (Zeilberger)_. A canonical holonomic representation is given by the ideal (17) together with the following set of initial conditions. Let \(\alpha_{1},\ldots,\alpha_{N}\) denote the orders of the operators in \(\mathrm{Ann}_{D_{N}}(f)\); then set \(\alpha_{1}\cdots\alpha_{N}\) initial conditions
\[\partial_{x_{1}}^{i_{1}}\cdots\partial_{x_{N}}^{i_{N}}\bullet f|_{x=x_{0}}, \quad 0\leq i_{k}<\alpha_{k},\quad\text{for}\quad k=1,\ldots,N, \tag{18}\]
where \(x_{0}\) is any point that is not in the characteristic set of the system, namely the set of common zeros of the leading coefficients of the operators \(P_{i}\).
**Proposition**:
_Let \(f\) and \(g\) be holonomic functions in \(N\) variables, then \(1/f\), \(1/g\), \(fg\), and \(f+g\) are holonomic functions.3_
Footnote 3: Other operations that preserve holonomicity are convolution, restriction and both indefinite and definite integration [2].
#### Example
Consider the function \(f(x,y)=ye^{-x^{2}}\) whose annihilator ideal is given by
\[I=\left\langle 2x+\partial_{x},y\partial_{y}-1\right\rangle. \tag{19}\]
The system of PDEs associated with it is denoted by \(L_{1}\bullet f(x,y)=L_{2}\bullet f(x,y)=0\), where \(L_{1}\), \(L_{2}\), are the generators of the left ideal \(I\). The generators of the ideal are of order 1 and hence we need to set a single boundary condition, which we can choose as \(f(0,1)=1\).
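A minimal Mathematica check of this example (our addition, using only built-in differentiation) verifies directly that both generators annihilate \(f\):

```mathematica
(* Check that the generators of the ideal I in Eq. (19) annihilate f(x,y) = y Exp[-x^2]. *)
f = y Exp[-x^2];
L1f = 2 x f + D[f, x];    (* (2x + d_x) applied to f  *)
L2f = y D[f, y] - f;      (* (y d_y - 1) applied to f *)
Simplify[{L1f, L2f}]      (* -> {0, 0} *)
```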
## 3 Holonomic biadjoint scalar amplitudes
A corollary of the above proposition is that _biadjoint scalar amplitudes are holonomic functions_. To prove it, notice that each contribution to the amplitude, say in the Feynman-diagram-based representation (6), is a sum of inverse products of Mandelstam invariants of the form \(s_{12}s_{23}\dots\), which are themselves holonomic, and so are their inverses and sums. This is of course true for any rational function.
Holonomicity of biadjoint amplitudes implies that we can find a representation of them as left ideals, or in other words as solutions of a system of partial differential equations. Moreover, imposing a set of initial conditions on the amplitude \(m_{n}(S)\), thought of as a function of \(N\) variables, we can construct a canonical holonomic representation of \(m_{n}\) as defined above. In order to construct such a representation we first need to establish the Weyl algebra corresponding to \(n\)-point amplitudes. For this purpose let us define the ring in which the polynomial coefficients of the operators live. Focusing on \(m_{n}(123\dots|123\dots)\), we notice that the recursive structure of the amplitude based on Eq.(8) induces a basis \(B_{n}\) of words at each \(n\) given by
\[B_{n}=\{\,w\in\mathrm{Part}_{2}\ \mathbb{A}_{n-1}\,\}\cup\{\,w\in \mathrm{Part}_{3}\ \mathbb{A}_{n-1}\,\}\cup\dots\cup\{\,w\in\mathrm{Part}_{n-2}\ \mathbb{A}_{n-1}\,\}\, \tag{20}\]
where \(\mathrm{Part}_{i}\mathbb{A}_{k}\) is the overlapping partition of \(\mathbb{A}_{k}\) with offset 1 and length \(i\). We have the chain of inclusions
\[B_{3}\subseteq B_{4}\subseteq\dots\subseteq B_{n-1}\subseteq B _{n}\,, \tag{21}\]
where by definition \(B_{3}\) is the empty set. Therefore, kinematic invariants will be labeled by \(S_{n}=\{\,s_{w}\mid w\in B_{n}\,\}\), where \(|S_{n}|=\frac{1}{2}n(n-3)=N\), so its associated ring is \(\mathbb{C}[S_{n}]\). Other bases can be equally acceptable as long as they contain \(N\) elements, see e.g., Ref. [40]. A more formal derivation of this ring can be found in Ref. [41]. We then define the corresponding set of operators by \(\partial_{S_{n}}:=\{\,\partial_{S_{w}}\mid w\in B_{n}\,\}\) so the associated Weyl algebra is
\[D_{N}=\mathbb{C}[S_{n}]\left\langle\partial_{S_{n}}\right\rangle\,. \tag{22}\]
We are now interested in constructing the annihilator ideal \(I_{n}:=\text{Ann}_{D_{N}}(m_{n})\) of \(m_{n}\). For a power \(f^{\alpha}\) of a polynomial \(f(x)\) we can always construct annihilators of the form
\[f\partial_{i}-\alpha(\partial_{i}f)\in\text{Ann}_{D_{N}}(f^{\alpha}),\quad \text{for}\quad i=1,\dots,N\,. \tag{23}\]
Scattering amplitudes can be expressed as rational functions of the form \(m_{n}=f/g\) so by analogy with (23) we can construct simple annihilators of the form
\[Q_{i}=m_{n}\partial_{i}-\frac{1}{g}\left[-m_{n}\partial_{i}g+\partial_{i}f \right],\quad i=1,\dots,N, \tag{24}\]
where \(Q_{i}\in R_{N}\). Equivalently, using Eq.(16), we have
\[P_{i}=gf\partial_{i}+(f\partial_{i}g-g\partial_{i}f),\quad i=1,\dots,N, \tag{25}\]
where \(P_{i}\in D_{N}\). Moreover, from the identity \(\theta_{x}\bullet(1/x)=-1/x\) it is easy to deduce that \(m_{n}\) is annihilated by
\[H_{n}=\left[\sum_{w\in B_{n}}\theta_{s_{w}}+(n-3)\right]\,, \tag{26}\]
which is a consequence of the simple pole structure of amplitudes. Therefore, a possible left ideal of \((N+1)\) generators is
\[\left\langle P_{1},\dots,P_{N},H_{n}\right\rangle\subset D_{N}\,. \tag{27}\]
The holonomicity of \(m_{n}\) implies that we can do better and construct a left ideal with exactly \(N\) generators (see Section 4 of Ref. [2]). We may obtain one by, say, dropping \(H_{n}\) or a single \(P_{i}\). We can of course start with the annihilators (24) and construct an ideal in \(R_{N}\) instead. A more systematic way of obtaining these ideals is through CAS. The task of computing annihilators is in general difficult, but several implementations already exist, e.g., the Mathematica package HolonomicFunctions [7, 42, 43] or the Macaulay2 package "Dmodules".
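As a quick sanity check (ours, using plain symbols such as \(s_{12}\) for the invariants), one can verify symbolically that the Euler operator \(H_{n}\) of Eq.(26) annihilates the explicit amplitudes of Eq.(9):

```mathematica
(* Check that H_n of Eq. (26) annihilates m4 and m5; s12, s23, ... are plain symbols. *)
m4 = -1/s12 - 1/s23;
m5 = 1/(s12 s123) + 1/(s12 s34) + 1/(s123 s23) + 1/(s23 s234) + 1/(s234 s34);
theta[f_, x_] := x D[f, x];                      (* Euler operator theta_x *)
Simplify[Sum[theta[m4, x], {x, {s12, s23}}] + (4 - 3) m4]                   (* -> 0 *)
Simplify[Sum[theta[m5, x], {x, {s12, s23, s34, s123, s234}}] + (5 - 3) m5]  (* -> 0 *)
```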
### 4-points
At 4-points Eq.(20) gives \(B_{4}=\{\,w\in\text{Part}_{2}\mathbb{A}_{3}\,\}=\{\,12,23\,\}\) and therefore \(S_{4}=\{\,s_{12},s_{23}\,\}\). The associated Weyl algebra is then \(D_{2}=\mathbb{C}[s_{12},s_{23}]\left\langle\partial_{s_{12}},\partial_{s_{23} }\right\rangle:=\mathbb{C}[S_{4}]\left\langle\partial_{S_{4}}\right\rangle\). The left ideals that annihilate the amplitude are not unique so it is interesting to compare them. Let us consider e.g. Eq.(27) for \(n=4\) and dropping \(H_{4}\). The corresponding ideal is
\[\text{Ann}_{D_{2}}(m_{4})=\left\langle(s_{12}+s_{23})\theta_{s_{12}}+s_{23},( s_{12}+s_{23})\theta_{s_{23}}+s_{12}\right\rangle\,, \tag{28}\]
which is also the output of HolonomicFunctions. Let us compare it against the representation computed from Macaulay2
\[\text{Ann}_{D_{2}}(m_{4})=\left\langle\partial_{s_{12}}\partial_{s_{23}},s_{12 }\partial_{s_{12}}+s_{23}\partial_{s_{23}}+1,s_{23}\partial_{s_{23}}^{2}+2 \partial_{s_{23}}\right\rangle\,, \tag{29}\]
which has three generators and where the maximum order is two. These representations can be shown to be equivalent after performing a left Gröbner basis computation of Eq.(29)4. The same is true for the ideal composed of the three generators \(\langle P_{1},P_{2},H_{4}\rangle\), which can also be reduced to Eq.(28).
Footnote 4: The computation of Gröbner bases for differential operators is outside the scope of this work. The HolonomicFunctions package has the command OreGroebner for this purpose. In the examples we use DegreeLexicographic order.
Now, acting on the left with \(-1/s_{23}\) and \(-1/s_{12}\) on the generators of the ideal (28), respectively, we can rewrite it as
\[I_{4}=\langle m_{4}s_{12}\theta_{s_{12}}-1,m_{4}s_{23}\theta_{s_{23}}-1\rangle\,, \tag{30}\]
where, strictly speaking, the annihilator now belongs to \(R_{2}\). Despite the fact that the annihilator depends on the function we wish to represent, the system of differential equations has the simple form
\[s_{12}\theta_{s_{12}}m_{4}(S)=1,\] \[s_{23}\theta_{s_{23}}m_{4}(S)=1. \tag{31}\]
The amplitude is not determined uniquely by this pair of differential equations since we have not imposed boundary conditions. Since the order of the generators is one we need to set a single boundary condition (see Eq.(18)) which we choose as \(\lim_{s_{12},s_{23}\rightarrow\infty}m_{4}(s_{12},s_{23})=0\). The ideal (30) and the boundary condition constitute a canonical holonomic representation of \(m_{4}\).
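The representation can be checked directly; the following sketch (ours) verifies that the generators of Eq.(28) annihilate \(m_{4}\) and that the system of Eq.(31) holds, with \(s_{12},s_{23}\) treated as plain symbols:

```mathematica
(* Direct check of Eq. (28) and Eq. (31) on m4 = -1/s12 - 1/s23. *)
m4 = -1/s12 - 1/s23;
theta[f_, x_] := x D[f, x];
Simplify[(s12 + s23) theta[m4, s12] + s23 m4]       (* -> 0, first generator of Eq. (28)  *)
Simplify[(s12 + s23) theta[m4, s23] + s12 m4]       (* -> 0, second generator of Eq. (28) *)
Simplify[{s12 theta[m4, s12], s23 theta[m4, s23]}]  (* -> {1, 1}, i.e. Eq. (31)           *)
```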
Before going to higher points, let us briefly consider annihilators of other partial amplitudes, say, \(m_{4}(1234|1243)\) and \(m_{4}(1234|1423)\). We have
\[m_{4}(1234|1243)=\frac{1}{s_{12}}\Rightarrow\text{Ann}_{D}(m_{4 }(1234|1243))=\langle\theta_{s_{12}}+1\rangle\,, \tag{32}\] \[m_{4}(1234|1423)=\frac{1}{s_{23}}\Rightarrow\text{Ann}_{D}(m_{4 }(1234|1423))=\langle\theta_{s_{23}}+1\rangle\,, \tag{33}\]
which we can express as
\[\text{Ann}_{D}(m_{4}(1234|1243))= \langle m_{4}(1234|1243)s_{12}\theta_{s_{12}}+1\rangle\,, \tag{34}\] \[\text{Ann}_{D}(m_{4}(1234|1423))= \langle m_{4}(1234|1423)s_{23}\theta_{s_{23}}+1\rangle\,, \tag{35}\]
and the differential equations read
\[s_{12}\theta_{s_{12}}m_{4}(1234|1243)=-1, \tag{36}\] \[s_{23}\theta_{s_{23}}m_{4}(1234|1423)=-1, \tag{37}\]
respectively. Amplitudes with orderings different from \(\tilde{w}=w=1234\) only contain subset of poles and thus this property is also reflected on the annihilators. In general, it is possible to construct the annihilator and the holonomic representation of, say, \(f+g\) if the representations of \(f\) and \(g\) are known. Instead, here we will directly focus on the annihilators of \(m_{n}(12\ldots n|12\ldots n)\).
### 5-points
At 5-points the basis is given by
\[B_{5}=\{\,12,23,34\,\}\cup\{\,w\in\text{Part}_{3}\mathbb{A}_{4}\,\}=\{\,12,23,34,123,234\,\} \tag{38}\]
so the associated Weyl algebra is \(D_{5}=\mathbb{C}[S_{5}]\,\langle\partial_{S_{5}}\rangle\). Starting with \(I_{5}=\langle P_{1},\dots,P_{5}\rangle\), the annihilator ideal can be brought into the form
\[I_{5}= \Big{\langle}\frac{1}{s_{123}}+\frac{1}{s_{34}}+m_{5}s_{12}\theta _{s_{12}},\frac{1}{s_{123}}+\frac{1}{s_{234}}+m_{5}s_{23}\theta_{s_{23}},\frac {1}{s_{12}}+\frac{1}{s_{234}}+m_{5}s_{34}\theta_{s_{34}}, \tag{39}\] \[\frac{1}{s_{12}}+\frac{1}{s_{23}}+m_{5}s_{123}\theta_{s_{123}}, \frac{1}{s_{23}}+\frac{1}{s_{34}}+m_{5}s_{234}\theta_{s_{234}}\Big{\rangle}\,.\]
This form is equivalent to the ideal obtained from HolonomicFunctions and also Macaulay2 after a Gröbner basis computation. The rational terms appearing in the annihilators have the functional form of a 4-point amplitude. Indeed, the rational terms of the generators of \(I_{5}\) are given by
\[\frac{1}{s_{123}}+\frac{1}{s_{34}}= -m_{4}(12,3,4),\quad\frac{1}{s_{12}}+\frac{1}{s_{23}}=-m_{4}(1,2,3 ),\quad\frac{1}{s_{123}}+\frac{1}{s_{234}}=-m_{4}(1,23,4),\] \[\frac{1}{s_{23}}+\frac{1}{s_{34}}= -m_{4}(2,3,4),\quad\frac{1}{s_{12}}+\frac{1}{s_{234}}=-m_{4}(1,2, 34), \tag{40}\]
where we have used Eq.(10). Hence, defining
\[\mathbb{A}_{5}:= m_{5}\text{ diag}\left(s_{12},\ s_{23},\ s_{34},\ s_{123},\ s_{234}\right), \tag{41}\] \[\theta_{5}:= (\theta_{s_{12}},\theta_{s_{23}},\theta_{s_{34}},\theta_{s_{123} },\theta_{s_{234}})^{T},\] (42) \[\kappa_{5}:= (m_{4}(12,3,4),m_{4}(1,23,4),m_{4}(1,2,34),m_{4}(1,2,3),m_{4}(2,3,4))^{T}\;, \tag{43}\]
we can express \(I_{5}\) as \(\langle\mathbb{A}_{5}\theta_{5}-\kappa_{5}\rangle\), which together with the initial condition \(m_{5}|_{S_{5}\to\infty}=0\) gives a holonomic representation of \(m_{5}\).
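As with the 4-point case, this representation can be checked symbolically. The following sketch (ours, with plain symbols for the invariants of \(B_{5}\)) confirms that \(s_{w}\theta_{s_{w}}m_{5}\) reproduces the 4-point sub-amplitudes of Eq.(40), which is equivalent to the statement that the generators of Eq.(39) annihilate \(m_{5}\):

```mathematica
(* Check that s_w theta_{s_w} m5 equals the 4-point sub-amplitudes of Eq. (40). *)
m5 = 1/(s12 s123) + 1/(s12 s34) + 1/(s123 s23) + 1/(s23 s234) + 1/(s234 s34);
theta[f_, x_] := x D[f, x];
Simplify[{
  s12  theta[m5, s12]  == -(1/s123 + 1/s34),    (* = m4(12,3,4) *)
  s23  theta[m5, s23]  == -(1/s123 + 1/s234),   (* = m4(1,23,4) *)
  s34  theta[m5, s34]  == -(1/s12  + 1/s234),   (* = m4(1,2,34) *)
  s123 theta[m5, s123] == -(1/s12  + 1/s23),    (* = m4(1,2,3)  *)
  s234 theta[m5, s234] == -(1/s23  + 1/s34)}]   (* = m4(2,3,4)  *)
(* -> {True, True, True, True, True} *)
```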
### Higher points
We follow the same procedure at higher points. Starting with \(I_{n}=\langle P_{1},\dots,P_{N}\rangle\) we divide over a factor of Mandelstam invariants, which can be read off from the product \(s_{w}g\), for \(w\in B_{n}\) (see Eq.(24)). We find that the annihilator ideal with \(N\) generators of the \(n\)-point amplitude is
\[I_{n}=\langle\mathbb{A}_{n}\theta_{n}-\kappa_{n}\rangle\,, \tag{44}\]
where the \(N\times N\) diagonal matrix \(\mathbb{A}\) and the vector of parameters are given by
\[\mathbb{A}_{n}=m_{n}\text{diag}(s_{12},s_{23},\dots),\quad\theta_{n}=(\theta_ {s_{12}},\theta_{s_{23}},\dots)^{T},\quad\kappa_{n}=(\iota_{n}(12),\iota_{n}( 23),\dots)^{T}, \tag{45}\]
respectively. The ellipsis indicates all remaining words in the basis \(B_{n}\) and the function \(\iota(w)\) is defined by
\[\iota_{n}(w):=m_{|w|+1}(l_{1}\dots l_{|w|})\,m_{n-|w|+1}(1,2,\dots,w,\dots,n-1) \tag{46}\]
for \(2\leq|w|\leq n-2\) and zero otherwise. Since all operators in \(I_{n}\) are of order one, we need a single boundary condition, as indicated by Eq.(18), which we choose as \(m_{n}|_{S_{n}\to\infty}=0\). This ideal and the boundary condition then specify a canonical holonomic representation of the \(n\)-point amplitude. For \(n>5\), we find nonlinear entries in \(\kappa_{n}\), which depend on lower-point sub-amplitudes as can be seen from Eq.(46) (see Appendix B for examples). The ideal in Eq.(44) implies that the amplitude \(m_{n}\), thought of as a function of \(N\) variables, satisfies the system of differential equations
\[s_{w}\theta_{s_{w}}m_{n}(S)=\kappa_{w},\quad\forall w\in B_{n}\,. \tag{47}\]
We have checked Eqs.(44)-(47) up to \(n=10\).
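For instance, a spot check of Eq.(47) at \(n=6\) can be carried out with the recursion of Appendix A: the entry of \(\kappa_{6}\) associated with \(w=12\) is \(m_{5}(12,3,4,5)\) (see Appendix B), which is obtained from \(m_{5}\) by the relabeling \(1\to 12\), \(2\to 3\), \(3\to 4\), \(4\to 5\). The sketch below (ours) assumes the Appendix A definitions have been evaluated:

```mathematica
(* Spot check of Eq. (47) for n = 6 and w = 12; requires the Appendix A definitions. *)
m6 = mAB[{1, 2, 3, 4, 5}, {1, 2, 3, 4, 5}];
m5sub = mAB[{1, 2, 3, 4}, {1, 2, 3, 4}] /.
   s[a__] :> s @@ Flatten[{a} /. {1 -> {1, 2}, 2 -> 3, 3 -> 4, 4 -> 5}];  (* m5(12,3,4,5) *)
Simplify[s[1, 2]^2 D[m6, s[1, 2]] == m5sub]   (* s_w theta_{s_w} m6 = kappa_w  ->  True *)
```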
## 4 Conclusions
We have studied double-ordered biadjoint amplitudes in the language of \(D\)-modules. Their holonomicity implies that there exists a canonical representation of \(n\)-point amplitudes made up of an annihilator left ideal with precisely \(N\) generators, which we have constructed explicitly. We have found that in general the annihilator can be constructed from a diagonal matrix, which depends on the independent kinematic invariants, and a vector made up of lower-point sub-amplitudes.
It would be interesting to study canonical holonomic representations of generalizations of biadjoint amplitudes, which have been proposed in Refs. [44, 45]. Similar Berends-Giele recursions to those used here also exist for Yang-Mills [46] and were also important for the derivation of second-order differential operators in Ref. [29], so it would be interesting to construct canonical holonomic representations for these amplitudes and determine whether the resulting differential equations manifest a dependence on sub-amplitudes.
## Acknowledgements
We thank Carlos Mafra for helpful comments on the manuscript. This work is supported by the European Research Council under grant ERC-AdG-885414.
## Appendix A Mathematica code to calculate biadjoint scalar amplitudes
(* Mandelstam invariant of a word; s is symmetric in its labels *)
SetAttributes[s, Orderless];
(* phi_{w1,w2} vanishes unless w1 and w2 contain the same letters, cf. Eq. (7) *)
xAB[A_, B_] := 0 /; Complement[A, B] =!= {};
(* start of the recursion: phi_{i,j} = delta_{ij} *)
xAB[{i_Integer}, {j_Integer}] := If[i == j, 1, 0];
(* Berends-Giele recursion of Eq. (7), memoized *)
xAB[A_, B_] := xAB[A, B] = (1/s[A]) Plus @@ Flatten[Table[
     xAB[A[[1 ;; j]], B[[1 ;; k]]] xAB[A[[j + 1 ;; Length[A]]], B[[k + 1 ;; Length[B]]]] -
      xAB[A[[j + 1 ;; Length[A]]], B[[1 ;; k]]] xAB[A[[1 ;; j]], B[[k + 1 ;; Length[B]]]],
     {j, 1, Length[A] - 1}, {k, 1, Length[B] - 1}]];
(* Amplitude of Eq. (8) for the words A = w1 and B = w2; the common last leg n is implicit.
   The standard ordering corresponds to A = B = {1, 2, ..., n-1}. *)
mAB[A_, B_] := (-1)^(Length[A] - 2) xAB[A, B] (s @@ A) /. {List -> Sequence} // Expand;
## Appendix B Explicit values of \(\kappa_{n}\)
Here we give some explicit values of \(\kappa_{n}\) up to \(n=7\). From Eq.(20) we have
\[B_{6}= \{12,23,34,45,123,234,345,1234,2345\}, \tag{48}\] \[B_{7}= \{12,23,34,45,56,123,234,345,456,1234,2345,3456,12345,23456\}. \tag{49}\]
Hence, using Eq.(46) the corresponding values of \(\kappa\) are
\[\kappa_{6}= [m_{5}(12,3,4,5),m_{5}(1,23,4,5),m_{5}(1,2,34,5),m_{5}(1,2,3,45),m_{4}(1,2,3)m_{4}(123,4,5),\] \[m_{4}(2,3,4)m_{4}(1,234,5),m_{4}(3,4,5)m_{4}(1,2,345),m_{5}(1,2,3,4),m_{5}(2,3,4,5)], \tag{50}\] \[\kappa_{7}= [m_{6}(12,3,4,5,6),m_{6}(1,23,4,5,6),m_{6}(1,2,34,5,6),m_{6}(1,2,3,45,6),m_{6}(1,2,3,4,56),\] \[m_{4}(1,2,3)m_{5}(123,4,5,6),m_{4}(2,3,4)m_{5}(1,234,5,6),m_{4}(3,4,5)m_{5}(1,2,345,6),\] \[m_{4}(4,5,6)m_{5}(1,2,3,456),m_{5}(1,2,3,4)m_{4}(1234,5,6),m_{5}(2,3,4,5)m_{4}(1,2345,6),\] \[m_{5}(3,4,5,6)m_{4}(1,2,3456),m_{6}(1,2,3,4,5),m_{6}(2,3,4,5,6)], \tag{51}\]
respectively, where we have used \(m_{3}(w_{1},w_{2})=1\).
|
2309.10969 | Bell Correlations as Selection Artefacts | We show that Bell correlations may arise as a special sort of selection
artefact, produced by ordinary control of the initial state of the experiments
concerned. This accounts for nonlocality, without recourse to any direct
spacelike causality or influence. The argument improves an earlier proposal in
(arXiv:2101.05370v4 [quant-ph], arXiv:2212.06986 [quant-ph]) in two main
respects: (i) in demonstrating its application in a real Bell experiment; and
(ii) in avoiding the need for a postulate of retrocausality. This version
includes an Appendix, discussing the relation of the proposal to the
conclusions of Wood and Spekkens (arXiv:1208.4119 [quant-ph]). | Huw Price, Ken Wharton | 2023-09-19T23:50:45Z | http://arxiv.org/abs/2309.10969v3 | # Bell Correlations as Selection Artefacts
###### Abstract
We show that Bell correlations may arise as a special sort of selection artefact, produced by ordinary control of the initial state of the experiments concerned. This accounts for nonlocality, without recourse to any direct spacelike causality or influence. The argument improves an earlier proposal in [Price & Wharton 2021b, Price & Wharton 2022] in two main respects: (i) in demonstrating its application in a real Bell experiment; and (ii) in avoiding the need for a postulate of retrocausality.
**Keywords:** Bell correlations, nonlocality, collider bias, entanglement, retrocausality
## 1 Introduction
### Overview
We propose an explanation of the correlations characteristic of Bell experiments, showing how they may arise as a special sort of preselection artefact. This explanation accounts for nonlocality, without recourse to any direct spacelike causality or influence. If correct, the proposal offers a novel way to reconcile nonlocality with relativity.1
We begin with a brief account of the landscape of discussions of the implications of Bell's Theorem, in order to explain where our proposal sits in relation to other approaches.
### Orientation
In the discussions of issues arising from the work of Einstein, Podolsky, and Rosen and Schrodinger in the 1930s [1, 2, 3], and John Bell in the 1960s [1], a key reference point is the Common Cause Principle (CCP). The following formulation of CCP will do for our purposes [1]:
The Common Cause Principle says that every correlation is either due to a direct causal effect linking the correlated entities, or is brought about by a third factor, a so-called common cause.
With reference to CCP, the relevant history goes like this. In 1935 EPR noted that correlations implied by what Schrödinger soon dubbed 'entanglement' seemed to require explanation by common causes, not present within QM itself. EPR concluded that QM was incomplete, and Schrödinger concurred. The alternative - the other option allowed by CCP - was that measurement choices on one side of an experiment could influence results on the other, even though the two sides might be far apart, and spacelike separated. To EPR and Schrödinger, that sort of 'nonlocal' influence seemed absurd; as Schrödinger put it, 'that would be magic' [3].
In the 1960s, however, Bell proved that under plausible assumptions, the common cause option is untenable. That seems to leave us, as the above formulation of CCP puts it, with 'a direct causal effect linking the correlated entities'; and hence with the kind of nonlocality that EPR and Schrödinger believed to be absurd. As Bell saw, of course, this meant at least a _prima facie_ conflict with Relativity.
In broad-brush terms, we can classify responses to Bell's argument as follows. This taxonomy is not comprehensive or precise, but it will serve to locate our current proposal.
1. **Accept nonlocality**, acknowledging its conflict with Relativity. Most such responses seek to mitigate the conflict, by arguing, for example, that Bell nonlocality is not the kind of full-blown (signalling) causation that would be in serious conflict with Relativity; or that the preferred frame required by nonlocality is not empirically detectable.2
Footnote 2: See [11] for a comprehensive defence of this option.
2. **Avoid nonlocality**, by arguing that Bell's result depends on an assumption of 'Realism' or 'Classicality', and rejecting this assumption.3 Footnote 3: See [11, 12] for a more sympathetic treatment of the former version.
3. **Render nonlocality compatible with Relativity**, by arguing that it is a partially _retrocausal_ process, acting via the past light cones of the two observers. This requires that we abandon Bell's assumption of _Statistical Independence_ (SI), so as to allow measurement choices to influence hidden variables (HVs) in their past.4 Footnote 4: See [11] for a review of this approach. We discuss SI further in §8.3 below. Note that this option requires that we make a choice about the term ‘nonlocality’. There is a narrow use of the term, implying _direct_ spacelike influence, and a broad use, allowing the indirect influence proposed here. Those who prefer the narrow use will regard this option as another way of _avoiding_ nonlocality. Footnote 5: Some authors, though not themselves guilty of this confusion, use the term ‘superdeterminism’ for any view rejecting SI [12].
4. **Restore common causes** (and hence avoid nonlocality), by treating measurement settings as among the elements of reality influenced by factors in the common past of the experimenters. This option, called _superdeterminism_, also requires violation of SI, and hence is sometimes confused with option 3.5 As this classification shows, however, it rests on a different choice between the two alternatives offered by CCP.
5. **Seek to avoid the problem**, by arguing that the Bell correlations are not subject to CCP. Here there are at least two previous proposals. 1. Arguing that the Bell correlations arise from the fact that our viewpoint as observers is always 'perspectival', e.g., confined to one branch of a larger set of 'many worlds' (with no need for nonlocality in the bigger picture).6 Footnote 6: As [11] put it, Bell's 'entire analysis is predicated on the assumption that, of the potential outcomes of a given experiment, one and only one occurs, and hence that it makes sense to speak of _the_ outcome of an experiment.'
2. Rejecting CCP altogether, arguing that the lesson of the Bell correlations is simply that the world contains robust patterns of correlations not explicable in the two ways that CCP allows.7 Footnote 7: See [22] for a view of this kind. This view does not avoid nonlocality, of course. It merely declines to explain it as CCP requires.
### Our approach
In our own previous work we have explored option 3, seeking to make a case for retrocausality.8 Part of the case, of course, was its evident potential to mitigate the consequences of nonlocality for Relativity.
Footnote 8: See [23, 24].
More recently [23], we noted that retrocausality has the effect of introducing _colliders_ (i.e., variables influenced by more than one contributing cause) into the causal model of Bell experiments, because the source event is influenced by measurement choices from both sides. This is interesting, in our view, because it suggests an avenue for explaining Bell correlations that bypasses CCP altogether.
It is well known that 'conditioning on a collider' - i.e., selecting cases in which the collider variable takes a particular value - can induce correlations between its independent causes. Such correlations do not call for explanation by one of the two routes permitted by CCP. They are merely 'selection artefacts', as people say, and are not robust, in the sense of supporting difference-making counterfactuals.
In recent work [23], we show that _some_ cases of entanglement - namely, those due to delayed-choice entanglement-swapping - are selection artefacts, in this sense.9 As we argue there, it follows that EPR-Bell experiments that rely on such delayed-choice entanglement-swapping do not imply nonlocality, in the usual way. The correlations involved do not support the required counterfactuals.10
Building on this work, we proposed that ordinary cases of entanglement might be explained in these terms, if we add two ingredients: retrocausality, and normal control of the initial conditions of an experiment [14]. As we explain below, the latter factor turns out to account for the difference between the non-robust correlations in the delayed-choice entanglement-swapping case, and the robust, counterfactual-supporting Bell correlations in ordinary EPR-Bell experiments.
The present piece improves our proposal in two ways: first, by working through its application in a real Bell experiment; and second, importantly, by eliminating the need for an independent assumption of retrocausality. We show that we can get all the retrocausality the proposal needs by considering an imaginary case in which ordinary initial control is absent. (We compare this to the use of frictionless idealisations in mechanics.) In this imaginary case, causality is time-symmetric by definition. This gives us the colliders on which the proposal depends, while at the same time explaining why the retrocausality in question is lacking in the real world, in which ordinary initial control is not absent.
In terms of our taxonomy in §1.2, our proposal is a hybrid. It agrees with 5(a) and 5(b) that the observed phenomena of Bell correlations do not call for the application of CCP, once properly understood, although not for the reasons that those options propose.11 Yet it agrees with option 3 that there is a robust counterfactual-supporting connection between the two wings of a Bell experiment, albeit one that is explained as a special sort of selection artefact, rather than as the kind of causal link mandated by CCP, once common causes are abandoned. As in the case of option 3, this connection is fully explained by connections within the light cones, so doesn't require any primitive spacelike connections, or preferred frame.
Footnote 11: Our view doesn’t postulate additional real phenomena to which we don’t have access, as the Everett view does. And unlike 5(b), it doesn’t claim that Bell experiments involve a _new_ kind of CCP-independent correlation – on the contrary, we propose that Bell correlations turn out to fall under a _familiar_ exception to CCP.
## 2 The Dortmund model
Let's begin with two versions of a familiar \(\vee\)-shaped Bell experiment. The term '\(\vee\)-shaped' alludes to the spacetime geometry of the experiments concerned, in a typical diagram with time on the \(y\)-axis.
In the first version (\(\vee_{1}\)), a pair of entangled spin-1/2 particles is produced with parallel spins in the plane of eventual measurement; we label this initial state \(I_{1}\).12 The particles are sent to two observers, Alice and Bob, who each perform a spin measurement at one of three angles in the specified measurement plane, arranged at \(120^{\circ}\) from each other. Let \(\{a,b\}\) be the settings and \(\{A,B\}\) be the outcomes, in the usual notation. The probabilities predicted by quantum theory are as follows:
Footnote 12: For example, if all measurements are constrained to occur in the \(x-y\) plane, the state \((|\!\!\uparrow\downarrow\rangle+|\!\!\downarrow\uparrow\rangle)/\sqrt{2}\) has the desired property. This is distinct from the singlet state \((|\!\!\uparrow\downarrow\rangle-|\!\!\downarrow\uparrow\rangle)/\sqrt{2}\), which will be used in the next example.
When \(a=b\), \(P(A=B)=1\), \(P(A\neq B)=0\)
When \(a\neq b\), \(P(A=B)=0.25\), \(P(A\neq B)=0.75\).
The expression \(P(A=B)=0.25\) is shorthand for two equally probable cases, \(A=B=0\) and \(A=B=1\) (each, in this case, with probability 0.125). Similarly \(P(A\neq B)=0.75\) is shorthand for the two outcome pairs \(A=0,B=1\) and \(A=1,B=0\), each with probability 0.375.
The second version (\(\vee_{2}\)) of the experiment is the same, except that we begin with a pair of particles with antiparallel spins (the singlet state); we label this state \(I_{2}\). This yields the complementary probabilities:
When \(a=b\), \(P(A=B)=0\), \(P(A\neq B)=1\)
When \(a\neq b\), \(P(A=B)=0.75\), \(P(A\neq B)=0.25\).
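These numbers follow from elementary two-qubit quantum mechanics. As a minimal check (ours), assuming as in footnote 12 that the spins are measured along directions in the \(x\)-\(y\) plane, with the three settings at angles \(0\), \(2\pi/3\) and \(4\pi/3\) in that plane:

```mathematica
(* Check of the quoted probabilities for V1 and V2, for spin measurements in the x-y plane. *)
proj[th_, r_] := (IdentityMatrix[2] + r (Cos[th] PauliMatrix[1] + Sin[th] PauliMatrix[2]))/2;
pSame[psi_, a_, b_] := Simplify[Sum[
    Conjugate[psi] . (KroneckerProduct[proj[a, r], proj[b, r]] . psi), {r, {1, -1}}]];
psi1 = {0, 1, 1, 0}/Sqrt[2];    (* I1: the state (|01> + |10>)/Sqrt[2] of footnote 12; |0> = up, |1> = down *)
psi2 = {0, 1, -1, 0}/Sqrt[2];   (* I2: the singlet (|01> - |10>)/Sqrt[2] *)
{pSame[psi1, 0, 0], pSame[psi1, 0, 2 Pi/3]}   (* P(A=B) for V1: {1, 1/4} *)
{pSame[psi2, 0, 0], pSame[psi2, 0, 2 Pi/3]}   (* P(A=B) for V2: {0, 3/4} *)
```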
Let's also imagine an experiment Big-\(\vee\) that runs both \(\vee_{1}\) and \(\vee_{2}\), and mixes the results, randomly and in equal proportions.13 We call it 'big' because it combines two component experiments, \(\vee_{1}\) and \(\vee_{2}\).
Footnote 13: We can actually imagine several versions of such an experiment – more on some of those variations below (§8.2).
It is easy to check that in Big-\(\vee\), the probabilities of having originated in \(I_{1}\) or \(I_{2}\), given the final results, are given by the following expressions. For originating in \(I_{1}\) we have:
When \(a=b\), \(P(I_{1}|A=B)=1\), \(P(I_{1}|A\neq B)=0\)
When \(a\neq b\), \(P(I_{1}|A=B)=0.25\), \(P(I_{1}|A\neq B)=0.75\).
For originating in \(I_{2}\) we have the complementary values:
When \(a=b\), \(P(I_{2}|A=B)=0\), \(P(I_{2}|A\neq B)=1\)
When \(a\neq b\), \(P(I_{2}|A=B)=0.75\), \(P(I_{2}|A\neq B)=0.25\).
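It is easy to check these values; for \(a\neq b\), for example, Bayes' theorem applied to the equal mixture of \(I_{1}\) and \(I_{2}\) gives

\[P(I_{1}|A=B)=\frac{P(A=B|I_{1})P(I_{1})}{P(A=B|I_{1})P(I_{1})+P(A=B|I_{2})P(I_{2})}=\frac{0.25\times 0.5}{0.25\times 0.5+0.75\times 0.5}=0.25\,,\]

and similarly for the remaining entries.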
Big-\(\vee\) does not exhibit Bell correlations in its results as a whole. The Bell correlations of the two sub-ensembles cancel out, so that the four variables in \(\{a,b,A,B\}\) are pairwise independent of each other. The Bell correlations reappear, of course, if we sort the results of Big-\(\vee\) into the two sub-ensembles: those originating from \(I_{1}\) (case \(\vee_{1}\)) and those originating from \(I_{2}\) (case \(\vee_{2}\)).
These facts imply that in Big-\(\vee\) there are probabilistic dependencies between the four variables in \(\{a,b,A,B\}\) and a variable \(\mathbf{I}\), representing the initial state (i.e., \(I_{1}\) or \(I_{2}\)). Each of the variables in \(\{a,b,A,B\}\) is probabilistically dependent on \(\mathbf{I}\), conditional on the remaining three variables in \(\{a,b,A,B\}\). We depict these dependencies in Figure 1. We have marked in green the setting variables, over which Alice and Bob have experimental control. We stress that the dashed bidirectional arrows represent _probabilistic_ dependencies, but not _causal_ dependencies. In particular, Alice's and Bob's choice of settings would _not_ normally be taken to influence the initial state variable \(\mathbf{I}\), as in Figure 2.
Let's think about what this means for the counterfactual judgements Alice makes, about what would have happened, had she made a different choice of the setting \(a\). Since her choice does not influence **I**, **I** would have had the same value, had she chosen differently. But we know that \(B\) would have been different, in some cases - that's Bell nonlocality at work.
Figure 1: Probabilistic dependencies in Big-\(\vee\).
Figure 2: An _unrealistic_ causal interpretation of Big-\(\vee\)
Figure 3 shows the intuitive causal model for Big-\(\vee\), or equivalently for each of the two sub-cases, \(\vee_{1}\) and \(\vee_{2}\). The red and blue arrows represent the nonlocal influences implied by the existence of Bell correlations. It is a contentious question exactly what sort of influences these are (e.g., whether they are really _causal_).14 We set aside that issue, and ask readers to interpret the blue and red arrows in terms of their own view of the matter.15
Footnote 14: For discussion, see [11, 12] and references therein.
Footnote 15: We do require that the connection be counterfactual-supporting, so readers who are unconvinced about that may want to get off at this stop.
## 3 What's wrong with Figure 2?
Why is Figure 2 physically unrealistic? A number of answers suggest themselves: (i) causation works from past to future, not the reverse; (ii) the past is fixed, and therefore not amenable to influence by later choices; and (iii) as an initial variable, **I** is the kind of thing that an experimenter can control, or fix, before Alice and Bob choose their settings.
The precise meaning of such answers, and their relation to one another, need not concern us here. But we note that for all of them, it is plausible that they are related in some way to the prevailing thermodynamic asymmetry in our universe (or our _region_ of the universe, if we don't want to exclude the possibility that it might be a local matter, as some have suggested). One thing that counts in favour of this suggestion is that in any of these three forms, we are trying to explain something _time-asymmetric_. The thermodynamic asymmetry isn't the only possible physical basis for such time-asymmetries, but it is by far the most plausible one.16
Figure 3: Big-\(\vee\) with Bell nonlocality.
Footnote 16: For discussion of this point, see, e.g. [Price 1996, Albert 2000, Price & Weslake 2010, Rovelli 2021]. Some authors argue that agency is a crucial ingredient here; but that, too, plausibly depends on the thermodynamic asymmetry.
Again, we don't need to examine this argument in detail. What matters here is that we can _imagine_ the case in which the asymmetry that makes Figure 2 physically unrealistic is absent. The suggested link with the thermodynamic asymmetry makes that relatively easy, because we seem to be able to imagine that it might be absent (or oriented differently in distant regions of our universe).
For terminological convenience, we adopt the label _Initial Control_ for the time-asymmetric feature of our world - whatever it is - that makes Figure 2 unrealistic. Our next step is to imagine a world in which Initial Control is absent.
## 4 Turning off Initial Control
We want to imagine turning off Initial Control in Big-\(\vee\). We need to imagine that the factors that normally enable control of initial conditions of an experiment are absent. It doesn't matter how we imagine this happening - our argument doesn't require that it be physically realistic - but turning off the Past Hypothesis seems a reliable, if drastic, way to achieve it!17
Footnote 17: By ‘Past Hypothesis’ we mean the low entropy boundary condition apparently required in the early universe to explain the observed thermodynamic asymmetry, and all that rests on it. See [Price 1996, Albert 2000, Price 2010, Rovelli 2021] for discussion.
As a loose analogy, it may be helpful to compare turning off Initial Control to turning off friction. The frictionless case is wholly unrealistic, in many domains, but is nevertheless helpful to consider, not least to distinguish the effects of friction from other things.18
Footnote 18: It speaks in favour of this comparison that the time-asymmetric nature of friction is also linked to the thermodynamic asymmetry.
Consider the effect of turning off Initial Control on the counterfactuals in Big-\(\vee\). Normally, as we have seen, Alice will be entitled to treat \(\mathbf{I}\) as having the same value as it actually has, whatever setting she had chosen; which means, as we said, that the difference would need to show up in Bob's outcome, in some cases. But with Initial Control turned off, Alice loses her entitlement to say that had she chosen a different setting, **I** would nevertheless have had the same value. That's what turning off Initial Control means. It means not treating initial values (in this case, the value of the variable **I**) as necessarily _fixed_, for the purposes of one's causal modelling.20
Footnote 20: We are setting aside for the moment the question whether there might be _actual_ versions of Big-\(\vee\) in which we do not have control of the variable **I**; see §8.2.
Once Alice allows that in this imagined case - with Initial Control turned off - her choices might make a difference to the value of **I**, then she should model the case in the way shown in Figure 4. The green arrows represent the new possible dependencies, admitted by turning off Initial Control. The new feature of this case that interests us is that Figure 4 treats the variable **I** as a _collider_, in causal modelling terms.21
Footnote 21: Two notes about Figure 4. First, the black arrows should be regarded as legacies of the causal structure shown in Figure 3, the case in which Initial Control is not turned off. Other than adding the new green arrows, we set aside questions about the effect of turning off Initial Control on the dependencies shown in Figure 3. Second, we acknowledge that there are reasonable questions about the meaning of causal dependence in this imaginary case. We set those aside, too, simply noting that whatever account we use, it needs to be time-symmetric in this case - that's the point.
Figure 4: Without Initial Control – **I** may become a collider.
## 5 Conditioning on a collider
### General considerations
A _collider_ is a variable with more than one direct cause, within a causal model. In other words, in the graphical format of directed acyclic graphs (DAGs), it is a node at which two or more arrows converge (hence the term 'collider').
It is well known that conditioning on such a variable - i.e., selecting the cases in which it takes a certain value - may induce a correlation between its causes, even if they are actually independent. As [Cole et al 2010, 417] put it, 'conditioning on the common effect imparts an association between two otherwise independent variables; we call this selection bias.'22
Footnote 22: Collider bias is also called Berkson’s bias, after a Mayo Clinic physician and statistician who noted it in the 1940s [Berkson 1946]. But the point dates back at least to the Cambridge economist A C Pigou [Pigou 1911]. (We are grateful to Jason Grossman and George Davey Smith here.)
Here's a simple example, adapting the so-called 'Death in Damascus' case, familiar in decision theory [Gibbard & Harper 1978]. Suppose that you and Death are each deciding where to travel tomorrow. You have the same two possible destinations - in the usual version, Damascus and Aleppo. As in Figure 5, your choice and Death's choice both influence an (aptly named) collider variable, which determines your survival. Let this variable take value 0 if you and Death do not meet, and 1 if you do. (We assume for simplicity that if the two of you choose the same destination, you will meet; and that this will be fatal, from your perspective.)
If we sample statistics for this kind of case, across a large population, they may suggest that people have an uncanny ability to evade Death, always choosing the opposite destination. If so, we've probably introduced selection bias, by interviewing survivors only.
Figure 5: A simple collider (‘Death in Damascus’)
If you are a survivor, you might think to yourself, "I'm a survivor, so if I had chosen the other destination, Death would also have made a different choice." You would be wrong. If you had chosen the other destination, you wouldn't have been a survivor. This illustrates an important fact. The correlations that result from conditioning on a collider do not support counterfactuals (in normal circumstances - we'll come to an exception in a moment).
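The point is easy to see in a toy simulation (ours, not part of the original argument): with independent, uniformly random destination choices, the two choices are uncorrelated in the full population, but perfectly anticorrelated once we condition on survival.

```mathematica
(* Toy Monte Carlo of the Death-in-Damascus collider: selecting survivors induces correlation. *)
n = 100000;
you = RandomInteger[1, n];      (* 0 = Damascus, 1 = Aleppo *)
death = RandomInteger[1, n];
survived = MapThread[#1 != #2 &, {you, death}];    (* survive iff the destinations differ *)
Correlation[N[you], N[death]]                      (* ~ 0: choices independent in the full population *)
pairs = Pick[Transpose[{you, death}], survived];
Correlation @@ N[Transpose[pairs]]                 (* -1: perfect anticorrelation among survivors *)
```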
### Colliders in Big-\(\vee\)
Let's return to our imagined version of Big-\(\vee\), with Initial Control turned off, depicted in Figure 4. As Alice reflects on this imagined case, these notions from causal modelling give her an obvious way of thinking about the Bell correlations in \(\vee_{1}\) and \(\vee_{2}\). They look like selection artefacts, each resulting from conditioning on one of the two possible values of the collider variable **I**.
As we just saw, selection artefacts don't support counterfactuals, but in this imagined case they don't need to. In this case, Alice's choices make a difference to **I**, and so it is a plus, not a minus, that they wouldn't support the counterfactuals associated with nonlocality. Alice can say, "I thought my measurement choices were making a difference to the outcomes on Bob's side of the experiment, but that was a selection artefact. They were actually making a difference to the value of **I**."
The reader may wonder how this imagined case could possibly be relevant to the real world. In the real world, after all:
* Initial Control is _not_ turned off
* The causal model in Figure 4 is _un_realistic
* Bell correlations _do_ support counterfactuals.
Remarkably, there's a way to solve all three problems, in one step. All we need to do is to turn Initial Control back on. But to explain why this does the trick, we need one more observation about colliders.
## 6 Constrained colliders
In our previous work we have introduced the notion of a _constrained_ collider [Price & Wharton 2021b, Price & Wharton 2022]. Intuitively, this is a restriction imposed from outside a causal model, biasing or completely specifying the value of the variable at the collider.
In Death in Damascus, for example, we saw that your choice and Death's choice both influence a collider variable that takes value 0 if you do not meet, and 1 if you do. If Fate wants to ensure that your number's up, as it were, she _constrains_ this collider, setting its value to 1. In effect, Fate imposes a future boundary condition, _requiring_ that the collider variable take value 1.
This boundary condition makes a big difference to the counterfactuals. If it weren't for Fate's involvement, grieving relatives would be entitled to say, "If only they had made the other choice, they would still be with us today!" Once Fate constrains the collider, this is no longer true. If you'd made the other choice you would have met Death in the other place, instead.
With Fate constraining the collider, in other words, there is a counterfactual-supporting connection between your movement and Death's. You control Death's movements, in effect.23 This is what we term _Connection across a Constrained Collider_ (CCC) [Price & Wharton 2022].
Footnote 23: This is the kind of thing that makes the Death in Damascus case interesting for decision theorists, of course; see [Price & Weslake 2010] for discussion.
Collider constraint can come by degrees. Fate might be kind to you, and give you some small chance of eluding Death, at least tomorrow. We will be interested here in the full constraint version, in which case we'll say that the variable at the collider is _locked_.
A locked variable can only take one value - that's the point. This means that it is no longer really a _variable_ at all, in the usual sense of a causal model, and can no longer be an effect of any of the remaining variables. We could put it like this: causation requires making a difference, and locking prevents making a difference. By locking the collider variable in the Death in Damascus case, Fate makes it the case that your choice makes no difference to whether you encounter Death. You no longer have any causal influence on the matter.
This may seem highly unrealistic, but there is at least one place in
physics where collider constraint has actually been proposed. It is the key to the so-called Horowitz-Maldacena hypothesis, concerning the black hole information paradox. Horowitz and Maldacena describe the proposal as follows:
In the process of black hole evaporation, particles are created in correlated pairs with one falling into the black hole and the other radiated to infinity. The correlations remain even when the particles are widely separated. The final state boundary condition at the black hole singularity acts like a measurement that collapses the state into one associated with the infalling matter. This transfers the information to the outgoing Hawking radiation in a process similar to "quantum teleportation". [Horowitz & Maldacena 2004]
The key difference from ordinary quantum teleportation is that the 'final state boundary condition' imposes a particular result on the measurement concerned, thus eliminating the usual need for postselection. In our terminology, this amounts to constraining a collider at that point.
Discussing the Horowitz-Maldacena hypothesis recently, Malcolm Perry puts it like this:
[t]he interior of the black hole is therefore a strange place where one's classical notions of causality... are violated. This does not matter as long as outside the black hole such pathologies do not bother us. [Perry 2021, 9]
As we'll explain, our proposal is going to be that boundary conditions doing this job are actually extremely common, if you know where to look. In the other direction of time, they are just ordinary initial boundary conditions, and don't need black holes.
## 7 Turning Initial Control back on
We noted two points about constrained colliders. Connection across a Constrained Collider (CCC) does support counterfactuals. And a _locked_ collider is removed from its causal model altogether, in the sense that because it is locked, it cannot be influenced by variables elsewhere.
This gives Alice a way of applying the lessons of the imaginary case of Big-\(\vee\) (without Initial Control) to the real case. Turning Initial Control back on - adding it by hand to the imagined version of Big-\(\vee\), as it were - produces a model applicable to real-world operational versions of \(\vee_{1}\) and \(\vee_{2}\). Each of these experiments may be regarded as a locked case of the imagined version of Big-\(\vee\). In \(\vee_{1}\), Initial Control locks the variable \(\mathbf{I}\) to value \(I_{1}\); in \(\vee_{2}\) it locks it to value \(I_{2}\).
As just noted, locking the collider at \(\mathbf{I}\) removes it from the causal model, thereby explaining why Figure 2 and Figure 4 are unrealistic. And it generates a connection across the collider that does support the counterfactual-supporting influences shown in red and blue in Figure 3.
This proposal has a huge payoff, from Alice's point of view. It gives her a way of explaining the Bell correlations, without any relativity-challenging nonlocality. Referring to Figure 6, in other words, Alice can say that the Bell correlations in \(\vee_{2}\) are explained by the fact that \(\vee_{2}\) may be regarded as a product of a locked collider, the collider itself having the structure shown in Figure 4.
In Figure 6 we again indicate by red and blue arrows the dependencies entailed by the Bell correlation. Now, however, it seems appropriate to represent these dependencies as linking the two sides of the experiment _via_ the locked node at \(I_{2}\). If nothing else, this depiction serves to emphasise that there is no direct spacelike influence involved. The connection arises from the locked collider, which is in the overlap of the past light cones of Alice and Bob.
We stress again that the retrocausality that Alice needs in Figure 4, to explain Bell correlations in this way, is something that _comes for free_ when Initial Control is imagined turned off. It is an ingredient that becomes available anyway, to the extent that the argument needs it, when we consider the case in which Initial Control is absent.24
Figure 6: Big-\(\vee\) with the dependencies induced by a locked node at \(I_{2}\).
Footnote 24: Contrary to our own argument in [Price & Wharton 2022, Price & Wharton 2023], then, this recipe for explaining entanglement needs three ingredients, not four!
Of course, Alice might have some other notion of causality in mind, such that turning off Initial Control is not sufficient for retrocausality _in her sense_. For example, she might regard it as true by definition (according to what she means by 'causality') that causation only works forward. That doesn't matter for our argument. We don't require that the variable **I** be a collider in Alice's sense of causation, whatever that might be; but simply that it be a collider in the minimal sense that comes for free by turning off Initial Control.
We also emphasise that the argument does not require that Alice has retrocausal influence over the initial state of _actual_ experiments of the form of \(\vee_{1}\) and \(\vee_{2}\). It simply requires that she not rule out such influence in the imaginary case in which Initial Control is missing - which turns out to be true by definition, in the sense of causality that matters. This is sufficient to make sense of a version of Big-\(\vee\) without Bell correlations, from which the Bell experiments \(\vee_{1}\) and \(\vee_{2}\) emerge by a kind of preselection: by fixing the value of the initial variable **I**. In other words, it shows how Bell correlations in real experiments can be a selection artefact.
## 8 Discussion
### The real world is a lot bigger than Big-\(\vee\)
In the argument above, Big-\(\vee\) is a stand-in for something much bigger. It takes a huge amount of Initial Control to prepare a version of Big-\(\vee\), of course, even if we don't take the additional step of choosing between \(\vee_{1}\) and \(\vee_{2}\). So we haven't really got rid of Initial Control - we've just moved it a little further out. Still, we've provided a simple but realistic model on which more general versions can be built, and a proof of principle. Intuitively, enlarging the model will provide more, rather than fewer, ways in which Bell correlations can emerge as preselection artefacts.
### Physically realistic unlocked versions of Big-\(\lor\)?
We have been assuming that any actual, physically realistic version of Big-\(\lor\) would be the kind that mixes results from versions of \(\lor_{1}\) and \(\lor_{2}\) in which the initial state is indeed locked. But is this really the case, or are there realistic versions of Big-\(\lor\) in which **I** is not locked? We leave this as an open question,25 and recommend it among other things as a training exercise. As we said, our argument does not depend on a positive answer, but it does require the ability to imagine that the answer _might_ be positive. That's a step on the path to imagining the case in which there is no Initial Control.
Footnote 25: The delayed choice framework seems a promising place to look.
### Two paths to retrocausality26
Footnote 26: Not to be confused with Adlam’s [11] two _roads_ to retrocausality!
There are important differences between the approach taken above and typical retrocausal approaches to the explanation of Bell correlations.27 We noted one of these in §1.2. In terms of our taxonomy there, typical retrocausal approaches fall into category 3. They seek to make Bell's nonlocality relativity-friendly, by representing it as a zigzag process, via past light cones. The present approach aims instead to remove Bell correlations from the scope of CCP altogether, by construing them as selection artefacts.
Footnote 27: See [12, 13] for recent surveys of the latter.
This difference is linked to another. So far as we are aware, all previous work in this area (including our own) considers retrocausal influences on properties conventionally denoted by \(\lambda\), understood to be the factors _not_ fixed by the initial preparation of the state \(\psi\) of the quantum system concerned.
As noted in §1.2, retrocausality requires a violation of a key assumption required by Bell's Theorem, the principle called Statistical Independence (SI), or Measurement Independence. Expressed in terms of \(\lambda\), Statistical Independence amounts to this:
1. \(P(\lambda|\alpha)=P(\lambda)\) (where \(\alpha\) is a setting variable).
Standard retrocausal models allow the possibility that \(P(\lambda|\alpha)\neq P(\lambda)\).
In the usual approach, \(\psi\) is an input variable, and is simply not assigned a probability. Hence it doesn't make sense to ask whether
\[P(\psi|\alpha)=P(\psi).\]
Turning off Initial Control changes this. \(\psi\) can now be assigned a probability, if we wish, and turning off Initial Control can then be regarded as an explicit rejection of the assumption \(\mathrm{SI}_{\psi}\). We remind readers that our argument requires this only in what we termed an imagined case, which can be physically unrealistic. The argument works even if in the actual world, \(\psi\) is always subject to Initial Control.
Why is this difference between these two approaches interesting? For at least two reasons, we think. First, the present approach doesn't require any _actual_ retrocausality, and thereby presents a much smaller target to would-be objectors. Second, the approach seems to be ontology-neutral, at least over a broad range,28 in the sense that whatever account is offered of the ontology of quantum state preparation - whatever the ontological basis of \(\psi\), in effect - there will be a way of raising the possibility of turning off Initial Control _in that ontology._
Footnote 28: The exceptions will include approaches that reject ontological questions altogether, in the quantum realm.
### Remembering the sub-operational level
We have noted that the kind of retrocausality needed for our argument comes for free, when we imagine turning off Initial Control. Turning off Initial Control in Big-\(\vee\)_just is_ allowing the possibility that Alice's and Bob's settings choices might influence \(\mathbf{I}\), in the sense that matters. But if Initial Control is turned off, so that Alice's and Bob's settings choices may make a difference to \(\mathbf{I}\), how is this supposed to work? To what else, between \(\mathbf{I}\) and the setting choices \(a\) and \(b\), can the latter also make a difference?
This question takes us to the sub-operational level, and we should expect any answers to be heavily dependent on choice of quantum ontology. We note the following general points. First, whatever goes into this sub-operational level, it will need to allow the kind of retrocausality, or future input dependence, that comes for free with giving up Initial Control. It would be nonsense to imagine making a difference to \(\mathbf{I}\) without making a difference to the proposed intermediate ontology.
Second, the converse does not hold. That is, Initial Control of **I** _need not_ imply full Initial Control of sub-operational ontology between **I** and the settings. In the sense that has turned out to matter, retrocausality and lack of Initial Control go together. And in ordinary circumstances, we don't have reason to assume that we have full control over hidden variables (HVs) in the quantum realm - quite the contrary. So equating lack of Initial Control with retrocausality, in the sense of the latter relevant to present discussions, seems to imply that retrocausality is always the default option in the HV case. This would be a striking reversal of fortune for the retrocausal approach, putting its would-be opponents on the back foot.
## 9 Summary
We have shown that in a simple but realistic case, Bell correlations may be regarded as a special sort of preselection artefact, produced by ordinary control of the initial conditions of the relevant experiments. We conjecture that the reason this explanation has not been noticed is the sheer familiarity of the initial control concerned. To understand the role it plays we need to imagine it absent. This step has not previously been taken in this context,29 so far as we are aware.
Footnote 29: It has been taken in a single-particle context, including by us; see, for example: [22, 23, 24].
When we do imagine the Initial Control-free regime, we bring into play the causal model of Figure 4, in which the initial state of Big-\(\vee\) is a retrocausal collider. In this regime, the Bell correlations in \(\vee_{1}\) and \(\vee_{2}\) can be explained as collider artefacts. Restoring Initial Control, taking us back to the real world, then _constrains_ the collider in question, a step known from other cases to convert mere collider artefacts into counterfactual-supporting connections, from one side of the constraint to the other.
In effect, the initial state preparation that Initial Control allows thus 'thwarts' the influence that Alice's and Bob's choices of settings would otherwise have on the same initial variable. Thwarted at that point, the difference due to different setting choices has to emerge somewhere else, and the result is Bell nonlocality.30 |
2309.12818 | How Automated Market Makers Approach the Thin Market Problem in
Cryptoeconomic Systems | The proper design of automated market makers (AMMs) is crucial to enable the
continuous trading of assets represented as digital tokens on markets of
cryptoeconomic systems. Improperly designed AMMs can make such markets suffer
from the thin market problem (TMP), which can cause cryptoeconomic systems to
fail their purposes. We developed an AMM taxonomy that showcases AMM design
characteristics. Based on the AMM taxonomy, we devised AMM archetypes
implementing principal solution approaches for the TMP. The main purpose of
this article is to support practitioners and researchers in tackling the TMP
through proper AMM designs. | Daniel Kirste, Niclas Kannengießer, Ricky Lamberty, Ali Sunyaev | 2023-09-22T12:15:34Z | http://arxiv.org/abs/2309.12818v2 | # How Automated Market Makers Approach the Thin Market Problem in Cryptoeconomic Systems
###### Abstract
The proper design of automated market makers (AMMs) is crucial to enable the continuous trading of assets represented as digital tokens on markets of cryptoeconomic systems. Improperly designed AMMs can make such markets suffer from the thin market problem (TMP), which can cause cryptoeconomic systems to fail their purposes. We developed an AMM taxonomy that showcases AMM design characteristics. Based on the AMM taxonomy, we devised AMM archetypes implementing principal solution approaches for the TMP. The main purpose of this article is to support practitioners and researchers in tackling the TMP through proper AMM designs.
automated market makers, cryptoeconomic systems, decentralized exchange, decentralized finance, blockchain
## I Introduction
Organizations aim to efficiently allocate resources through markets, often through initial public offerings of stocks [1, 2]. Resource allocation in markets requires transfers of asset ownership [3]. Asset ownership transfers are commonly processed by intermediaries like banks and notaries [4, 5], which can increase transaction costs (e.g., banking and clearing fees) [6], slow down transaction settlement (e.g., cross-border payments) [7], and decrease flexibility (e.g., regarding offering structures of stocks) [7].
By leveraging distributed ledger technology (DLT), in particular blockchain technology, cryptoeconomic systems offer an alternative approach for resource allocation that can reduce the reliance on intermediaries, decrease transaction costs, and enhance flexibility [5, 8]. Cryptoeconomic systems are sociotechnical systems wherein market participants (e.g., individuals, organizations, and software components) manage ownership of assets based on digital tokens that are secured by principles of cryptographic systems [5, 9], such as digital signatures. To offer and trade tokens in cryptoeconomic systems, market participants commonly use automated market makers (AMMs). AMMs are software agents that are used to provide liquidity to cryptoeconomic system markets by continuously offering trades of token pairs to investors based on mathematically specified price functions [10]. For example, the fictive company Token Comp issues TKC tokens to raise capital. It offers the TKC tokens to investors in exchange for ETH tokens (i.e., the native currency of the Ethereum system) via an AMM. The AMM holds the token pair TKC/ETH and continuously offers token trades to market participants. Alice transfers 1 ETH to the AMM in order to buy TKC tokens. The AMM then calculates the token price using its price function (i.e., the number of TKC tokens Alice will receive for 1 ETH) and transfers the corresponding amount of TKC tokens to Alice. Conversely, Alice can exchange TKC for ETH tokens against the AMM at any time. By continuously offering exchanges of ETH and TKC tokens, the AMM provides liquidity to the TKC/ETH market.
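For illustration, the following minimal Python sketch shows how such a price function can work, assuming a Uniswap-v2-style constant-product pool for TKC/ETH; the reserve sizes and function names are hypothetical and not taken from any specific deployment.

```python
def constant_product_swap(reserve_in, reserve_out, amount_in, fee=0.003):
    """Constant-product swap (x * y = k), as used by Uniswap v2-style AMMs.

    reserve_in:  pool reserve of the token sold to the AMM (e.g., ETH)
    reserve_out: pool reserve of the token bought from the AMM (e.g., TKC)
    """
    amount_in_after_fee = amount_in * (1 - fee)
    k = reserve_in * reserve_out                  # conservation function
    new_reserve_in = reserve_in + amount_in_after_fee
    new_reserve_out = k / new_reserve_in
    return reserve_out - new_reserve_out          # tokens paid out to the trader

# Hypothetical pool: 100 ETH and 100,000 TKC => spot price of 1 ETH is 1,000 TKC
tkc_received = constant_product_swap(reserve_in=100, reserve_out=100_000, amount_in=1)
print(f"Alice receives {tkc_received:.2f} TKC for 1 ETH")  # slightly below 1,000 due to fee and slippage
```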
The proper design of AMMs is crucial for successful resource allocation in cryptoeconomic systems. Improperly designed AMMs can make cryptoeconomic system markets suffer from the thin market problem (TMP). The TMP refers to unreliable asset pricing in markets of low liquidity, which increases financial risks for investors and organizations that issue tokens for resource allocation [11, 12, 13]. For example, in markets subject to the TMP, selling large token amounts in a short time strongly decreases token prices [12]. Continuing with the previous example, Alice can (unintentionally) strongly impact TKC token prices. During the process of selling tokens, Alice continuously decreases the token price along the price function of the AMM. Thereby, Alice experiences financial losses by selling TKC tokens at lower prices than expected. Financial risks, such as financial losses caused by strong price changes, can make market participants shy away from investing in cryptoeconomic systems. Without investors buying a sufficient volume of tokens (e.g., TKC tokens), organizations and even the entire cryptoeconomic system fail in their resource allocation purposes (e.g., raising capital and exchanging tokens). Properly designed AMMs should implement mechanisms to solve the TMP.
Various AMM designs were developed that implement different approaches to support organizations in their resource allocation. For example, constant function market makers (e.g., Uniswap v2) use mathematical conservation functions to discover adequate token prices [14, 15]. Proactive market makers (e.g., DODO) adopt token prices from external price oracles [14, 16, 17, 18]. However, external price oracles cannot provide adequate token prices because such prices are unknown in thin markets [12]. Thus, proactive market makers appear not to be able to solve the TMP. In contrast, Uniswap v2 discovers token prices based on buy and sell transactions that change the token reserves, but requires market participants to provide liquidity [19]. Uniswap v2 seems to solve the TMP if sufficient liquidity is provided. Apparently, AMM designs can strongly influence the efficacy of solving the TMP.
Understanding AMM designs is of particular value for the identification and the targeted use of solution approaches for the TMP. However, publications presenting a wide variety of AMM designs are scattered across various sources (e.g., blogs, whitepapers, and scientific databases). This makes it difficult to deduce AMM design characteristics that account for solution approaches for the TMP. Therefore, the design of such approaches can hardly be identified, inhibiting their targeted use.
To understand AMM designs and their characteristics, extant research [14, 20, 21, 22] presents conceptualizations of AMM designs, covering basic AMM design characteristics, such as liquidity sensitivity and path independence. Although of great value for understanding AMM designs, extant AMM conceptualizations mainly focus on a few AMM designs, such as constant function market makers [14, 15, 21]. Additional characteristics of other AMM designs, such as the price adoption of proactive market makers [16, 18], the translation invariance of LMSR market makers [23, 24, 25], and the token supply-sovereignty of continuous liquidity market makers [9], are neglected. This makes it hard to understand the key characteristics of AMM designs and how they account for tackling the TMP. A conceptualization of AMM designs is needed to understand AMM design characteristics and to identify and compare solution approaches for the TMP implemented in AMMs. We ask the following research questions:
_RQ1: What are the key characteristics of AMM designs?_
_RQ2: What are principal solution approaches for the TMP implemented in AMMs?_
We applied a three-step research approach. First, we developed an AMM taxonomy based on 122 scientific publications and 110 AMMs following the method of taxonomy development of Nickerson et al. [26]. Second, we utilized the AMM taxonomy to identify important design characteristics for tackling the TMP. Based on the identified design characteristics, we developed AMM archetypes that implement common solution approaches for the TMP. Third, we assessed the efficacy of the developed AMM archetypes to solve the TMP.
Our work has the following main contributions. First, we contribute to the understanding of AMM designs by presenting an AMM taxonomy. The AMM taxonomy can be used to guide AMM development as it points out dimensions that need to be considered by developers and offers options to implement the dimensions. Moreover, the AMM taxonomy is useful for the systematic comparison of AMM designs. Second, by presenting AMM archetypes (i.e., Price-discovering LP-based AMM, Price-adopting LP-based AMM, Price-discovering Supply-sovereign AMM), we support the understanding of the basic functioning of common AMM designs that are used for specific resource allocation purposes (e.g., token issuance). The AMM archetypes can be used as abstract blueprints that can be refined to develop AMMs that meet resource allocation purposes. Third, by explaining the commonly used solution approaches for the TMP implemented in AMMs and the efficacy of these approaches, we support the anticipation of the TMP by proper AMM designs.
The remainder of this work is organized into six sections. In Section II, we introduce cryptoeconomic systems, AMMs, and principal typical uses of AMMs to meet resource allocation purposes. Moreover, we explain the TMP in more detail. In Section III, we describe how we developed the AMM taxonomy and the AMM archetypes. In Section IV, we present the groups, dimensions, and characteristics of our AMM taxonomy and demonstrate its applicability based on 110 AMMs. Section V describes the developed AMM archetypes and their solution approaches for the TMP. In Section VI, we discuss our principal findings and describe the contributions and limitations of this work. Moreover, we outline future research directions on AMMs. In Section VII, we conclude with our personal takeaways and thoughts on the development of AMMs for cryptoeconomic systems.
## II Background and Related Research
### _Cryptoeconomic Systems and Distributed Ledger Technology_
Cryptoeconomic systems (e.g., based on the Bitcoin system or the Ethereum system) are sociotechnical systems that enable agents (e.g., individuals, organizations, and software artifacts) to manage ownership of assets, including claims, rights, and securities, by using principles of cryptographic systems [9, 27]. In this section, we introduce the foundations of cryptographic and economic systems combined in cryptoeconomic systems. Building on those foundations, we introduce DLT and its role in the operation of cryptoeconomic systems.
Cryptoeconomic Systems: Cryptoeconomic systems combine principles of cryptographic systems and economic systems. Cryptographic systems are suites of cryptographic algorithms that are used to reach certain security levels, such as in terms of confidentiality [28, 29]. The basic functions to be fulfilled by cryptographic algorithms in cryptographic systems are key generation, encryption, and decryption. Key generation algorithms produce secrets, also called keys, that can be used to encrypt and decrypt data. For the authentication of identities in computer systems, such as DLT systems, by digital signatures, asymmetric key techniques are commonly used [30]. Digital signatures allow for the authentication of identities in computer systems [30, 31, 32] to authorize asset transfers as required in economic systems. For example, market participants in stock markets must authenticate against brokers to initiate asset transfers.
Economic systems are social systems in which market participants allocate resources in order to enable the trading of products and services. A prevalent form of economic system in modern times is the market economy [33]. In market economies, prices and production are determined by the interaction of supply and demand from all market participants (e.g., producers, consumers, investors, and traders) [34]. In these markets, market participants come together to exchange assets, such as goods and services. To enable an asynchronous exchange of assets, exchanges operate order books that record buy and sell offers and fulfill them when a matching counterparty is found. Basically, there are two types of orders. First, limit orders are instructions to buy or sell assets at a specified price, but without the guarantee of immediate execution [35]. Limit orders are stored in order books. As the counterpart to limit orders, market orders are instructions to buy or sell assets immediately at a given price [35]. Exchanges match limit orders and market orders in markets to settle assets to be traded. The immediately available volume to settle market orders (e.g., through limit orders) is the available liquidity in a market [36].
Market makers are used to enable smooth asset trading by providing liquidity to markets. A market maker is a rational market participant who quotes bid (buy) and ask (sell) prices for trading pairs [37]. A trading pair refers to two different assets that can be traded against each other, for example, Bitcoin against USD or Wheat against USD. Market makers actively place limit orders in the order book committing their willingness to trade at bid/ask prices to market participants, consequently providing liquidity to the market [37].
Market makers can apply different approaches to determine bid/ask prices for their limit orders. A simple approach is to incorporate bid/ask prices stated by market participants into a pricing function. Such a pricing function can compute the market maker's asset prices by averaging all bid/ask quotes of all investors. The pricing function of market makers is usually private and not known to other market participants [37, 38]. Market makers leverage bid/ask spreads by continuously buying and selling assets with added surcharges [10, 38].
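A minimal sketch of such a pricing function is shown below. It is our own illustration, not a specification from the cited sources: the market maker averages the investors' bid/ask quotes into a reference price and adds a surcharge on each side to quote its own bid and ask.

```python
def quote(bid_quotes, ask_quotes, surcharge=0.005):
    """Derive bid/ask prices from observed investor quotes plus a surcharge (the spread)."""
    reference = (sum(bid_quotes) + sum(ask_quotes)) / (len(bid_quotes) + len(ask_quotes))
    bid = reference * (1 - surcharge)  # price at which the market maker buys
    ask = reference * (1 + surcharge)  # price at which the market maker sells
    return bid, ask

bid, ask = quote(bid_quotes=[99.0, 99.5, 100.0], ask_quotes=[100.5, 101.0])
print(f"bid={bid:.2f}, ask={ask:.2f}")  # the bid/ask spread is the market maker's margin
```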
As a prerequisite for the exchange of assets, market makers must continuously hold balanced amounts of all assets in their inventory that are offered in trading pairs to market participants (e.g., arbitrageurs and investors) [37]. Market makers ideally sell a number of assets (e.g., USD) and simultaneously buy the equivalent number of assets of the same kind. When arbitrageurs and investors buy more underpriced assets of one kind, the market maker sells more of this asset than it buys. Thereby, the market maker becomes subject to inventory imbalances that can render the market maker unable to trade asset pairs [38, 39].
Cryptoeconomic systems combine principal techniques of cryptographic systems to enable the safe and secure operation of economic systems [5, 9].
Cryptoeconomic Systems based on Distributed Ledger Technology: DLT is often used to instantiate cryptoeconomic systems. DLT enables the operation of distributed ledgers, a kind of distributed database that stores records of transactions. Often, these transactions represent resource allocations of the economic system. DLT systems usually implement techniques and functionalities to operate the infrastructure of cryptoeconomic systems [5, 9], for example, asymmetric cryptography for the authentication of identities by digital signatures and a database management system for distributed ledgers. The DLT-based infrastructure is governed by an economic system in terms of the creation, allocation, and distribution of assets that are represented by digital tokens in digital economies. In DLT-based cryptoeconomic systems, tokens are typically specified and managed in smart contracts (e.g., ERC-20 standard) that map token balances to unique identifiers of the market participants (e.g., externally owned addresses in the Ethereum system) [40]. Smart contracts are software programs that allow for the automated execution of transaction logic [41, 42]. Transactions can manipulate the token mapping, enabling asset ownership transfers.
### _Automated Market Makers_
In cryptoeconomic systems, AMMs are market makers implemented as software agents that commonly trade tokens with market participants at self-determined prices in an automated manner. In contrast to conventional market makers (e.g., trading organizations), AMMs execute the token settlement and use trading strategies that are based on mathematical formulas and are transparent to all market participants.
AMM designs commonly incorporate a _price discovery component_, a _price determination component_, a _parameter component_, a _token settlement component_, _token management components_, and a _liquidity provider (LP) token management component_. An exemplary AMM component overview of Uniswap v2 is shown in Figure 1.
The _price discovery component_ implements the logic to discover the token price. Typically, the price discovery component is part of the AMM and uses parameters of the parameter component for price discovery. The token price is passed to the price determination component.
The _price determination component_ implements a price determination mechanism to calculate the bid/ask token prices. The price determination component adjusts the token price based on the parameters of the parameter component. Market participants can exchange tokens at the stated bid/ask token prices of the price determination component. When market participants execute transactions, the price determination component calculates the amount of bought tokens the market participant will receive in return for the amount of tokens sold to the AMM. Both token amounts are passed to the token settlement component.
The _parameter component_ stores parameters the AMM uses to determine prices, settle transactions, and govern parameter changes of the AMM. Exemplary parameters of the parameter component are trading fees, amount of token reserves, token weights, and recent prices [19]. A set of parameters defines the state of an AMM. State transitions are carried out by trades of market participants with the AMM. The price discovery component and price determination component calculate the amount of tokens the market participant will receive in return for the provided amount of tokens. The calculation is based on the parameters in the AMM, the input parameters of the transaction, and parameters external from the AMM. The token price results from the amount of received tokens divided by the amount of provided tokens (from the market participant's perspective).
The _token settlement component_ calls the token management component of the individual token keepers to initiate the actual token transfer through a token keeper. A token keeper is a software agent, often implemented as a smart contract, that controls the token management component. Token keepers are part of at least one cryptoeconomic system. They are often external to AMMs. AMMs can be token keepers themselves. Within each token keeper, a _token management component_ manages the tokens of market participants, including AMMs in inventories. Token management components maintain account books that map token balances to unique identifiers (e.g., account addresses) of market participants. Token management components update account books to _transfer_, _mint_, and _burn_ tokens. AMMs have token inventories managed by at least one token keeper. Inventories of AMMs are called liquidity pools [19, 20, 14]. To settle transactions, AMMs instruct token keepers that manage tokens involved in the transactions to transfer tokens by updating their account books. For example, a market participant exchanges 1 WETH for 1000 USDC. To settle this transaction, the AMM instructs the ERC-20 smart contract of WETH to transfer 1 WETH from the market participant to the AMM's liquidity pool. In the ERC-20 smart contract of WETH, the market participant's token balance is decreased by one. The token balance of the AMM's liquidity pool is increased by one. Vice versa, the ERC-20 smart contract of USDC is instructed to increase the market participant's token balance by 1,000 and decrease the token balance of the AMM's liquidity pool by 1,000.
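The settlement flow can be sketched as follows. This is our simplified illustration with in-memory account books standing in for ERC-20 smart contracts; the class and account names are hypothetical.

```python
class TokenKeeper:
    """Simplified token keeper: maintains an account book mapping identifiers to balances."""
    def __init__(self, symbol, balances):
        self.symbol = symbol
        self.balances = dict(balances)

    def transfer(self, sender, receiver, amount):
        assert self.balances.get(sender, 0) >= amount, "insufficient balance"
        self.balances[sender] -= amount
        self.balances[receiver] = self.balances.get(receiver, 0) + amount

# Hypothetical keepers and balances
weth = TokenKeeper("WETH", {"alice": 5, "pool": 100})
usdc = TokenKeeper("USDC", {"alice": 0, "pool": 200_000})

# Settlement of 'a market participant exchanges 1 WETH for 1,000 USDC' against the AMM's liquidity pool
weth.transfer(sender="alice", receiver="pool", amount=1)
usdc.transfer(sender="pool", receiver="alice", amount=1_000)
print(weth.balances, usdc.balances)
```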
The _LP token management component_ is an optional AMM-internal token management component that is used to handle LP tokens. Market participants can deposit tokens into the liquidity pools of AMMs. Such market participants are called liquidity providers (LPs) [19, 43]. When depositing tokens into the liquidity pools, LPs receive LP tokens. LP tokens represent a claim on a share of the liquidity pool that allows the liquidity providers to withdraw their share of the liquidity pool. LP tokens cannot be traded via the AMM [19, 44, 14]. The LP token management component stores and administrates the account book that maps LP tokens of the AMM to the unique identifiers of the LPs.
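The proportional-share accounting behind LP tokens can be sketched as below. This follows the proportional logic common to Uniswap-v2-style pools but is our own simplified illustration, not the exact formula of any particular AMM.

```python
def mint_lp_tokens(deposit_a, reserve_a, lp_total_supply):
    """Mint LP tokens in proportion to the share the deposit adds to the pool."""
    if lp_total_supply == 0:
        return deposit_a  # bootstrap case: the first depositor sets the scale
    return lp_total_supply * deposit_a / reserve_a

def redeem_lp_tokens(lp_amount, lp_total_supply, reserve_a, reserve_b):
    """Burn LP tokens and return the proportional share of both reserves."""
    share = lp_amount / lp_total_supply
    return reserve_a * share, reserve_b * share

lp_minted = mint_lp_tokens(deposit_a=10, reserve_a=100, lp_total_supply=1_000)
print(lp_minted)                                   # 100 LP tokens for adding 10% to the pool
print(redeem_lp_tokens(100, 1_100, 110, 110_000))  # ~10 token A and ~10,000 token B returned
```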
Fig. 1: Exemplary AMM component overview of Uniswap v2 [19]
Building on the introduced components, different AMM designs were presented. Constant function market makers implement the predominant AMM design used by Uniswap v2 [19], PancakeSwap [45], and SushiSwap [46]. Constant function market makers implement mathematical conservation functions for price discovery. The price discovery mechanism adjusts token prices based on buy and sell transactions of market participants that change the AMM's token reserves [10, 47, 22, 43, 15]. Proactive market makers, such as DODO [16] and WooFi [49], use external price oracles that incorporate price discovery components. Proactive market makers do not discover token prices on their own. Instead, they adopt token prices from external price oracles that are often operated by third parties [14, 16, 17, 18].
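To contrast the two designs, the sketch below shows price adoption in a simplified form (our own illustration; the spread value and oracle reading are hypothetical): instead of discovering the price from its own reserves, a proactive market maker quotes around an externally supplied oracle price.

```python
def proactive_quote(oracle_price, spread=0.002):
    """Price-adopting AMM: quote bid/ask around an externally supplied oracle price."""
    return oracle_price * (1 - spread), oracle_price * (1 + spread)

# Hypothetical oracle reading for a TKC/ETH market
bid, ask = proactive_quote(oracle_price=0.001)
print(f"bid={bid:.6f} ETH, ask={ask:.6f} ETH per TKC")
# If TKC trades on no other liquid market, the oracle has no adequate price to report,
# which is why price adoption alone does not solve the TMP for thin markets.
```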
### _Principal Purposes of Automated Market Makers_
AMMs are used to meet two principal purposes: _decentralized exchange (DEX)_ and _token issuance_. These purposes can be further nuanced into six subordinate ones, as described in the following.
#### II-C1 Decentralized Token Exchange
Decentralized token exchanges allow market participants to swap tokens with each other without the need for central authorities (e.g., brokers) [50, 51]. There are four different types of decentralized token exchanges: _correlated tokens_, _uncorrelated tokens_, _non-fungible tokens (NFTs)_, and _prediction tokens_.
Correlated Tokens: Correlated tokens are tokens with linked prices. When the price of one token increases (or decreases), correlated token prices also increase (or decrease). Strongly correlated token pairs are supposed to be exchanged at a constant rate [52]. To enable cost-efficient exchanges at a constant rate, the market must be highly liquid at this exchange rate [53]. An exemplary correlated token pair is Circle's USDC and Tether's USDT. Both tokens are paired with each other and exchangeable at a one-to-one ratio.
Uncorrelated Tokens: Uncorrelated token exchanges enable trades of tokens whose token prices are weakly or not correlated to each other. Uncorrelated token pairs require liquidity in wider price ranges because those tokens cannot be exchanged at a constant rate [21]. Instead, the exchange rate varies due to volatility and price fluctuations. An exemplary uncorrelated token pair is Bitcoin paired with a stable token, such as Circle's USDC.
Non-fungible Tokens: Via NFT exchanges, market participants can swap NFTs with fungible tokens. Each NFT has its individual value due to its inherent uniqueness. Therefore, NFTs are often non-interchangeable. However, to enable interchangeability, all NFTs of one collection are treated equally and assumed to be interchangeable [54]. For example, NFTs of the Bored Ape Yacht Club collection could be paired with a rather stable token such as Ether. NFTs in the Bored Ape Yacht Club collection are treated equally and do not have a unique value assigned.
Prediction Tokens: Prediction tokens are used in prediction markets that enable market participants to bet on outcomes of future events, for example, the outcomes of elections or company stock prices at a specific future point in time [55]. Market participants can place their bets into AMMs. When market participants place bets, they deposit tokens into liquidity pools of corresponding AMMs. The occurrence of events on which market participants have bet triggers the closure of the prediction market. When the prediction market is closed, AMMs evaluate the outcomes of triggered events against the bets of market participants. The AMM initiates payouts of tokens deposited by market participants depending on the event outcome [56, 57, 58]. For example, market participants who bet on the outcomes of events receive tokens; other market participants lose their deposits. Exemplary prediction markets are offered by Augur [57] and Zeitgeist [59]. Those prediction markets enabled market participants to bet on events such as the outcome of the 2020 U.S. presidential election.
#### II-C2 Token Issuance
Token issuance refers to the process of minting and distributing tokens of cryptoeconomic systems to market participants. In token issuance, AMMs create a functional relationship between the price of a token and the corresponding token supply [13]. Practically speaking, the token price is mapped to the token supply. The use case of token issuance can be grouped into _curation tokens_ and _initial token offerings_.
Curation Tokens: Curation tokens are used to curate market participants' perceptions of the value of an asset, such as data sets, machine learning (ML) models, or artworks. AMMs are used to continuously update token prices according to market participants' perceptions [60, 61]. For example, the Ocean Protocol offers an AMM for curation markets that issues tokens for data sets; these tokens are used to value the provided data sets with regard to their quality for training ML models. It curates the perceptions of market participants regarding the quality of the data set [62].
Initial Token Offering: In initial token offerings, token issuers (e.g., individuals and organizations) collect funding to finance endeavors by selling shares of the endeavor represented in the form of tokens [63]. Such endeavors include new infrastructure, new Dapps, or other projects (e.g., The Dao, Fetch.ai, Bancor) [13, 64, 65]. AMMs support initial token offerings by providing a thick market that enables market participants to buy and sell the cryptoeconomic system's tokens. Exemplary initial token offerings were conducted to finance the development of the Ethereum system in 2014 [66] and the Tezos system in 2017 [67].
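The functional price-supply relationship used in token issuance can be sketched as a bonding curve. The linear curve below is our own illustration chosen purely for simplicity; actual issuance AMMs may use other curve shapes.

```python
def linear_bonding_price(supply, slope=0.0001):
    """Token price as a function of circulating supply: p(s) = slope * s."""
    return slope * supply

def buy_cost(supply, amount, slope=0.0001):
    """Cost to mint `amount` new tokens: integral of p(s) from supply to supply + amount."""
    s0, s1 = supply, supply + amount
    return slope * (s1**2 - s0**2) / 2

print(linear_bonding_price(supply=10_000))    # current price: 1.0
print(buy_cost(supply=10_000, amount=1_000))  # cost of minting the next 1,000 tokens: 1,050.0
```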
### _Principal Challenges of Automated Market Makers_
AMMs can become subject to the Liquidity Accumulation Problem (LAP) and the Price Determination Problem (PDP), which must be solved to tackle the TMP. The following briefly introduces the LAP, the PDP, and the TMP and their relationships.
Liquidity Accumulation Problem: The LAP refers to the accumulation of token reserves that can be used to settle transactions of market participants.
To settle transactions with market participants, AMMs draw on token reserves deposited in liquidity pools. When insufficient token reserves are available, AMMs are low-liquid and become oversensitive to transactions. For example, low-volume transactions can lead to large token price changes, and transactions cannot be settled because of insufficient token reserves [43]. Consequently, the AMM is rendered unattractive to trade with. Market participants shy away from using the AMM because of its ineffectiveness. Sufficient token reserves must be available for AMMs to tackle the TMP successfully.
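The oversensitivity of low-liquidity pools can be made concrete with the constant-product logic sketched in the introduction. The reserve sizes below are again hypothetical; the point is that the same trade moves the price far more in a thin pool than in a deep pool.

```python
def price_impact(reserve_in, reserve_out, amount_in):
    """Relative change of the spot price caused by a constant-product swap (fee ignored)."""
    spot_before = reserve_out / reserve_in
    new_in = reserve_in + amount_in
    new_out = reserve_in * reserve_out / new_in
    spot_after = new_out / new_in
    return (spot_before - spot_after) / spot_before

# Selling 10 ETH into a thin pool vs. a deep pool (hypothetical reserves)
print(f"thin pool: {price_impact(100, 100_000, 10):.1%}")          # ~17.4% price drop
print(f"deep pool: {price_impact(10_000, 10_000_000, 10):.2%}")    # ~0.20% price drop
```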
Price Determination Problem: The PDP refers to the reliable determination of adequate token prices based on market information that may be hard to interpret in an automated manner.
AMMs must determine adequate token prices for which market participants buy or sell tokens. Adequate token prices represent the current cumulative perception of market participants. The adequate token price approximates the efficient token price but does not equal the efficient token price because it is unknown in inefficient markets according to the efficient market hypothesis [68, 69]. When AMMs set inadequate token prices (e.g., diverging from other markets), AMMs sell tokens at prices that are too low or buy tokens at prices that are too high, resulting in financial losses for the respective AMM. For example, a news report increases the actual value of tokens from the perspectives of market participants. The AMM may not be able to directly incorporate information from that news report into its pricing mechanism because the AMM is unable to interpret the market information [20, 70]. Thus, the AMM has incomplete market information and lags behind the market participants' perceptions of the adequate token price. This leads the AMM to sell tokens too cheaply.
AMMs that adjust token prices based on market information solve the PDP. Thus, these AMMs can offer tokens at adequate prices in cryptoeconomic system markets. Having market information incorporated, the AMM can state adequate token prices that approach the efficient token price, which enables the AMM to tackle the pricing issue of the TMP successfully.
Thin Market Problem: The TMP refers to the unreliable asset pricing caused by the low liquidity of corresponding markets.
When buyers and sellers infrequently make bid-and-ask quotes, orders seldom match, which is a symptom of markets with low liquidity [12, 13]. Low liquidity can make markets very sensitive to transactions. Even transactions with little volume can strongly influence the token price. Increasing market sensitivity, which may even render markets over-sensitive, ultimately decreases the reliability of token prices [12, 71]. Large deviations from adequate token prices characterize unreliable token prices. The unreliability of token prices can be amplified by the exploitation of market over-sensitivity for market manipulations. Oversensitive thin markets can usually be successfully exploited at low cost [71, 72]. For example, in traditional financial markets, penny stocks that are typically prone to the TMP are often used for pump-and-dump schemes [72]. In a pump-and-dump scheme, the market manipulator artificially increases the asset price (pump) to attract other buyers. Afterward, the bought assets are sold at the artificially pumped price to realize profits (dump) [73, 74]. In summary, the TMP arises from low trading volume available to market participants, leading to high bid-ask spreads and unreliable token prices.
Various AMM designs were developed that implement different solution approaches for the LAP, PDP, and TMP. Such approaches have individual efficacy in solving the TMP. In this work, we describe the characteristics of AMM designs and corresponding principal solution approaches implemented in AMMs to tackle the LAP, the PDP, and the TMP. Moreover, we offer an explanation of their efficacy in solving the TMP.
## III Methods
We applied a three-step method to understand the characteristics that constitute AMM designs and to develop AMM archetypes that implement principal solution approaches for the TMP. First, we developed an AMM taxonomy based on literature and AMM implementations following Nickerson et al. [26]. Second, we used the AMM taxonomy to develop AMM archetypes. Third, we analyzed scientific publications, gray literature, and AMM implementations to extract solution approaches implemented in AMM archetypes for the TMP.
### _AMM Taxonomy Development_
To understand the differences between AMM designs, we developed an AMM taxonomy following the method proposed by Nickerson et al. [26]. First, we determined our meta-characteristic as _AMM design characteristics_ to get an exhaustive overview of different AMM designs. Second, we specified six ending conditions (see Table I) that helped us recognize when the taxonomy reached a sufficient quality level to terminate the taxonomy development. Third, we applied the conceptual-to-empirical approach and the empirical-to-conceptual approach (see Table II) in five iterations. In the conceptual-to-empirical approach, we conceptualized the dimensions of the AMM taxonomy without examining implementations of AMMs. In this deductive process, we used our knowledge of and experience with AMMs and cryptoeconomic systems and judgment to create relevant dimensions by analyzing literature. We analyzed scientific and non-scientific publications on cryptoeconomic systems and AMMs to extract AMM dimensions and corresponding characteristics for our taxonomy. In the empirical-to-conceptual approach, we analyzed implementations of AMMs to extract relevant characteristics for the classification of AMM designs. Table II presents an overview of our taxonomy development process. The taxonomy development process comprises five iterations. In total, we analyzed 122 publications in the conceptual-to-empirical approach and 110 implementations of AMMs in the empirical-to-conceptual approach. We describe each iteration in the taxonomy development process in more detail in the following.
Conceptual-to-empirical (Iteration 0): To develop our initial version of the AMM taxonomy, we applied a conceptual-to-empirical approach based on the analysis of literature on AMMs. We started the conceptual-to-empirical approach with a search for literature that is relevant to the development of the AMM taxonomy. To assess the relevance of publications on AMMs, we defined inclusion and exclusion criteria (see Table III). Then, we compiled a set of potentially relevant publications for the development of the AMM taxonomy. We selected publications we deemed particularly relevant for the development of the AMM taxonomy. This selection resulted in an initial set of 31 publications, including peer-reviewed and gray literature, potentially relevant for the AMM taxonomy development. After applying our inclusion and exclusion criteria, we excluded twelve publications because those did not include AMM characteristics or did not describe concrete AMM designs. Our final set of literature to be analyzed to develop an initial AMM taxonomy comprised 19 publications.
After the literature search, we read the full texts of the 19 publications. Then, we analyzed them by applying open coding [75, 76] to extract dimensions describing AMM designs and corresponding characteristics. We recorded a name, description, original source, and corresponding characteristics for each dimension. Our initial coding resulted in 41 preliminary characteristics associated with 31 preliminary dimensions. We resolved ambiguities and inconsistencies between the preliminary characteristics and dimensions in three refinement rounds. For example, we merged the characteristics _sufficient funds_, _path deficiency_, and _non-depletion_ into _path deficiency_. After refining the preliminary characteristics and associated dimensions, our initial version of the AMM taxonomy comprised 15 AMM dimensions and 34 AMM characteristics.
\begin{table}
\begin{tabular}{|l|l|c|c|c|c|c|c|} \hline
**Approach** & **Type** & **Iter. 0** & **Iter. 1** & **Iter. 2** & **Iter. 3** & **Iter. 4** & **Summary** \\ \hline \multirow{3}{*}{\begin{tabular}{l} Conceptual- \\ to-empirical \\ \end{tabular} } & Confirmatory publications & 5 & 25 & n.a. & 63 & n.a. & 93 \\ \cline{2-7} & Conflicting publications & 14 & 8 & n.a. & 7 & n.a. & 29 \\ \cline{2-7} & Overall publications & 19 & 33 & n.a. & 70 & n.a. & 122 \\ \hline \multirow{3}{*}{
\begin{tabular}{l} Empirical-to- \\ conceptual \\ \end{tabular} } & Confirmatory AMMs & n.a. & n.a. & 46 & n.a. & 47 & 93 \\ \cline{2-7} & Conflicting AMMs & n.a. & n.a. & 14 & n.a. & 3 & 17 \\ \cline{1-1} \cline{2-7} & Overall AMMs & n.a. & n.a. & 60 & n.a. & 50 & 110 \\ \hline \end{tabular}
\end{table} TABLE II: Overview of the development of the AMM taxonomy
\begin{table}
\begin{tabular}{|l|l|l|} \hline
**Type** & **Name** & **Description** \\ \hline \multirow{3}{*}{\begin{tabular}{l} Inclusion \\ \end{tabular} } & AMM Design Description & The publication describes at least one AMM design \\ \cline{2-4} & English Language & The publication must be in English language \\ \cline{2-4} & Topic Fit & The publication must describe at least one AMM design \\ \cline{2-4} & Uniqueness & The publication must not be already included in the set of literature \\ \hline \multirow{3}{*}{
\begin{tabular}{l} Exclusion \\ \end{tabular} } & Books & The publication is a book \\ \cline{2-4} & Duplicate & The publication is already included in the set of relevant literature \\ \cline{1-1} \cline{2-4} & Not English & The publication is in a non-English language \\ \cline{1-1} \cline{2-4} & Off-topic & The publication does not deal with AMMs \\ \hline \end{tabular}
\end{table} TABLE III: Inclusion and exclusion criteria for literature
\begin{table}
\begin{tabular}{|l|l|l|} \hline
**Type** & **Name** & **Description** \\ \hline \multirow{3}{*}{
\begin{tabular}{l} Objective \\ \end{tabular} } & Exhaustiveness & The characteristics and dimensions collectively exhaustively describe AMM designs \\ \cline{2-4} & Mutual Exclusiveness & Characteristics (and dimensions) do not semantically overlap \\ \cline{2-4} & Relevance & Each characteristic of each dimension is required for the classification of at least one AMM design in the taxonomy \\ \cline{2-4} & Representativeness & A selection of publications and AMMs representative of AMM designs were incorporated into the taxonomy \\ \cline{2-4} & Robustness & No changes were made to the taxonomy in the last iteration \\ \hline Subjective & Conciseness & The taxonomy includes a limited number of relevant dimensions and characteristics to describe AMM designs \\ \hline \end{tabular}
\end{table} TABLE I: Ending conditions for the taxonomy development
Conceptual-to-empirical (Iteration 1): To gather additional publications for refining the initial version of the AMM taxonomy, we conducted a backward- and forward search based on the previously analyzed 19 publications. The first round of backward and forward searches yielded 1,086 additional publications (i.e., 243 publications by backward search and 843 publications by forward search). We applied our inclusion and exclusion criteria (see Table III) to the meta-information (e.g., title, keywords, abstract) of the potentially relevant publications. We excluded 291 duplicate publications and 692 publications because they were off-topic or lacked AMM design descriptions. Finally, we added 103 relevant publications to the set of relevant literature on AMMs as objects for taxonomy development. In Iteration 1, we randomly selected 33 of 103 publications for analysis. In our coding in iteration 1, we resolved ambiguities and inconsistencies in four refinement rounds. For example, we merged the dimensions _liquidity reversibility_ and _liquidity invariance_ into _liquidity changeability_. We ended our analysis when no refinements of the AMM taxonomy were required for the last ten analyzed publications. This refined version of the AMM taxonomy resulted in a total of 29 AMM dimensions and 68 characteristics.
Empirical-to-conceptual (Iteration 2): After the analysis of 52 relevant publications, we applied an empirical-to-conceptual approach to test the AMM taxonomy using implementations of AMMs. We selected a sample of the 100 largest existing AMMs based on their 24-hour trading volume reported on _www.coinmarketcap.com_. We treated AMMs with identical designs but deployments to different DLT systems (e.g., Uniswap v3 based in the Ethereum system, Uniswap v3 based in the Polygon system) as a single AMM and selected only one implementation of such AMMs for analysis. We treated different versions of AMMs as different AMMs (e.g., Uniswap v2 and Uniswap v3). We excluded order-book protocols, derivative protocols, and aggregator protocols that do not implement AMMs. Moreover, we added ten AMMs whose designs strongly differ from those of the 100 AMMs selected from _www.coinmarketcap.com_ to increase the exhaustiveness of the AMM taxonomy. To classify the selected AMMs, we used corresponding official documentation, whitepapers, yellowpapers, and code repositories. If we could not extract all the necessary information from such official sources, we extended our search to gray literature and contacted the developers to gather the necessary information. We selected the largest 60 AMMs from the set of 110 AMMs to have two batches for AMM classification and used the second batch to test the robustness of the AMM taxonomy. We classified the selected AMMs into the AMM taxonomy. During our classification of the 60 AMMs, we decided to drop pure mathematical descriptions of (parts of) AMMs because they did not match our meta-characteristic. For example, we removed the _curve monotony_, _curve scaling_, and _curve differentiability_ dimensions because we decided that those detailed mathematical descriptions are not concise and implicitly covered by other dimensions such as _translation invariance_. After having classified 60 AMMs into our taxonomy, no refinements of the AMM taxonomy were required for the last 10 analyzed AMMs. After iteration 2, the AMM taxonomy included 18 dimensions and 45 characteristics.
Conceptual-to-empirical (Iteration 3): We applied the conceptual-to-empirical approach to incorporate the remaining 70 publications into the AMM taxonomy. We analyzed the selected publications by applying open coding. We refined the AMM taxonomy when we recognized the need to add dimensions and characteristics to the AMM taxonomy or to redefine existing ones. For example, we merged the dimension _dynamic trading fees_ into _asset risk management_ because of the latest publications providing new insights on Loss-Versus-Rebalancing [77]. We did not recognize the need to refine the AMM taxonomy in the analysis of the last 34 of the 70 publications. Overall, 63 of the 70 publications confirmed the preliminary AMM taxonomy. Eventually, the AMM taxonomy comprised 19 AMM dimensions and 50 AMM characteristics.
Empirical-to-conceptual (Iteration 4): To test the robustness of the AMM taxonomy, we analyzed the remaining 50 AMMs and classified them into the AMM taxonomy. For the analysis and classification, we proceeded as described in Iteration 2. Among the 50 AMMs, 3 AMMs required minor refinements of the AMM taxonomy. For example, we added the AMM dimension _token price source_ and added _constant-power-sum_ to the _price discovery mechanism_ dimension. Our final AMM taxonomy consists of 21 AMM dimensions and 53 AMM characteristics. To improve the comprehensibility of the AMM taxonomy and its usability, we assigned the 21 AMM dimensions to four groups (i.e., governance, liquidity, pricing, and trading). We inductively developed these groups from the dimensions included in the AMM taxonomy. After the fourth iteration, we met our ending conditions (see Table I) and, thus, decided that our AMM taxonomy is final.
### _AMM Archetype Development_
To develop AMM archetypes, we discussed the AMMs classified in the taxonomy in terms of their different solution approaches for the LAP, PDP, and TMP. Based on our discussion, we rated the influence the dimensions have on solving the LAP, PDP, and TMP as high, medium, and negligible. We rated the influence of a dimension on the solution of the LAP, PDP, and TMP as high if there is a direct relation to the provision of liquidity, to the price determination, or to the interpretation and acquisition of market information. We rated the influence of a dimension on the solution of the LAP, PDP, and TMP as medium if it is indirectly related to the provision of liquidity, to the price determination, or to the interpretation and acquisition of market information. We rated the influence of a dimension on the solution of the LAP, PDP, and TMP as negligible if no relation to the provision of liquidity, to the price determination, or to the interpretation and acquisition of market information was evident.
We made unanimous decisions on the relevance of all dimensions for solving the LAP, PDP, and TMP. We eventually evaluated the influence of four dimensions on the solution of the LAP, PDP, and TMP as high, that of six dimensions as medium, and that of eleven dimensions as negligible. For example, we rated the influence of the _source of liquidity_ dimension as high on solving the LAP, PDP, and TMP because its characteristics could solve the LAP by design. We rated the influence of the _liquidity changeability_ dimension as medium because the characteristics influence the available liquidity. We rated the influence of the _path independence_ dimension as negligible because the transaction sequencing was assessed as negligible for the LAP, PDP, and TMP.
Next, we removed dimensions presented in the taxonomy from the set of important dimensions for solving the LAP, PDP, and TMP if they had a negligible or medium influence on the LAP, PDP, and TMP. This gave us a set of four dimensions with a total of 15 characteristics.
Last, we grouped the AMMs based on the four dimensions with high influence on the solution of the LAP, PDP, and TMP. We identified 22 combinations of AMM characteristics that formed our preliminary AMM archetypes. We recognized that those 22 preliminary AMM archetypes implement similar solution approaches for the LAP, PDP, and TMP. To ensure the discriminatory power of the AMM archetypes, we decided to use the dimensions _token price source_ and _source of liquidity_ to separate the AMM archetypes clearly. The selected dimensions have a particularly strong influence on the solutions to the LAP, PDP, and TMP and define the ownership of the AMM components. Based on those two dimensions and four characteristics, we identified four AMM archetypes by building the cross-product. We dropped one AMM archetype because we were not able to identify a _Price-adopting Supply-sovereign AMM_ among the 110 analyzed AMMs. We concluded with three AMM archetypes that implement individual solutions for solving the LAP, PDP, and TMP and reflect AMMs used in actual cryptoeconomic systems.
## IV Our Taxonomy of Automated Market Makers in Cryptoeconomic Systems
This section presents the AMM dimensions and AMM characteristics of the AMM taxonomy. A shortened overview is given in Supplementary Material VIII. We classified 14 exemplary AMMs to demonstrate the applicability of the AMM taxonomy in Table IV.
### _Groups, Dimensions, and Corresponding Characteristics of Automated Market Makers_
The AMM taxonomy comprises four groups of AMM dimensions: governance, liquidity, pricing, and trading. These groups include 21 dimensions which cover 53 characteristics of AMM designs. We describe each group, its associated dimensions, and corresponding characteristics in the following.
#### IV-A1 Governance
The rules and processes for adjusting the AMM parameters, including trading fees, token inventory weights, and bid/ask spreads.
Governance Model: The distribution of accountabilities and decision rights and the resources involved to decide on adjustments of AMM parameters.
AMMs can implement centralized or decentralized governance models. Centralized governance models restrict the selection of market participants that are allowed to participate in the decision process (e.g., vote on parameter adjustments), for example, the developers that maintain the software of an AMM. Decentralized governance models allow various market participants to participate in the decision process.
Parameter Adjustment: The functionality to change AMM parameters. AMMs can have fixed parameters (e.g., token weights), allow for manual parameter adjustments, or allow for automatic parameter adjustments. Fixed parameters are held constant over time [14, 78]. Manual adjustment allows a selection of market participants to change the parameters. AMMs with automatic adjustment have mechanisms implemented that automatically adjust parameters.
Price discovery and the available liquidity can depend on AMM parameters. For example, the token weights influence the amounts of individual tokens held in liquidity pools of AMMs. This can influence the available liquidity for the individual tokens. Thereby, parameter adjustments can eventually influence the efficacy of AMMs to solve the LAP and PDP.
Trading Fee Adjustment: The adjustment of trading fees charged by an AMM. AMMs charge trading fees for transaction processing [14, 79]. AMMs can charge an adjustable trading fee or a static trading fee [80, 81].
The trading fee adjustment of an AMM can influence the bid/ask spread of token prices. The trading fee is subtracted from or added to the stated token price, which creates the bid/ask spread. Furthermore, trading fee adjustments can be used to mitigate potential attacks on AMMs. For example, front-running attacks during high token price volatility can be reduced if the trading fee can be increased in relation to the token price's volatility [14].
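As a minimal sketch (our own illustration with hypothetical numbers, not the fee logic of any specific AMM), the following shows how a trading fee turns a single stated token price into a bid/ask spread:

```python
def bid_ask_from_fee(stated_price: float, fee: float) -> tuple[float, float]:
    """Derive bid/ask quotes from a stated token price and a trading fee.

    The fee is added to the price a buyer pays (ask) and subtracted from the
    price a seller receives (bid), which creates the spread.
    Illustration only; real AMMs may apply fees to amounts rather than prices.
    """
    ask = stated_price * (1 + fee)
    bid = stated_price * (1 - fee)
    return bid, ask

# A 0.3 % fee around a stated price of 100 yields a spread of 0.6 price units.
bid, ask = bid_ask_from_fee(100.0, 0.003)
print(bid, ask)  # 99.7 100.3
```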
#### IV-A2 Liquidity
The availability, constraints, and source of liquidity in the AMM.
Liquidity Changeability: Liquidity changeability indicates whether the available liquidity for an AMM can vary.
The liquidity changeability can be constant or variable. For example, AMMs that rely on external liquidity providers often have variable liquidity because liquidity providers can withdraw and deposit liquidity over time. Supply-sovereign AMMs typically have constant liquidity because liquidity is sourced internally.
The liquidity changeability of an AMM influences the stability of the liquidity provided by the AMM in cryptoeconomic system markets. For example, if the liquidity available to AMMs is low, the AMMs can provide less liquidity to the cryptoeconomic system market. AMMs with prescribed liquidity can provide a guaranteed amount of liquidity to cryptoeconomic system markets leading to more stable liquidity.
Liquidity Provider Permission: The specification of market participants allowed to deposit tokens into liquidity pools.
AMMs can have open liquidity pools that allow any market participant to provide liquidity. On the other side, AMMs can have permissioned liquidity pools that are exclusive for certain market participants to provide liquidity or that are exclusive to internal sources of liquidity.
Liquidity provider permissions can influence the amount of liquidity that can be provided to cryptoeconomic system markets. AMMs that restrict liquidity providers are less likely to accumulate sufficient liquidity and can consequently fail to solve the LAP.
Number of Tokens per Liquidity Pool: The variety of tokens that can be deposited in one liquidity pool. The number of tokens per liquidity pool can be exactly two or more than two.
The number of tokens per liquidity pool influences the flexibility to exchange tokens. For example, market participants can exchange a token \(A\) for a token \(B\) and a token \(B\) for a token \(A\) in a liquidity pool with two tokens. In a liquidity pool with three tokens, market participants can exchange \(A\) for \(B\), \(A\) for \(C\), \(B\) for \(C\), \(B\) for \(A\), \(C\) for \(A\), and \(C\) for \(B\). This increases the number of possible transactions.
Risk Management: The mechanisms to manage the risk of holding potentially volatile tokens. AMMs are exposed to the risk of inventory imbalance (see Section II-B). AMMs can have no risk management, imbalance surcharges, or loss insurance. AMMs with no risk management do not reduce the risk of inventory imbalance for liquidity providers and entirely pass the risk to the liquidity providers. AMMs with imbalance surcharges add surcharges to stated bid/ask token prices based on the current inventory imbalance. Market participants are disincentivized to execute transactions that further increase the imbalance. In addition, bid/ask spreads increase, increasing market-making profits. AMMs that implement loss insurance compensate losses of liquidity providers with external tokens (e.g., tokens of the AMM's cryptoeconomic system).
The risk management of AMMs influences the attractiveness of liquidity providers to provide liquidity to AMMs. This indirectly influences how AMMs can solve the LAP and how much liquidity AMMs can provide to the cryptoeconomic system market.
Source of Liquidity: The source of available liquidity for an AMM can be external from liquidity providers or internal by token supply sovereignty.
In external liquidity provision, external liquidity providers deposit tokens into liquidity pools of the AMM. In return, liquidity providers are commonly rewarded with a share of the trading fee. In internal liquidity provision, AMMs have supply sovereignty over tokens and do not depend on external liquidity providers. AMMs with internal liquidity provisions mint new tokens when market participants buy and burn tokens when market participants sell the tokens to the AMM.
An AMM's liquidity source influences how the AMM can solve the LAP. AMMs that rely on external liquidity provision need to solve the LAP because the AMM must attract sufficient liquidity. A token supply-sovereign AMM solves the LAP by design because liquidity is sourced through mint/burn actions of the tokens with token supply sovereignty.
Supported Token Pairs: The token pairs that can be traded against each other using an AMM.
AMMs can be open to all tokens and allow arbitrary token trading pairs. AMMs can have restricted token pairs that enforce any token to be paired with one certain token (e.g., USDC paired with any other token) or restrict the tokens that can be paired (e.g., USDC, BTC, and WETH but arbitrary token pairs of the tokens' cross-product).
The supported token pairs of an AMM influence the flexibility with which the AMM can be used. If the token pairs are restricted, this can affect the demand for individual tokens. For example, suppose a cryptoeconomic system has its own token and all token pairs in the AMM must be paired with this proprietary token. In that case, the demand for this token increases because liquidity providers must provide all tokens of the liquidity pool.
#### IV-A3 Pricing
The functionalities and properties that are required for asset pricing.
Information Expressiveness: _The degree to which market information is incorporated into prices._
The price discovery mechanism can either be expressive or inexpressive. An expressive price discovery mechanism incorporates market information through trades: traders must purchase or sell tokens to move the stated token price towards their approximation of the efficient token price [25, 48, 82]. The token price adjustment is correlated with the transaction volume. An inexpressive price discovery mechanism does not adjust token prices based on the transactions of market participants. Thus, their perception of efficient token prices is not included [25].
The information expressiveness of AMMs influences the ability of the AMM to adjust token prices. Information-inexpressive AMMs do not adjust token prices based on the market participants' transactions. The token prices must be adjusted from an external token price source or the AMM must implement some other price discovery mechanism. During our literature and AMM analysis, no price discovery mechanism was found that adjusts token prices in other ways than based on market participants' transactions or price adoption. Information-expressive AMMs adjust token prices based on market participants' transactions.
Liquidity Sensitivity: _The strength of the influence of liquidity in the AMM on the magnitude of token price changes._
If an AMM is liquidity-sensitive, high liquidity mitigates the token price change, while low liquidity amplifies the price changes at a constant volume. For liquidity-insensitive AMMs, token price changes are not influenced by the available liquidity in the AMM [80, 22, 83].
The liquidity sensitivity of AMMs can influence the token price changes caused by transactions. Liquidity-sensitive AMMs adjust token prices in correlation to the available liquidity. Therefore, liquidity-sensitive AMMs can adapt the token price changes to the available liquidity. Liquidity-insensitive AMMs cannot adapt token price changes to the available liquidity. Thereby, liquidity-insensitive AMMs adjust token prices independently of the available liquidity. Token price changes must then be configured based on other parameters of the transaction, such as transaction volume.
Liquidity Concentration: _The distribution of liquidity at different price levels._
The liquidity concentration in an AMM can be function-based liquidity concentration, LP-based liquidity concentration, or autonomous liquidity concentration [84, 14, 80]. AMMs with function-based liquidity concentration concentrate the liquidity at a certain token price. A mathematical function gives the token price with the highest liquidity and does not change over time. AMMs with LP-based liquidity concentration concentrate the liquidity at price ranges that are given by their liquidity providers [85]. The liquidity provider can decide in which price region its liquidity should be concentrated. AMMs with autonomous liquidity concentration automatically concentrate the liquidity at a certain token price [86]. The token price with the highest liquidity is chosen by some automatism. Typically, time-weighted average prices are used to determine the token price that the liquidity is concentrated on [80, 22].
The liquidity concentration of AMMs influences the efficacy of how AMMs solve the PDP because it influences the available liquidity at different token price levels. Therefore, the liquidity concentration can influence the token price evolution. For example, when liquidity for a correlated token pair (e.g., USDC/USDT) is concentrated at a token price of 1 USDC per 1 USDT, the AMM can settle large transaction volumes with small token price changes at this level. If the token price diverges to 10 USDC per 1 USDT, there is low liquidity. Token prices become more sensitive to changes. The token price can be more or less pinned to a token price of 1 USDC per 1 USDT. The way liquidity is concentrated thereby also determines how the token price is pinned. Function-based liquidity concentration has a static token price pinning. LP-based liquidity concentration allows users to decide where the token price is pinned. Autonomous liquidity concentration implements a mechanism determining where the token price is pinned.
Path Deficiency: _The feasible transactions in relation to the token reserves of an AMM._
Path deficiency guarantees that the token reserves are always bounded from below (e.g., token reserves cannot shrink) for any set of transactions. Optimal transactions transition to the minimal reserve set that satisfies the lower bound. Transactions above the lower bound either receive less output or add more input as the optimal transaction. This guarantees that an AMM's token reserves cannot be depleted [20]. Strict path deficiency indicates that all transactions must be sub-optimal for the market participant. The market participant must receive less output or add more input than the optimal transaction. This ensures that the AMM's token reserves are increasing at any trade. Path-independent AMMs are path deficient by definition [20]. Strictly path-deficient AMMs are not path independent [20] because market participants must overpay the AMM to increase its token reserves. The AMM transitions to different states when a buy transaction and a sell transaction with equal volume are executed.
Price Bounding: _The functionality to limit the token prices of the AMM._
The price bounding of a pricing mechanism can be bounded from above, bounded from below, or bounded from above and below. Bounded token prices can only move in certain price ranges. These price ranges are: bounded from above \((-\infty,j]\), bounded from below \([i,\infty)\), and bounded from above and below \([i,j]\) [87, 17].
The price bounding influences the possible token prices of an AMM. Price bounding limits the available liquidity to a certain token price range. This can cause market liquidity to decrease sharply when the adequate token price exits this price range. Furthermore, price bounding can pin token prices in prescribed ranges if there are no alternative markets to which market participants can switch.
Price Discovery: The definition of the process of discovering adequate token prices. Pricing algorithms implemented in AMMs can be the constant product algorithm, geometric mean algorithm, constant sum algorithm, constant product-sum algorithm, constant power-sum algorithm, logarithmic market scoring algorithm, exponential function algorithm, and price adoption algorithm.
A constant product algorithm uses a conservation function based on a constant product \(c=\prod_{i=1}^{n}r_{i}\), with \(r_{i}\) being the amount of token \(i\) in reserve and \(n\) being the number of tokens that are considered for pricing [14, 19, 20].
Geometric mean algorithms add a weight \(w_{i}\) to the reserves of tokens and can be expressed as \(c=\prod_{i=1}^{n}r_{i}^{w_{i}}\). Token reserves can be weighted to have a constant imbalance [44, 88].
Constant sum algorithms use a conservation function that is based on a constant sum \(c=\sum_{i=1}^{n}r_{i}\)[20]. The constant sum algorithm allows for constant exchange rates that are not adjusted over time [14, 21, 88].
Constant product-sum algorithms use a conservation function based on a constant sum of products. The conservation function can be expressed as \(\chi D^{n-1}\sum_{i=1}^{n}x_{i}+\prod_{i=1}^{n}x_{i}=\chi D^{n}+(\frac{D}{n})^{n}\), with \(x_{i}\) being the reserve of token \(i\), \(D\) being the total amount of coins, and \(\chi\) being a leverage factor which is defined as \(\chi\in\mathbb{R}|0\leq\chi\). For \(\chi=0\), the constant product-sum algorithm behaves as a constant product. For \(\chi=\infty\), it behaves as a constant sum [52, 53].
Constant power-sum algorithms use a conservation function based on a constant sum of multiple powers. The conservation function can be expressed as: \(c=\sum_{i=1}^{n}r_{i}^{1-t}\) with \(t\) being a parameter to change the curvature of the conservation function [89].
Logarithmic market scoring algorithms use a cost function \(C\) of the total of assets in the market that can be expressed as follows: \(C(q)=b\cdot\log(\sum_{j=1}^{n}\exp(q_{j}/b))\) with \(q\) being the vector of quantities, \(b\) being a strictly positive parameter to control liquidity in the market and \(n\) the number of assets [23, 58, 90, 91].
Price adoption algorithms use external price oracles to adopt token prices, which are then adjusted by a mathematical function [16]. Typically, token prices are adjusted based on the token reserve imbalance. The token reserve imbalance can be expressed as: \(\Delta R_{i}=\frac{r_{0,\text{target}}-r_{0,\text{current}}}{r_{0,\text{target}}}\) with \(r_{0,\text{target}}\) being the targeted token reserves of \(r_{0}\) and \(r_{0,\text{current}}\) being the current token reserves of \(r_{0}\). The offered token price can be expressed as follows: \(P_{i}=p_{\text{adopted}}+p_{\text{adopted}}k(\Delta R_{i})\) with \(k\) being a parameter to configure the magnitude of token price adjustments, which is defined as \(k\in\mathbb{R}|0\leq k\leq 1\)[14, 16].
Exponential function pricing algorithms use an exponential conservation function that is based on a constant exponent \(c=S_{a}^{\kappa}/r_{b}\), with \(S_{a}\) being the total supply of token a, \(r_{b}\) being the amount of token b in reserve and \(\kappa\) being a parameter for curvature [92, 93].
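To make the role of such conservation functions more concrete, the following sketch (our own illustration, not the implementation of any specific AMM) computes the output amount of a swap under a constant product and under a constant sum conservation function; reserves and trade sizes are hypothetical and trading fees are ignored:

```python
def constant_product_swap(r_in: float, r_out: float, amount_in: float) -> float:
    """Output amount that keeps the product of reserves constant (c = r_in * r_out)."""
    c = r_in * r_out
    new_r_out = c / (r_in + amount_in)
    return r_out - new_r_out

def constant_sum_swap(r_in: float, r_out: float, amount_in: float) -> float:
    """Output amount that keeps the sum of reserves constant (1:1 exchange rate)."""
    # The output equals the input as long as enough reserve is available.
    return min(amount_in, r_out)

# Hypothetical pool with 1000 of token A and 1000 of token B:
print(constant_product_swap(1000, 1000, 100))  # ~90.9 -> the price moves with the trade
print(constant_sum_swap(1000, 1000, 100))      # 100.0 -> the price stays constant
```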
Path Independence: The independence of AMMs' state transitions from the order of buy and sell transactions with identical cumulative volume. Pricing mechanisms can be path-dependent or path-independent. Rearranging the sequential order of buy and sell transactions with identical cumulative volumes leads path-dependent pricing mechanisms to transition to different AMM states. Because AMM states often form the foundation for AMMs to self-determine token prices, path-dependent pricing mechanisms can state different token prices for the same transaction depending on its position in the transaction sequence [15, 83, 22]. For example, executing one buy transaction with 100 USDT volume increases the token price of the trading pair by 1, while executing ten transactions of 10 USDT each increases the token price of the pair only by \(0.5\). Path-independent pricing mechanisms transition to the same state for different sequential orders of transactions with identical cumulative volumes. For example, following the previous example, buy transactions with a cumulative volume of 100 USDT always result in a token price increment of 1, no matter how the transactions are split.
The path independence of an AMM can influence transaction volumes. For path-independent AMMs, transactions are likely to be executed with their full volume in one transaction to save execution costs of the underlying infrastructure (e.g., gas fees in the Ethereum system). For path-dependent AMMs, market participants are likely to split up transactions if transactions with little volume are beneficial (e.g., decrease overall transaction cost). For example, if the AMM charges a transaction fee that is quadratic to the volume of the transactions, it is beneficial for market participants to split up their transactions into smaller-volume transactions to save on transaction costs. In practice, there would be some equilibrium between optimized transaction costs and the execution costs of the infrastructure.
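The following sketch (hypothetical pool sizes, no trading fees) illustrates path independence for a constant product conservation function: one 100-token buy and ten 10-token buys move the AMM to the same state:

```python
def buy_with_token_a(reserves: tuple[float, float], amount_a_in: float) -> tuple[float, float]:
    """Deposit token A and withdraw token B so that the product of reserves is preserved."""
    r_a, r_b = reserves
    c = r_a * r_b
    new_r_a = r_a + amount_a_in
    new_r_b = c / new_r_a
    return new_r_a, new_r_b

# One buy of 100 tokens versus ten buys of 10 tokens each (hypothetical reserves):
state_single = buy_with_token_a((1000.0, 1000.0), 100.0)

state_split = (1000.0, 1000.0)
for _ in range(10):
    state_split = buy_with_token_a(state_split, 10.0)

print(state_single)  # (1100.0, ~909.09)
print(state_split)   # (1100.0, ~909.09) -- same final state up to rounding, hence path independent
```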
Token Price Source: The source of adequate token prices that the AMM uses to quote bid/ask token prices. The token price source can be internal or external. AMMs with internal token price sources incorporate a price discovery component that discovers the adequate token price [20, 21]. AMMs with external token price sources outsource the token price discovery to a price oracle that provides adequate token prices to the AMMs [16].
The token price source can influence the cost-efficiency of AMMs to adjust token prices to adequate token prices. External token price sources are often more cost-efficient than internal token price sources because the price discovery is outsourced.
With an external token price source, the AMM can adopt adequate token prices even in the absence of transactions. AMMs with internal token price sources are often less cost-efficient because token prices are updated based on buy/sell transactions that indicate an increase/decrease of the adequate token price. Therefore, such AMMs sell undervalued or buy overvalued tokens to adjust their token prices. This decreases the cost-efficiency of AMMs.
Translation Invariance: The payoff from a portfolio consisting of equal amounts of each asset. Translation invariant AMMs always charge the same cost for the same amount of each asset at each AMM state [47, 83, 91, 94]. For example, a translation invariant AMM charges 0.5 USD for token \(A\) and 0.5 USD for token \(B\). The cost of one token \(A\) and one token \(B\) (same amount) is 1 USD. Later, the AMM's token prices change to 0.8 USD for token \(A\) and 0.2 USD for token \(B\). The cost of one token \(A\) and one token \(B\) is still 1 USD. Therefore, the translation invariant AMM always charges 1 USD for one token \(A\) and one token \(B\) at any state of the AMM. Non-translation invariant AMMs charge different costs for the same amount of each asset at different states of the AMM [47]. A non-translation invariant AMM charges, for example, 0.5 USD for token \(A\) and 0.5 USD for token \(B\). The cost of one token \(A\) and one token \(B\) (same amount) is 1 USD. Later, the AMM's token prices change to 0.8 USD for token \(A\) and 0.3 USD for token \(B\). The cost of one token \(A\) and one token \(B\) is now 1.10 USD. Therefore, the non-translation invariant AMM charges different costs for one token \(A\) and one token \(B\) at different states of the AMM.
#### IV-A4 Trading
The types, functionalities, and properties of trade execution in the AMM.
Interoperability: The capability of an AMM to execute transactions across multiple infrastructures (e.g., multiple DLT systems) of cryptoeconomic systems.
AMMs can be interoperable or non-interoperable. For example, interoperable AMMs enable market participants to transact across DLT systems [5]. Non-interoperable AMMs settle transactions in cryptoeconomic systems built on a single DLT system. Senders and recipients of tokens must be part of the same DLT systems. The interoperability of an AMM influences the ease of transaction execution for market participants, as they can switch infrastructures simultaneously through one transaction.
Limit Order Functionality: The functionality to create limit orders. Limit orders are instructions to buy or sell tokens at specified token prices but without the guarantee of immediate execution [35]. The instruction is triggered when the AMM's token price strikes the specified token price of the limit order. The AMM can either provide limit order functionality or not. The limit order functionality of AMMs can influence market participants' usage of AMMs. Limit order functionality can be essential for larger organizations to have conditional transaction execution.
Price Guarantee: The guarantee of the token price at which transactions are settled in cryptoeconomic system markets. Token prices may change in the time between issuing transactions to a cryptoeconomic system and the settlement of those transactions. Market participants may thus sell or buy tokens at a different price than actually intended. AMMs can be designed to deal with changing token prices in three ways: give no price guarantee, guarantee price ranges, or guarantee exact prices. First, AMMs without a token price guarantee do not guarantee any price for transaction settlement. The settlement price is unknown at transaction issuance. Second, AMMs can implement ranged price determination that guarantees transaction settlement only in a specified price range, known as slippage. The slippage of token prices can usually be set per transaction. Third, AMMs can guarantee exact price determination. The token price at transaction issuance equals the token price at transaction settlement.
The price guarantee of an AMM influences the ease of transaction execution for market participants. Configurable price ranges and exact price guarantees reduce the risk of transaction execution for market participants. Weak price guarantees increase financial risks for market participants because transactions could be settled at unprofitable prices.
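As a minimal sketch of ranged price determination (the amounts are hypothetical, and slippage handling differs between AMMs), the following accepts a transaction only if the realized output stays within the configured slippage tolerance:

```python
def settle_with_slippage(expected_out: float, actual_out: float, max_slippage: float) -> float:
    """Settle a trade only if the realized output stays within the slippage tolerance.

    Illustration only: real AMMs typically perform this check at settlement time
    and revert the transaction if the tolerance is violated.
    """
    min_out = expected_out * (1 - max_slippage)
    if actual_out < min_out:
        raise ValueError("price moved beyond the accepted slippage; transaction rejected")
    return actual_out

# A 1 % slippage tolerance accepts 99.2 out of an expected 100 but would reject 98.5.
print(settle_with_slippage(100.0, 99.2, 0.01))   # 99.2
# settle_with_slippage(100.0, 98.5, 0.01)        # would raise ValueError
```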
Volume Dependency: The dependency of token prices on the transaction volume in a cryptoeconomic system market. The volume dependency of a pricing mechanism can be dependent or independent. Volume-dependent pricing mechanisms state different mean prices for transactions depending on their transaction volume. Volume-independent pricing mechanisms state equivalent mean prices for different transaction volumes. Volume dependency leads AMMs to have different token price changes, while volume-independent AMMs have equal or no token price changes resulting from transactions.
### _Applicability Demonstration of the AMM Taxonomy_
We demonstrate the applicability of the AMM taxonomy by illustrating the classification of exemplary AMMs in the AMM taxonomy in Table IV. Each AMM has exactly one characteristic of each dimension. The demonstration of the AMM taxonomy shows its mutual exclusiveness because all AMMs have exactly one characteristic per dimension. The relevance and representativeness are shown because all characteristics are occupied by at least one AMM design. The limited number of dimensions and characteristics shows the conciseness and exhaustiveness of the AMM taxonomy.
## V AMM Archetypes and Their Solution Approaches for the Thin Market Problem
We developed three AMM archetypes (i.e., Price-discovering LP-based AMM, Price-adopting LP-based AMM, and Price-discovering Supply-sovereign AMM) that can be distinguished by two dimensions of the AMM taxonomy (i.e., price discovery and source of liquidity). Figure 2 illustrates the designs of the three AMM archetypes. Corresponding to their different characteristics associated with the dimensions _price discovery_ and _source of liquidity_, the three archetypes implement different solution approaches to tackle the LAP, PDP, and TMP. In this section, we describe the three AMM archetypes in more detail.
### _Price-discovering LP-based AMM_
The Price-discovering LP-based AMM (see Figure 2a) incorporates a price discovery component, a price determination component, a token settlement component, a parameter component, and an LP token management component. The Price-discovering LP-based AMM trades tokens managed by token management components that are operated by at least one external token keeper. Following the AMM taxonomy, the Price-discovering LP-based AMM is characterized by an _internal price discovery_ and an _LP-based source of liquidity_.
When the Price-discovering LP-based AMM is set up, its price discovery component is initialized with a token price determined by the first liquidity provider that initializes the AMM. After the token price initialization, the price discovery component determines token prices based on a prescribed mathematical function that incorporates market participant perceptions of the token price as follows. Market participants only buy undervalued tokens and sell overvalued tokens to take profits. When a market participant buys/sells tokens at a specific token price, it is assumed that the rational market participant knows information by which the new adequate token price diverges from the stated token price. The Price-discovering LP-based AMM increases the adequate token price when market participants buy tokens because it is assumed that market participants perceive the adequate token price as higher than the token price stated by the Price-discovering LP-based AMM. Conversely, the Price-discovering LP-based AMM decreases the adequate token price when market participants sell tokens because it is assumed that market participants perceive the adequate token price as lower than its stated token price [20, 44].
The magnitude of price changes depends on the liquidity available to the Price-discovering LP-based AMM. High liquidity leads to smaller token price changes. Low liquidity leads to larger token price changes (see Section II-D0c). Because the price discovery mechanism of the Price-discovering LP-based AMM is typically deterministic, price changes of transactions with a given volume are predictable. The price determination component of the Price-discovering LP-based AMM calls the price discovery component to fetch the current token price, adjusts it by adding trading fees provided by the parameter component, and passes the determined token amounts to the token settlement component.
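As a rough sketch of this liquidity effect (hypothetical reserves, constant product pricing without fees), the following compares the token price change caused by the same transaction volume in a deep and in a shallow liquidity pool:

```python
def price_after_buy(r_a: float, r_b: float, amount_a_in: float) -> float:
    """Spot price of token B (in units of token A) after buying B with amount_a_in of A,
    under a constant product conservation function. Illustration only."""
    c = r_a * r_b
    new_r_a = r_a + amount_a_in
    new_r_b = c / new_r_a
    return new_r_a / new_r_b  # spot price of B after the trade

# Both hypothetical pools start at a spot price of 1.0; the same 100-token buy is executed.
print(price_after_buy(10_000.0, 10_000.0, 100.0))  # ~1.02 -> small change (high liquidity)
print(price_after_buy(1_000.0, 1_000.0, 100.0))    # ~1.21 -> large change (low liquidity)
```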
The token settlement component calls external token management components to transfer tokens between the Price-discovering LP-based AMM and market participants. Token management components record the token balances of the AMM and other market participants. The balances of the Price-discovering LP-based AMM in the token management components form the token reserves of its liquidity pool. The token management components are controlled by token keepers that can be part of multiple cryptoeconomic systems at the same time. Thus, tokens managed by token keepers may be accessed by multiple AMMs and other market participants.
Third-party liquidity providers deposit tokens into liquidity pools in the token management components of external token keepers. In exchange for their token deposits, liquidity providers receive LP tokens. LP tokens are issued by the LP token management component of the Price-discovering LP-based AMM and represent a claim on a share of tokens in the Price-discovering LP-based AMM's liquidity pool. Using the LP tokens, liquidity providers can withdraw their share of tokens from liquidity pools. By depositing and withdrawing tokens, the liquidity offered by the Price-discovering LP-based AMM can change over time.
The Price-discovering LP-based AMM implements an incentive mechanism to accumulate sufficient liquidity and keep liquidity providers motivated not to withdraw deposited tokens. Common incentive mechanisms distribute revenues for token deposits to liquidity providers. Such revenues correspond to shares of the transaction fees charged by the AMM for transaction settlement. The following describes how the Price-discovering LP-based AMM can solve the PDP, LAP, and TMP.
Price Determination Problem: The Price-discovering LP-based AMM incorporates available market information via trade indications into the token prices to solve the PDP by discovering adequate token prices. Transactions of market participants with the Price-discovering LP-based AMM convey such market information. When new market information (e.g., fundamentals, regulatory news, new products) is available, the token price stated by the Price-discovering LP-based AMM can diverge from the adequate token price. This divergence incentivizes rational market participants to collect and analyze market information to get the chance to buy temporarily underpriced tokens and sell them in the future to realize profits. Rational market participants sell temporarily overpriced tokens. Buying and selling tokens feeds the Price-discovering LP-based AMM with market information required to adjust its stated token price to the adequate token price. The buy and sell transactions of the market participants function as indicators for the Price-discovering LP-based AMM. The Price-discovering LP-based AMM can solve the PDP depending on the market information provided by the market participants.
Fig. 2: Overview of the designs of the developed AMM archetypes
Liquidity Accumulation Problem: To solve the LAP, the Price-discovering LP-based AMM must accumulate sufficient liquidity. The Price-discovering LP-based AMM incentivizes liquidity providers to deposit tokens in its liquidity pools by distributing token rewards. Liquidity providers receive a share of the collected trading fee that is charged when market participants trade with the AMM. Rational liquidity providers are likely to deposit tokens into the liquidity pools of the Price-discovering LP-based AMM if they perceive the token rewards as profitable. Otherwise, rational liquidity providers will not deposit tokens, which can render the Price-discovering LP-based AMM unable to settle transactions or oversensitive to low-volume transactions. The Price-discovering LP-based AMM can solve the LAP depending on its ability to incentivize liquidity providers to deposit tokens.
Thin Market Problem: To solve the TMP, the Price-discovering LP-based AMM must solve the LAP to provide sufficient liquidity to the cryptoeconomic system market and solve the PDP to state reliable token prices. It must be distinguished between two cases of the efficacy of how the LAP is solved by the Price-discovering LP-based AMM to evaluate if the TMP can be solved.
In the first case, the Price-discovering LP-based AMM implements suitable incentives for accumulating sufficient liquidity to solve the LAP. High liquidity leads to less sensitive token prices. Thus, market participants can buy/sell larger amounts of tokens when the stated token price diverges from the adequate token price. This increases the possible profit for market participants. Market participants are more likely to collect and analyze all available market information to profit from the token price divergence. The increased number of settled transactions provides more market information to the Price-discovering LP-based AMM to adjust its stated token price to the new adequate token price. This increases the reliability and timeliness of token price adjustments by the Price-discovering LP-based AMM.
In the second case, the Price-discovering LP-based AMM offers too few incentives to solve the LAP and accumulates insufficient liquidity. Low liquidity increases the sensitivity of token price adjustments. Thus, small amounts of tokens are required to adjust the stated token price to the new adequate token price. Consequently, market participants can only buy/sell smaller amounts of tokens when the token price stated by the Price-discovering LP-based AMM diverges from the adequate token price. This decreases the possible profit for market participants. Market participants become unlikely to collect and analyze market information because the cost of doing so may exceed the possible profit. Market participants trade less frequently against the Price-discovering LP-based AMM, which decreases its reliability and timeliness of token price adjustments. Thus, the Price-discovering LP-based AMM cannot solve the TMP if the incentives for liquidity providers to solve the LAP are insufficient.
In summary, if the Price-discovering LP-based AMM solves the LAP, it can solve the TMP. To solve the LAP, the Price-discovering LP-based AMM must implement incentives that motivate market participants to provide sufficient liquidity.
Purposes of the Price-discovering LP-based AMM: The Price-discovering LP-based AMM is used for the purposes of _decentralized token exchange_ and _token issuance_. Common purposes for decentralized token exchange are correlated token markets, uncorrelated token markets, non-fungible token markets, and prediction markets. Depending on the tokens traded by the Price-discovering LP-based AMM, different designs of price discovery mechanisms and liquidity sensitivity are implemented. For example, constant product price discovery mechanisms are often used with LP-based liquidity concentration, which is suitable for exchanging uncorrelated tokens because liquidity can be concentrated around the current exchange rate [45, 46, 52]. To exchange correlated tokens, constant product-sum price discovery mechanisms are often used in combination with function-based liquidity concentration to concentrate liquidity at a fixed exchange rate [46, 100]. For prediction tokens, the token prices of the Price-discovering LP-based AMM are typically bounded in the price range of 0 to 1 and adjusted by the price discovery mechanism to reflect the occurrence probability of the predicted event.
The Price-discovering LP-based AMM can be used for token issuance, more specifically for initial token offerings and curation tokens. For both purposes, the token issuer (e.g., the developers of a cryptoeconomic system) is the first liquidity provider and deposits the tokens to be issued in the liquidity pool. The Price-discovering LP-based AMM trades these tokens with market participants to bring them into circulation.
### _Price-adopting LP-based AMM_
The Price-adopting LP-based AMM (see Figure 2b) incorporates a price determination component, a token settlement component, a parameter component, and an LP token management component. The Price-adopting LP-based AMM uses at least one external price discovery component that is part of a token keeper of possibly other cryptoeconomic systems. The traded tokens are managed by external token management components. Following the AMM taxonomy, the Price-adopting LP-based AMM is characterized by an _external price discovery_ and an _LP-based source of liquidity_.
The price determination component fetches token prices from at least one external price discovery component that is operated by an external _price oracle_ (e.g., Chainlink). The price oracle is configured when the AMM is set up and discovers adequate token prices in a proprietary way (e.g., retrieving token prices from Binance or Coinbase). Retrieved token prices can be modified by the price determination component based on parameters of the parameter component. For example, the Price-adopting LP-based AMM commonly adds a surcharge to the token price when its token reserves become imbalanced. Given
equal transaction volumes, the token reserves become less imbalanced in percentage terms when liquidity is high than when it is low. Thus, in the case of high liquidity, the surcharge remains smaller. In consequence, high liquidity decreases the token price sensitivity, while low liquidity increases it. The adjusted token price is subsequently passed from the price determination component to the token settlement component. The Price-adopting LP-based AMM's token settlement component interacts with external token management components that manage tokens and record balances of market participants. The LP token management component issues LP tokens to the liquidity providers, representing a claim on a share of tokens in the liquidity pool of the Price-adopting LP-based AMM. Thus, the offered liquidity can change over time. The Price-adopting LP-based AMM implements an incentive mechanism that distributes revenues for token deposits to incentivize market participants to deposit tokens into its liquidity pools. The following describes how the Price-adopting LP-based AMM can solve the PDP, LAP, and TMP.
Price Determination ProblemThe Price-adopting LP-based AMM outsources the price discovery by using a price discovery component operated by an external price oracle. The external price oracle must solve the PDP and employ a price discovery mechanism. For example, external price oracles provide token prices that are typically derived from thick markets such as Coinbase or Binance. However, these thick markets are assumed to manifest the adequate token price but are mainly centralized. Consequently, the Price-adopting LP-based AMM does not solve the PDP itself but, in practice, relies on a lead market to resolve the PDP on its behalf. Thus, the Price-adopting LP-based AMM can never become the lead market itself and fails to replace centralized markets.
Liquidity Accumulation Problem: To solve the LAP, the Price-adopting LP-based AMM must accumulate sufficient liquidity. As for the Price-discovering LP-based AMM, the Price-adopting LP-based AMM incentivizes liquidity providers to deposit tokens in its liquidity pools and rewards them with a share of the collected trading fee. Thus, rational liquidity providers are likely to deposit tokens into the liquidity pools if they perceive the token rewards as profitable. Consequently, the Price-adopting LP-based AMM can solve the LAP depending on its ability to incentivize liquidity providers to deposit tokens.
Thin Market Problem: The Price-adopting LP-based AMM does not incorporate a price discovery component. Thus, it depends on external price oracles to discover adequate token prices. Consequently, the Price-adopting LP-based AMM cannot create a cryptoeconomic system lead market. In most cases, price oracles only provide token prices for lead markets. The Price-adopting LP-based AMM therefore cannot reliably provide liquidity to a cryptoeconomic system market that is subject to the TMP. Only if an omniscient price oracle provides adequate token prices in thin markets can the Price-adopting LP-based AMM solve the TMP. The efficacy of the Price-adopting LP-based AMM in solving the TMP further depends on the reliability of external liquidity providers in providing tokens to solve the LAP.
In summary, the Price-adopting LP-based AMM can only solve the TMP and become a lead market if an omniscient price oracle is available. If an omniscient price oracle is available, the Price-adopting LP-based AMM must implement suitable incentives for liquidity providers to deposit sufficient tokens to solve the LAP.
Purposes of the Price-adopting LP-based AMM: The Price-adopting LP-based AMM is used for the purpose of _decentralized token exchange_, in particular for correlated tokens, uncorrelated tokens, non-fungible tokens, and prediction markets [16, 49]. The principal design of the components in the Price-adopting LP-based AMM is equal for all types of tokens that can be traded. However, the parameters in the parameter component vary based on the token price correlation of the traded tokens. For example, correlated tokens are likely to have smaller surcharges in the case of a token reserve imbalance because correlated tokens do not diverge in their price development, so there is a low risk of quantitatively holding more of the less valuable token. Vice versa, uncorrelated tokens are likely to have higher surcharges in the case of a token reserve imbalance because their prices are likely to diverge, which increases the risk of holding more of the less valuable token.
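A minimal sketch of such an imbalance surcharge, following the price adoption formula introduced in the taxonomy (Section IV); the reserve target, the parameter \(k\), and the numbers are hypothetical, and correlated token pairs would be configured with a smaller \(k\) than uncorrelated pairs:

```python
def quoted_price(p_adopted: float, r_target: float, r_current: float, k: float) -> float:
    """Adjust an oracle-provided price by a surcharge proportional to the reserve imbalance.

    delta_r is the relative deviation of the current reserve from its target;
    k in [0, 1] scales how strongly the imbalance moves the quoted price.
    Hypothetical configuration, not the parameterization of any specific AMM.
    """
    delta_r = (r_target - r_current) / r_target
    return p_adopted + p_adopted * k * delta_r

# Oracle price 2.0; the reserve is 20 % below target, so the quote is marked up.
print(quoted_price(2.0, r_target=1000.0, r_current=800.0, k=0.5))   # 2.2 (uncorrelated pair, larger k)
print(quoted_price(2.0, r_target=1000.0, r_current=800.0, k=0.05))  # 2.02 (correlated pair, smaller k)
```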
### _Price-discovering Supply-sovereign AMM_
The Price-discovering Supply-sovereign AMM (see Figure 2c) incorporates a price discovery component, a price determination component, a token settlement component, and a token management component. The Price-discovering Supply-sovereign AMM controls at least one internal token management component and, thus, can transfer, mint, and burn tokens independently of external token keepers. In contrast to the previous two AMM archetypes, the Price-discovering Supply-sovereign AMM does not incorporate an LP token management component. Following the AMM taxonomy, the Price-discovering Supply-sovereign AMM is characterized by an _internal price discovery_ and a _supply-sovereign source of liquidity_.
When the Price-discovering Supply-sovereign AMM is set up, the creator (e.g., AMM developer) defines a supply curve for the issued token within the price discovery component. The supply curve is a mathematical function that maps the circulating token supply to a token price. Although arbitrary supply curve shapes would be possible, it is best practice to use monotonically increasing supply curve shapes. As for the Price-discovering LP-based AMM, the market participant perceptions of the token
price are incorporated. Market participants can buy/sell undervalued/overvalued tokens to profit from price divergences. The price discovery component changes token prices based on the prescribed supply curve. However, when market participants buy/sell tokens, the circulating token supply increases/decreases because tokens are minted/burned by the internal token management component. The magnitude of price changes depends on the slope of the supply curve. A steep slope for a given transaction volume leads to larger token price changes. A flat slope leads to smaller token price changes.
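As a sketch of such a supply curve (a hypothetical linear curve \(p(s)=m\cdot s\); actual Price-discovering Supply-sovereign AMMs may use other curve shapes), the following computes the cost of minting tokens as the area under the curve, which makes explicit how price changes depend on the curve slope and on the current circulating supply:

```python
def linear_price(supply: float, m: float = 0.001) -> float:
    """Token price prescribed by a hypothetical linear supply curve p(s) = m * s."""
    return m * supply

def mint_cost(current_supply: float, amount: float, m: float = 0.001) -> float:
    """Cost of minting `amount` tokens: the integral of the supply curve over the minted range.

    Closed-form integral of m * s between current_supply and current_supply + amount.
    Illustration only; slope m and supplies are hypothetical.
    """
    s0, s1 = current_supply, current_supply + amount
    return m * (s1**2 - s0**2) / 2

# Minting 100 tokens at a circulating supply of 10,000 costs more per token than
# minting the same amount at a supply of 1,000 (monotonically increasing curve).
print(mint_cost(1_000.0, 100.0))   # 105.0
print(mint_cost(10_000.0, 100.0))  # 1005.0
```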
Established cryptoeconomic systems are operated with a fixed token supply or slowly increasing/decreasing token supply via inflation/deflation. Perceived value changes of the entity (e.g., the cryptoeconomic system, organization, product, right, or claim) that is represented by the token result in token price adaptation. For the Price-discovering Supply-sovereign AMM, token prices and the amount of circulating tokens change in a concerted way, defined by the supply curve. Therefore, perceived value changes of the entity lead to the adaptation of token price and circulating token supply. For example, if the perceived value increases, the supply curve of the Price-discovering Supply-sovereign AMM increases the token price and the token supply. An increasing token supply decreases the value of each individual token. The increasing token supply attenuates token price changes. In consequence, the token price changes can be decoupled from value changes.
Because the supply curve is prescribed, the price discovery is typically deterministic. Thus, price changes of transactions with a given volume are predictable. The price discovery component of the Price-discovering Supply-sovereign AMM passes the adequate token price to the price determination component. The price determination component adjusts the token price based on parameters of the parameter component (e.g., trading fees). The adjusted token price is subsequently passed to the token settlement component.
The token settlement component calls the internal token management component and external ones. The internal token management component is instructed to burn/mint tokens of/for market participants. The external token management components transfer tokens between the Price-discovering Supply-sovereign AMM and market participants.
The Price-discovering Supply-sovereign AMM sources liquidity by burning/minting tokens in the internal token management component. In the case that zero tokens have been issued, there is no market participant that can sell a token. Thus, no token reserves on external token management components are required. When market participants buy the first tokens, these tokens are minted by the internal token management component. Market participants pay with tokens of external token management components. The Price-discovering Supply-sovereign AMM accumulates token reserves on the external token management components that back the issued tokens and are later used when market participants sell tokens back to the AMM. If there is no supply limit for the token of the internal token management component, the Price-discovering Supply-sovereign AMM is always liquid. In the case of a token supply limit, the Price-discovering Supply-sovereign AMM could become illiquid when the upper supply limit is reached because no new tokens can be minted.
In contrast to the Price-discovering LP-based AMM and the Price-adopting LP-based AMM, the Price-discovering Supply-sovereign AMM does not depend on liquidity providers to deposit tokens. Instead, the liquidity of the Price-discovering Supply-sovereign AMM is prescribed by the supply curve that gives the price changes and the token supply limit. Because the supply curve is prescribed and the price discovery is deterministic, the available liquidity is also prescribed at any point in time. In the following, we describe how the Price-discovering Supply-sovereign AMM can solve the PDP, LAP, and TMP.
Price Determination Problem: To solve the PDP by discovering adequate token prices, the Price-discovering Supply-sovereign AMM incorporates available market information via trade indications into the stated token prices. As for the Price-discovering LP-based AMM, stated token prices can diverge from the adequate token price. Thus, rational market participants collect and analyze market information to profit from token price divergences and induce token price changes through buy/sell transactions. The Price-discovering Supply-sovereign AMM can solve the PDP depending on the market information provided by the market participants.
Liquidity Accumulation Problem: To solve the LAP, the Price-discovering Supply-sovereign AMM creates its own liquidity by burning/minting the tokens of its internal token management component. As described above, the Price-discovering Supply-sovereign AMM is always liquid as long as an eventual token supply limit is not reached. The available liquidity is prescribed by the supply curve of the price discovery component. In consequence, the Price-discovering Supply-sovereign AMM solves the LAP by design.
Thin Market Problem: The efficacy with which the Price-discovering Supply-sovereign AMM can solve the TMP depends on the supply curve chosen by the issuer. The supply curve indicates the direction of future price evolution based on the value of the underlying entity. Consequently, choosing a suitable supply curve for the Price-discovering Supply-sovereign AMM is difficult. In most cases, issuers are interested in rising token prices. We assume that issuers would choose a monotonically increasing supply curve to issue tokens. To evaluate whether the TMP can be solved, we distinguish between three cases of supply curves.
In the first case, the issuer chooses a supply curve for the Price-discovering Supply-sovereign AMM that is too steep. The steep supply curve leads to a steep price evolution. Early investors (e.g., developer team and seed-investors) have high profit margins. They can buy tokens very cheaply and profit from the steep price evolution. The steep supply curve results
in low liquidity because token prices change rapidly for small-volume transactions that burn/mint tokens. Consequently, the Price-discovering Supply-sovereign AMM cannot solve the TMP because of high token price fluctuation and low liquidity but allows for high profit margins.
In the second case, the issuer chooses a supply curve that is too flat. Early investors have little profits because of the flat price evolution. The flat supply curve results in high liquidity because token prices change very little, even for large-volume transactions. Consequently, the Price-discovering Supply-sovereign AMM can solve the TMP by providing stable token prices and high liquidity but attenuating profit margins.
In the third case, the issuer chooses a supply curve with a suitable slope that balances profits for early investors, available liquidity, and price evolution. Consequently, the Price-discovering Supply-sovereign AMM can solve the TMP with reasonable profit margins.
In summary, there is a dependency between the supply curve slope and the efficacy with which the Price-discovering Supply-sovereign AMM can solve the TMP. Steep supply curve slopes attract more investors because of potentially high profits, but the TMP cannot be solved. A flat slope attracts fewer investors because of low profits, but the TMP can be solved. Consequently, issuers must choose a suitable supply curve within this trade-off to attract sufficient investors while solving the TMP.
Purposes of the Price-discovering Supply-sovereign AMM: The Price-discovering Supply-sovereign AMM is exclusively used for _token issuance_ because supply sovereignty is needed by design. Common purposes for token issuance are initial token offerings [13] and curation tokens [61, 62]. For both purposes, issuers must define the supply curves implemented in the Price-discovering Supply-sovereign AMM.
## VI Discussion
### _Principal Findings_
In this work, we present an AMM taxonomy including four groups (i.e., pricing, liquidity, trading, and governance) comprising 21 dimensions with a total of 53 characteristics. Building on the design specifications of AMMs presented in the AMM taxonomy, we introduced three AMM archetypes (i.e., the Price-discovering LP-based AMM, the Price-adopting LP-based AMM, and the Price-discovering Supply-sovereign AMM) implementing distinct solution approaches for the TMP by tackling the LAP and PDP.
In our literature analysis, we recognized the dominance of gray literature, technical documentation, and blog articles in the field of AMMs. The development of AMMs seems to be mainly driven by practice and only gradually gaining ground in science. In the AMM taxonomy development, we recognized that boundaries between AMMs and other parts of cryptoeconomic systems are often not clearly defined. For example, Uniswap is described as a peer-to-peer protocol for decentralized token exchange that utilizes an AMM [100]. In contrast, the official Uniswap v3 whitepaper says that Uniswap is a non-custodial AMM [85]. Such inconsistent wording makes drawing clear system boundaries for AMMs and their uses in cryptoeconomic systems difficult. We argue that AMMs are market makers implemented as software agents that trade tokens with market participants at self-determined prices in an automated manner. The purposes of AMMs include decentralized token exchanges and token issuance. A decentralized exchange (DEX) is a decentralized marketplace that is the decentralized equivalent of traditional exchanges. DEXs allow market participants to trade tokens with each other. Consequently, AMMs are market participants in DEXs that continuously offer token trades at stated bid/ask token prices with other market participants. DEXs can contain several types and instances of AMMs. Each instance of AMM is responsible for exactly one liquidity pool that allows other market participants to exchange the tokens included in its liquidity pool.
In the classification of AMMs into the AMM taxonomy, we recognized that the price discovery mechanisms implemented in AMMs strongly differ depending on the purposes of the AMMs. There is a strong influence of the price discovery mechanism on the token price evolution. Some price discovery mechanisms are better suited for certain purposes (e.g., correlated or uncorrelated token exchange) than others. For example, constant-product price discovery mechanisms are frequently used in exchanges of uncorrelated tokens because token prices are changed almost equally at all token prices [14]. Constant-product-sum price discovery mechanisms are used in exchanges of correlated tokens because token prices are less changed at a specified exchange rate than with constant-product price discovery mechanisms [14]. Such differences in how price discovery mechanisms shape the token price evolution are needed to enable AMMs to meet their individual purposes.
Many AMM designs are customized forks of code repositories of established AMMs, such as Uniswap v2 and Uniswap v3. Such forks of AMM code bases are partially modified (e.g., SushiSwap adding staking and liquidity mining to Uniswap v2 or PancakeSwap running on Binance Smart Chain instead of the Ethereum system). Such modifications lead to a large variance between AMMs classified into the AMM taxonomy (see Table IV). For example, the dimensions _allowed trading pairs_, _trading fee adjustment_, and _parameter adjustment_ are only slightly modified in most forks of the AMM designs (e.g., Sushiswap, PancakeSwap). The solution approaches for the LAP, PDP, and TMP remain largely unchanged. Since AMM operators usually issue their own tokens (e.g., UNI, SUSHI), the operators can profit by selling these issued tokens. We suspect that this monetary interest has created a large number of new AMMs that have modified existing AMMs with little effort to create their own AMMs to profit from selling custom tokens.
Despite the arbitrary variance of characteristics that are easy to modify in AMM designs, the applicability demonstration of the AMM taxonomy (see Table IV) shows that single characteristics per dimension have become dominant in AMM designs. For example, most AMM designs are non-translation invariant and information expressive based on liquidity [19, 45, 101]. Only a few AMM designs with constant-sum price discovery are translation invariant and information inexpressive [14, 21, 97]. The dimensions _information expressiveness_, _translation invariance_, and _volume dependency_ appear to depend on the dimension _price discovery_. For example, AMMs with constant-sum price discovery are _information inexpressive_, _translation invariant_, and _volume independent_. Such AMMs have known weaknesses for the purpose of decentralized token exchange. For example, due to _information inexpressiveness_ and _volume independence_, they cannot adjust token prices [21, 47]. In addition, AMMs with constant-sum price discovery can deplete and become illiquid, which can make them unable to trade. mStable was the only AMM that used constant-sum price discovery until 2021, when it switched to constant-product-sum price discovery because of the weaknesses of the constant-sum approach [21]. It is unclear whether this characteristic will still be relevant in the future.
We identified the Price-discovering LP-based AMM as the predominant archetype that is widespread with 98 occurrences in our analysis of 110 AMMs (see Table IV). Extant literature also focuses on Price-discovering LP-based AMM. The Price-discovering Supply-sovereign AMM and Price-adopting LP-based AMM are much less represented in extant literature. The Price-discovering LP-based AMM appears frequently due to forks of well-known AMMs, such as Uniswap v2, Uniswap v3, and Curve, which are attributed to the Price-discovering LP-based AMM. Because those forks change a few characteristics with minor influences on the solution of the LAP, PDP, and TMP, there is a large variance in the analyzed AMM designs.
The Price-adopting LP-based AMM is less used in practice, presumably because of its dependency on external price oracles and its incapability of solving the TMP. However, our results indicate that the Price-adopting LP-based AMM can enable a cost-efficient decentralized token exchange if it is used in thick markets with reliable external price oracles. AMM archetypes with internal price discovery mechanisms suffer from financial losses if token prices diverge from adequate token prices because tokens are then sold under price or bought over price. In contrast, as long as the price oracle reliably provides adequate token prices, the Price-adopting LP-based AMM adjusts its token prices without selling tokens under price or buying tokens over price.
The Price-discovering Supply-sovereign AMM is exclusively used for the purpose of token issuance because it requires supply sovereignty over at least one token. Our findings indicate that the Price-discovering Supply-sovereign AMM is most suitable for the purpose of token issuance because issuers can prescribe the supply curve of the token. Thereby, token issuers can guide price evolution to increase its predictability for the issuers and facilitate the assessment of financial risks for investors. However, the Price-discovering Supply-sovereign AMM is scarcely researched. Specific effects on price evolution still remain unclear. We suppose that Price-discovering Supply-sovereign AMMs offer a new way of token issuance. To the best of our knowledge, no equivalent concept in economics considers the issuance of tokens or stocks with unlimited supply. The mechanism to burn/mint tokens based on a defined supply curve allows issuers to control or even dictate the price evolution based on the total value of the entity the token represents. Because the Price-discovering Supply-sovereign AMM backs issued tokens according to the supply curve, the tokens can be sold at any time using the token prices calculated from the supply curve. This creates a guaranteed minimum token price for investors. The guaranteed value of the underlying entity is given by the tokens that back the issued tokens.
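As an illustration of such supply-sovereign issuance, the following sketch implements a hypothetical linear supply (bonding) curve in Python; the curve shape, parameters, and class name are assumptions for the example rather than the design of any specific AMM:

```python
# Minimal bonding-curve sketch (illustrative only). The AMM has supply sovereignty:
# it mints tokens when bought and burns them when sold, pricing along a prescribed
# supply curve p(s) = slope * s, and holds the paid reserve so that tokens can
# always be sold back at curve prices.

class LinearBondingCurve:
    def __init__(self, slope=0.01):
        self.slope = slope
        self.supply = 0.0      # tokens currently minted
        self.reserve = 0.0     # reserve backing the minted supply

    def _cost(self, s0, s1):
        # Reserve needed to move supply from s0 to s1: integral of slope * s ds.
        return 0.5 * self.slope * (s1**2 - s0**2)

    def mint(self, amount):
        cost = self._cost(self.supply, self.supply + amount)
        self.supply += amount
        self.reserve += cost
        return cost

    def burn(self, amount):
        refund = self._cost(self.supply - amount, self.supply)
        self.supply -= amount
        self.reserve -= refund
        return refund

amm = LinearBondingCurve()
print("cost to mint 100:", amm.mint(100))    # 50.0
print("refund to burn 40:", amm.burn(40))    # paid out along the same curve
print("remaining reserve:", amm.reserve)     # exactly backs the remaining supply
```

Because every mint deposits exactly the reserve prescribed by the curve, every burn can be paid out at curve prices, which is the mechanism behind the guaranteed minimum token price mentioned above.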
### _Contributions to Practice and Research_
Our study bridges the gap between practice and research and informs about the dimensions and characteristics of AMM designs and commonly implemented solution approaches for the TMP. We contribute to research in three ways. First, we contribute to the understanding of AMM designs by presenting an AMM taxonomy. The AMM taxonomy can be used to guide AMM development as it points out dimensions that need to be considered by developers and offers options to implement the dimensions in the form of characteristics. Moreover, the AMM taxonomy is useful for the systematic comparison of AMM designs, for example, to identify design differences.
Second, by presenting AMM archetypes (i.e., Price-discovering LP-based AMM, Price-adopting LP-based AMM, Price-discovering Supply-sovereign AMM), we support the understanding of the basic functioning of common AMM designs that are used for specific resource allocation purposes (e.g., token issuance). The AMM archetypes can be used as abstract blueprints that can be refined to develop AMMs that meet resource allocation purposes. For example, AMM developers can fine-tune AMM archetypes by selecting characteristics of the AMM taxonomy.
Third, by explaining the principal solution approaches implemented in AMMs to tackle the TMP and the efficacy of these solution approaches, we contribute to understanding how the TMP can be addressed through AMM designs. Moreover, we explain how the individual solution approaches to tackling the TMP depend on market dynamics, such as the frequency of buy and sell orders. This understanding is useful to support practitioners in predicting financial risks arising from the TMP and taking corresponding actions to avert such risks. For example, investors can precisely predict the price impact caused by buying and selling larger amounts of tokens.
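For instance, under the common constant-product invariant the price impact of a trade of a given size can be computed in closed form; the short sketch below (illustrative reserves, not data from the analyzed AMMs) shows how such a prediction could look:

```python
# Sketch: predicted price impact of selling dx of token X into a constant-product
# pool with reserves (x, y). Numbers are illustrative placeholders.

def price_impact(x, y, dx):
    spot_before = y / x
    y_out = y - (x * y) / (x + dx)          # amount of Y the trader receives
    effective_price = y_out / dx            # average price actually obtained
    spot_after = (y - y_out) / (x + dx)
    return effective_price, spot_after, 1 - effective_price / spot_before

for dx in (1, 10, 100):
    eff, spot, impact = price_impact(1_000.0, 1_000.0, dx)
    print(f"sell {dx:4d}: effective price {eff:.4f}, new spot {spot:.4f}, impact {impact:.2%}")
```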
### _Limitations and Future Research_
Extant research is mainly concerned with Price-discovering LP-based AMMs (e.g., constant-function market makers [14, 21, 102]) and Price-adopting LP-based AMMs (e.g., proactive market makers [16, 21, 72]). Research on Price-discovering Supply-sovereign AMMs is still in its infancy. Only a few publications on Price-discovering Supply-sovereign AMMs are available [9, 103, 104]. Influences on cryptoeconomic system markets emerging from a potentially unlimited token supply have not been addressed yet and remain unclear. In the description of the Price-discovering Supply-sovereign AMM (see Section V-C), we addressed the influences of the unlimited token supply to the best of our knowledge. Such assumptions should be evaluated in a future study to provide evidence on the influence of unlimited token supply in cryptoeconomic system markets.
We analyzed AMMs based on related official whitepapers (e.g., [62, 65, 85]), blog articles (e.g., [60, 93, 103]), and official documentation (e.g., [16, 100]). Most of those publications are not peer-reviewed. Moreover, many publications do not present sufficiently detailed information to classify AMMs into the AMM taxonomy. To still classify all AMMs into our taxonomy, we made informed guesses based on source code. For example, if the AMM builds on a fork of established AMMs, we used publications on the original AMM design to complete the classification. We discussed such assumptions to complete AMM design classifications but cannot guarantee the validity of all assumptions.
We identified three AMM archetypes with individual solution approaches to tackle the LAP, PDP, and TMP. The AMM archetypes are differentiated by two AMM dimensions (_token price source_ and _source of liquidity_), which most influence the solution approaches of AMM designs to tackle the LAP, PDP, and TMP. Among the 21 AMM dimensions included in the AMM taxonomy, we identified _price discovery mechanism_ and _source of liquidity_ as particularly relevant to differentiate AMM designs with respect to their efficacy in solving the LAP, PDP, and TMP. The cross-product of all characteristics per dimension resulted in four archetypes. The Price-adopting Supply-sovereign AMM archetype could not be identified in our analysis of 110 AMMs. We dropped this AMM archetype, but it may become relevant in the future.
Our taxonomy includes additional characteristics, such as _liquidity concentration_ and _price discovery mechanism_, that can also influence the efficacy of solution approaches for the LAP, PDP, and TMP. We neglected these characteristics to increase the discriminatory power of the AMM archetypes. In future research, more granular AMM archetypes could be elaborated.
We extracted the solution approaches implemented in AMM archetypes for the LAP, PDP, and TMP based on literature and AMMs used in cryptoeconomic systems. In Section V, we explain how the AMM archetypes solve the LAP, PDP, and TMP. Quantitative analyses could be initiated that eventually yield further evidence supporting our qualitative results. Quantitative analyses may be based on simulations to investigate how using the three AMM archetypes presented in this work influences cryptoeconomic system markets regarding metrics like token price volatility, price elasticity, and market depth.
## VII Conclusion
Cryptoeconomic systems can offer a decentralized option for organizations to allocate resources based on digital tokens, for example, by issuing tokens representing assets and trading such tokens. In cryptoeconomic systems, AMMs are often used to issue and trade tokens and to solve the TMP. The efficacy with which AMMs solve the TMP is strongly influenced by their designs.
The proper design of AMMs for cryptoeconomic systems is difficult because the characteristics of AMM designs are unclear. Improperly designed AMMs can lead to the failure of cryptoeconomic systems due to the TMP. Since AMM design characteristics are unclear, the solution approaches for the TMP that are implemented in AMM designs also remain unclear. This ultimately hinders the development of proper AMM designs that can solve the TMP. To support the development of proper AMM designs, this manuscript presents an AMM taxonomy with 21 dimensions and 53 characteristics. The AMM taxonomy includes dimensions shared by any AMM design and characteristics that inform how the dimensions can be implemented in AMM designs. Thereby, the AMM taxonomy can guide the development of AMM designs. Leveraging the AMM taxonomy, we developed three AMM archetypes (i.e., Price-discovering LP-based AMM, Price-adopting LP-based AMM, and Price-discovering Supply-sovereign AMM) that reflect common AMM designs with different solution approaches for the TMP. We explain the implemented solution approaches and their efficacy in solving the LAP, PDP, and TMP for each AMM archetype. The three AMM archetypes can support the design and selection of AMM designs to meet individual purposes of cryptoeconomic systems, such as decentralized token exchanges and token issuance.
Research on AMM designs with internal token supply sovereignty is still in its infancy. Our work indicates that AMM designs with internal token supply sovereignty can enable organizations to decouple value changes from token price changes to reduce volatility and prescribe token price evolution, liquidity, and price elasticity. We believe that AMMs with internal token supply sovereignty can establish independent and continuous lead markets. The prescribed supply curve allows for a transparent representation of value and increases predictability in terms of price evolution and price elasticity. Because price evolution and price elasticity depend on the supply curve, developers can specify the price development of a token already during AMM development, actively defining token price developments at issuance and potentially avoiding severe token price fluctuations. We believe AMMs with internal token supply sovereignty will greatly support the viable operation of future cryptoeconomic systems. |
2308.16764 | On the stability of string-hole gas | Focusing on a string-hole gas within the pre-big bang scenario, we study the
stability of its solutions in the phase space. We firstly extend the analysis
present in the literature relaxing the ideal-gas properties of the string-hole
gas, taking into account a (bulk-)viscosity term. Then we consider the case of
a theory described by a complete O(d,d)-invariant action up to all orders in
$\alpha^{\prime}$-corrections (the Hohm-Zwiebach action), studying the
stability of the string-hole gas solution with or without the introduction of
the viscosity term. Furthermore, the bulk viscosity is also considered for two
different first order $\alpha^{\prime}$-corrected actions: the
Gasperini-Maggiore-Veneziano-action and the Meissner-action. The results
obtained show how the viscosity can help to stabilize the string-hole gas
solution, obtaining constraints on the equation of state of the gas. | Denis Bitnaya, Pietro Conzinu, Giovanni Marozzi | 2023-08-31T14:33:11Z | http://arxiv.org/abs/2308.16764v2 | # On the stability of string-hole gas
###### Abstract
Focusing on a string-hole gas within the pre-big bang scenario, we study the stability of its solutions in the phase space. We firstly extend the analysis present in the literature relaxing the ideal-gas properties of the string-hole gas, taking into account a (bulk-)viscosity term. Then we consider the case of a theory described by a complete O(d,d)-invariant action up to all orders in \(\alpha^{\prime}\)-corrections (the Hohm-Zwiebach action), studying the stability of the string-hole gas solution with or without the introduction of the viscosity term. Furthermore, the bulk viscosity is also considered for two different first order \(\alpha^{\prime}\)-corrected actions: the Gasperini-Maggiore-Veneziano-action and the Meissner-action. The results obtained show how the viscosity can help to stabilize the string-hole gas solution, obtaining constraints on the equation of state of the gas.
###### Contents
* 1 Introduction
* 2 Definition of a String-Hole gas
* 3 Non-perfect string-hole fluid
* 3.1 Low-energy action
* 4 Complete O(d,d)-invariant action to all orders in \(\alpha^{\prime}\)-corrections with and without viscosity
* 4.1 Stability analysis of the fixed point
* 5 Gasperini-Maggiore-Veneziano and Meissner actions: solution and stability
* 5.1 Evaluating the stability in general case
* 5.2 Numerical Results
* 6 Discussion and Conclusions
* A Details for the study at first order in \(\alpha^{\prime}\)
* A.1 Meissner case
* A.2 Gasperini-Maggiore-Veneziano case
## 1 Introduction
The construction of cosmological scenarios embedded in string theory is a main open issue within modern cosmology. To this purpose, the use of symmetries, usually called dualities in string theory, is crucial [1; 2; 3]. In particular, the T-duality [1; 2] states that given a universe with radius \(R\), the considered theory is invariant under the transformation \(R\to l_{s}^{2}/R\), where \(l_{s}\) is the string length. This provides a minimum length (\(R\sim l_{s}\)) and eliminates the singularity issue, leading naturally to pre-big bang scenarios [4; 5; 6; 7; 8]. In a recent study [9], it was proposed that the final state of the pre-big bang epoch is dominated by a black-hole gas in a quasi-static Hagedorn phase [10]. Such a gas of black (string)-holes could naturally form through instabilities in a contracting universe (see e.g. [11; 12; 13]). In particular, [9] described the stability of the black-hole gas solution in the presence of a dilaton potential and considered the first \(\alpha^{\prime}-\)correction contribution for two different actions: the Gasperini-Maggiore-Veneziano (GMV)-action [14] and the Meissner-action [15], with \(\alpha^{\prime}=l_{s}^{2}/2\pi\) the dimensionful parameter associated with the string length \(l_{s}\). These \(\alpha^{\prime}-\)corrections account for the leading-order corrections to the classical equations of motion from the non-zero string length. Therefore, they are essential for a more accurate description of the pre-big bang phase, when the curvature reaches the string scale.
Following this seminal work, we here extend the results presented in [9] in several ways. First, we will include \(\alpha^{\prime}-\)corrections up to all orders, using the duality-invariant action presented in [16] (see also [17]) to describe the string-hole gas matter. As mentioned, higher-order corrections are necessary when \(H^{-1}\sim l_{s}\), since in this case all corrections become comparable and one cannot truncate the expansion. In [16] Hohm and Zwiebach addressed this issue by assuming that the O(d,d)-duality holds to all orders, obtaining a simple and systematic action for these \(\alpha^{\prime}-\)corrections. This duality-invariant action is a significant advancement in the context of string cosmology, allowing for a more consistent and robust description of the cosmological evolution. Furthermore, to enhance the stability of the solution, we will also consider a (bulk-)viscosity term to relax the ideal-gas properties of the string-hole gas. The introduction of a viscosity term in this framework is motivated by several factors. First of all, it can be seen as an attempt to improve the description of the fluid itself. In a realistic description of the dynamics of a gas, the existence of dissipative terms can be traced back
to an interaction between the gas particles. For example, in the context of a black-hole gas the authors of [18] have explicitly shown how a viscosity term can emerge from gravitational interactions. Moreover, it is important to note that a perfect fluid description is not stable under general O(d,d) transformations [7]; indeed, a general transformation could induce bulk/shear viscosity terms. Here, we will focus only on the contribution of the bulk viscosity term, since we are considering spatially homogeneous and isotropic backgrounds (but see, for example, [19; 20] for possible effects of shear viscosity in anisotropic scenarios). We leave the investigation of more general descriptions with shear viscosity for future analysis. This approximation should be appropriate for a preliminary analysis and serves as a basis for more comprehensive studies.
The manuscript is organized as follows. In Sect. 2 we review the properties of a string-hole gas (SHG). In Sect. 3 we introduce the non-perfect fluid description and then we study the SHG solution in the presence of bulk viscosity at zeroth order in \(\alpha^{\prime}\)-corrections (low-energy action). In Sect. 4 we study the SHG solution in a complete O(d,d)-invariant action up to all orders in \(\alpha^{\prime}\), with and without a bulk viscosity term. Finally, in Sect. 5 we study the impact of a bulk viscosity term at first order in \(\alpha^{\prime}\)-corrections for the GMV-action and for the Meissner-action. In Sect. 6 we present our final remarks and conclusions, while in Appendix A we give some useful relations.
## 2 Definition of a String-Hole gas
We recall that a string-hole [21; 22] is a Schwarzschild black-hole [23] confined within a radius given by the string length (\(R_{SH}=l_{\rm s}\)), such that its mass is
\[M_{SH}\sim\frac{R_{SH}^{D-3}}{G}\sim\frac{l_{\rm s}^{D-3}}{G}\,, \tag{1}\]
with \(D\) the total space-time dimension. By taking in consideration the following tree level relation [7]
\[\left(\frac{l_{\rm Pl}}{l_{\rm s}}\right)^{D-2}=\left(\frac{M_{\rm s}}{M_{\rm Pl }}\right)^{D-2}=e^{\phi}=g_{\rm s}^{2}\ll 1\, \tag{2}\]
where \(\phi\) is the dilaton field, \(M_{\rm s}=l_{\rm s}^{-1}\) the string mass and \(M_{\rm Pl}^{2-D}=8\pi G\) the Planck mass, one can easily see that the string-hole mass can be rewritten as
\[M_{SH}\sim M_{\rm s}g_{\rm s}^{-2}\,, \tag{3}\]
which is the so-called correspondence curve [24], where strings and black-holes share remarkable similarities (see e.g. [9; 24]). Therefore, the appropriate description of a universe populated with a dense gas of black-holes at the string scale must be a string-hole gas (SHG).
Let us now consider a dense gas composed of N string-holes, each with energy \(E_{\rm SH}=M_{\rm SH}\) and entropy \(S_{\rm SH}=l_{\rm s}E_{\rm SH}\) [9]. The energy and entropy of this SHG are then given by
\[E_{gas}=E=NE_{\rm SH}\sim Nl_{\rm s}^{-1}e^{-\phi}\,,\qquad S_{gas}=S=NS_{\rm SH }\sim Ne^{-\phi}\,. \tag{4}\]
The physical volume of the gas can be written as
\[V_{gas}\equiv V=\gamma NV_{\rm SH}\,, \tag{5}\]
where \(\gamma\) quantifies the "density" of the gas, and \(V_{\rm SH}\sim l_{\rm s}^{D-1}\) is the volume of a single string-hole. Thus the number \(N\) is given by \(N\sim Vl_{s}^{1-D}\), such that the energy and entropy densities are 1
Footnote 1: Here we have used the following relation: \(e^{\phi}\sim Gl_{\rm s}^{2-D}\).
\[\rho=\frac{E}{V}\sim l_{\rm s}^{-D}e^{-\phi}\sim l_{\rm s}^{-2}G^{-1}\,, \qquad s=\frac{S}{V}\sim l_{\rm s}^{1-D}e^{-\phi}\sim l_{\rm s}^{-1}G^{-1}\sim l _{\rm s}\rho. \tag{6}\]
On the other hand, the generalized second law of thermodynamics in the presence of dilaton charge \(\sigma\) reads
\[TdS=dE+\bar{p}_{I}dV-\frac{\bar{\sigma}}{2}d\Phi\,, \tag{7}\]
where \(\Phi=\phi-d\log a\) is the shifted dilaton, a barred quantity is defined as \(\bar{A}\equiv a^{d}A\), and we also define \(p_{I}\equiv p-\frac{\sigma}{2}\). As a consequence one obtains that
\[p_{I}=T\frac{\partial S}{\partial V}\Big{|}_{\Phi,E}\,,\qquad\qquad\frac{\bar{ \sigma}}{2}=-T\frac{\partial S}{\partial\Phi}\Big{|}_{V,E}\,. \tag{8}\]
Furthermore, using the relation \(\frac{1}{T}=(\frac{\partial S}{\partial E})_{V,\Phi}\) one finds that the temperature is proportional to the Hagedorn temperature \(T_{Hag}\sim l_{\rm s}^{-1}\,\).
Considering a FLRW universe with scale factor \(a\), and requiring \(\rho\sim a^{-d}\), we have that \(a\sim G^{\frac{1}{d}}\sim e^{\frac{\phi}{d}}\) and obtain the following Hubble factor:
\[H=\frac{\dot{a}}{a}=\frac{\dot{\phi}}{d}\,, \tag{9}\]
that equivalently can be expressed as \(\dot{\Phi}=0\). On the other hand, using the continuity equation
\[\dot{\bar{\rho}}+dH\bar{p}_{I}=\frac{\bar{\sigma}}{2}\dot{\Phi}\,, \tag{10}\]
one then obtains
\[p_{I}=0\,, \tag{11}\]
which gives us the relation \(\sigma=2p\,\).
Therefore, if the string-hole radius \(R_{SH}\sim l_{S}\) is proportional to the Hubble radius \(R_{SH}\propto H^{-1}\), the evolution of a string-hole gas in the string frame corresponds to a constant Hubble parameter equal to the string mass and a linearly growing dilaton field [9].
## 3 Non-perfect string-hole fluid
Due to gravitational attraction, a gas of black-holes can be seen as a viscous fluid (see, for example, [18]). Therefore, we now go beyond the perfect fluid approximation, adding viscosity to the model considered.
To begin, we briefly review the description of a generic fluid in the presence of viscosity (we follow the presentation of [18]). Given a time-like fluid 4-velocity \(u^{a}\), such that \(g_{ab}u^{a}u^{b}=-1\), one can define the corresponding induced metric on the spatial hypersurface by
\[h_{ab}=g_{ab}+u_{a}u_{b}\,,\]
and the extrinsic curvature by
\[K_{ab}=h_{a}^{c}h_{b}^{d}\nabla_{d}u_{c}=D_{b}u_{a}=\nabla_{b}u_{a}+u_{b}\dot{ u}_{a}\,,\]
where \(\dot{u}_{a}=u^{c}\nabla_{c}u_{a}\). The extrinsic curvature can then be decomposed into its symmetric and antisymmetric parts as
\[K_{ab}=\Theta_{ab}+\omega_{ab}\,,\]
where the symmetric part \(\Theta_{ab}\) is called the expansion tensor, while the antisymmetric part \(\omega_{ab}\) is called the vorticity tensor. For simplicity, in what follows we will assume a vanishing antisymmetric part, \(\omega_{ab}=0\). We can then further expand \(\Theta_{ab}\) into the following irreducible representations:
\[\Theta_{ab}=\frac{1}{2}\Theta h_{ab}+\sigma_{ab}\,, \tag{12}\]
with \(\Theta\) the trace part of the expansion tensor and \(\sigma_{ab}\) its traceless part called shear tensor, i.e.
\[\Theta=g^{ab}\Theta_{ab}=D_{a}u^{a}=\nabla_{a}u^{a}\qquad,\qquad g^{ab}\sigma_ {ab}=0\,. \tag{13}\]
The energy-momentum tensor for a generic fluid can then be written in terms of this expansion tensor (neglecting heat transfer) as [9]
\[T_{ab}=\rho\,u_{a}u_{b}+(p-\zeta\Theta)h_{ab}-2\eta\sigma_{ab}\,, \tag{10}\]
where the coefficients \(\eta\) and \(\zeta\) are respectively the shear viscosity and the bulk viscosity. Here we are not interested in anisotropies and so, for simplicity, we will restrict our analysis to zero shear viscosity. Considering only the bulk viscosity, we can then define an effective pressure \(p_{\rm eff}\) in the following way
\[p_{\rm eff}=p-\zeta\Theta\,. \tag{11}\]
Our purpose is to describe a SHG (in the S-frame), including the presence of a bulk viscosity \(\zeta\), as a fluid with effective pressure \(p_{\rm eff}=\sigma/2\), without constraints on the equation of state \(p=\omega\rho\). This is realized, in a homogeneous FLRW background (where \(\Theta=dH\)), still assuming \(\rho\sim a^{-d}\) such that \(\dot{\Phi}=0\) (see Eq. (9)). In fact, using the continuity equation
\[\dot{\bar{\rho}}+dH(\bar{p}_{I}-dH\bar{\zeta})=\frac{\bar{\sigma}}{2}\dot{ \Phi}\,, \tag{12}\]
we obtain
\[p_{I}-dH\zeta=0\,, \tag{13}\]
which gives \(p_{\rm eff}=\sigma/2\).
Note that, as we can directly verify combining Eq. (7) and Eq. (10), in the presence of bulk viscosity it holds (see [25] for similar conclusions)
\[T\dot{S}=d^{2}H^{2}\zeta\,, \tag{14}\]
such that we are in a period of entropy production. Therefore, during the evolution it is expected that at some point, when we saturate the entropy bounds, \(\zeta\to 0\). For our purposes it is enough to consider a particular regime of constant viscosity, which can be seen as a good approximation assuming a slow evolution of the viscosity term. In this way we describe, de facto, only a first period of the full evolution.
### 3.1 Low-energy action
Let us start our analysis by applying the non-perfect fluid description (considering only a bulk viscosity term) to the low-energy action and, following [9], by considering the contribution of a generic dilaton potential
\[S=-\frac{1}{2l_{\rm s}^{d-1}}\int d^{d+1}x\sqrt{|g|}e^{-\phi}(R+g^{\mu\nu}\nabla_{\mu}\phi\nabla_{\nu}\phi+2l_{\rm s}^{d-1}U(\phi))+S_{m}\,, \tag{15}\]
where we explicitly use \(D=d+1\), with \(d\) the space dimensions. It should be noted that the matter action \(S_{m}\) in Eq. (15) is just a formal action, since hydrodynamics, especially in the presence of dissipative terms, is usually formulated directly in terms of equations of motion instead of an action principle (see e.g. [26]).
The main effect of the bulk viscosity \(\zeta\) is to modify the pressure, such that the equations of motion for a FLRW-metric become the following
\[d(d-1)H^{2}+\dot{\phi}^{2}-2dH\dot{\phi}= 2l_{\rm s}^{d-1}(e^{\phi}\rho+U(\phi))\,,\] \[\dot{H}-H\dot{\phi}+dH^{2}= l_{\rm s}^{d-1}\left(e^{\phi}\left(p_{eff}-\frac{\sigma}{2} \right)-U_{,\phi}\right)\,,\] \[2\ddot{\phi}-\dot{\phi}^{2}+2dH\dot{\phi}-2d\dot{H}-d(d+1)H^{2}= 2l_{\rm s}^{d-1}(e^{\phi}\frac{\sigma}{2}-U(\phi)+U_{,\phi})\,. \tag{16}\]
Assuming a barotropic equation of state \(p=\omega\rho\) and recalling the string-hole solution
\[\rho=\rho_{0}a^{-d}=Cl_{\rm s}^{-d-1}e^{-\phi}\,,\qquad H=\frac{\dot{\phi}}{d }={\rm const.}\,,\qquad\sigma=2p_{eff}\,, \tag{17}\]
where \(C\) is a constant, the equations of motion become
\[-dH^{2} =2l_{\rm s}^{d-1}(e^{\phi}\rho+U)\,,\] \[0 =l_{\rm s}^{d-1}U_{,\phi}\,,\] \[-dH^{2} =2l_{\rm s}^{d-1}(e^{\phi}p_{eff}-U)\,. \tag{21}\]
Assuming that \(H^{-1}\) is of order of the string length, from the first equation of Eq. (21), the potential \(U\) is fixed to be
\[U\sim-l_{\rm s}^{-d-1}\left[C+\frac{d}{2}\right]\,, \tag{22}\]
which is equivalent to the addition of a negative cosmological constant \(\Lambda\sim{\cal O}(l_{\rm s}^{-d-1})\) to the problem, as was already observed in [9] without a bulk viscosity term. As a consequence, the low-energy action seems insufficient to describe a SHG, at least without adding a fine-tuned negative cosmological constant to the potential. Therefore, hereafter we will extend this scenario taking into account also \(\alpha^{\prime}-\)corrections.
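A quick symbolic check of this statement can be done with sympy, as in the sketch below; it substitutes \(e^{\phi}\rho=Cl_{\rm s}^{-d-1}\) into the first of the fixed-point equations and then imposes \(H=l_{\rm s}^{-1}\) (the string-hole condition \(H^{-1}\sim l_{\rm s}\) promoted to an equality for the purpose of the check):

```python
import sympy as sp

# Minimal symbolic check (a sketch, not part of the original analysis).
d, ls, C, H, U = sp.symbols('d l_s C H U', positive=True)

# First fixed-point equation of the low-energy system with the SHG ansatz,
# using e^phi * rho = C * l_s**(-d-1):
eq = sp.Eq(-d*H**2, 2*ls**(d - 1) * (C*ls**(-d - 1) + U))
U_sol = sp.solve(eq, U)[0]

# Impose H = 1/l_s: the potential is forced to a fine-tuned negative constant,
# equivalent to -(C + d/2) * l_s**(-d-1).
print(sp.simplify(U_sol.subs(H, 1/ls)))
```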
## 4 Complete O(d,d)-invariant action to all orders in \(\alpha^{\prime}\)-corrections with and without viscosity
In the following we extend the results obtained in [9] to the case of an action valid to all orders in \(\alpha^{\prime}\) (see e.g. [16; 17; 19; 27]), studying the stability of the SHG solution. As mentioned, there are several motivations for considering \(\alpha^{\prime}-\)corrections to all orders. On one hand, when we approach string-curvature scales \(H^{-1}\sim l_{S}\), all higher-order corrections in \(\alpha^{\prime}\) become comparable to each other. On the other hand, if one tries to truncate the series in \(\alpha^{\prime}\), for example at first order, one finds a large number of actions related by simple field redefinitions [7], with the consequent ambiguity regarding the right choice to adopt. Fortunately, such ambiguity vanishes when we include all the \(\alpha^{\prime}\) corrections. Hohm and Zwiebach addressed this issue by assuming that the O(d,d)-duality [3] holds to all orders, obtaining a simple and systematic action for the \(\alpha^{\prime}-\)corrections [16; 28]. This duality-invariant action is a significant advancement in the context of string cosmology, allowing for a more consistent and robust description of the cosmological evolution [16; 17; 19; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36].
Let us briefly recall the notation we will use. The massless Neveu-Schwarz sector of all superstring theories is composed of three fields: the symmetric spacetime metric tensor \(G_{\mu\nu}\), the antisymmetric Kalb-Ramond tensor field \(b_{\mu\nu}\) and the dilaton field \(\phi\). For cosmological purposes we consider a purely time-dependent (d+1)-dimensional string background
\[G_{\mu\nu}=\begin{pmatrix}-n^{2}(t)&0\\ 0&g_{ij}(t)\end{pmatrix}\,,\qquad\qquad b_{\mu\nu}=\begin{pmatrix}0&0\\ 0&b_{ij}(t)\end{pmatrix}\,,\qquad\qquad\phi=\phi(t). \tag{23}\]
We consider then the following string action, up to all orders in \(\alpha^{\prime}\)-corrections, coupled to matter (see e.g. [19; 27])
\[S[\Phi,n,S,\chi]=\frac{1}{2\kappa^{2}}\int d^{d}xdtne^{-\Phi}\left[-({\cal D} _{t}\Phi)^{2}+\sum_{k=1}^{\infty}(\alpha^{\prime})^{k-1}c_{k}{\rm tr}\left(({ \cal D}_{t}{\cal S})^{2k}\right)\right]+S_{m}[\Phi,n,S,\chi]\,, \tag{24}\]
where \(\chi\) is a generic matter field, \(\kappa\propto l_{s}^{d-1}\) defines the D-dimensional Newton constant, \({\cal D}_{t}\equiv 1/n\,\partial_{t}\), \(\Phi\) is the shifted dilaton defined as
\[\Phi=\phi-\log\sqrt{g}\,, \tag{25}\]
and \({\cal S}\) is an O(d,d) invariant \(2d\times 2d\) matrix given by 2
Footnote 2: Note that \(g=det(g_{\mu\nu})\), \(G=det(G_{\mu\nu})\) and we have \(\sqrt{-G}=n\sqrt{g}\).
\[{\cal S}=\left(\begin{array}{cc}bg^{-1}&g-bg^{-1}b\\ g^{-1}&-g^{-1}b\end{array}\right)\,, \tag{26}\]
namely this is constructed by the two-form field \(b_{ij}\), the spatial metric \(g_{ij}\) and its inverse \(g^{ij}\), such that \(\mathcal{S}^{2}=1\). We can always define the energy-momentum tensor and charge density of the scalar field as
\[T_{\mu\nu}=-\frac{2}{\sqrt{-G}}\frac{\delta S_{m}}{\delta g^{\mu\nu}}\,,\qquad \qquad\sigma=-\frac{2}{\sqrt{-G}}\left(\frac{\delta S_{m}}{\delta\Phi}\right)\,, \tag{10}\]
and we consider a generic isotropic fluid, possibly with a bulk viscosity term, such that the pressure \(p\) can in general contain the viscosity term as in Eq. (11).
Let us assume, for simplicity, the case of a flat FLRW background
\[n(t)=1\,,\qquad g_{ij}(t)=a(t)^{2}\delta_{ij}\qquad b_{ij}=0\,,\]
such that the equations of motion take the following form (see e.g. [17])
\[\dot{\Phi}^{2}+HF^{\prime}(H)-F(H)= Y\rho\,,\] \[\dot{H}F^{\prime\prime}(H)-\dot{\Phi}F^{\prime}(H)= -Yd\left(\bar{p}-\frac{\bar{\sigma}}{2}\right)\,,\] \[2\ddot{\Phi}-\dot{\Phi}^{2}+F(H)= Y\frac{\bar{\sigma}}{2}\,, \tag{11}\]
while the continuity equation is given by 3
Footnote 3: Naturally, these equations are invariant under the duality transformation \(a\to\frac{1}{a}\)
\[H\to-H\qquad\Phi\to\Phi\qquad F(H)\to F(H),\qquad\bar{\rho}\to\bar{\rho}, \qquad\bar{p}\to-\bar{p},\qquad\bar{\sigma}\to\bar{\sigma}. \tag{12}\]
\[\dot{\bar{\rho}}+dH\bar{p}_{I}-\frac{1}{2}\dot{\Phi}\bar{\sigma}=0\,, \tag{13}\]
where we define \(Y\equiv 2\kappa^{2}e^{\Phi}\), and the function \(F(H)\) as 4
Footnote 4: The coefficient \(c_{1}=-\frac{1}{8}\) is fixed by the low energy description while the higher coefficients are only partially known [16; 17; 28].
\[F(H)=2d\sum_{k=1}^{\infty}(-\alpha^{\prime})^{k-1}c_{k}2^{2k}H^{2k}\,, \tag{14}\]
where \((\cdots)^{\prime}\) denotes differentiation with respect to \(H\).
Imposing the SHG solution \(\dot{\Phi}=0=\dot{\phi}-dH\) the above equations become
\[HF^{\prime}-F=Y\bar{\rho}\,,\] \[0=-Yd(\bar{p}-\frac{\bar{\sigma}}{2})\ \Rightarrow\bar{p}=\frac{\bar{ \sigma}}{2}\,,\] \[F=Y\frac{\bar{\sigma}}{2}\,, \tag{15}\]
and we finally obtain the two conditions
\[F= Y\bar{p}\,,\] \[F^{\prime}= \frac{Y}{H}(\bar{\rho}+\bar{p})\,. \tag{16}\]
In particular, assuming a barotropic relation \(p=\omega\rho\), considering the case \(\omega=0\) and in absence of viscosity (\(\zeta=0\)), we obtain from the first of Eqs. (16) that
\[\bar{p}=\frac{\bar{\sigma}}{2}=0\quad\implies\quad F=0\,, \tag{17}\]
while from the second of Eqs. (4.11) it holds
\[F^{\prime}=\frac{Y\bar{\rho}}{H}\,. \tag{4.13}\]
A direct constraint on \(F\) and on its first derivative \(F^{\prime}\) follows, with the latter completely fixed by the density of the gas. On the other hand, for \(\omega=0\) but with a non-zero bulk viscosity, we obtain \(\bar{\sigma}=-2d\bar{\zeta}H\) such that
\[F= -Yd\bar{\zeta}H\,,\] \[F^{\prime}= \frac{Y}{H}(\bar{\rho}-d\bar{\zeta}H)\,. \tag{4.14}\]
As a consequence, also in this case, we find a direct relation among \(F(H)\), its first derivative and the density and viscosity of the gas.
### 4.1 Stability analysis of the fixed point
Here, our purpose is to investigate the stability of the previously obtained string-hole gas solution. Specifically, we aim to establish whether this solution behaves as an attractor within the phase space of the theory. To this aim, we firstly rewrite the equations of motion as:
\[F^{\prime}= \frac{1}{H}[Y\bar{\rho}+F-\dot{\Phi}^{2}]\,,\] \[\dot{H}= \frac{-Yd(\bar{p}-\frac{\bar{\sigma}}{2})+F^{\prime}\dot{\Phi}^{ 2}}{F^{\prime\prime}}\,,\] \[\ddot{\Phi}= \frac{Y\frac{\bar{\sigma}}{2}+\dot{\Phi}^{2}-F}{2}\,, \tag{4.15}\]
where we are again using \(p\) as a generic pressure that can in general contain the viscosity contribution. Let us note that we can eliminate \(F^{\prime}\) using the first equation, such that the reduced system reads
\[\dot{H}= \frac{1}{F^{\prime\prime}}\left[-Yd\left(\bar{p}-\frac{\bar{ \sigma}}{2}\right)+\frac{\dot{\Phi}}{H}(Y\bar{\rho}+F-\dot{\Phi}^{2})\right]\,,\] \[\ddot{\Phi}= \frac{Y\frac{\bar{\sigma}}{2}+\dot{\Phi}^{2}-F}{2}\,. \tag{4.16}\]
Finally, imposing the condition \(\sigma=2p\) one obtains
\[\dot{H}= \frac{1}{F^{\prime\prime}}\left[\frac{\dot{\Phi}}{H}(Y\bar{\rho} +F-\dot{\Phi}^{2})\right]\,,\] \[\ddot{\Phi}= \frac{Y\bar{p}+\dot{\Phi}^{2}-F}{2}\,. \tag{4.17}\]
The Jacobian of the above system is thus given by
\[\mathcal{J}=\begin{pmatrix}\partial_{H}\dot{H}&\partial_{\dot{\Phi}}\dot{H}\\ \partial_{H}\dot{\Phi}&\partial_{\dot{\Phi}}\ddot{\Phi}\end{pmatrix}\,, \tag{4.18}\]
with
\[\partial_{H}\dot{H}= \frac{1}{F^{\prime\prime 2}}[(-\frac{\dot{\Phi}}{H^{2}}(Y\bar{\rho}+F-\dot{\Phi}^{2})+\frac{\dot{\Phi}}{H}F^{\prime})F^{\prime\prime}-\frac{\dot{\Phi}}{H}(Y\bar{\rho}+F-\dot{\Phi}^{2})F^{\prime\prime\prime}]\,, \tag{4.19a}\] \[\partial_{H}\ddot{\Phi}= \frac{Y\bar{p}^{\prime}-F^{\prime}}{2}\,,\] (4.19b) \[\partial_{\dot{\Phi}}\ddot{\Phi}= \dot{\Phi}\,,\] (4.19c) \[\partial_{\dot{\Phi}}\dot{H}= \frac{1}{HF^{\prime\prime}}[Y\bar{\rho}+F-3\dot{\Phi}^{2}]\,. \tag{4.19d}\]
Now, imposing the solution in Eq. (4.11) for \(F\) and \(F^{\prime}\), it follows that for any \(\bar{p}\)
\[\partial_{H}\dot{H}= 0\,,\] \[\partial_{H}\ddot{\Phi}= -\frac{Y}{2H}\bar{\rho}(1+\omega)\,,\] \[\partial_{\dot{\Phi}}\dot{H}= \frac{Y}{HF^{\prime\prime}}[\bar{\rho}+\bar{p}]\,,\] \[\partial_{\dot{\Phi}}\ddot{\Phi}= 0\,, \tag{4.20}\]
and the trace and determinant of the Jacobian are the following
\[tr\mathcal{J}=\partial_{H}\dot{H}+\partial_{\dot{\phi}}\ddot{\Phi}=0\,, \tag{4.21}\]
\[det\mathcal{J}=\partial_{H}\dot{H}\partial_{\dot{\Phi}}\ddot{\Phi}-\partial_{ \dot{\Phi}}\dot{H}\partial_{H}\ddot{\Phi}=-\frac{Y^{2}}{2H^{2}F^{\prime\prime }}(\bar{\rho}+\bar{p})(1+\omega)\bar{\rho}\,. \tag{4.22}\]
We can note that the trace is independent of \(\bar{p}\), while for \(\zeta=0\) the determinant reduces to
\[det\mathcal{J}=-\frac{Y^{2}}{2H^{2}F^{\prime\prime}}[\bar{\rho}(1+\omega)]^{ 2}\,. \tag{4.23}\]
Thus we can study the eigenvalues of the system distinguishing the cases with and without the viscosity contribution.
* **Case with no viscosity:** for \(\zeta=0\), \(det\mathcal{J}>0\) implies \(F^{\prime\prime}<0\). Therefore, we have a null trace and a positive determinant under the constraint \(F^{\prime\prime}<0\). We can also rewrite these conditions in terms of the two eigenvalues \(\lambda_{1}\), \(\lambda_{2}\) of the jacobian \(\mathcal{J}\), as \[\begin{cases}\lambda_{1}+\lambda_{2}=0\quad\rightarrow\quad\lambda_{1}=- \lambda_{2}\\ \lambda_{1}\cdot\lambda_{2}=-\frac{Y^{2}}{2H^{2}F^{\prime\prime}}[\bar{\rho}( 1+\omega)]^{2}>0\end{cases}\,,\] (4.24) such that the eigenvalues are \[\lambda_{1,2}=\pm i\frac{Y}{H\sqrt{2|F^{\prime\prime}|}}\bar{\rho}(1+\omega)\,.\] (4.25)
* **Case with viscosity:** on the other hand, when \(\zeta\neq 0\), the eigenvalues \(\lambda_{1}\) and \(\lambda_{2}\) are given by \[\lambda_{1,2}=\pm iY\frac{\sqrt{(\omega+1)\bar{\rho}}\sqrt{dH\bar{\zeta}-( \omega+1)\bar{\rho}}}{H\sqrt{2F^{\prime\prime}}}\,,\] (4.26) and the stability requires that \[det\mathcal{J}=-\frac{Y^{2}}{2H^{2}F^{\prime\prime}}\bar{\rho}(1+\omega)[\bar{ \rho}(1+\omega)-d\bar{\zeta}H]>0\,.\] (4.27) Therefore, assuming \(H\) and \(\zeta\) both positives, we have the following two cases that satisfies Eq.(4.27): \[-1<\omega<-1+\frac{d\zeta H}{\rho}\,,\qquad\qquad\text{if}\qquad F^{\prime \prime}>0\,,\] (4.28) \[\omega<-1\,\vee\,\omega>-1+\frac{d\zeta H}{\rho}\,,\qquad\text{if} \qquad F^{\prime\prime}<0\,.\] (4.29)
In the particular case \(\omega=0\) and \(\zeta\neq 0\), we have
\[detJ=-\frac{Y^{2}}{2H^{2}F^{\prime\prime}}\bar{\rho}[\bar{\rho}-d\bar{\zeta}H]>0\,, \tag{4.30}\]
which implies that
\[\bar{\rho}<d\bar{\zeta}H\,,\qquad\text{if}\qquad F^{\prime\prime}>0\,, \tag{4.31}\]
\[\bar{\rho}>d\bar{\zeta}H\,,\qquad\text{if}\qquad F^{\prime\prime}<0\,. \tag{4.32}\]
In summary, if we consider the SHG solution at all orders in \(\alpha^{\prime}\), with the Hohm-Zwiebach action, we find that the general form of the function \(F(H)\) and its first derivative are constrained at the fixed point. In this case the fixed point is a "center" of the trajectories: since the real part of both eigenvalues is zero, we have circular orbits in the phase space, which never reach the center of the circle. Therefore, the fixed point is not a true attractor, but some kind of "general" stability is nevertheless achieved. To conclude, for \(\zeta=0\) we have to require \(F^{\prime\prime}<0\), independently of the equation of state, while for \(\zeta\neq 0\) we have more freedom, depending on the particular equation of state.
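This classification can be illustrated with a few lines of Python: since \({\rm Tr}\,\mathcal{J}=0\), the eigenvalues are \(\lambda=\pm\sqrt{-\det\mathcal{J}}\), so the sign of the determinant alone decides between a center and a saddle. The parameter values in the sketch are placeholders chosen only to exercise the different cases.

```python
import cmath

# Sketch: classify the fixed point from the determinant expression above.
def det_J(Y, H, rho, omega, zeta, d, Fpp):
    return -(Y**2 / (2 * H**2 * Fpp)) * rho * (1 + omega) * (rho * (1 + omega) - d * zeta * H)

for Fpp, zeta in [(-1.0, 0.0), (+1.0, 0.0), (+1.0, 2.0)]:   # placeholder F''(H*) and bulk viscosity
    det = det_J(Y=1.0, H=1.0, rho=1.0, omega=0.0, zeta=zeta, d=3, Fpp=Fpp)
    lam = cmath.sqrt(-det)               # Tr J = 0  =>  eigenvalues are +/- sqrt(-det J)
    kind = "center (marginally stable)" if det > 0 else "saddle (unstable)"
    print(f"F''={Fpp:+.1f}, zeta={zeta:.1f}: det={det:+.2f}, lambda = +/-{lam:.2f} -> {kind}")
```

The first case reproduces the stability requirement \(F^{\prime\prime}<0\) in the absence of viscosity, while the last one shows how a sufficiently large \(\bar{\zeta}\) can restore \(\det\mathcal{J}>0\) even for \(F^{\prime\prime}>0\).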
## 5 Gasperini-Maggiore-Veneziano and Meissner actions: solution and stability
Hereafter, following [9], we study the evolution of the SHG at first order in \(\alpha^{\prime}\). As said before, at first order in \(\alpha^{\prime}\) there is an ambiguity in the choice of the right action. Here we will consider two particular cases: the Gasperini-Maggiore-Veneziano (GMV) first-order corrected action [14] (see also [7; 9; 37; 38]), given by
\[S_{\alpha^{\prime}}=\frac{k\alpha^{\prime}}{8\ell_{\text{s}}^{d-1}}\int \mathrm{d}^{d+1}x\,\sqrt{|g|}e^{-\phi}\left(\mathcal{G}-(\nabla_{\mu}\phi\nabla ^{\mu}\phi)^{2}\right)\, \tag{5.1}\]
where \(\mathcal{G}\equiv R_{\mu\nu\kappa\lambda}R^{\mu\nu\kappa\lambda}-4R_{\mu\nu}R^ {\mu\nu}+R^{2}\) is the Gauss-Bonnet invariant, and the following manifestly O(d,d)-invariant action (from now on, the Meissner action), first introduced by Meissner [15],
\[S_{\alpha^{\prime}}=\ \frac{k\alpha^{\prime}}{8\ell_{\text{s}}^{d-1}}\int \mathrm{d}^{d+1}x\,\sqrt{|g|}e^{-\phi}\Big{(}\mathcal{G}-(\nabla_{\mu}\phi \nabla^{\mu}\phi)^{2}-4G^{\mu\nu}\nabla_{\mu}\phi\nabla_{\nu}\phi+2(\nabla_{ \mu}\phi\nabla^{\mu}\phi)\Box\phi\Big{)}\, \tag{5.2}\]
where \(G_{\mu\nu}\equiv R_{\mu\nu}-Rg_{\mu\nu}/2\) is the Einstein tensor and we use the d'Alembertian \(\Box\equiv g^{\mu\nu}\nabla_{\mu}\nabla_{\nu}\).
These two actions are related by a field redefinition and, while the Meissner action preserves the O(d,d) invariance, the GMV action does not. This is direct evidence that, in order to preserve this duality unambiguously, it is necessary to take into account all orders in \(\alpha^{\prime}\).
We then couple these actions to a SHG fluid, possibly with a bulk viscosity term. Since the equations of motion for both actions have the same form [7; 9], we first study the solution in a general way.
The equations of motion are
\[\rho =Cl_{\text{s}}^{-1-d}e^{-\phi}=\frac{1}{2}l_{\text{s}}^{1-d}e^{- \phi}\left(\dot{\phi}^{2}+d(d-1)H^{2}-2dH\dot{\phi}-\frac{3}{4}k\alpha^{ \prime}\mathcal{F}_{\rho}(H,\dot{\phi})\right)\,,\] \[\sigma =-l_{\text{s}}^{1-d}e^{-\phi}\left(-2\ddot{\phi}+2d\dot{H}+\dot {\phi}^{2}+d(d+1)H^{2}-2dH\dot{\phi}+\frac{k\alpha^{\prime}}{4}\mathcal{F}_{ \sigma}(H,\dot{\phi},\dot{H},\ddot{\phi})\right)\,,\] \[p_{\text{eff}} =\frac{1}{2d}l_{\text{s}}^{1-d}e^{-\phi}\left(-2d(d-1)\dot{H}+2 d\ddot{\phi}-d^{2}(d-1)H^{2}+2d(d-1)H\dot{\phi}-d\dot{\phi}^{2}+\frac{k\alpha^{ \prime}}{4}\mathcal{F}_{p}(H,\dot{\phi},\dot{H},\ddot{\phi})\right)\,, \tag{5.3}\]
where \(p_{\text{eff}}\) is the effective pressure defined in Eq. (3.4). The functions \(\mathcal{F}_{\rho}\,,\,\mathcal{F}_{\sigma}\,,\,\mathcal{F}_{p}\), which are combinations of \(\ddot{\phi}\), \(\dot{\phi}\), \(\dot{H}\) and \(H\) and depend on the particular action, are defined in Appendix A.
Imposing the constraint given by the string-hole solution of Eq. (20), i.e. \(\dot{\phi}=dH=\text{const.}\) and \(\sigma=2p_{eff}\), a simpler system of equations follows as
\[\rho= -\frac{\tilde{Y}}{2}\left[dH^{2}+\frac{3}{4}k\alpha^{\prime} \mathcal{F}_{\rho}\right]\,,\] \[\sigma=2p_{\text{eff}}= -\tilde{Y}\left[dH^{2}+\frac{k\alpha^{\prime}}{4}\mathcal{F}_{ \sigma}\right]\,,\] \[p_{\text{eff}}= -\frac{\tilde{Y}}{2}\left[dH^{2}-\frac{k\alpha^{\prime}}{4d} \mathcal{F}_{p}\right]\,, \tag{45}\]
where we define \(\tilde{Y}\equiv e^{-\phi}l_{\text{s}}^{1-d}\). Recalling that from Eqs. (14) one has \(\mathcal{F}_{\sigma}=\mathcal{F}_{\rho}\) and \(\mathcal{F}_{\rho}=-dH^{4}\Delta\), we obtain after some manipulations the following two equations
\[2\rho=2C\tilde{Y}= \tilde{Y}(-dH^{2}+d\frac{3}{4}k\alpha^{\prime}\Delta H^{4})\,, \tag{46a}\] \[2\omega\rho-6\zeta H= -\tilde{Y}(dH^{2}-k\frac{\alpha^{\prime}}{4}d\Delta H^{4})\,. \tag{46b}\]
By solving Eq. (46a) for \(H\) in terms of \(C\) it follows
\[H=\sqrt{\frac{d+\sqrt{d^{2}+8Cd\frac{3}{4}k\alpha^{\prime}\Delta}}{d\frac{3}{ 2}k\alpha^{\prime}\Delta}}\,. \tag{47}\]
On the other hand, from Eq. (46b) we have that the viscosity parameter \(\zeta\) is such that5
Footnote 5: Note that, for the sake of simplicity, we have chosen to represent the bulk viscosity in terms of the Hubble parameter. However, it should be emphasized that the reverse transformation is also achievable.
\[\zeta=\frac{\tilde{Y}}{3}\left(\frac{d}{2}H(1-\omega)-dH^{3}\frac{\Delta\alpha ^{\prime}}{8}k(1-3\omega)\right)\,. \tag{48}\]
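For later convenience, the two closed-form expressions above can be transcribed directly into code; the sketch below does so in Python (the numerical values of \(C\), \(k\), \(\alpha^{\prime}\) and \(\tilde{Y}\) are illustrative placeholders, and the values of \(\Delta\) are those reported in Appendix A):

```python
import math

# Direct transcription of the two closed-form expressions above (a sketch).

def hubble_from_C(C, d, k, alpha, Delta):
    # H at the string-hole fixed point, from the quadratic equation in H**2.
    A = 1.5 * d * k * alpha * Delta
    return math.sqrt((d + math.sqrt(d**2 + 6 * C * d * k * alpha * Delta)) / A)

def bulk_viscosity(omega, H, d, k, alpha, Delta, Ytilde):
    # Bulk viscosity needed to support the fixed point for a given omega.
    return (Ytilde / 3) * (0.5 * d * H * (1 - omega)
                           - d * H**3 * Delta * alpha * k * (1 - 3 * omega) / 8)

d, k, alpha, Ytilde = 3, 1.0, 1.0 / (2 * math.pi), 1.0    # placeholder units with l_s = 1
Delta_GMV, Delta_Meissner = 2 * d**2 + d - 2, d - 2        # values quoted in Appendix A
H = hubble_from_C(C=1.0, d=d, k=k, alpha=alpha, Delta=Delta_GMV)
print("H =", H, " zeta(omega=0) =", bulk_viscosity(0.0, H, d, k, alpha, Delta_GMV, Ytilde))
```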
### Evaluating the stability in general case
We now want to compute the stability of the above solution. To this aim, we should compute the Jacobian of the system \((\dot{H}(H,\dot{\phi}),\,\ddot{\phi}(H,\dot{\phi}))\), evaluated at the SHG solution.
We begin by writing the functions \(\mathcal{F}_{\rho},\mathcal{F}_{\sigma},\mathcal{F}_{p}\) as
\[\mathcal{F}_{\rho} =\mathcal{F}_{\rho}(\dot{\phi},H)\,,\] \[\mathcal{F}_{\sigma} =\mathcal{F}_{\sigma}(\ddot{\phi},\dot{H},\dot{\phi},H)\equiv \dot{H}\mathcal{F}_{\sigma,1}(\dot{\phi},H)+\ddot{\phi}\mathcal{F}_{\sigma,2 }(\dot{\phi},H)+\mathcal{F}_{\sigma,3}(\dot{\phi},H)\,,\] \[\mathcal{F}_{p} =\mathcal{F}_{p}(\ddot{\phi},\dot{H},\dot{\phi},H)\equiv\dot{H} \mathcal{F}_{p,1}(\dot{\phi},H)+\ddot{\phi}\mathcal{F}_{p,2}(\dot{\phi},H)+ \mathcal{F}_{p,3}(\dot{\phi},H)\,, \tag{49}\]
and the equations of motion, Eqs. (44), in the following form:
\[\rho =\frac{\tilde{Y}}{2}R(\dot{\phi},H)\,,\] \[\sigma =-\tilde{Y}\left[\dot{H}E_{2,1}+\ddot{\phi}E_{2,2}+E_{2,3}\right]\,,\] \[p_{eff} =\frac{\tilde{Y}}{2d}\left[\dot{H}E_{3,1}+\ddot{\phi}E_{3,2}+E_{3,3}\right]\,, \tag{50}\]
where the values of \(F_{\alpha,n}\) and \(E_{\alpha,n}\) are reported in Appendix A.
It is then simple to see that the first equation of Eqs. (44) can be used as a constraint, such that the equations of motion can be written as
\[\dot{H}E_{3,1}+\ddot{\phi}E_{3,2} =C_{1}\,,\] \[\dot{H}E_{2,1}+\ddot{\phi}E_{2,2} =C_{2}\,, \tag{51}\]
where we use \(\sigma=2p_{eff}\) and \(p_{eff}=p-dH\zeta=\omega\rho-dH\zeta\), and we define
\[C_{1} =-2d^{2}\zeta H\tilde{Y}^{-1}+\omega dR-E_{3,3}\,,\] \[C_{2} =2d\zeta H\tilde{Y}^{-1}-\omega R-E_{2,3}\,. \tag{49}\]
The solutions for the system are straightforward
\[\dot{H}=\frac{C_{1}E_{2,2}-C_{2}E_{3,2}}{E_{3,1}E_{2,2}-E_{3,2}E_{2,1}}\equiv \frac{r_{1}}{r}\,,\qquad\ddot{\phi}=\frac{C_{2}E_{3,1}-C_{1}E_{2,1}}{E_{3,1}E_ {2,2}-E_{3,2}E_{2,1}}\equiv\frac{r_{2}}{r}\,, \tag{50}\]
and the Jacobian is given by:
\[\mathcal{J}=\begin{pmatrix}\partial_{H}\dot{H}\,,&\partial_{\dot{\phi}}\dot{H }\\ \partial_{H}\ddot{\phi},&\partial_{\dot{\phi}}\dot{\phi}\end{pmatrix}=\frac{1} {r^{2}}\begin{pmatrix}r_{1}^{\prime}r-r_{1}r^{\prime}\,,&\dot{r}_{1}r-r_{1} \dot{r}\\ r_{2}^{\prime}r-r_{2}r^{\prime}\,,&\dot{r}_{2}r-r_{2}\dot{r}\end{pmatrix}\,, \tag{51}\]
here we use \(\partial_{H}(\ldots)\equiv(\ldots)^{\prime}\) and \(\partial_{\dot{\phi}}(\ldots)\equiv(\dot{\ldots})\).
Evaluating the Jacobian at the fixed point, in order to have an attractor both eigenvalues should have a negative real part or, equivalently, we should have
\[\mathrm{Tr}\mathcal{J}|_{(H_{*},\dot{\phi}_{*})}<0\quad,\quad\mathrm{Det} \mathcal{J}|_{(H_{*},\dot{\phi}_{*})}>0\,. \tag{52}\]
In order to solve such inequalities we can ignore the positive factor \(\frac{1}{r^{2}}\) in the Jacobian.
### Numerical Results
For graphical reasons, in the following we rescale the Hubble parameter and the viscosity parameter \(\zeta\) as \(H=\frac{h}{l_{\rm s}}\) and \(\zeta=\tilde{\zeta}\frac{\tilde{Y}}{l_{\rm s}}\), using the following values for the parameters: \(k=1\), \(d=3\) and \(2\pi\alpha^{\prime}=l_{\rm s}^{2}\). We then impose the inequalities in Eqs. (52) for the trace and the determinant (for the explicit expressions see Eq. (48) and Eq. (49) in Appendix A). We then make the substitution \(\tilde{\zeta}=\tilde{\zeta}(h)\) in both the determinant and the trace and impose the constraint \(\tilde{\zeta}>0\). Moreover, we impose the condition \(C>0\) that, using Eq. (47), implies for the GMV case \(h>\sqrt{\frac{8\pi}{57}}\), and for the Meissner case \(h>\sqrt{\frac{8\pi}{3}}\), where we have used the values of \(\Delta\) in Eq. (40) and Eq. (30) respectively.
These conditions should be solved in terms of \(\omega\) and \(h\), so as to obtain a region of stability in the plane \((\omega,h)\). The results for both the GMV and Meissner cases are shown in Fig. 1 (left and right panel, respectively). We illustrate both cases with and without a bulk viscosity (the orange region and the black line, respectively). In particular, we find a large region where stability is possible for certain intervals of \(\omega\) and \(h\). For the GMV case, as we can see from the left panel of Fig. 1, the admitted values of \(\omega\) are in the interval \(-1\lesssim\omega\lesssim 3\) while \(h\) should be such that \(0.72\lesssim h\lesssim 1.14\). On the other hand, the stability region for the Meissner case corresponds to the allowed intervals \(0\lesssim\omega\lesssim 3.5\) and \(h\gtrsim 3.5\) (see right panel of Fig. 1).
We find in both cases a large stability region. Even if the bulk viscosity \(\zeta\) is negligible or zero, we still have allowed values in the parameter space. Moreover, we note that, following the black line in the region plot that represents \(\zeta=0\), we are in agreement with the results obtained in [9] for \(\omega=0\). Indeed, we do not find any stability points for such a value of the equation of state.
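Schematically, the scan that produces these stability regions can be organized as in the sketch below, where `trace_J`, `det_J` and `zeta_tilde` stand for the explicit (and lengthy) expressions of Appendix A and of Eq. (48), left here as user-supplied callables; the lower bound on \(h\) encodes the condition \(C>0\) quoted above:

```python
import numpy as np

# Sketch of the (omega, h) scan; the physics enters only through the callables.
def stability_region(trace_J, det_J, zeta_tilde, h_min, omegas, hs):
    stable = np.zeros((len(omegas), len(hs)), dtype=bool)
    for i, w in enumerate(omegas):
        for j, h in enumerate(hs):
            zt = zeta_tilde(w, h)
            if h <= h_min or zt <= 0:                    # physical constraints: C > 0, zeta > 0
                continue
            stable[i, j] = trace_J(w, h, zt) < 0 and det_J(w, h, zt) > 0
    return stable                                        # boolean mask of the stability region
```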
## 6 Discussion and Conclusions
In this manuscript we analysed a gas of string-holes as the final stage of the string phase in the pre-big bang scenario, looking at the stability of the related solution. In particular, we extended the previous work [9], first studying the string-hole gas (SHG) solution also in the case of a complete O(d,d)-invariant action at all orders in \(\alpha^{\prime}\). Furthermore, we considered the possibility to characterize the SHG as a fluid with a bulk viscosity term, without constraints on the equation of state. We analysed this possibility in the contexts of the low-energy action, of the first-order-in-\(\alpha^{\prime}\) Gasperini-Maggiore-Veneziano and Meissner actions, and of the theory to all orders in \(\alpha^{\prime}\). We found that in the case of the low-energy
action, with a local dilaton potential, the contribution of the viscosity does not change the conclusion obtained in [9]. In all other cases the presence of viscosity allows us to have stable solutions. For the particular case of the Hohm-Zwiebach action, we found that the solution can be reached by requiring some conditions on the general function \(F(H)\); however, only a sort of general stability is obtained. Indeed, the fixed point represents a "center" of the trajectories: we have circular orbits in the phase space, which never reach the center of the circle. We also found that, relaxing the constraint on the equation of state at first order in \(\alpha^{\prime}\) (in [9] only the case \(\omega=0\) was considered), stable solutions are allowed even in the case of negligible (or zero) bulk viscosity, as we can easily see in Fig. 1 for the first-order \(\alpha^{\prime}\)-corrected actions.
Certainly, it remains to study and describe with more accuracy the string-hole/black-hole formation and how the stringy nature of the black-holes manifests itself when they reach the non-perturbative regime of string theory. Indeed, this manuscript is just a first step in this direction, with the attempt to take into account the non-ideal features of the SHG. As we have already mentioned, we are here considering a particular regime, where we are assuming a constant bulk viscosity. But at some point, when the SHG saturates the entropy bounds ([2, 37]), such viscosity should go to zero. Therefore, we need to include a dynamical evolution of the parameters of the theory to describe a complete evolution of the fluid. We postpone these analyses to future work. Furthermore, it would be very interesting to study the dynamical evolution of the SHG within the recent "hamiltonian" formalism described in [35, 36].
## Acknowledgements
We are very thankful to Robert H. Brandenberger, Giuseppe Fanizza, Maurizio Gasperini and Gabriele Veneziano for useful discussions and feedback on the manuscript. We are supported in part by INFN under the program TAsP (_Theoretical Astroparticle Physics_).
Figure 1: We show the stability region in the plane \((\omega,h)\), for the GMV case (on the left panel) and Meissner case (on the right panel). We plot the allowed region for \(\zeta\neq 0\) (orange region) and for \(\zeta=0\) (black line).
## Appendix A Details for the study at first order in \(\alpha^{\prime}\)
In this appendix we report the functions introduced in Sec. 5.1. We use the following parametrization for the functions \(\mathcal{F}_{\alpha}\)
\[\mathcal{F}_{\rho} =\mathcal{F}_{\rho}(\dot{\phi},H)\,,\] \[\mathcal{F}_{\sigma} =\mathcal{F}_{\sigma}(\ddot{\phi},\dot{H},\dot{\phi},H)\equiv\dot{ H}\mathcal{F}_{\sigma,1}(\dot{\phi},H)+\ddot{\phi}\mathcal{F}_{\sigma,2}(\dot{ \phi},H)+\mathcal{F}_{\sigma,3}(\dot{\phi},H)\,,\] \[\mathcal{F}_{p} =\mathcal{F}_{p}(\ddot{\phi},\dot{H},\dot{\phi},H)\equiv\dot{H} \mathcal{F}_{p,1}(\dot{\phi},H)+\ddot{\phi}\mathcal{F}_{p,2}(\dot{\phi},H)+ \mathcal{F}_{p,3}(\dot{\phi},H)\,. \tag{100}\]
The function \(R\) of Eqs. (100) is given by
\[R=\dot{\phi}^{2}+d(d-1)H^{2}-2dH\dot{\phi}-\frac{3}{4}k\alpha^{\prime} \mathcal{F}_{\rho}\,, \tag{101}\]
where the values of the functions \(E_{\alpha,n}\) are
\[E_{2,1} =2d+\frac{k\alpha^{\prime}}{4}\mathcal{F}_{\sigma,1}\,,\] \[E_{2,2} =-2+\frac{k\alpha^{\prime}}{4}\mathcal{F}_{\sigma,2}\,,\] \[E_{2,3} =\dot{\phi}^{2}+d(d+1)H^{2}-2dH\dot{\phi}+\frac{k\alpha^{\prime}} {4}\mathcal{F}_{\sigma,3}\,, \tag{102}\]
and
\[E_{3,1} =-2d(d-1)+\frac{k\alpha^{\prime}}{4}\mathcal{F}_{p,1}\,,\] \[E_{3,2} =2d+\frac{k\alpha^{\prime}}{4}\mathcal{F}_{p,2}\,,\] \[E_{3,3} =-d^{2}(d-1)H^{2}-d\dot{\phi}^{2}+2d(d-1)H\dot{\phi}+\frac{k \alpha^{\prime}}{4}\mathcal{F}_{p,3}\,, \tag{103}\]
where, for simplicity, we omit the arguments of the functions \(\mathcal{F}_{a}\).
The parameters \(c_{1}\) and \(c_{3}\) which, as we will see shortly, appear in the functions \(\mathcal{F}_{\alpha}\) are given by
\[c_{1} =-\frac{d}{3}(d-1)(d-2)(d-3)\,,\] \[c_{3} =\frac{4d}{3}(d-1)(d-2)\,. \tag{104}\]
Moreover for the particular case of the SHG solution (\(\dot{\phi}=dH=\text{const.}\)) we have the following relations between \(\mathcal{F}_{\rho}\,,\mathcal{F}_{\sigma}\,,\mathcal{F}_{p}\).
\[\mathcal{F}_{\rho}= \mathcal{F}_{\sigma}=-d\Delta H^{4}\,, \tag{105a}\] \[\mathcal{F}_{p}= d^{2}\Delta H^{4}=-d\mathcal{F}_{\rho}\,. \tag{105b}\]
We now report the various functions and parameters for the particular cases under consideration: the Meissner and GMV actions.
### A.1 Meissner case
The particular functions \(\mathcal{F}_{\alpha,n}\) used in (100) are
\[\mathcal{F}_{\rho}(\dot{\phi},H) =c_{1}H^{4}+c_{3}H^{3}\dot{\phi}-2d(d-1)H^{2}\dot{\phi}^{2}+\frac{ 4}{3}dH\dot{\phi}^{3}-\frac{1}{3}\dot{\phi}^{4}\,,\] \[\mathcal{F}_{\sigma,1}(\dot{\phi},H) =3c_{3}H^{2}-8d(d-1)H\dot{\phi}+4d\dot{\phi}^{2}\,,\] \[\mathcal{F}_{\sigma,2}(\dot{\phi},H) =\,-4d(d-1)H^{2}+8dH\dot{\phi}-4\dot{\phi}^{2}\,,\] \[\mathcal{F}_{\sigma,3}(\dot{\phi},H) =(c_{1}+dc_{3})H^{4}-4d^{2}(d-1)H^{3}\dot{\phi}+2d(3d-1)H^{2} \dot{\phi}^{2}-4dH\dot{\phi}^{3}+\dot{\phi}^{4}\,,\] \[\mathcal{F}_{p,1}(\dot{\phi},H) =12c_{1}H^{2}+6c_{3}H\dot{\phi}-4d(d-1)\dot{\phi}^{2}\,,\] \[\mathcal{F}_{p,2}(\dot{\phi},H) =3c_{3}H^{2}-8d(d-1)H\dot{\phi}+4d\dot{\phi}^{2}\,,\] \[\mathcal{F}_{p,3}(\dot{\phi},H) =3dc_{1}H^{4}-2(2c_{1}-dc_{3})H^{3}\dot{\phi}-(3c_{3}+2d^{2}(d-1) )H^{2}\dot{\phi}^{2}+4d(d-1)H\dot{\phi}^{3}-d\dot{\phi}^{4}\,, \tag{101}\]
In this case, the quantity \(\Delta\) appearing in Eqs. (100), and in the solutions (100) and (101) is given by \(\Delta=d-2\), such that
\[\mathcal{F}_{\rho}= \mathcal{F}_{\sigma}=-d(d-2)H^{4}\,, \tag{102a}\] \[\mathcal{F}_{p}= d^{2}(d-2)H^{4}\,. \tag{102b}\]
The explicit forms of the trace and the determinant, in terms of the variables \(h\) and \(\omega\) and of the function \(\tilde{\zeta}\), which we have used in the inequalities in Eqs. (101) to study the stability of the SHG solution, are
\[l_{s}r^{2}\left.\mathrm{Tr}\mathcal{J}\right|_{(H_{s},\dot{\phi} _{*})}= \frac{1}{16\pi^{4}}(27h(3h^{8}(51w-52)-240\pi h^{6}(2w-3)-648\pi h^ {5}\tilde{\zeta}+\] \[+16\pi^{2}h^{4}(9w+35)+192\pi^{2}h^{3}\tilde{\zeta}+128\pi^{3}h^ {2}(w-9)+384\pi^{3}h\tilde{\zeta}+768\pi^{4})\,, \tag{103}\]
\[l_{s}^{2}r^{4}\left.\mathrm{Det}\mathcal{J}\right|_{(H_{s},\dot{ \phi}_{*})}= -\frac{1}{1024\pi^{8}}243h(21h^{4}-24\pi h^{2}+16\pi^{2})(189h^{ 13}(21w^{2}-286w-19)+\right.\] \[-72\pi h^{11}(375w^{2}-4818w-713)-504\pi h^{10}(21w+65)\tilde{ \zeta}+\] \[+48\pi^{2}h^{9}(1461w^{2}-14814w-4451)+3168\pi^{2}h^{8}(19w+67) \tilde{\zeta}+\] \[-384\pi^{2}h^{7}(\pi(234w^{2}-2368w-746)+147\tilde{\zeta}^{2})-3 84\pi^{3}h^{6}(393w+473)\tilde{\zeta}+\] \[+1536\pi^{3}h^{5}(\pi(37w^{2}-446w-131)-3\tilde{\zeta}^{2})+\] \[+1536\pi^{4}h^{4}(85w-3)\tilde{\zeta}-2048\pi^{4}h^{3}(2\pi(3w^{ 2}-76w-15)-21\tilde{\zeta}^{2})+\] \[-4096\pi^{5}h^{2}(9w-25)\tilde{\zeta}-8192\pi^{5}h(8\pi w+3 \tilde{\zeta}^{2})-65536\pi^{6}\tilde{\zeta})\,. \tag{104}\]
### A.2 Gasperini-Maggiore-Veneziano case
The particular functions \(\mathcal{F}_{\alpha,n}\) used in (100) are
\[\mathcal{F}_{\rho}(H,\dot{\phi}) \equiv c_{1}H^{4}+c_{3}H^{3}\dot{\phi}-\dot{\phi}^{4}\,\] \[\mathcal{F}_{\sigma,1}(\dot{\phi},H) =3c_{3}H^{2}\,,\] \[\mathcal{F}_{\sigma,2}(\dot{\phi},H) =\,-12\dot{\phi}^{2}\,,\] \[\mathcal{F}_{\sigma,3}(\dot{\phi},H) =\,(c_{1}+dc_{3})H^{4}-4dH\dot{\phi}^{3}+3\dot{\phi}^{4}\,,\] \[\mathcal{F}_{p,1}(\dot{\phi},H) =12c_{1}H^{2}+6c_{3}H\dot{\phi}\,,\] \[\mathcal{F}_{p,2}(\dot{\phi},H) =3c_{3}H^{2}\,,\] \[\mathcal{F}_{p,3}(\dot{\phi},H) =3dc_{1}H^{4}-2(2c_{1}-dc_{3})H^{3}\dot{\phi}-3c_{3}H^{2}\dot{ \phi}^{2}+d\dot{\phi}^{4}\,. \tag{105}\]
Here we have \(\Delta=2d^{2}+d-2\), such that
\[\mathcal{F}_{\rho}= \mathcal{F}_{\sigma}=-d(2d^{2}+d-2)H^{4}\,,\] (A.12a) \[\mathcal{F}_{p}= d^{2}(2d^{2}+d-2)H^{4}\,.\] (A.12b)
The trace and the determinant, functions of \(\omega,h\) and of the function \(\tilde{\zeta}\) are the following:
\[l_{s}r^{2}\ \mathrm{Tr}\mathcal{J}|_{(H_{*},\dot{\phi}_{*})}= -\frac{1}{16\pi^{4}}(27h(57h^{8}(3783w-2381)+\] \[+48\pi h^{6}(139w+2725)-26928\pi h^{5}\tilde{\zeta}-64\pi^{2}h^{4 }(195w+683)+\] \[-22368\pi^{2}h^{3}\tilde{\zeta}+64\pi^{3}h^{2}(16w+141)+\] \[+3648\pi^{3}h\tilde{\zeta}-768\pi^{4}))\,,\] (A.13)
\[l_{s}^{2}r^{4}\ \mathrm{Det}\mathcal{J}|_{(H_{*},\dot{\phi}_{*})}= -\frac{1}{16\pi^{8}}243h(42h^{4}-15\pi h^{2}+2\pi^{2})(114912h^{ 13}(399w^{2}+995w-152)+\] \[-9\pi h^{11}(6150813w^{2}+6771378w-1634779)-306432\pi h^{10}(21w-1 )\tilde{\zeta}+\] \[+12\pi^{2}h^{9}(1231395w^{2}+717546w-315673)+288\pi^{2}h^{8}(54835 w-33881)\tilde{\zeta}+\] \[-96\pi^{2}h^{7}(\pi(9741w^{2}-7952w-3557)+\] \[+18816\tilde{\zeta}^{2})-96\pi^{3}h^{6}(21933w-63143)\tilde{\zeta }-192\pi^{3}h^{5}(\pi(641w^{2}+3143w+74)+\] \[+4332\tilde{\zeta}^{2})-192\pi^{4}h^{4}(1157w+6449)\zeta+512\pi^{ 4}h^{3}(\pi(27w^{2}+\] \[+223w-6)-291\tilde{\zeta}^{2})+512\pi^{5}h^{2}(81w+301)\tilde{ \zeta}+\] \[-1024\pi^{5}h(8\pi w-27\tilde{\zeta}^{2})-8192\pi^{6}\tilde{\zeta })\,.\] (A.14)
|
2310.12990 | Wave-informed dictionary learning for high-resolution imaging in complex
media | We propose an approach for imaging in scattering media when large and diverse
data sets are available. It has two steps. Using a dictionary learning
algorithm the first step estimates the true Green's function vectors as columns
in an unordered sensing matrix. The array data comes from many sparse sets of
sources whose location and strength are not known to us. In the second step,
the columns of the estimated sensing matrix are ordered for imaging using
Multi-Dimensional Scaling with connectivity information derived from
cross-correlations of its columns, as in time reversal. For these two steps to
work together we need data from large arrays of receivers so the columns of the
sensing matrix are incoherent for the first step, as well as from sub-arrays so
that they are coherent enough to obtain the connectivity needed in the second
step. Through simulation experiments, we show that the proposed approach is
able to provide images in complex media whose resolution is that of a
homogeneous medium. | Miguel Moscoso, Alexei Novikov, George Papanicolaou, Chrysoula Tsogka | 2023-09-22T01:28:15Z | http://arxiv.org/abs/2310.12990v1 | # Wave-informed dictionary learning for high-resolution imaging in complex media
###### Abstract
We propose an approach for imaging in scattering media when large and diverse data sets are available. It has two steps. Using a dictionary learning algorithm the first step estimates the true Green's function vectors as columns in an unordered sensing matrix. The array data comes from many sparse sets of sources whose location and strength are not known to us. In the second step the columns of the estimated sensing matrix are ordered for imaging using Multi-Dimensional Scaling with connectivity information derived from cross correlations of its columns, as in time reversal. For these two steps to work together we need data from large arrays of receivers so the columns of the sensing matrix are incoherent for the first step, as well as from sub-arrays so that they are coherent enough to obtain connectivity needed in the second step. Through simulation experiments, we show that the proposed approach is able to provide images in complex media whose resolution is that of a homogeneous medium.
High-resolution imaging in complex media faces challenges due to wavefront distortion caused by scattering from inhomogeneities. In this paper we introduce a new approach for imaging in inhomogeneous, random media involving
two basic components. The first is a sparse dictionary learning algorithm in order to estimate Green's function vectors between focal or source points in the image window and receiver locations on the array. The second is a Multi-Dimensional Scaling (MDS) algorithm to convert information about correlations of Green's function vectors into positions of the focal points in the image window.
To accomplish the first step, we use a sparsity promoting modification of the Method of Optimal Directions (MOD) [13] to learn an (unordered) dictionary of Green's function vectors that characterize the propagation of signals from a set of focal points, or sources in the image window, to the array. Here unordered means that we do not know which focal points are associated with the estimated column vectors of the dictionary. In this step we assume that an abundance of sensing measurements is available. Specifically, we have access to measurements for multiple signals emanating from many sparse sets of sources, but we do not have prior knowledge of their locations or amplitudes. This dictionary learning method enables us to estimate Green's function vectors with high accuracy under the condition that these vectors are sufficiently incoherent, which means that their normalized inner product is sufficiently small. Given a configuration of sources or focal points in the image window, this implies that the receiver array must be large enough. We present this dictionary learning step in Section 2.
The goal of the second step is to associate each Green's function vector with its corresponding focal point in the image window, which means that we want to find the correct order of the columns in the estimated matrix of Green's function vectors. We could back-propagate these vectors into the image window using a reference homogeneous medium. This is the Kirchhoff migration approach, which only works well when the fluctuations are weak [7]. For a given set of Green's function vectors we could also try to estimate the position of their focal points using source localization algorithms, commonly used in wireless communications [4, 21, 23, 22]. However, these algorithms use information based on distances between sources and receivers and are therefore very sensitive to noise. They are not suitable for imaging in media with strong fluctuations.
Instead of estimating the distance between a focal point and a receiver, we can obtain a much more accurate estimate of the distance between two nearby focal points. The correlation of the estimated Green's function vectors gives such an estimate but, of course, we do not know where their focal points are located in the image window. By cross correlating each Green's function vector with all the others we can identify its nearest neighbors. This is a key observation that allows us to generate a proxy distance between column or Green's function vectors by counting the smallest number of neighborhoods that connect them. This provides a connectivity-based proxy distance between all pairs of column vectors that we can use with the Multidimensional Scaling (MDS) algorithm [9] for identifying Green's function vectors with their focal points up to a rotation, translation and scaling. The resulting relative configuration of points can be spatially fixed with a few (two or three) known reference points in the two dimensional image window. The use of a proxy metric based on connectivity is done by the MDS-MAP algorithm [20, 18]. Constructing the connectivity-based proxy distance using cross-correlations is described in Section 3.
## 1 Imaging problem setup
Suppose that an array of \(N\) receivers records waves generated by sources located over a region of interest, called the image window. The receivers are located at points \(\vec{\boldsymbol{y}}_{j}\), and the sources at unknown locations \(\vec{\boldsymbol{x}}_{i}\). In Figure 1, it is assumed that the array is one-dimensional. The coordinates parallel to this array are the cross-range coordinates, and the ones orthogonal to it the range coordinates. The medium between the array and the unknown sources fluctuates randomly in space as illustrated in Figure 1. The Green's function that characterizes wave propagation in the random medium of a signal of frequency \(\omega\) from a point \(\vec{\boldsymbol{x}}\) to a point \(\vec{\boldsymbol{y}}\) satisfies the wave equation
\[\Delta\widehat{G}(\vec{\boldsymbol{x}},\vec{\boldsymbol{y}})+\kappa^{2}\,n^{2 }(\vec{\boldsymbol{x}})\,\widehat{G}(\vec{\boldsymbol{x}},\vec{\boldsymbol{y} })=\delta(\vec{\boldsymbol{x}}-\vec{\boldsymbol{y}}), \tag{1}\]
where \(\kappa=\omega/c_{0}\) is the wavenumber with \(c_{0}\) a constant reference wave speed. The random index of refraction is \(n(\vec{\boldsymbol{x}})=c_{0}/c(\vec{\boldsymbol{x}})\) with local wave speed \(c(\vec{\boldsymbol{x}})\). In a homogeneous medium, \(c(\vec{\boldsymbol{x}})\equiv c_{0}\) for any location \(\vec{\boldsymbol{x}}\) and, in this case, \(\widehat{G}(\vec{\boldsymbol{x}},\vec{\boldsymbol{y}})=\widehat{G}_{0}(\vec{ \boldsymbol{x}},\vec{\boldsymbol{y}})\), where
\[\widehat{G}_{0}(\vec{\boldsymbol{x}},\vec{\boldsymbol{y}})=\frac{\exp(i\, \kappa\,|\vec{\boldsymbol{x}}-\vec{\boldsymbol{y}}|)}{4\pi|\vec{\boldsymbol{ x}}-\vec{\boldsymbol{y}}|}\,. \tag{2}\]
In random media, however, the wave speed \(c(\vec{\boldsymbol{x}})\) depends on the position \(\vec{\boldsymbol{x}}\). We consider a variable wave speed satisfying
\[\frac{1}{c^{2}(\vec{\boldsymbol{x}})}=\frac{1}{c_{0}^{2}}\bigg{(}1+\sigma\mu( \frac{\vec{\boldsymbol{x}}}{l})\bigg{)}\,, \tag{3}\]
where \(l\) is the correlation length of the inhomogeneities that is characteristic of their size. In (3), \(\sigma\) determines the strength of the fluctuations around the
constant speed \(c_{0}\), and \(\mu(\cdot)\) is a stationary random process with zero mean and normalized autocorrelation function \(R(|\vec{\mathbf{x}}_{i}-\vec{\mathbf{x}}_{i^{\prime}}|)=\mathbb{E}(\mu(\vec{\mathbf{x}}_{i}) \mu(\vec{\mathbf{x}}_{i^{\prime}}))\), so \(R(0)=1\).
We write the data received on the array of \(N\) receivers in vector form with Green's function vector
\[\widehat{\mathbf{g}}(\vec{\mathbf{x}})=[\widehat{G}(\vec{\mathbf{y}}_{1},\vec{\mathbf{x}}), \widehat{G}(\vec{\mathbf{y}}_{2},\vec{\mathbf{x}}),\ldots,\widehat{G}(\vec{\mathbf{y}}_{N}, \vec{\mathbf{x}})]^{T} \tag{4}\]
and we introduce the \(N\times K\) sensing matrix
\[\mathbf{\mathcal{G}}=[\widehat{\mathbf{g}}(\vec{\mathbf{x}}_{1})\,\cdots\,\widehat{\mathbf{g} }(\vec{\mathbf{x}}_{K})] \tag{5}\]
defined on a grid \(\{\vec{\mathbf{x}}_{i}\}_{i=1,\ldots,K}\) spanning the image window. The sensing matrix in (5) maps a distribution of sources in the image window to the (single frequency) data received on the array. The multi-frequency case is described in Section 4. For a given configuration of sources on the grid represented by the vector \(\mathbf{x}\in\mathbb{C}^{K}\), the data recorded on the array is given by
\[\mathbf{y}=\mathbf{\mathcal{G}}\,\mathbf{x}\,, \tag{6}\]
where \(\mathbf{y}\in\mathbb{C}^{N}\). Here, \(\mathbf{x}\) is a vector whose \(j\)th component represents the complex amplitude of the source at location \(\vec{\mathbf{x}}_{j}\) in the image window, \(j=1,\ldots,K\).
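To make the data model (6) concrete, the following Python sketch (not part of the paper; the array geometry, wavenumber and source amplitudes are made-up illustration values) assembles the homogeneous-medium sensing matrix of Eq. (2) on a small grid and synthesizes one data vector from a sparse source configuration.

```python
import numpy as np

def homogeneous_sensing_matrix(receivers, grid, kappa):
    """G0[j, i] = exp(i*kappa*|y_j - x_i|) / (4*pi*|y_j - x_i|), cf. Eq. (2)."""
    d = np.linalg.norm(receivers[:, None, :] - grid[None, :, :], axis=-1)  # N x K distances
    return np.exp(1j * kappa * d) / (4.0 * np.pi * d)

rng = np.random.default_rng(0)
receivers = np.stack([np.linspace(-10.0, 10.0, 64), np.zeros(64)], axis=1)    # linear array
gx, gz = np.meshgrid(np.linspace(-5.0, 5.0, 20), np.linspace(40.0, 50.0, 20))
grid = np.stack([gx.ravel(), gz.ravel()], axis=1)                             # K = 400 focal points
G = homogeneous_sensing_matrix(receivers, grid, kappa=2.0 * np.pi)

x = np.zeros(grid.shape[0], dtype=complex)
x[rng.choice(grid.shape[0], size=4, replace=False)] = rng.uniform(0.5, 1.5, 4)  # few active sources
y = G @ x                                                                        # one data sample, Eq. (6)
```

Collecting many such samples as columns of \(\mathbf{Y}\) produces exactly the kind of data set used by the dictionary learning step of Section 2.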
Because the medium is random, the sensing matrix \(\mathbf{\mathcal{G}}\) in (6) is not known. Our imaging problem is to estimate this matrix from a set of \(M\) samples or observations \(\{\mathbf{y}_{i}\}_{i=1,\ldots,M}\), with \(\mathbf{y}_{i}=\mathbf{\mathcal{G}}\,\mathbf{x}_{i}\). The number of observations is
large with respect to the dimension of the vectors \(\mathbf{x}_{i}\), i.e., \(M\gg K\). Note that \(\mathbf{x}_{i}\) is also unknown but we assume that it is sparse, implying that the samples \(\mathbf{y}_{i}\) can be represented as a linear combination of a small number of columns of the unknown sensing matrix \(\mathbf{\mathcal{G}}\). Since we do not know \(\mathbf{x}_{i}\), both the locations and the amplitudes of the sources are unknown. We assume that for the few sources that are active for every sample the modulus of their amplitude takes values in a bounded interval away from zero.
We note that for the imaging problem, the coherence between the columns of the sensing matrix \(\mathbf{\mathcal{G}}\) increases as the grid in the image window becomes finer. This can be challenging for the sparse dictionary learning algorithm, described in the next section, as its convergence is guaranteed under incoherence or restricted isometry property assumptions [1]. Coherence is defined as the maximum of the normalized inner product between different columns of the matrix, _i.e._,
\[\nu=\max_{\begin{subarray}{c}i,j=1\\ i\neq j\end{subarray}}^{K}\frac{|\widehat{\mathbf{g}}(\mathbf{\bar{x}}_{i})^{*}\widehat {\mathbf{g}}(\mathbf{\bar{x}}_{j})|}{||\widehat{\mathbf{g}}(\mathbf{\bar{x}}_{i})||\ ||\widehat{\mathbf{g}}(\mathbf{\bar{x}}_{j})||}. \tag{7}\]
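The coherence (7) can be evaluated directly from the columns of any sensing matrix; the short NumPy helper below (our own sketch, the name `coherence` is invented) does exactly that and can be applied, for instance, to the matrix built in the previous snippet.

```python
import numpy as np

def coherence(G):
    """Mutual coherence nu of Eq. (7): largest normalized inner product between distinct columns."""
    Gn = G / np.linalg.norm(G, axis=0, keepdims=True)
    C = np.abs(Gn.conj().T @ Gn)
    np.fill_diagonal(C, 0.0)
    return C.max()
```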
Given the size of the recording array and the bandwidth, in order to limit the coherence between the columns of \(\boldsymbol{\mathcal{G}}\) we assume that the grid in the image window is not finer than the support of the point spread function in a reference homogeneous medium; see Figure 2-(a). This is essential for the first step of the approach, the dictionary learning step. The point spread function of an imaging system is the image one obtains when the signal from a single source is used as input.

Figure 2: Refocused spots with \((a)\) a large array used in the first step of the algorithm and \((b)\) a small array used in the second step.
The first step of our imaging algorithm uses dictionary learning and allows us to recover the columns of the sensing matrix up to a permutation. Although in most of the applications of dictionary learning the order of the columns is not an issue, it is essential in the imaging case. That is because even though we recover the Green's function vectors, the imaging problem is still not solved as we do not know their correspondence with the grid points in the image window. To create an image we need to associate each column of the estimated matrix \(\boldsymbol{\hat{\mathcal{G}}}\) with the corresponding focal point in the image window. This is challenging since the propagation medium is unknown, so we cannot just back-propagate the recovered Green's function vectors, as it is done in time reversal.
In the second step of our approach, we deal with the _focal spot localization problem_ where we associate each recovered Green's function vector \(\hat{\boldsymbol{g}}_{i}\) with its corresponding focal or source point in the image window. The key idea is the use of cross-correlations between the columns of the estimated matrix \(\boldsymbol{\hat{\mathcal{G}}}\) to identify the nearest neighbors of their focal or source points in the image window and then infer their associated focal points on the grid. Coherence becomes crucial in this step. We can do this by using for the cross-correlations a suitable subset of the elements of the estimated columns \(\hat{\boldsymbol{g}}_{i}\) corresponding to a fraction of the size of the recording array. An example is illustrated in Figure 2-(b) where a subarray of half the size is used. The grid reconstruction problem is then solved using MDS with a proxy distance, as in sensor network localization problems [18]. No Euclidian distance information is known but a proxy distance is obtained from connectivity information at limited range that is recovered from cross-correlations.
The cross-correlations formed can be interpreted as time reversal experiments. That is, the signals recorded on the array are re-emitted in the same medium and given the time-reversibility of the wave equation, those signals will focus at the location from which the original signal was emitted. Our connectivity reconstruction relies on this fundamental property of the wave equation and therefore is robust to the complexity of the medium. Our numerical simulations confirm that this is the case.
## 2 Dictionary learning
In this Section, we discuss an algorithm aimed at learning from the gathered data a dictionary \(\mathbf{A}\in\mathbb{C}^{N\times K}\) that represents the normalized sensing matrix (5). We assume that the recorded signals \(\boldsymbol{y}_{i}\in\mathbb{C}^{N}\), \(i=1,\ldots,M\), the data, come from only a few sources and can therefore be represented as a linear combination of a small number of columns in the dictionary \(\mathbf{A}\) we want to determine. This means that \(\boldsymbol{y}_{i}=\mathbf{A}\,\boldsymbol{x}_{i}\), where \(\boldsymbol{x}_{i}\in\mathbb{C}^{K}\) are sparse vectors that represent unknown collections of sources firing at the same time. For imaging applications, we can assume that the columns of \(\mathbf{A}\) have unit lengths. We also assume that we know the dimension \(K\) (with usually \(K\gg N\)), which is the number of points in the image window and therefore specifies the resolution of the image. An estimate of \(K\) can be based on the resolution of the imaging setup expected in a homogeneous medium. In a random medium, resolution in time reversal, but not in imaging, will improve [14, 6] and therefore \(K\) could be larger. We assume here that \(K\) is chosen based on a homogeneous medium.
To find the dictionary \(\mathbf{A}\) and the layout of sources, we define the matrix \(\mathbf{X}=[\boldsymbol{x}_{1},\ldots,\boldsymbol{x}_{M}]\in\mathbb{C}^{K \times M}\) and the data matrix \(\mathbf{Y}=[\boldsymbol{y}_{1},\ldots,\boldsymbol{y}_{M}]\in\mathbb{C}^{N \times M}\), and solve the problem
\[\begin{array}{rl}\min_{\mathbf{A},\mathbf{X}}&\|\mathbf{A}\mathbf{X}- \mathbf{Y}\|_{F}^{2}\\ \text{s.t.}&\|\boldsymbol{x}_{i}\|_{0}\leqslant s,\,i=1,\ldots,M,\end{array} \tag{8}\]
where \(\|\cdot\|_{0}\) counts the number of non-zero elements and \(s\) is the expected sparsity level. The decomposition \(\mathbf{Y}=\mathbf{A}\mathbf{X}\) is unique up to permutations of the columns of \(\mathbf{A}\) and rows of \(\mathbf{X}\) provided that the data \(\mathbf{Y}\) is rich enough [3].
How much data do we need, that is, how big should \(M\) be? As already noted, in imaging we usually have \(K\gg N\). If the sparsity \(s\) is fixed independent of \(N\) then the condition
\[M>K\log K \tag{9}\]
is sufficient [2, 17] for a suitable probabilistic model of \(\mathbf{A},\mathbf{X},\mathbf{Y}\).
Problem (8) is non-convex, as the constraint is not convex and both \(\mathbf{A}\) and \(\mathbf{X}\) are unknown. However, its solution can be found efficiently by means of an alternating optimization procedure that uses the \(\ell_{1}\)-norm instead of the sparsity count, provided the initialization is close enough to the true solution and the columns of \(\mathbf{X}\) are sparse enough [1]. Specifically, if \(\mathbf{A}\) is known, then \(\mathbf{X}\) in (8) can be obtained as an \(\ell_{1}\)-norm minimization problem
\[\min\|\mathbf{X}\|_{1}\ \ \text{subject to}\ \mathbf{A}\mathbf{X}=\mathbf{Y}\,, \tag{10}\]
that can be easily solved by several different algorithms [11, 19, 12, 5]. Here we solve (10) using a Generalized Lagrangian Multiplier Algorithm (GeLMA) [15].
In the next step, if \(\mathbf{X}\) is known, the minimization problem for \(\mathbf{A}\)
\[\min_{\mathbf{A}}\ \ \|\mathbf{AX}-\mathbf{Y}\|_{F}^{2}\ \ \, \tag{11}\]
can be easily solved. The exact solution is \(\mathbf{A}=\mathbf{Y}\mathbf{X}^{T}(\mathbf{X}\mathbf{X}^{T})^{-1}\), provided \(\mathbf{X}\mathbf{X}^{T}\) is invertible (actually stably invertible). We can normalize the columns of \(\mathbf{A}\) to one after they have been computed.
To summarize, in order to solve (8), we alternate between problems (10) and (11) to update \(\mathbf{X}\) and \(\mathbf{A}\) sequentially one after the other. Each iteration has two steps, starting from an initial guess \(\mathbf{X}_{0}\) and \(\mathbf{A}_{0}\). At the beginning of iteration \(l\geqslant 1\) we have \(\mathbf{A}_{l-1}\) and \(\mathbf{X}_{l-1}\), and we solve (10) with \(\mathbf{A}=\mathbf{A}_{l-1}\) to obtain \(\mathbf{X}_{l}\) using GeLMA. Then, we solve (11) with \(\mathbf{X}=\mathbf{X}_{l}\) fixed to find
\[\mathbf{A}_{l}=\mathbf{Y}\mathbf{X}_{l}^{T}(\mathbf{X}_{l}\mathbf{X}_{l}^{T}) ^{-1} \tag{12}\]
in the second step. This is very much like the Method of Optimal Directions (MOD) algorithm proposed in [13] for signal compression that uses Matching Pursuit for finding \(\mathbf{X}_{l}\) instead of GeLMA in the first step.
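A compact sketch of this alternating scheme is given below. It is illustrative only: the \(\ell_{1}\) step uses a plain ISTA proximal-gradient solver as a stand-in for GeLMA, the update (12) is written with a pseudo-inverse and a conjugate transpose (appropriate for complex data) rather than the exact formula, the random initialization ignores the closeness requirement mentioned above, and all step sizes, iteration counts and function names are our own choices.

```python
import numpy as np

def soft_threshold(Z, t):
    """Complex soft-thresholding, the proximal operator of t*||.||_1."""
    mag = np.abs(Z)
    return np.where(mag > t, (1.0 - t / np.maximum(mag, 1e-12)) * Z, 0.0)

def sparse_coding(A, Y, lam=1e-2, iters=200):
    """ISTA for min 0.5*||A X - Y||_F^2 + lam*||X||_1, a stand-in for the GeLMA step (10)."""
    L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the gradient
    X = np.zeros((A.shape[1], Y.shape[1]), dtype=complex)
    for _ in range(iters):
        X = soft_threshold(X - A.conj().T @ (A @ X - Y) / L, lam / L)
    return X

def learn_dictionary(Y, K, outer_iters=20, seed=0):
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((Y.shape[0], K)) + 1j * rng.standard_normal((Y.shape[0], K))
    A /= np.linalg.norm(A, axis=0, keepdims=True)
    for _ in range(outer_iters):
        X = sparse_coding(A, Y)                                   # step 1: update X with A fixed
        A = Y @ X.conj().T @ np.linalg.pinv(X @ X.conj().T)       # step 2: MOD-style update, cf. Eq. (12)
        A /= np.linalg.norm(A, axis=0, keepdims=True)             # renormalize columns to unit length
    return A, X
```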
Our numerical experiments indicate that, given a suitable initialization, this dictionary learning algorithm can construct the matrix of Green's functions for wave propagation in random media. However, the columns of this matrix, the dictionary, are unordered. They cannot be used for imaging because we do not know the points where they focus in the image window. The next Section addresses this problem.
## 3 Grid reconstruction
In this Section we describe an algorithm for finding the focal spots in the image window from the _estimated_ Green's function vectors \(\{\hat{\boldsymbol{g}}_{i}\}_{i=1}^{K}\). It is the range-free or connectivity based sensor localization algorithm [20], analyzed in [18]. Our main contribution here is to determine connectivity from cross-correlations of the Green's function vectors \(\{\hat{\boldsymbol{g}}_{i}\}_{i=1}^{K}\), which now must retain some coherence. However, the dictionary learning algorithm of the previous section requires incoherence, which means that the \(\nu\) in (7) for the estimated \(\{\hat{\boldsymbol{g}}_{i}\}_{i=1}^{K}\) is small. By using a subset of the components of the \(\{\hat{\boldsymbol{g}}_{i}\}_{i=1}^{K}\), corresponding to a subarray as illustrated in Figure 2, we increase their coherence. In this Section we use the same notation \(\{\hat{\boldsymbol{g}}_{i}\}_{i=1}^{K}\) for their subsampled components.
This increased coherence is essential for the connectivity-based localization algorithm because it allows us to introduce a graph \(G=(V,E)\) where the vertex set \(V=\{1,2,\ldots,K\}\) is associated with the estimated Green's function vectors \(\{\boldsymbol{\hat{g}}_{i}\}_{i=1}^{K}\). A pair of vertices \((i,j)\) is then connected by an edge in \(E\), and assigned the value one in the adjacency matrix of the graph, if the cross-correlation \(\boldsymbol{\hat{g}}_{i}^{*}\boldsymbol{\hat{g}}_{j}\) is sufficiently close to one in absolute value. Otherwise, the pair \((i,j)\) is not connected, and zero is entered in the adjacency matrix. The size of the subarray of receivers is adjusted so that each vertex has up to \(k=2r\) edges, where \(r=2,3\) is the ambient dimension of the image window, assumed known. This is special for our imaging setup and could be generalized depending on the behavior of the cross-correlations.
The proxy distance between two Green's function vectors is now the geodesic graph distance between their corresponding vertices. That is, the proxy distance between \(\hat{\boldsymbol{g}}_{i}\) and \(\hat{\boldsymbol{g}}_{j}\), denoted by \(\hat{d}_{ij}\), is the number of edges in the shortest path connecting \(i\) and \(j\). We use this proxy distance as a replacement of the Euclidean distance between pairs of focal points in the image window associated with Green's function vectors in the MDS algorithm. The resulting configuration of focal points \(\hat{Z}=[\hat{\boldsymbol{x}}_{1},\hat{\boldsymbol{x}}_{2},\ldots,\hat{ \boldsymbol{x}}_{K}]^{T}\) in the image window provides an estimate for the true configuration of focal points \(Z=[\boldsymbol{\vec{x}}_{1},\boldsymbol{\vec{x}}_{2},\ldots,\boldsymbol{\vec{ x}}_{K}]^{T}\), up to rotation, translation and scaling. This is the MDS-MAP algorithm [20] with our correlation-based proxy distance.
When the true Euclidean distance \(D=(d_{ij})\) is used instead of \(\hat{D}=(\hat{d}_{ij})\) the classical metric MDS algorithm [9] recovers the configuration of focal points \(Z=[\boldsymbol{\vec{x}}_{1},\boldsymbol{\vec{x}}_{2},\ldots,\boldsymbol{\vec{x}}_{K}]^{T}\) up to rotation and translation. In this case the input is a \(K\times K\) squared distance matrix \(D\) with entries \(d_{ij}=(\boldsymbol{\vec{x}}_{i}-\boldsymbol{\vec{x}}_{j})^{T}(\boldsymbol{\vec{x}}_{i}-\boldsymbol{\vec{x}}_{j})\), and the output is the \(K\times r\) configuration matrix of focal points \(Z\). We have that [9]
\[-\frac{1}{2}LDL=LZZ^{T}L\,, \tag{13}\]
where \(L=\mathbf{I}_{K}-\mathbf{1}_{K}\mathbf{1}_{K}^{T}/K\) is a centering matrix, with \(\mathbf{I}_{K}\) the \(K\times K\) identity matrix, and \(\mathbf{1}_{K}\) the column vector of all ones. This means that the matrices \(ZZ^{T}\) and \(-D/2\) are equal when the center of mass of the configuration is moved to zero. For the Euclidean distance matrix \(D\) the focal point reconstruction algorithm (1) determines the Euclidean coordinates of the focal points.
The rank of the matrix \(P=-\frac{1}{2}LDL\) equals the ambient dimension \(r\) of the image window when the Euclidean distance matrix \(D\) is used in the MDS algorithm. When we use the geodesic distance \(\hat{D}\) on the graph then the rank of \(P\) is not equal to \(r\) any more. However, the first \(r\) singular vectors of \(P\) are close to the true coordinates \(Z\) (up to centering and rotation) [18]. This is illustrated in Figure 3. The absolute location of the focal points in the image window can be determined using the true location of a few of them, the anchors. These anchors allow us to find the proper rigid transformation and scaling to superimpose the given configuration over them. The anchors can be known a priori or their location can be estimated using coherent interferometric imaging [8]. The number of anchors needed is small, typically \(r+1\).

```
INPUT: \(N\times K\) matrix \(\hat{\mathbf{G}}\) with columns \(\hat{\mathbf{g}}_{i}\), space dimensions \(r=2,3\).
OUTPUT: Matrix \(\hat{Z}\) whose column vectors are the estimated coordinates of the grid points \(\hat{\mathbf{\bar{x}}}_{i}\), \(i=1,\ldots,K\).
Compute \(G=(V,E)\), with \(V=\{1,2,\ldots,K\}\) and \(E\) so that each node is connected to \(2r\) neighbors; those corresponding to the 2r-largest values of \(|\hat{\mathbf{g}}_{i}^{*}\hat{\mathbf{g}}_{j}|\).
Compute the proxy for distance matrix \(\hat{D}\):
  if \((i,j)\in E\) then \(\hat{d}_{ij}=1\)
  else \(\hat{d}_{ij}=\) shortest path along \(G\)
  endif
Compute \(P=-\frac{1}{2}L\hat{D}L^{T}\), where \(L=\mathbf{I}_{K}-\mathbf{1}_{K}\mathbf{1}_{K}^{T}/K\).
Diagonalize \(P\): \(P=U\Sigma U^{T}\).
Compute \(\hat{Z}=U_{r}\Sigma_{r}^{1/2}\).
```
**Algorithm 1** Reconstruction of focal points in image window
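The following Python transcription of Algorithm 1 is our own illustrative sketch, not the authors' code. It assumes SciPy is available and that the \(2r\)-nearest-neighbor graph, symmetrized here so that shortest paths are well defined, is connected; the hop-count matrix \(\hat{D}\) is used directly in place of the squared-distance matrix, following the algorithm box literally, and the function name `reconstruct_grid` is invented.

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path

def reconstruct_grid(G_hat, r=2):
    """MDS-MAP-style reconstruction from estimated Green's function vectors (columns of G_hat)."""
    K = G_hat.shape[1]
    Gn = G_hat / np.linalg.norm(G_hat, axis=0, keepdims=True)
    C = np.abs(Gn.conj().T @ Gn)                      # cross-correlations |g_i^* g_j|
    np.fill_diagonal(C, -np.inf)
    adj = np.zeros((K, K))
    for i in range(K):                                 # connect each node to its 2r nearest neighbors
        adj[i, np.argsort(C[i])[-2 * r:]] = 1.0
    adj = np.maximum(adj, adj.T)                       # symmetrize (an implementation choice)
    D_hat = shortest_path(adj, unweighted=True)        # geodesic hop counts on the graph
    L = np.eye(K) - np.ones((K, K)) / K                # centering matrix
    P = -0.5 * L @ D_hat @ L
    vals, vecs = np.linalg.eigh(P)                     # P is symmetric
    top = np.argsort(vals)[-r:][::-1]
    return vecs[:, top] * np.sqrt(np.clip(vals[top], 0.0, None))   # K x r coordinates, up to rigid motion and scale
```

The returned configuration still needs the anchor-based rigid transformation and scaling described above before it can be compared with the true grid.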
## 4 Numerical experiments
To simulate wave propagation in random media we use the random travel time model ([8, 16] and references therein) which provides an analytical approximation for the Green's function in (1) in the high-frequency regime in random media with weak fluctuations and large correlation lengths \(\ell=100\lambda\) compared to the central wavelength \(\lambda\), given by
\[\widehat{G}(\vec{\boldsymbol{x}},\vec{\boldsymbol{y}})=\widehat{G}_{0}(\vec{ \boldsymbol{x}},\vec{\boldsymbol{y}})\exp\left[i\sigma\kappa|\vec{\boldsymbol{ x}}-\vec{\boldsymbol{y}}|\int_{0}^{1}\mu(\frac{\vec{\boldsymbol{x}}}{l}+\frac{s}{l}( \vec{\boldsymbol{y}}-\vec{\boldsymbol{x}}))\,ds\right]. \tag{14}\]
Comparing Eqs. (14) and (2) for a homogeneous medium we see that, in this regime, only the phases are perturbed by the random medium while the magnitudes remain unchanged.
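The snippet below is a rough numerical rendering of Eq. (14), included only for illustration. The random field \(\mu\) is generated by smoothing white noise, which is one convenient choice of stationary process and not necessarily the one used in the paper; the field lives on a finite box and returns zero outside it, and the line integral is approximated with a midpoint rule. All parameter values and function names are ours.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.interpolate import RegularGridInterpolator

def make_mu(extent=200.0, n=512, seed=0):
    """Zero-mean, unit-variance stationary field with O(1) correlation length in its own coordinates."""
    rng = np.random.default_rng(seed)
    field = gaussian_filter(rng.standard_normal((n, n)), sigma=n / extent)
    field = (field - field.mean()) / field.std()
    axis = np.linspace(0.0, extent, n)
    return RegularGridInterpolator((axis, axis), field, bounds_error=False, fill_value=0.0)

def random_travel_time_green(x, y, kappa, sigma, mu, corr_len, n_s=100):
    """Eq. (14): homogeneous Green's function times a random phase from a line integral of mu."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    dist = np.linalg.norm(x - y)
    s = (np.arange(n_s) + 0.5) / n_s                               # midpoint quadrature nodes
    pts = (x[None, :] + s[:, None] * (y - x)[None, :]) / corr_len  # points (x + s(y - x)) / l
    phase = sigma * kappa * dist * mu(pts).mean()                  # approximate line integral
    return np.exp(1j * kappa * dist) / (4.0 * np.pi * dist) * np.exp(1j * phase)
```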
Figure 3: The singular values of the doubly centered distance matrix \(P\) normalized by the maximal one are plotted with red circles when \(D\) is used (rank is exactly 2) and with blue stars when \(\hat{D}\) is used. There are exactly 2 top singular values in the second case as well, plotted with blue stars, and the lower eigenvalues drop to zero fast but are not immediately zero as with the red circles.
In our numerical experiments the distance between the array and the image window \(L=100\ell\) is large, so the small distortions produced by each inhomogeneity build up over the propagation distance and are significant at the receivers. The strength of the fluctuations \(\sigma\) is scaled by the dimensionless parameter \(\lambda/\sqrt{lL}\), for which the standard deviation of the random phase fluctuations in the Green's function is \(\mathcal{O}(1)\). The strength of the fluctuations \(\tilde{\sigma}=\sigma/(\lambda/\sqrt{lL})\) in the simulations is \(\tilde{\sigma}=0.6\) or \(\tilde{\sigma}=0.8\).
Figure 4: Cross-correlations of the sensing matrix. Left: cross-correlations of the sensing matrix in the random medium. Center: cross-correlations of the sensing matrix in the homogeneous medium. Right cross-correlations between the sensing matrix in the random and homogeneous media. Strength of the fluctuations of the random medium \(\tilde{\sigma}=0.8\).
Figure 5: Maximum correlation between each estimated column and the true ones in the sensing matrix, as in (15). In red the results when the columns of the matrix are more incoherent (\(a=48\ell\)). In blue the results when the columns of the matrix are more coherent (\(a=24\ell\)). Sparsity \(s=4\) on the left and \(s=8\) on the right.
We consider the following setup for our numerical simulations. In the first step of dictionary learning for the sensing matrix \(\mathbf{\mathcal{G}}\) we use the multi-frequency data recorded with a large array aperture \(a=48\ell\) with \(N_{r}=145\) equally spaced receivers (see Fig. 2). In the second step for the grid reconstruction, we use the data corresponding to half the array aperture, so \(a=24\ell\). The bandwidth \([0.5f,f]\), with \(f=c_{0}/\lambda\), is discretized with \(N_{f}=10\) equally spaced frequencies, and is the same for both steps of the algorithm. We organize the multiple frequency data column-wise, so
\[\mathbf{Y}=[\mathbf{Y}(f_{1})^{\intercal},\mathbf{Y}(f_{2})^{\intercal},\dots,\mathbf{Y}(f_{N_{f}})^{\intercal}]^{\intercal}\]
and the multi-frequency sensing matrix is now
\[\widehat{\mathbf{g}}_{i}=[\widehat{\mathbf{g}}(\mathbf{\bar{x}}_{i},f_{1})^{\intercal}, \widehat{\mathbf{g}}(\mathbf{\bar{x}}_{i},f_{2})^{\intercal},\dots,\widehat{\mathbf{g}}( \mathbf{\bar{x}}_{i},f_{N_{f}})^{\intercal}]^{\intercal},\]
\(i=1,\dots,K\). Thus, the sensing matrix \(\mathbf{\mathcal{G}}=[\widehat{\mathbf{g}}_{1}\,\cdots\,\widehat{\mathbf{g}}_{K}]\) has dimensions \(N\times K\), with \(N=N_{r}N_{f}\) and \(K=400\). The sampling of the \(20\times 20\) points in the image window is based on the homogeneous medium array resolution, \(O(\lambda L/a)\) in cross-range and \(O(c_{0}/B)\) in range [10, 6].
In Figure 4, we assume that the sensing matrices corresponding to a random medium \(\mathbf{\mathcal{G}}\) and to the homogeneous medium \(\mathbf{\mathcal{G}}_{0}\) are known, and we show the cross-correlation matrices of \(\mathbf{\mathcal{G}}^{*}\mathbf{\mathcal{G}}\) (left), \(\mathbf{\mathcal{G}}_{0}^{*}\mathbf{\mathcal{G}}_{0}\) (center), and \(\mathbf{\mathcal{G}}^{*}\mathbf{\mathcal{G}}_{0}\) (right). For the homogeneous medium the Green's function used is given by (2). Each row \(i\) in these images corresponds to a time reversal experiment where a source located at \(\mathbf{\bar{x}}_{i}\) emits a pulse, and the recorded signals are time reversed and emitted back into the medium. When the waves are re-emitted into the same medium in which the measurements were obtained, as in the left and center images of Figure 4, they retrace the original scattering process and arrive back approximately at the point at which they were emitted, that is, the focal point. However, when the back-propagation is done in a different medium, as in the right image of this figure, there is no re-focusing. In Figure 4, the large values (lighter blue color) correspond to re-focusing points. This figure shows that (a) time reversal of waves into random and homogeneous media are similar, and (b) that we cannot use the homogeneous medium to recover this structure if there is scattering.
In our numerical experiments, we assume that we have a diverse set of data \(\mathbf{Y}=[\mathbf{y}_{1},\mathbf{y}_{2},\dots,\mathbf{y}_{M}]\), with \(\mathbf{y}_{i}=\mathbf{\mathcal{G}}\,\mathbf{x}_{i}\). Both the sensing matrix \(\mathbf{\mathcal{G}}\in C^{N\times K}\) and the sparse vectors \(\mathbf{x}_{i}\in\mathbb{C}^{K}\) are unknown. We assume data corresponding to a large number of experiments, so \(M\gg K\). Given this set of data, we want to recover the columns of the sensing matrix \(\mathbf{\mathcal{G}}=[\widehat{\mathbf{g}}_{1}\,\cdots\,\widehat{\mathbf{g}}_{K}]\), whose rank is approximately \(200<K=400\) and whose
coherence is \(\nu=0.7\). The sensing matrix is rank-deficient because the resolution of the image window is high, with pixel sizes \(\lambda L/a\) in cross-range and \(c_{0}/B\) in range.
The results of the first step of the proposed strategy are depicted in Figure 5 for large (red lines) and small (blue lines) arrays. We solve the problem (8) as described in Section 2. To measure the success of this first step we form, for every (normalized) recovered column \(\hat{\boldsymbol{g}}_{i}\), the cross-correlations with all the columns of the true sensing matrix \(\boldsymbol{\mathcal{G}}\), and represent in Figure 5 the maximum value
\[C_{max}(i)=\max_{j}|\hat{\boldsymbol{g}}_{i}^{T}\boldsymbol{\widehat{g}}_{j}| \tag{15}\]
for a sparsity level \(s=4\) (left) and \(s=8\) (right). We observe values very close to \(1\) in both cases when the columns of the sensing matrices are for large array apertures (red lines). This means that the true Green's function vectors are recovered when large aperture arrays are used because they are incoherent. However, when smaller arrays are used (blue lines) Green's function vectors are coherent and some of them are not recovered. It is very important to recover accurately all, or almost all, Green's function vectors because, otherwise, we cannot establish their connectivity properly and, therefore, we cannot reconstruct the grid in the image window in the second step.
Figure 6 shows the results for the grid reconstructions accomplished with algorithm (1) described in Section 3 using \(k=4\) neighbors. This algorithm provides the correspondence between the Green's function vectors found in the first step and their focal points in the image window. From left to right we show the results when (left) all the pairwise Euclidean distances between the focal points are known in a homogeneous medium, (second from the left) when only Euclidean distances between the four nearest neighbors are known in a homogeneous medium, (second from the right) using only connectivity information in a random medium with \(\tilde{\sigma}=0.6\), and (right) using only connectivity information in a random medium with \(\tilde{\sigma}=0.8\). In all the cases, the sparsity level is \(s=8\). The algorithm (1) provides grid positions up to a rigid transformation and scaling. We post-process the results shown in the second from the right and right images in Figure 6 to transform them to absolute positions using two anchors. See Figure 7.

Figure 6: From left to right: Grid reconstruction from true Euclidean distances using the MDS algorithm when all the pairwise distances are assumed known; from true Euclidean distances when only distances corresponding to the four nearest neighbors are assumed known; using the MDS-MAP algorithm with geodesic graph distances for \(\tilde{\sigma}=0.6\); and using the MDS-MAP algorithm with geodesic graph distances for \(\tilde{\sigma}=0.8\). Sparsity \(s=8\) in all cases.
We observe in Figure 7 that the grids are quite well reconstructed near the center but bent towards the edges. This occurs because our geodesic graph distance is the scaled \(l_{1}\) distance on the grid, and an embedding of such distances into Euclidean spaces leads to such distortions. Naturally, there is no grid deformation shown in the left image of Figure 6 since Euclidean distances are used.
After the two steps of the proposed strategy, we recover the ordered sensing matrix \(\hat{\mathbf{\mathcal{G}}}\). We can, therefore, image any signal that comes to the array from the image window. In Figure 8, we back-propagate a signal \(\widehat{\mathbf{g}}(\mathbf{\vec{x}}_{j})\) from a source located at \(\mathbf{\vec{x}}_{j}\) using the recovered Green's function vectors, so the image formed at points \(\mathbf{\vec{x}}_{i},\ i=1,\ldots,K\) is
\[\mathcal{I}(\mathbf{\vec{x}}_{i};\mathbf{\vec{x}}_{j})=\left|\hat{\mathbf{g}}(\mathbf{\vec{x}}_{i})^{*}\,\widehat{\mathbf{g}}(\mathbf{\vec{x}}_{j})\right|. \tag{16}\]
As before, the hat in (16), denotes the recovered Green's function vectors of the sensing matrix using the two step method introduced here. As illustrated in Figure 8, the image produced (right) is similar to the one obtained using
Figure 7: Using two points as anchors, _i.e._ assuming the location of those two points is known, we can estimate the scaling and the rotation needed to recover the absolute grid positions. We compare the recovered locations (red stars) with the true ones (blue circles) where \(\tilde{\sigma}=0.6\) (left) and \(\tilde{\sigma}=0.8\) (right). Sparsity \(s=8\) in both cases. |
2309.08673 | A Two-Level Linear Dependent Type Theory | We present a type theory combining both linearity and dependency by
stratifying typing rules into a level for logics and a level for programs. The
distinction between logics and programs decouples their semantics, allowing the
type system to assume tight resource bounds. A natural notion of irrelevancy is
established where all proofs and types occurring inside programs are fully
erasable without compromising their operational behavior. Through a heap-based
operational semantics, we show that extracted programs always make
computational progress and run memory clean. Additionally, programs can be
freely reflected into the logical level for conducting deep proofs in the style
of standard dependent type theories. This enables one to write resource safe
programs and verify their correctness using a unified language. | Qiancheng Fu, Hongwei Xi | 2023-09-15T18:04:35Z | http://arxiv.org/abs/2309.08673v1 | # A Two-Level Linear Dependent Type Theory
###### Abstract.
We present a type theory combining both linearity and dependency by stratifying typing rules into a level for logics and a level for programs. The distinction between logics and programs decouples their semantics, allowing the type system to assume tight resource bounds. A natural notion of irrelevancy is established where all proofs and types occurring inside programs are fully erasable without compromising their operational behavior. Through a heap-based operational semantics, we show that extracted programs always make computational progress and run memory clean. Additionally, programs can be freely reflected into the logical level for conducting deep proofs in the style of standard dependent type theories. This enables one to write resource safe programs and verify their correctness using a unified language.
type theory, linear logic, computational relevancy, heap semantics
In order to show that TLL guarantees productive programs and safe resource usage, we develop a type directed erasure procedure and a heap semantics inspired by Turner and Wadler (Turner and Wadler, 2011). During erasure, the syntax tree of a well-typed program is stripped of all type annotations and computationally irrelevant terms. The program extracted from erasure is then evaluated using our heap semantics. As the program evaluates, heap cells are dynamically allocated and freed when linearly typed values are constructed and consumed. We prove that our calculus is sound with regards to this erasure procedure and heap semantics, ensuring evaluation progress and safe memory usage at runtime.
All lemmas and theorems reported in this paper are formalized and proven correct in Coq (Tull, 2012). We also implement a compiler in OCaml that compiles TLL programs into C. Proofs, source code and example programs are available in our git repository 1.
Footnote 1: [https://github.com/qcfu-bu/TLL-arxiv-repo](https://github.com/qcfu-bu/TLL-arxiv-repo)
In summary, we make the following contributions:
* First, we design TLL, a two-level linear dependent type system. By stratifying the typing rules into a logical level and a program level, we are able to characterize proofs and programs more precisely.
* Second, we study the meta-theoretical properties of the two levels. We show that the logical level exhibits qualities such as confluence and strong normalization that make it suitable for logical reasoning.
* Furthermore, we design an erasure procedure and heap semantics that model the behavior of programs at runtime. Using this semantics, we show programs extracted from erasure run memory clean.
* The entire calculus with its meta theories is formalized and proven correct in Coq. We also implement a compiler in OCaml with many supporting examples.
## 2. Overall Structure
The syntax of TLL is presented in Figure 2.
The typing rules of TLL are stratified into a level for logics and a level for programs. This is expressed formally through the two judgments depicted in Figure 1. The logical typing judgment \(\Gamma\vdash m:A\) states that term \(m\) has type \(A\) under logical context \(\Gamma\). The program typing judgment \(\Gamma;\Delta\vdash m:A\) states that term \(m\) has type \(A\) under logical context \(\Gamma\) and program context \(\Delta\).
TLL utilizes two sorts L and U to distinguish between the modalities of types (linear and non-linear respectively). When an arbitrary type \(A\) is of the sort L, we say that this type is linear. If \(A\) is of sort U, we say that this type is non-linear. In order to avoid Girard's paradox, sorts can be endowed with universe levels in the usual way, but we do not do so in this paper for the sake of presentation clarity.
From a logical typing perspective, modality has no effect. This follows the intuition that hypothetical reasoning about resources does not consume them. The type system at the logical level essentially boils down to standard dependent type theory with extra modality and relevancy annotations.
Figure 1. Logical and Program Typing
At the program level, the modalities of types come into effect. Computationally relevant terms of linear types must be used exactly once and terms which are computationally irrelevant or of non-linear types can be used freely. This is accomplished by carefully controlling how contraction and weakening rules are applied to the program context \(\Delta\).
For well-typed programs, a type directed erasure procedure can be carried out to remove all type annotations and sub-terms occurring in computationally irrelevant positions. The erasure soundness theorems guarantee that programs extracted by erasure always make computational progress in a manner that is compatible with their original non-erased counterparts.
## 3. Logical Typing
This section describes the typing rules in the logical level. Some of these rules will appear to be redundant as type modality and computational relevancy hold no weight at the logical level where essentially everything is computationally irrelevant. However, the importance of these rules lies in their interactions with program typing which is presented in Section 4.
### Type Formation
The type formation rules presented in Figure 4 appear at the logical level. They determine the canonical forms of types. An obvious departure from standard formalizations of dependent type theory is the presence of two kinds of \(\Pi\)-types.
The first of these is the \(\Pi 0\)-type of the form \(\Pi_{t}\{x:A\}.B\). We refer to the \(\lambda\)-terms inhabiting \(\Pi 0\)-types at the program level as \(\lambda 0\)-programs. For \(\lambda 0\)-programs, their arguments may only be used irrelevantly within their bodies. This is similar in spirit to the \(\Lambda\)-quantifier of System F where type parameterized terms behave computationally the same regardless of the choice of type instantiation. However, \(\Pi 0\)-types are richer than \(\Lambda\)-quantifiers in the sense that they can depend on arbitrary terms and not just types.
The \(\Pi 1\)-type of the form \(\Pi_{t}(x:A).B\) is the usual function type. Similarly to the \(\Pi 0\)-type case, we refer to the \(\lambda\)-terms inhabiting \(\Pi 1\)-types at the program level as \(\lambda 1\)-programs. The arguments of \(\lambda 1\)-programs are allowed to be used relevantly in their bodies.
Figure 4. Type Formation
Figure 3. Logical Context
Figure 2. Syntax of TLL
One final detail that we want to emphasize about \(\Pi\)-types in TLL is the sort annotation \(t\). It is clear from the typing rules here that sort \(s\) of the domain, sort \(r\) of the codomain and \(t\) are not correlated. The \(t\) annotation controls the modality of the overall \(\Pi\)-type intrinsically, meaning that if \(t\) is set to L, then the \(\lambda\)-programs inhabiting this type must be applied exactly once. In Section 4, we will see how the \(t\) annotation imposes constraints on \(\lambda\)-program construction.
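The distinction between the two \(\Pi\)-type formers and the role of the intrinsic sort annotation can be made concrete with a small data representation. The sketch below is a hypothetical Python rendering, not the paper's formalization (which is in Coq and OCaml); the class and field names are invented, and it records only the pieces discussed above: the sort \(t\) of the whole type, the bound variable, the domain and the codomain.

```python
from dataclasses import dataclass

Sort = str   # 'U' for non-linear, 'L' for linear

@dataclass
class Pi0:
    """Pi_t{x : A}. B -- argument usable only irrelevantly in the body (lambda0-programs)."""
    t: Sort    # intrinsic modality of the whole function type
    x: str     # bound variable
    A: object  # domain type (itself a term)
    B: object  # codomain type, may mention x

@dataclass
class Pi1:
    """Pi_t(x : A). B -- the usual dependent function type (lambda1-programs)."""
    t: Sort
    x: str
    A: object
    B: object
```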
### Logical Terms
The typing rules for logical terms are presented in Figure 5. We can see from the lack of substructural restrictions and the symmetry between the rules concerning \(\Pi 0\)-types and \(\Pi 1\)-types that the terms at the logical level are just Martin-Lof terms (Mattin and Lof, 2015) with extra annotations. The relation \(A\simeq B\) asserted by the last rule is the usual definitional equality relation stating that \(A\) and \(B\) are convertible through logical reductions (Section 6.1).
## 4. Program Typing
The typing of programs is where our preparation at the logical level pays off. Due to the substructural nature of program typing, we must first understand the formation of program contexts and the constraints that can be imposed upon them before we can progress further into the presentation.
### Program Context
A program context \(\Delta\) is a sequence of triples in the form \(x:_{s}A,y:r\), \(B,\ldots\) where each triple is comprised of a fresh variable, a sort and a type. The well-formation of a program context \(\Delta\) is defined under a logical context \(\Gamma\) as the judgment \(\Gamma;\Delta\vdash\) whose rules are formally presented in Figure 6. Basically, for a well-formed program context according to \(\Gamma;\Delta\vdash\), each entry \(x:_{s}A\) in \(\Delta\) must be correspondingly well-sorted at the logical level as \(\Gamma\vdash A:s\). We can already see here a major role of the logical level: it provides the types that programs inhabit.
We can see from the second and third rules in Figure 6 that given judgment \(\Gamma;\Delta\vdash\), the program context \(\Delta\) forms an annotated sub-context of the logical context \(\Gamma\). For the sake of readability, we implicitly assume that \(x\not\in\Delta\) whenever the notation \(\Gamma,x:A;\Delta\) is used.
Figure 5. Logical Terms
Figure 6. Program Context
### Context Management
Careful context management lies at the heart of sub-structural type systems. For this purpose, we introduce context merge \(\Delta_{1}\cup\Delta_{2}\) and context constraint \(\Delta\triangleright s\) whose rules are listed in Figure 7 and Figure 8 respectively.
Context merge \(\Delta_{1}\cup\Delta_{2}\) is a partial function that applies the contraction rule to overlapping U sorted triples in program contexts \(\Delta_{1}\) and \(\Delta_{2}\). For L sorted triples, they must occur uniquely in either \(\Delta_{1}\) or \(\Delta_{2}\) but never both. Whenever we write \(\Delta_{1}\cup\Delta_{2}\) inside typing rules, we implicitly assert that context merging is well-defined for \(\Delta_{1}\) and \(\Delta_{2}\).
For the sort indexed context constraint \(\Delta\triangleright s\), if \(s=\text{U}\) then all triples in \(\Delta\) must be U annotated. In this situation, we know from context well-formation that all types in \(\Delta\) must be non-linear. On the other hand, if \(s=\text{L}\) then triples in \(\Delta\) may be of both sorts. This parameterized behavior allows context constraints appearing in typing rules to work for both linear and non-linear modalities.
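For example (a worked instance of ours, following the descriptions above), merging \(x:_{\mathrm{U}}A,\ y:_{\mathrm{L}}B\) with \(x:_{\mathrm{U}}A,\ z:_{\mathrm{L}}C\) contracts the shared non-linear entry:
\[(x:_{\mathrm{U}}A,\ y:_{\mathrm{L}}B)\ \cup\ (x:_{\mathrm{U}}A,\ z:_{\mathrm{L}}C)\ =\ x:_{\mathrm{U}}A,\ y:_{\mathrm{L}}B,\ z:_{\mathrm{L}}C,\]
whereas \((y:_{\mathrm{L}}B)\cup(y:_{\mathrm{L}}B)\) is undefined since the linear entry would have to occur in both operands. Similarly, \(x:_{\mathrm{U}}A\triangleright\mathrm{U}\) holds but \(x:_{\mathrm{U}}A,\ y:_{\mathrm{L}}B\triangleright\mathrm{U}\) does not, while both contexts satisfy the constraint \(\triangleright\,\mathrm{L}\).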
### General Typing
At this point we are ready to begin the discussion on typing rules at the program level properly. An immediate difference between the program level and logical level is the lack of type formation rules at the program level. From TLL's perspective, types are hypothetical entities whose purpose is to mediate program composition. All the type formation rules for deriving the types of programs are defined at the logical level.
The first two rules at the program level are presented in Figure 9 concerning variable typing and type conversion respectively. In the variable typing rule we see that the program context \(\Delta\) must contain the program variable \(x:_{s}A\) of interest. Furthermore, the rest of the program context is subject to constraint \(\Delta/\{x:_{s}A\}\triangleright\text{U}\) which inhibits weakening the context with variables of linear types. The conversion rule states that a program of type \(A\) can be viewed as a program of type \(B\) provided that \(A\) and \(B\) are definitionally equal and \(B\) is well-sorted at the logical level.
Figure 8. Context Constraint
Figure 7. Context Merge
Figure 9. General Program Typing
### Irrelevance Quantification
In Section 3.1 we introduced \(\Pi 0\)-types and \(\Pi 1\)-types. Figure 10 shows how \(\lambda 0\)-programs inhabiting \(\Pi 0\)-types are constructed along with their corresponding application rule.
Observe that in the premise of the \(\lambda 0\)-program construction rule, only the logical context is expanded with the parameter as \(\Gamma,x:A\) whereas the program context \(\Delta\) is left unchanged. The body of the \(\lambda 0\)-program \(m\) does not have access to \(x\) through the program context, so according to the program variable rule \(x\) cannot be typed directly as a program in \(m\). However, type annotations and irrelevant terms require only the presence of the logical context which is why \(x\) can still be used irrelevantly in \(m\). The side condition \(\Delta\triangleright t\) ensures that if \(\Delta\) contains linear variables, they cannot be trivially duplicated by simply packing them into a non-linear \(\lambda 0\)-program.
The application rule for \(\lambda 0\)-programs presents an interesting situation where its premise requires that \(m\) be typed at the program level and \(n\) be typed at the logical level. If \(m\) is a \(\lambda 0\)-program, then the parameter of \(m\) can only be used irrelevantly inside its body. Due to the fact that \(n\) will always land in computationally irrelevant positions after \(\beta\)-reduction, all of \(n\) is considered to be irrelevant as well.
### Relevance Quantification
The rules governing the creation and application of \(\lambda 1\)-programs are presented in Figure 11. They are similar to their irrelevance counterparts with some subtle yet important differences.
In the premise of the introduction rule for \(\lambda 1\)-programs, we see that both the logical context and program context are expanded with the parameter as \(\Gamma,x:A\) and \(\Delta,x:_{s}A\) respectively. This means that \(m\) can utilize \(x\) both irrelevantly inside type annotations and also relevantly as a subprogram. Furthermore, if \(s=\mathrm{L}\), the argument \(x\) must be used exactly once inside \(m\) because linearly typed variables cannot be discarded from the program context through weakening nor duplicated through contraction. In the case that \(s=\mathrm{U}\), the argument \(x\) may be used freely inside \(m\) as the structural rules are admissible on variables with non-linear types.
Since the introduction rule establishes that the arguments of \(\lambda 1\)-programs can be used relevantly inside their bodies, the application rule must account for linear resources used by the applied argument. We can see this taking place here as the argument \(n\) must be typed at the program level with program context \(\Delta_{2}\). Additionally, the program context \(\Delta_{1}\) of \(m\) is merged together with \(\Delta_{2}\) as \(\Delta_{1}\cup\Delta_{2}\) in the conclusion.
Careful readers may have noticed the possibility for a seemingly unsound situation to arise in the typing rule for applications. Suppose that \(m\) is a \(\lambda 1\)-program whose domain is of non-linear type \(A\). This means that the parameter of \(m\) can be used freely inside its body. If \(n\) uses linear resources
Figure 11. Relevance Quantification
Figure 10. Irrelevance Quantification
in \(\Delta_{2}\), then substituting \(n\) into the body of \(m\) could result in the duplication or leakage of resources. Unlike some prior works on linear dependent types which strictly forbid these applications from being well-typed (Bang et al., 2017) or require multiple copies of \(\Delta_{2}\) (Bang et al., 2017), we provide an alternative solution where applications of this form are sound using a single copy of \(\Delta_{2}\). By leveraging the flexibility of TLL's two-level design, we decouple the operational semantics of the logical level from the program level and enforce a call-by-value style evaluation order in the program level semantics. When using call-by-value, \(n\) must first be evaluated to a value of type \(A\). This eager evaluation strategy essentially consumes all of the necessary linear resources once before \(\beta\)-reduction. The final value of type \(A\) that is substituted into \(m\) is guaranteed by the value stability theorem (Theorem 7) to be resource free, which allows it to be used soundly within \(m\).
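To make the situation concrete, here is a schematic example of ours (the use of \(\mathrm{nat}\) and \(+\) is purely illustrative):
\[m=\lambda_{\mathrm{U}}(x:\mathrm{nat}).\,x+x\qquad\text{and}\qquad\Gamma;\,y:_{\mathrm{L}}B\vdash n:\mathrm{nat}.\]
Substituting \(n\) directly into the body of \(m\) would duplicate the linear resource \(y\). Under call-by-value, \(n\) is first evaluated to a value \(v\), consuming \(y\) exactly once; by value stability, \(v\) holds no linear resources, so reducing \(m\ v\) to \(v+v\) duplicates nothing that must not be duplicated.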
## 5. Program Extraction
In this section, we describe the type directed procedure for erasing type annotations and irrelevant terms from TLL programs. During erasure, every term to be erased is replaced with a special constant \(\Box\) with no typing information nor computational behavior. We use the judgment \(\Gamma;\Delta\vdash m\sim m^{\prime}:A\) to formally state that a program \(m\) of type \(A\) is erased to the extracted program \(m^{\prime}\).
The following example shows the erasure of a program to a much simpler extracted form where all irrelevant terms are replaced with \(\Box\). Notice that the entire argument applied to the \(\lambda 0\)-program is erased in the extracted program.
\[(\lambda_{\mathrm{L}}\{A:\mathrm{U}\}.\lambda_{\mathrm{L}}(x:A).x)\ \left( \Pi_{\mathrm{U}}\{B:\mathrm{L}\}.\Pi_{\mathrm{U}}(x:B).B\right)\sim\] \[(\lambda_{\mathrm{L}}\{A:\Box\}.\lambda_{\mathrm{L}}(x:\Box).x)\ \Box\]
### General Extraction
The erasure judgment is defined in a similar fashion to program typing. We begin by presenting the erasure rules for variables and type conversion in Figure 12.
Program variables are considered atomic by erasure in the sense that they do not contain irrelevant sub-terms. So erasure is an identity operation when applied to program variables. The type conversion erasure rule states that if a program \(m\) of type \(A\) can be extracted to \(m^{\prime}\) and \(A\) is definitionally equal to some well-sorted type \(B\), then \(m\) can be viewed as a program of type \(B\) and still be extracted to \(m^{\prime}\).
### Irrelevance Erasure
The rules for performing erasure on \(\lambda 0\)-programs and their applications are presented in Figure 13, both of which mimic their program typing counterparts.
Figure 12. General Erasure
Figure 13. Irrelevance Erasure
### Relevance Erasure
Erasure for \(\lambda 1\)-programs and their applications can be carried out in an inductive manner as depicted in Figure 14. Both of these rules are straightforward as they simply push the erasure procedure structurally into their sub-programs.
## 6. Operational Semantics
The exposition up until this point has been solely concerned with the static aspects of TLL such as typing and erasure. We now turn our focus to TLL's dynamic behavior by endowing it with two separate operational semantics: one for the logical level and the other for the program level. We use the relation \(m\leadsto n\) for logical reductions and the relation \(m\rightsquigarrow n\) for program reductions.
### Logical Reductions
The reductions carried out by terms in the logical level are entirely standard. Figure 15 presents an excerpt of the logical reduction rules where many of the uninteresting structural cases have been elided. Unlike the reductions at the program level which are carried out using a call-by-value style evaluation order, the reductions at the logical level are not restricted to any particular evaluation order. The confluence theorem (Theorem 1) for logical reductions ensures that, for any term, any two reduction sequences can ultimately be joined at a common term. Coupled with the fact that logical reductions are strongly normalizing (Theorem 5) for well-typed logical terms, one can check the definitional equality of two terms by comparing their normal forms.
### Program Reductions
As we have mentioned previously, the program level operational semantics utilizes a call-by-value style evaluation order. Figure 16 lists the various value forms. We consider program variables to be values in order to allow user assumed constants to be passed around.
The reduction rules at the program level are given in Figure 17. Due to the fact that types and irrelevant terms are computationally inert, there are significantly fewer reduction rules at the program level than at the logical level. Among the reductions here are the \(\beta_{0}\)-reduction for \(\lambda 0\)-programs and \(\beta_{1}\)-reduction for \(\lambda 1\)-programs.
Figure 16. Program Values
Figure 14. Relevance Erasure
Figure 15. Logical Reductions (Excerpt)
From the program level typing rules we know that arguments applied to \(\lambda 0\)-programs must be irrelevant terms. So due to its purely hypothetical nature, the irrelevant argument \(n\) here will never consume actual resources. This claim is reinforced by the fact that after erasure, \(n\) will be \(\square\), which is completely devoid of operational behavior. Thus it is sound for the \(\beta_{0}\)-reduction to immediately substitute \(n\) into \(m\) without evaluation.
In Section 4.5 we have explained that the call-by-value style operational semantics at the program level allows us to assume tight resource bounds when defining the typing rules for \(\lambda 1\)-program application. This statement is realized by the \(\beta_{1}\)-reduction rule which requires the applied argument \(v\) to be a value. Now that \(v\) is a value, the resources contained within \(v\) are upper bounded by the value stability theorem (Theorem 7). So it is sound for the \(\beta_{1}\)-reduction to substitute \(v\) into \(m\).
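Revisiting the identity example from Section 5, the two program reductions interact as follows (our own worked trace):
\[(\lambda_{\mathrm{L}}\{A:\mathrm{U}\}.\lambda_{\mathrm{L}}(x:A).x)\ N\ v\ \rightsquigarrow\ (\lambda_{\mathrm{L}}(x:N).x)\ v\ \rightsquigarrow\ v.\]
The first step is a \(\beta_{0}\)-reduction, substituting the irrelevant type argument \(N\) immediately without evaluating it; the second is a \(\beta_{1}\)-reduction, which only fires because the applied argument \(v\) is already a value.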
## 7. Meta Theory
In this section we study the meta-theoretic properties of TLL. We organize the presentation into three subsections which are concerned with logical level theories, program level theories and program extraction theories respectively.
### Logical Theories
The first theorem of the logical level is that of confluence. As described in Section 6.1, logical reductions do not have a fixed evaluation order so confluence is necessary to join together different reduction paths. This is especially important from an implementation perspective where definitional equality is checked by reducing terms to their normal forms. If confluence is not admissible, then certain reduction strategies may lead to a loss of valid definitional equalities. To prove the confluence theorem, we use the standard technique of showing the diamond property for parallelized logical reductions.
Theorem 1 (Confluence of Logical Reductions).: _If \(m\leadsto^{*}m_{1}\) and \(m\leadsto^{*}m_{2}\), then there exists \(n\) such that \(m_{1}\leadsto^{*}n\) and \(m_{2}\leadsto^{*}n\)._
At the logical level, types are terms inhabiting sorts. The type validity theorem shows that the types of terms are indeed valid types according to this definition. Besides substantiating the design of TLL as a dependent type system, the type validity theorem also provides a great deal of utility when proving other theorems as it allows types to be viewed as terms. This enables various inversion lemmas to be applicable to types.
Theorem 2 (Type Validity).: _For any logical typing \(\Gamma\vdash m:A\), there exists a sort \(s\) such that \(\Gamma\vdash A:s\) is derivable._
The sort of a TLL type determines its modality. A type inhabiting the L sort is linear and a type inhabiting the U sort is non-linear. With the sorts of types playing such a crucial role in the substructural type system at the program level, it is important to show that no ambiguity arises when assigning sorts to types at the logical level. The sort uniqueness theorem states that the sort of a particular type is unique thus preventing contradictory situations where a type is both L sorted
Figure 17. Program Reductions
and U sorted. When viewed in conjunction with type validity, these theorems show that there always exists a unique sort for the type of a term to inhabit.
Theorem 3 (Sort Uniqueness).: _If there are logical typings \(\Gamma\vdash A:s\) and \(\Gamma\vdash A:t\), then \(s=t\)._
The standard subject reduction theorem is admissible for well-typed logical terms. This means that the types of logical terms are preserved by reductions. Properties and theorems that are derived from the logical typing judgment can be propagated across reductions as well. Furthermore, subject reduction also ensures that a reduction based definitional equality checker never alters the types of its candidates.
Theorem 4 (Logical Subject Reduction).: _If there are logical typing \(\Gamma\vdash m:A\) and reduction \(m\leadsto n\), then \(\Gamma\vdash n:A\) is derivable._
Finally, we have the strong normalization theorem at the logical level. Assuming that sorts are always implicitly labeled with universe levels in the usual way (i.e., \(s_{l}:\text{U}_{l+1}\)), then universe inconsistencies can be ruled out by the type system. At this point, the logical level of TLL can be modeled in Martin-Lof type theory (MLTT) [15] in a straightforward manner that preserves its reduction behavior. The rules for carrying out the modeling procedure are given formally in Figure 18. Basically, an MLTT model of logical TLL collapses the two sorts into one and inductively strips terms of their modality and relevancy annotations.
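Since the body of Figure 18 is not reproduced in this extraction, the following equations sketch the flavor of the translation described above (an illustrative reconstruction of ours, not the authors' exact rules):
\[\llbracket\mathrm{U}\rrbracket=\llbracket\mathrm{L}\rrbracket=\mathsf{Type},\qquad\llbracket\Pi_{t}\{x:A\}.B\rrbracket=\llbracket\Pi_{t}(x:A).B\rrbracket=\Pi(x:\llbracket A\rrbracket).\llbracket B\rrbracket,\]
\[\llbracket\lambda_{t}\{x:A\}.m\rrbracket=\llbracket\lambda_{t}(x:A).m\rrbracket=\lambda(x:\llbracket A\rrbracket).\llbracket m\rrbracket,\qquad\llbracket m\ n\rrbracket=\llbracket m\rrbracket\ \llbracket n\rrbracket.\]
Both sorts collapse to a single MLTT universe and every modality and relevancy annotation is stripped, while the underlying term structure is preserved so that each TLL logical reduction is matched by an MLTT reduction.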
After naturally extending the modeling procedure to all types appearing in TLL logical contexts, the following two lemmas can be proven. These results show that this model is indeed sound with regards to logical reduction. By virtue of the strong normalization property for MLTT [8], TLL must be strongly normalizing as well.
Lemma 1 (Logical Type Model).: _Given a TLL logical typing judgment \(\Gamma\vdash_{TLL}m:A\), the judgment \(\llbracket\Gamma\rrbracket\vdash_{MLTT}\llbracket m\rrbracket:\llbracket A\rrbracket\) can be derived in Martin-Lof type theory._
Lemma 2 (Logical Reduction Model).: _Given a TLL logical reduction \(m\leadsto_{TLL}n\), the reduction \(\llbracket m\rrbracket\leadsto_{MLTT}\llbracket n\rrbracket\) can be derived in Martin-Lof type theory._
Theorem 5 (Logical Strong Normalization).: _For any TLL term \(m\) with logical typing \(\Gamma\vdash m:A\), it is strongly normalizing._
Figure 18. Logical TLL in Martin-Lof
### Program Theories
TLL prohibits weakening the program context with variables of linear types to prevent the discarding of resources. However, the logical context can be independently weakened by itself since hypothetical resources can always be assumed freely. Moreover, weakening is admissible for the program context if the weakened variable is of non-linear type. These observations are expressed formally in the following pair of weakening lemmas.
Lemma 3 (Program 0-Weakening).: _For valid program typing \(\Gamma;\Delta\vdash m:A\) and logical typing \(\Gamma\vdash B:s\), the judgment \(\Gamma,x:B;\Delta\vdash m:A\) is derivable for any \(x\notin\Gamma\)._
Lemma 4 (Program 1-Weakening).: _For valid program typing \(\Gamma;\Delta\vdash m:A\) and logical typing \(\Gamma\vdash B:U\), the judgment \(\Gamma,x:B;\Delta,x:_{U}B\vdash m:A\) is derivable for any \(x\notin\Gamma\)._
A common drawback of stratified type systems is the lack of code sharing between language fragments. Structures performing similar tasks must be implemented independently in the different layers of the language. Libraries with large amounts of redundant code can become difficult to scale and maintain so it is important to reduce code duplication to the best of our abilities. The program reflection theorem tackles the code redundancy problem by allowing us to freely reflect well-typed programs into the logical level. This essentially allows the sharing of all code written in the program level with the logical level.
Theorem 6 (Program Reflection).: _For any program typing \(\Gamma;\Delta\vdash m:A\), logical typing \(\Gamma\vdash m:A\) is derivable._
Arbitrary TLL programs may utilize linear resources to compute a final non-linear value. So despite these programs being of non-linear types, they cannot be freely duplicated without breaking the no-contraction principle. However for programs in value form, the value stability theorem gives an upper bound on the resources they are allowed to consume. For a linearly typed value, it will always be used exactly once and consequently any resource held by the value is used exactly once as well. For values of non-linear type, the context constraint \(\Delta\triangleright\) U prevents resources from occurring inside the value which allows it to be duplicated soundly.
Theorem 7 (Value Stability).: _If there is value \(v\) with program typing \(\Gamma;\Delta\vdash v:A\) and \(\Gamma\vdash A:s\), then \(\Delta\triangleright s\)._
The program level supports its own version of the subject reduction theorem that is defined on the program typing judgment and program reductions. Although there are fewer typing rules at the program level when compared to the logical level, program subject reduction is more difficult to prove than its logical counterpart. This is due to the necessity of carefully tracking changes in the program context during variable substitution. The following two lemmas describe the interactions between substitution and contexts.
Lemma 5 (Program 0-Substitution).: _If there are program typing \(\Gamma,x:A;\Delta\vdash m:B\) and logical typing \(\Gamma\vdash n:A\), then \(\Gamma;\Delta\vdash m[n/x]:B[n/x]\) is derivable._
Lemma 6 (Program 1-Substitution).: _If there are program typings \(\Gamma,x:A;\Delta_{1},x:_{s}A\vdash m:B\) and \(\Gamma;\Delta_{2}\vdash n:A\) and context constraint \(\Delta_{2}\triangleright s\), then \(\Gamma;\Delta_{1}\cup\Delta_{2}\vdash m[n/x]:B[n/x]\) is derivable._
Theorem 8 (Program Subject Reduction).: _For any program typing \(\epsilon;\epsilon\vdash m:A\) and reduction \(m\rightsquigarrow n\), there is \(\epsilon;\epsilon\vdash n:A\)._
To show that well-typed programs "cannot go wrong", we prove the following progress theorem. When viewed together with the program subject reduction theorem, it is clear that closed TLL programs will not get stuck during evaluation.
Theorem 9 (Program Progress).: _If there is program typing \(\epsilon;\epsilon\vdash m:A\), then \(m\) is either a value or there exists \(n\) such that \(m\rightsquigarrow n\)._
### Program Extraction
Introduced in Section 5, the program erasure procedure is carried out by inductively erasing type annotations and irrelevant terms occurring inside programs. The erasure existence theorem shows that extraction is well-defined for all well-typed TLL programs.
Theorem 10 (Erasure Existence).: _For any well-typed program \(\Gamma;\Delta\vdash m:A\), there exists an extracted version of it \(m^{\prime}\) such that the erasure relation \(\Gamma;\Delta\vdash m\sim m^{\prime}:A\) is derivable._
After erasure has been successfully carried out on well-typed programs, the extracted results retain only the relevant parts of the original program. It is now important to show that these extracted programs still behave computationally as expected of their original forms. We accomplish this by proving instrumented subject reduction and progress theorems.
The first theorem to establish the connection between original programs and their extracted forms is the erasure subject reduction theorem. Erasure subject reduction tells us that if a program reduction \(m^{\prime}\rightsquigarrow n^{\prime}\) can be triggered for the extracted \(m^{\prime}\) of a well-typed program \(m\), then there exists a well-typed program \(n\) that extracts to \(n^{\prime}\) and the reduction \(m\rightsquigarrow n\) exists on the original forms.
Theorem 11 (Erasure Subject Reduction).: _For any erasure relation \(\epsilon;\epsilon\vdash m\sim m^{\prime}:A\) and reduction \(m^{\prime}\rightsquigarrow n^{\prime}\), there exists program \(n\) such that the following diagram commutes._
\[\begin{array}{ccc}\epsilon;\epsilon\ \vdash m&\sim&m^{\prime}\ :\ A\\ \downarrow&&\downarrow\\ \epsilon;\epsilon\ \vdash n&\sim&n^{\prime}\ :\ A\end{array}\]
where the vertical arrows denote the program reductions relating \(m\) to \(n\) and \(m^{\prime}\) to \(n^{\prime}\).
## 8. Heap Semantics

### Heaps
Heaps \(H\) are maps from unique locations \(l\) to sort annotated values. Generally, a heap is of the following form:
\[H::=\{l_{1}\mapsto_{s_{1}}v_{1},l_{2}\mapsto_{s_{2}}v_{2},\ldots,l_{k}\mapsto_{s_ {k}}v_{k}\}\]
Each entry \(l_{i}\mapsto_{s_{i}}v_{i}\) denotes a mapping from location \(l_{i}\) to value \(v_{i}\) of \(s_{i}\) modality. In particular, if \(s_{i}=\mathrm{U}\) then looking up location \(l_{i}\) in the heap will not cause any changes. However, if \(s_{i}=\mathrm{L}\) then looking up location \(l_{i}\) in the heap will remove the mapping from the heap. Formally, we introduce the relation \(lookup(H_{1},l,v,H_{2})\) with rules presented in Figure 19, stating that looking up location \(l\) in heap \(H_{1}\) results in value \(v\) and heap \(H_{2}\). Depending on the modality of the mapping between \(l\) and \(v\), the resulting heap \(H_{2}\) after lookup is equal to either \(H_{1}\) in the U case or \(H_{1}/\{l\mapsto_{\mathrm{L}}v\}\) in the L case.
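For example, with \(H_{1}=\{l_{1}\mapsto_{\mathrm{U}}v_{1},\ l_{2}\mapsto_{\mathrm{L}}v_{2}\}\) we have
\[lookup(H_{1},\,l_{1},\,v_{1},\,H_{1})\qquad\text{and}\qquad lookup(H_{1},\,l_{2},\,v_{2},\,\{l_{1}\mapsto_{\mathrm{U}}v_{1}\}):\]
the non-linear cell at \(l_{1}\) survives being looked up, whereas the linear cell at \(l_{2}\) is consumed by the lookup.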
Notice that the entries \(l\mapsto_{s}v\) of heaps are similar to the triples \(x:_{s}A\) of program contexts. While one maps locations to values, the other maps variables to types. Both kinds of mappings are annotated by sort \(s\). Taking advantage of these commonalities, we overload the merge operator \(H_{1}\cup H_{2}\) and constraint \(H\triangleright s\) to work for heaps as well. So instead of operating over variables in contexts, they operate over locations in heaps.
### Heap Reductions
The rules of heap reductions are presented in Figure 20. These rules essentially form a modified version of the program level call-by-value semantics. When a program is evaluated to value \(v\), a memory cell at a fresh location \(l\) in the heap is allocated to store \(v\). For example, since \(\lambda\)-programs are considered values, a fresh cell with the same modality as the \(\lambda\)-program is allocated to store it. Now if a pointer expression \(*l\) is encountered, we know that it points to a value located in the heap. The application rules utilize this fact to enforce the call-by-value evaluation order for relevant applications.
Generally for redexes in the heap semantics, pointers are expected in place of values in the standard program semantics. For example, the application of two pointers \(*l_{1}*l_{2}\) is considered a \(\beta_{1}\)-redex if \(*l_{1}\) points to a relevant \(\lambda\)1-program in the heap. To reduce this
Figure 19. Heap Lookup
Figure 20. Heap Reductions
pointer \(*l_{2}\) is substituted into the body of the \(\lambda 1\)-program referenced by \(*l_{1}\). Likewise, the application form \(*l\)\(n\) is considered a \(\beta_{0}\)-redex if \(*l\) points to an irrelevant \(\lambda 0\)-program in the heap. For extracted programs satisfying the erasure relation, the argument \(n\) here must be \(\Box\) so we immediately substitute \(\Box\) into the body of the \(\lambda 0\)-program referenced by \(*l\).
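Putting these pieces together, a small heap reduction trace might look as follows (an illustrative sketch of ours; the choice of a U-annotated cell matches the \(\lambda_{\mathrm{U}}\) annotation of the stored value):
\[H;\ (\lambda_{\mathrm{U}}(x:A).m)\ *l_{2}\ \rightsquigarrow\ H\cup\{l_{1}\mapsto_{\mathrm{U}}\lambda_{\mathrm{U}}(x:A).m\};\ *l_{1}\ *l_{2}\]
\[\rightsquigarrow\ H\cup\{l_{1}\mapsto_{\mathrm{U}}\lambda_{\mathrm{U}}(x:A).m\};\ m[*l_{2}/x].\]
The \(\lambda 1\)-value is first allocated at a fresh location \(l_{1}\) and replaced by the pointer \(*l_{1}\); the resulting application of two pointers is a \(\beta_{1}\)-redex, and since the cell at \(l_{1}\) is U-annotated it remains in the heap after the lookup.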
### Pointer Resolution
We do not extend our typing rules or standard operational semantics to cover pointer expressions, so whenever there is \(\Gamma;\Delta\vdash m:A\) or \(m\leadsto n\) we know that all terms involved must not contain pointers. Instead, we introduce a new judgment \(H;m\sim n\) presented in Figure 21 which recursively dereferences all pointers in \(m\) using heap \(H\) until there are no pointers occurring in \(n\).
When defining the judgment \(H;m\sim n\), care is taken to ensure that dereferencing pointers obeys the no-weakening and no-contraction principles analogously to the program typing rules. In other words, pointers to linear mappings in \(H\) are dereferenced only once. This is accomplished by enforcing side conditions \(H_{1}\cup H_{2}\) and \(H\triangleright s\) which basically perform the same roles as their typing counterparts. The only rule without a typing counterpart is the last rule, which initially dereferences pointer \(*l\) to value \(v^{\prime}\) and recursively resolves all pointers in \(v^{\prime}\) using the remaining heap \(H^{\prime}\).
To reintroduce typing information back into programs containing pointer expressions, we develop the ternary relation of well-resolved programs presented in Figure 22 that unites the erasure relation of Section 5 and pointer resolution. For a valid instance of well-resolved \(H\vdash a\sim b\sim c:A\) we know that \(a\) is a well-typed program that extracts to \(b\) and resolving the pointers in \(c\) also gives us \(b\). Essentially, the extracted program \(b\) serves to bridge the well-typed original program \(a\) and the pointer program \(c\).
From the definition of pointer resolution it is clear that the contents in heap \(H\) greatly influence the outcome of resolving pointer program \(m\) in judgment \(H;m\sim n\). To characterize the heaps produced during heap reduction, we introduce in Figure 23 a judgment \(H\)_wr-heap_ stating that \(H\) is of a form that could plausibly be produced by heap reduction. Basically, all locations in \(H\) map to closed values. This means that performing lookup in heaps satisfying the _wr-heap_ property is guaranteed to be productive because a value is retrieved after one level of indirection. The _wr-heap_ property serves as an inductive invariant in the proof of heap semantics soundness, informing us of the structure of heaps at every reduction step.
Figure 21. Pointer Resolution
Figure 22. Well-Resolved
### Soundness of Heap Semantics
The soundness of heap semantics is again justified through progress-preservation style theorems. Due to the fact that these theorems must now account for program typing, erasure, pointer resolution, heap reductions and many other concepts simultaneously, their statements and proofs become significantly more involved than previous versions.
The first theorem that we present is resolution stability. It is reminiscent of the value stability theorem (Theorem 7). But instead of exploring the constraints set by the modality of values on their program contexts as in the value stability case, the resolution stability theorem derives constraints on the mappings inside heaps.
Theorem 13 (Resolution Stability).: _Given valid instances of well-resolved \(H\vdash a\sim b\sim c:A\), logical typing \(\epsilon\vdash A:s\) and \(H\) wr-heap, if \(b\) is a value then heap \(H\) can be upper bounded by the constraint \(H\triangleright s\)._
The heap subject reduction theorem propagates the well-resolved and _wr-heap_ invariants across heap reduction, ensuring that pointer programs well-resolved in wr-heaps always reduce to programs that are well-resolved in wr-heaps. Additionally, heap reductions agree with iterated steps in the standard semantics for original programs and extracted programs.
Theorem 14 (Heap Subject Reduction).: _Given instances of well-resolved \(H\vdash a\sim b\sim c:A\) and \(H\) wr-heap, then for heap reduction \(H;c\rightsquigarrow H^{\prime};c^{\prime}\) there exist \(a^{\prime}\) and \(b^{\prime}\) such that the following judgments \(H^{\prime}\vdash a^{\prime}\sim b^{\prime}\sim c^{\prime}:A,H^{\prime}\) wr-heap, \(a\rightsquigarrow^{*}a^{\prime}\) and \(b\rightsquigarrow^{*}b^{\prime}\) all hold._
Finally, the heap progress theorem shows that for any pointer program \(c\) that is well-resolved in a heap \(H\) satisfying the _wr-heap_ condition, either there exists a heap reduction \(H;c\rightsquigarrow H^{\prime};c^{\prime}\) or \(c\) is a pointer. From the definition of _wr-heap_ we know that all elements contained in the heap are values. In the case that \(c\) is a pointer, dereferencing \(c\) yields a value.
Theorem 15 (Heap Progress).: _Given valid instances of well-resolved \(H\vdash a\sim b\sim c:A\) and \(H\) wr-heap, then either there exist heap \(H^{\prime}\) and program \(c^{\prime}\) such that there is reduction \(H;c\rightsquigarrow H^{\prime};c^{\prime}\) or there exists a location \(l\) such that \(c=*l\)._
Starting from a well-typed closed program \(m\) and an empty heap, due to the fact that empty heaps are degenerate wr-heaps and well-typed closed programs are trivially well-resolved in empty heaps, the heap subject reduction theorem and heap progress theorem allow for heap reductions to be repeatedly generated and applied until a value referencing pointer is reached. The resolution stability theorem tells us that the heap at this point can be constrained by \(H\triangleright s\) where \(s\) is the sort of the original program \(m\)'s type. In practice, if the designated starting main expression is required to be of a non-linear type, all allocated heap memory will be safely freed by the time that the program terminates.
## 9. Extensions
In this section, we describe some useful extensions to the core TLL language for program development and reasoning. The meta theoretic results presented in the previous sections can all be extended naturally to cover these additions. In fact, our theorems are all proven assuming the inclusion of these extensions.
Figure 23. WR-Heaps
### Propositional Equality
The first extension that we present is propositional equality with logical typing rules given in Figure 24. This is the usual propositional equality found in intensional dependent type theories which allows one to posit equality between two terms. Equality and its proofs exist purely for reasoning purposes so they can only be derived at the logical level. However, a proof of equality constructed at the logical level can be eliminated at the program level using the rule shown in Figure 25. This allows the type-casting of programs with a proof that the target type is propositionally equal to the original type. A simple example of program level equality elimination in practice is the type-casting of a length indexed vector \(ls\) with type \(vec\ (x+y)\ A\) to type \(vec\ (y+x)\ A\) by appealing to the proof that addition is commutative.
The most interesting aspect of TLL propositional equality lies in its different reduction behaviors at the logical level and at the program level, the rules of which are presented in Figure 26. Notice that in the logical reduction rule \(\mathrm{R}^{=}_{[x,p]A}(H,\mathrm{refl}\ m)\leadsto H\), the proof of equality must be of the form \(\mathrm{refl}\ m\) in order to trigger reduction. This is to ensure that \(\mathrm{R}^{=}_{[x,p]A}(H,\mathrm{refl}\ m)\) and \(H\) have definitionally equal types. This seems to indicate that propositional equality is computationally relevant in the sense that an equality eliminator \(\mathrm{R}^{=}_{[x,p]A}(H,P)\) occurring at the program level ought to carry around proof \(P\) and reduce it to \(\mathrm{refl}\ m\). From the actual program level reduction rule \(\mathrm{R}^{=}_{[x,p]A}(H,P)\leadsto H\) we can see that this first impression is misleading: the program level equality eliminator does not impose any restrictions on \(P\) and immediately reduces to \(H\). The soundness of this rule is due to the fact that program reductions are not performed under context. By the time an equality eliminator is evaluated, logical strong normalization and canonicity guarantee the existence of a \(\mathrm{refl}\ m\) proof that \(P\) is logically reducible to. In other words, equality eliminators at the program level reduce their proofs _conceptually_ but not literally.
After recognizing that equality proofs are computationally irrelevant at the program level, we define the erasure procedure for equality eliminators in Figure 27 that removes the equality proof
Figure 26. Propositional Equality Reduction
Figure 24. Propositional Equality (Logical Rules)
entirely. These erased programs can be represented and evaluated much more efficiently at runtime than their compile time logical counterparts.
### Subset Pairs
Leveraging the distinction between proofs and programs in TLL, we encode a variation of \(\Sigma\)-types often referred to as subset types in the verification community. For a subset type of the form \(\Sigma_{t}\{x:A.B\}\), its canonical inhabitant will be a pair of the form \(\{m,n\}_{t}\) where \(m\) is a relevant payload of type \(A\) and \(n\) is an irrelevant proof of the dependent type \(B\). A common use case for subset types is to refine programs by the properties they satisfy, essentially carving out a _subset_ of the original type.
The logical rules for subset types are presented in Figure 28 and the program rules are presented in Figure 29. The side condition \((t=\mathrm{U})\Rightarrow(s=\mathrm{U})\) is required by the subset type formation rule to prevent resource leakage via packing linear payloads into non-linear subset pairs. Notice how in the program rule for subset pair construction the payload \(m\) is typed at the program level whereas the term \(n\) is typed at the logical level. This indicates that \(m\) is computationally relevant and \(n\) is computationally irrelevant. From an erasure perspective, a program of the form \(\{m,n\}_{t}\) is erased to \(\{m,\Box\}_{t}\).
The following example shows applying erasure to a pair of type \(\Sigma_{\mathrm{U}}\{x:\mathrm{nat.}x+1=_{\mathrm{nat}}2\}\). Notice that the proof component of the pair (refl) is completely removed by erasure. These subset pairs
Figure 28. Subset Pairs (Logical Rules)
Figure 27. Propositional Equality (Erasure Rules)
Figure 29. Subset Pairs (Program Rules)
realize the principle of computational irrelevancy for program properties.
\[\vdash\{1,\text{refl }2\}_{\text{U}}\sim\{1,\Box\}_{\text{U}}:\Sigma_{\text{U}}\{x: \text{nat}.x+1=_{\text{nat}}2\}\]
Standard dependent pairs where both components are computationally relevant can be defined in a straightforward manner. The typing rules and semantics for relevant pairs are fully formalized in our Coq development, but for the sake of saving space we do not present them here.
### Additive Pairs
To integrate the additive fragment of Linear Logic (Leslie, 2000) into TLL, we introduce &-types as an extension. The logical rules are presented in Figure 30 and the program rules are presented in Figure 31. Intuitively, a &-type of the form \(A\) &\({}_{t}B\) represents the pairing of two delayed computations of types \(A\) and \(B\) respectively. Canonical inhabitants of \(A\) &\({}_{t}B\) are additive pairs of the form \((m,n)_{t}\).
Of the rules depicted here, the most interesting is the program typing rule governing the construction of additive pairs. Notice that in the premise, both components \(m\) and \(n\) are typed in the same program context \(\Delta\). Furthermore, the conclusion only assumes a single copy of \(\Delta\). This is a realization of the additive fragment of Linear Logic (Leslie, 2000). Due to the fact that \(m\) and \(n\) are delayed computations, only one of the two will ultimately be projected out and evaluated. So only a single copy of \(\Delta\) is committed to the component that actually gets evaluated.
## 10. Implementation and Application
In this section, we describe language features implemented in the TLL compiler with simple examples on how they can be used to effectively construct and verify programs.
### Linear Inductive Types
The TLL compiler supports user defined inductive types in the style of CIC (Leslie, 2000). Figure 32 demonstrates how a linear list can be defined. In this definition, the _arity_ of the \(\mathtt{llist}\) type constructor ends in sort L. Once \(\mathtt{llist}\) is fully applied to an arbitrary linear type A, the resulting type \(\mathtt{llist}\) A will be of sort L which requires that the lists inhabiting this type are used exactly once. So basically, by varying the sorts of type arities and constructor arguments, we can define different combinations of linear and non-linear inductive types.
Figure 31. Additive Pairs (Program Rules)
Figure 32. Linear Lists
Figure 30. Additive Pairs (Logical Rules)
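The body of Figure 32 survives here only as a caption, so the following is a hypothetical sketch of what such a declaration could look like. The `inductive` keyword, the `->` arrows, the `--` comment markers and the constructor names `lnil`/`lcons` are assumptions inferred from the surrounding prose rather than verbatim TLL syntax.

```
-- Hypothetical sketch of a linear list: the arity ends in sort L,
-- so a fully applied llist A must be consumed exactly once.
inductive llist (A : Type<L>) : Type<L> =
| lnil  : llist A
| lcons : A -> llist A -> llist A
```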
Figure 33 defines a function\({}^{2}\) for appending two linear lists. The usage of the toplevel keyword program here allows the lappend function to be applicable at the program level. Since types can only appear in computationally irrelevant positions at the program level, the type parameter A is quantified irrelevantly as a \(\lambda 0\) argument. Additionally, the body of lappend is subject to linear type-checking because it can be used at the program level. The C code emitted for lappend will reclaim the memory used for representing xs as it is deconstructed by the match expression.
Footnote 2: Underscores can be used as implicit arguments which are inferred through unification.
After a program has been defined, theorems regarding its properties can be proven at the logical level using the logical keyword. Figure 34 shows a proof\({}^{3}\) that the logical length of two lists appended together by lappend is equal to the sum of their individual lengths. Notice that the llen function here is a logical specification: it cannot be used at the program level for actual computations. If llen were to exist relevantly at the program level, the element hd dropped without usage in the lcons case would cause memory leakage. Basically, terms declared with the logical keyword are not subject to linear type checking and are pruned during the erasure phase of the compiler.
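Figures 33 and 34 are likewise present only as captions; a hypothetical sketch of the logical specification and of the statement being proven follows. The `match` syntax, the `≡` notation for propositional equality (following the usage later in this section) and the underscore implicit arguments are assumptions of ours, and the proof body is omitted.

```
-- Logical length: usable only for specification, pruned by erasure.
logical llen {A : Type<L>} (xs : llist A) : nat =
  match xs with
  | lnil -> 0
  | lcons hd tl -> 1 + llen _ tl   -- dropping hd is fine at the logical level

-- Shape of the property proven in Figure 34 (proof omitted).
logical lappend_llen {A : Type<L>} (xs ys : llist A) :
  llen _ (lappend _ xs ys) ≡ llen _ xs + llen _ ys
```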
The example that was just shown is a form of _extrinsic_ verification. In this style of verification, an unverified program is first written using standard programming techniques. After the program has been fully constructed, its properties are then stated and proven as theorems external to the program.
Taking advantage of dependent types, programs can also be verified in an _intrinsic_ manner where data and proofs are tightly integrated. Consider the length-indexed linear vector defined in Figure 35. The constructors lNil and lCons of this inductive type carry irrelevant proofs that the
Figure 34: Logical Proofs Relating Append and Length
Figure 33: Program for Appending Linear Lists
indexing natural number n accurately characterizes the length of the constructed vector. So if a vector is known to be of type lvec n A, we can trust that its length must be n.
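Since Figure 35 is only present as a caption here, a hypothetical sketch of the described declaration follows; the braces marking irrelevant constructor arguments mirror the prose, while the `≡` equality notation and the exact argument order are assumptions of ours.

```
-- Length-indexed linear vectors: the proofs in braces are erased at runtime.
inductive lvec (n : nat) (A : Type<L>) : Type<L> =
| lNil  : {pf : n ≡ 0} -> lvec n A
| lCons : {n0 : nat} -> {pf : n ≡ 1 + n0} -> A -> lvec n0 A -> lvec n A
```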
A program for appending linear vectors is given in Figure 36. We can see immediately from the type of the output lvec (m + n) A that its length must be exactly the sum of the lengths of its inputs. Compared to the extrinsic approach, intrinsic verification can help to guide the process of program construction itself as the tightly integrated proofs serve as precise interfaces that rule out incorrect programs.
The computational relevancy mechanism of TLL allows irrelevant constructor arguments to be safely erased. For lvec in particular, constructor arguments surrounded by braces are erased. The structure of lvec after erasure is identical to llist. If lvec was defined verbatim in Coq, then the n0 argument of lCons would not be erased because nat is in the Set universe.
### Sort-Polymorphism
In TLL, non-linear types and linear types are unambiguously grouped in sorts U and L respectively as shown through the sort uniqueness theorem (Theorem 3). This means that multiple versions of equivalent functions may need to be defined at different sorts, causing large amounts of code duplication. Consider the polymorphic identity functions shown in Figure 37. The first function idU is polymorphic over non-linear types and the second idL is polymorphic over linear types.
In order to reduce code duplication, we implement sort-polymorphism in the TLL compiler. Toplevel declarations are allowed to quantify over sorts using sort variables. We refer to these sort quantified declarations as _sort-polymorphic schemes_. Figure 38 shows how a sort-polymorphic identity function can be defined. The id<s> scheme is parameterized by sort variable s which is then used by Type<s> to refer to the sort of type A generically.
Figure 37. Sort Monomorphic Identity Functions
Figure 36. Program for Appending Linear Vectors
Figure 35. Linear Length Indexed Vectors
program id<s> {A : Type<s>} (x : A) : A = x
It is important to note that schemes are not proper terms in TLL. Instead, the compiler attempts to instantiate the sort parameters of schemes with all possible combinations of U and L. The instantiated instances that pass type checking are elaborated into sort-monomorphic TLL terms. Conversely, instances that do not pass type checking are pruned. For schemes such as id where both instantiated instances are well-typed, the compiler will essentially derive idU and idL automatically and apply the correct version depending on the sort of its argument.
Sort-polymorphic schemes can also be used for deriving inductive types. Figure 39 defines a list<s,t> inductive type scheme whose elements are of sort s and that itself is of sort t. The llist type presented in Section 10.1 can be viewed as the instantiated instance list<L,L> where the elements of the list are linear and the list itself is also linear. An interesting instance that could be obtained from instantiating the list<s,t> scheme is list<L,U>. The cons constructor of list<L,U> is unsound as it enables the duplication of linear resources by first packing them into non-linear lists. To prevent such unsound situations from occurring, the cons constructor of list<L,U> is pruned, leaving list<L,U> with only nil as its constructor. In general, constructors of non-linear inductive types may only take arguments which are also non-linear. The constructors of scheme instances that do not satisfy this criterion are pruned.
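A hypothetical sketch of the scheme described above, using the same assumed `inductive` syntax as before; recall that for the instance `list<L,U>` the compiler prunes `cons`, leaving only `nil`.

```
-- Elements have sort s, the list itself has sort t.
inductive list<s,t> (A : Type<s>) : Type<t> =
| nil  : list<s,t> A
| cons : A -> list<s,t> A -> list<s,t> A
```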
Figure 40 presents a length function for sort-polymorphic lists. Notice that unlike the logical llen shown in Figure 34, the new len<s,t> function also returns back the original input list paired with its length. This makes len<s,t> sound for use as a computationally relevant program as the aforementioned memory leakage problem regarding hd is no longer possible. Furthermore, we can prove at the logical level that the list returned by len<s,t> is indeed equal to its input. Due to the fact that the len_id<s,t> theorem here is also sort-polymorphic, this single proof suffices to verify len<s,t> for all valid sort variations of lists.
Figure 38: Sort Polymorphic Identity Function
Figure 40: Sort-Polymorphic Length (Excerpt)
Figure 39: Sort-Polymorphic Lists
### Dependent Session Types
The TLL compiler supports concurrency and facilitates communication between processes using dependent session types. Each channel type ch(P) is indexed with a protocol P describing the communication that is expected to be conducted over it. Channel types are linear, so the type system ensures that the only way to consume a channel is to execute its indexing protocol to completion. We have formalized and proven the soundness of these concurrency extensions in Coq, but due to space limitations we will not present the communication calculus in this paper. The examples presented here are meant to illustrate the applications of TLL to concurrent programming.
#### Specification of Concurrent Algorithms
The most immediate application for dependent session types is the precise specification of concurrent algorithms. Figure 41 demonstrates how a channel for conducting concurrent mergesort can be specified. The protocol \(\exists\)(uniq _ (msort xs)) \(\rightarrow\)\(\bullet\) indexing the channel type requires that a value of the singleton type uniq _ (msort xs) be sent on the channel before it can be closed. So any process with a channel of the type cmsort_ch<>(xs) is expected to send a list that is logically equal to xs sorted by the sequential implementation of mergesort msort<t>.
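Figure 41's body is not reproduced here; based on the protocol quoted above, the channel specification plausibly has the following shape. The use of the `logical` keyword, the `nat` element type and the exact declaration syntax are assumptions of ours.

```
-- A provably correct sort of xs must be sent on the channel, then it is closed.
logical cmsort_ch<t> (xs : list<nat,t>) : L =
  ch(∃(uniq _ (msort xs)) → •)
```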
Figure 42 presents a worker function utilizing a channel c of type cmsort_ch<t>(zs0). A list is expected to be sent on channel c along with a proof that it is exactly equal to zs0 sorted by the sequential implementation. Notice that in the splitting case for cmsort_worker<t>, two child-processes are spawned by fork to carry out sorting for the list halves xs0 and ys0 in a concurrent manner. The results of the child-processes are communicated back to their parent-process through the channels r1 and r2 which are of types cmsort_ch<t>(xs0) and cmsort_ch<t>(ys0) respectively. This means that the lists xs1 and ys1 received from r1 and r2 come with proofs that they are logically equal to msort xs0 and msort ys0. Finally, the merged list zs1 is sent on channel c with a proof that it is equal to the original input zs0 sorted sequentially.
Figure 41. Protocol Specification for Concurrent Mergesort
Figure 42. Verified Concurrent Mergesort (Excerpt)
### Implicit Secrecy Synchronization
The dependent session types found in the TLL compiler have access to the same computational irrelevancy and erasure mechanism enjoyed by \(\lambda\)0-programs. For instance, \(\Uparrow\{x:A\}\to B\) is a protocol which expects to send a computationally irrelevant message \(x\) of type \(A\) then continue as protocol \(B\). After erasure, the message sent in place of \(x\) is replaced by \(\Box\). At first glance, sending and receiving irrelevant messages may seem meaningless as \(\Box\) communicates no information between sender and receiver. However, one must remember that these irrelevant messages can still be used to instantiate protocols, so a form of implicit synchronization can be achieved without actually sending messages across the channel.
Figure 43 demonstrates how computational irrelevancy can be used to encode the Diffie-Hellman key exchange [9] as a session type. The parameters \(\mathsf{p}\) and \(\mathsf{g}\) are public values that both parties agree on. From Alice's perspective, she first sends her secret value \(\mathsf{a}\) as an irrelevant message to initialize her half of the protocol. Next, her public value \(\mathsf{A}\) is sent as a relevant message to Bob along with a proof that \(\mathsf{A}\) is correctly computed from values \(\mathsf{p}\), \(\mathsf{g}\) and \(\mathsf{a}\). At this point, Alice has finished sending messages and waits for messages from Bob to complete the key exchange. She first receives Bob's secret \(\mathsf{b}\) as an irrelevant message which initializes his half of the protocol. Later, Bob's public value \(\mathsf{B}\) is received as a relevant message along with a proof that \(\mathsf{B}\) is correctly computed from value \(\mathsf{p}\), \(\mathsf{g}\) and \(\mathsf{b}\). Notice that between Alice and Bob, only the relevant messages \(\mathsf{A}\) and \(\mathsf{B}\) will be exchanged at runtime. The secret values \(\mathsf{a}\) and \(\mathsf{b}\) and other correctness proofs will be pruned by erasure because they are irrelevant. Basically, computational irrelevancy has allowed us to synchronize values implicitly while also maintaining their secrecy.
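For reference, the arithmetic that the exchanged proofs speak about is the standard Diffie-Hellman computation (standard background, not specific to TLL):
\[\mathtt{A}=g^{a}\bmod p,\qquad\mathtt{B}=g^{b}\bmod p,\qquad\mathtt{B}^{a}\bmod p=\mathtt{A}^{b}\bmod p=g^{ab}\bmod p,\]
so after the session each party derives the shared secret \(g^{ab}\bmod p\) from its own secret exponent and the other party's public value, even though the irrelevant messages carrying \(a\) and \(b\) are never actually transmitted.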
Figure 44: Alice (holder of secret \(\mathsf{a}\)) and Bob (holder of secret \(\mathsf{b}\))
Figure 43: Diffie-Hellman Session Protocol
Two simple programs \(\mathtt{alice}\) and \(\mathtt{bob}\) implementing the DH key exchange are presented in Figure 44. In this definition, the secret value \(\mathtt{b}\) that \(\mathtt{alice}\) receives from \(\mathtt{bob}\) is computationally irrelevant. She can only use \(\mathtt{b}\) in conjunction with \(\mathtt{pf}\) of type \(\mathtt{B}\equiv\mathtt{pow}\;\mathtt{g}\;\mathtt{b}\;\%\;\mathtt{p}\) at the logical level for hypothetical reasoning since \(\mathtt{b}\) will not actually be sent to her at runtime. Likewise, \(\mathtt{bob}\) does not gain access to a computationally relevant copy of \(\mathtt{a}\) either. One can even inspect the intermediate representation (and \(\mathtt{C}\) code) generated for \(\mathtt{alice}\) and \(\mathtt{bob}\) by the compiler to confirm that secret values are not transmitted. In fact, the generated code simply assigns \(\mathtt{\Box}\) to \(\mathtt{b}\) and \(\mathtt{a}\) for \(\mathtt{alice}\) and \(\mathtt{bob}\) respectively. Furthermore, the ability to type check \(\mathtt{alice}\) and \(\mathtt{bob}\) independently of each other enables modular and scalable software development.
The RSA encryption algorithm (Krause et al., 2017) can also be encoded as a session type in a similar manner by using computational irrelevancy to specify the relationship between public keys and private keys. We present a specification of the RSA protocol and a client-server pair implementing the protocol in the appendix.
## 11. Related Work
### Computational Relevancy
Over the years, a number of mechanisms have been proposed for specifying computational relevancy and program extraction. The Dependent ML (DML) of Xi (Zhou et al., 2017) uses a stratified language where the static fragment is irrelevant and the dynamic fragment is relevant. A special class of indexed singleton types carries information between the statics and the dynamics. Miquel (Miquel, 2017) introduces the Implicit Calculus of Constructions (ICC) which extends the standard Calculus of Constructions of Coquand (Coquand, 2017) with intersection types. Intersection types allow implicitly quantified terms to be instantiated with hypothetical arguments that are not explicitly present in the syntax tree. ICC programs essentially never carry around irrelevant terms in the first place. Due to the fact that these instantiating arguments must be synthesized spontaneously without additional information from the syntax tree, type checking for ICC is undecidable. Barras and Bernardo (Barras and Bernardo, 2017) develop a decidable variation of ICC (\(\mathrm{ICC}^{\star}\)) by requiring explicit instantiations for implicit quantifiers. An erasure procedure is then carried out to remove the arguments of implicit instantiations.
From a computational relevancy perspective, TLL can be viewed as an integration of DML style stratification and \(\mathrm{ICC}^{\star}\) style implicit quantification. On the one hand, stratification distinguishes proofs from programs in a straightforward manner that enforces types and assumed axioms to always be irrelevant. Additionally, the operational semantics of the two levels can be tailored to better facilitate reasoning and computation independently of each other. On the other hand, implicit quantifications allow for Martin-Lof style dependency (Mattin and Lof, 2017) which is more expressive than DML style dependency.
### Combining Dependency and Linearity
Linear types are a class of type systems inspired by Girard's sub-structural Linear Logic (Girard, 2017). Girard notices that the weakening and contraction rules of classical logic, when restricted carefully, give rise to a new logical foundation for reasoning about resources. Wadler (Wadler, 2017) applies an analogous restriction to variable usage in simple type theory, leading to the development of linear type theory where expressions respect resources. Programming languages featuring linear types (Bordner, 2017) or affine-like types (Girard, 2017) have been implemented, allowing programmers to write resource safe software in practical applications.
Work has been done to enrich linear type theories with dependent types. Cervesato and Pfenning (Cervesato and Pfenning, 2017) extend LF (Girard, 2017) with linear types as LLF, being the first to demonstrate that dependency and linearity can coexist within a type theory. The ATS programming language (Zhou et al., 2017) extends DML style
dependent types with linear types to facilitate safe effectful programming. Vakar (Vakar, 2018) presents a dependent linear type theory ILDTT with an underlying categorical semantics. Krishnaswami et al. (2018) introduce a dependent linear type theory \(\text{LNL}_{D}\) based on Benton's (Benton, 2017) prior work of mixed linear and non-linear calculus. Although \(\text{LNL}_{D}\) also employs a stratified type system, the stratification here is not used for specifying computational relevancy. Instead, the stratification of \(\text{LNL}_{D}\) is used for separating linear types from non-linear types and Miquel style intersection types are used for encoding computational relevancy.
The works described so far all prohibit types from depending on linear terms in order to prevent resource duplication within types. Luo and Zhang (2017) are the first to describe a system where linear dependency is allowed. To accomplish this, they introduce the notion of _essential linearity_ which points out that types are hypothetical entities so linear terms occurring inside types should not contribute to overall resource consumption. Our work is inspired by the idea of _essential linearity_ and also supports linear dependency which allows one to prove theorems regarding linear programs.
Based on initial ideas of McBride (McBride, 2017), Atkey's QTT (Benton, 2017) uses semi-ring annotations to track variable occurrence, simulating computational relevancy, linear types and affine types within a unified framework. The heap semantics analysis of Choudhury et al. (2017) shows that QTT requires a form of reference counting to garbage collect unneeded resources at runtime. The soundness theorems for our heap semantics guarantee TLL programs to be memory clean without runtime garbage collection.
## 12. Conclusion and Future Work
TLL is a two-level dependent type theory that aims to characterize the nature of proofs and programs faithfully. Hosting a structural type system that is reminiscent of Martin-Lof type theory (McBride, 2017), the logical level derives hypothetical objects such as proofs and types which are computationally irrelevant. The program level uses the types and proofs derived at the logical level to realize a Linear Logic (Choudhury et al., 2017) inspired type system. Programs constructed using the program level rules can be freely reflected into the logical level for hypothetical reasoning. We develop an erasure procedure for removing irrelevant terms occurring inside programs and show that programs extracted this way maintain computational productivity. Through a heap semantics analysis we prove that extracted TLL programs run memory clean.
We plan to investigate what additional extensions are possible with the additional flexibility afforded to us by TLL's stratified design. Currently, we have extended TLL with session type based concurrency and intend to present a detailed account of the calculus in the future. We are interested to see if our approach can be refined for specifying multiparty communication and conducting end-to-end verification of cryptographic protocols. Our ultimate goal is to strive towards a framework that synergistically unites theorem proving and practical programming.
|
2309.03726 | Interpretable Visual Question Answering via Reasoning Supervision | Transformer-based architectures have recently demonstrated remarkable
performance in the Visual Question Answering (VQA) task. However, such models
are likely to disregard crucial visual cues and often rely on multimodal
shortcuts and inherent biases of the language modality to predict the correct
answer, a phenomenon commonly referred to as lack of visual grounding. In this
work, we alleviate this shortcoming through a novel architecture for visual
question answering that leverages common sense reasoning as a supervisory
signal. Reasoning supervision takes the form of a textual justification of the
correct answer, with such annotations being already available on large-scale
Visual Common Sense Reasoning (VCR) datasets. The model's visual attention is
guided toward important elements of the scene through a similarity loss that
aligns the learned attention distributions guided by the question and the
correct reasoning. We demonstrate both quantitatively and qualitatively that
the proposed approach can boost the model's visual perception capability and
lead to performance increase, without requiring training on explicit grounding
annotations. | Maria Parelli, Dimitrios Mallis, Markos Diomataris, Vassilis Pitsikalis | 2023-09-07T14:12:31Z | http://arxiv.org/abs/2309.03726v1 | # Interpretable Visual Question Answering via Reasoning Supervision
###### Abstract
Transformer-based architectures have recently demonstrated remarkable performance in the Visual Question Answering (VQA) task. However, such models are likely to disregard crucial visual cues and often rely on multimodal shortcuts and inherent biases of the language modality to predict the correct answer, a phenomenon commonly referred to as _lack of visual grounding_. In this work, we alleviate this shortcoming through a novel architecture for visual question answering that leverages _common sense reasoning as a supervisory signal_. Reasoning supervision takes the form of a textual justification of the correct answer, with such annotations being already available on large-scale Visual Common Sense Reasoning (VCR) datasets. The model's visual attention is guided toward important elements of the scene through a similarity loss that aligns the learned attention distributions guided by the question and the correct reasoning. We demonstrate both quantitatively and qualitatively that the proposed approach can boost the model's visual perception capability and lead to performance increase, without requiring training on explicit grounding annotations.
Maria Parelli\({}^{{\dagger}{\ddagger}}\) Dimitrios Mallis\({}^{{\dagger}}\) Markos Diomataris\({}^{{\dagger}{\ddagger}}\)1 Vassilis Pitsikalis\({}^{{\dagger}}\)\({}^{{\dagger}}\) DeepLab, Athens, Greece
\({}^{{\ddagger}}\) ETH Zurich, Zurich, Switzerland [email protected], [email protected], [email protected], [email protected]
Keywords: Visual Question Answering, Visual Grounding, Interpretability, Attention Similarity
Footnote 1: Work was done while Markos Diomataris was with DeepLab.
## 1 Introduction
Models for Visual Question Answering (VQA) provide answers to natural language questions about an image by perceiving both textual and image cues. VQA lies at the intersection of vision and language and has recently generated significant research interest. Existing methods aim to tackle the task via deep multi-layer transformer architectures, attending to linguistic and visual tokens [1, 2, 3, 4, 5]. However, despite their superior performance, attempts to diagnose these models' robustness and reasoning capability have revealed that they often rely on linguistic biases and shallow correlations to generate the correct answer [6, 7]. The language modality has been proven a strong signal that is easy to exploit, causing the model to overlook visual information and rely on shallow patterns, such as correlations between words in the question [8]. It has been shown that the performance of recent models can clearly degrade under evaluation settings that penalize reliance on such spurious correlations [9, 10, 11].
This tendency of recent models to reason about the correct answer without attending to the relevant image areas has been referred to as _lack of visual grounding_[12, 13]. To alleviate this, a line of work explores techniques for training VQA models that are sensitive to the same image regions as human annotators, commonly by enforcing alignment with human attention maps [12, 14]. While such methods can reduce reliance on language biases, they also require explicit grounding supervision that is rarely available. In this work, we explore an alternative approach towards attending to informative image regions, that does not require explicit grounding supervision, but leverages instead _common sense reasoning_ as a supervisory signal.
We take advantage of the fact that reasoning-level supervision in the form of textual justification of why an answer is true, is already available in large-scale Visual Common
Figure 1: This work proposes a novel mechanism for leveraging _common sense reasoning as a supervisory signal_. Our VQA model, guided by the correct reasoning (R:[person1] is holding the cigarette), is able to attend to the appropriate image regions and accurately select the right answer (A:[person1] is smoking).
sense Reasoning datasets like [15]. For example, in Fig. 1, to answer the **question**_'What is [person1] doing?'_, the **reasoning**_'[person1] is holding a cigarette and is leaned over it'_ can accurately guide a model's visual attention towards predicting the correct **answer**, _'[person1] is smoking'_. The correct reasoning often contains details of the scene and references to objects and people relevant to the right answer. Our VQA model is trained to utilize reasoning supervision as a proxy signal to generate interpretable attention maps that guide visual attention toward informative image regions.
Our proposed framework processes question/answer pairs using a multilayer BERT [16] transformer architecture. A separate visual attention stream is incorporated to generate two attention distributions, one conditioned on the question and the other on the correct reasoning. We distill knowledge from the reasoning attention to our VQA model through a similarity loss term, that encourages question and reasoning attention alignment. Our model can accurately capture the visual components required to find the correct answer and produce interpretable, human-like attention maps, thus boosting baseline performance. We evaluate our pipeline both quantitatively and qualitatively on the Visual Commonsense Reasoning dataset [15], a large-scale dataset for cognition-level visual understanding. To the best of our knowledge, we are one of the first works to employ implicit attention guidance, free from explicit grounding supervision in a vision-language transformer setting.
## 2 Related Work
The main VQA paradigm is multi-layer transformers operating on joint image-text embeddings [3, 5, 1, 3]. These methods benefit from extensive pre-training on large-scale VL datasets, to extract meaningful image-text representations and align visio-linguistic clues. One notable example is VL-BERT [2], a model that is pre-trained on text-only corpora with standard Masked Language Modeling (MLM) as well as visual-linguistic corpora via predicting randomly masked words and Regions of Interest (RoIs) of the image.
Despite superior performance, state-of-the-art VQA models can often make decisions by relying on shortcuts and statistical regularities instead of comprehending the scene as demonstrated in [10]. Similarly, authors in [17] identify that VQA models exploit co-occurrences of words in the question and object segments in the image, which they define as multimodal shortcuts.
In an attempt to counter shortcuts and language priors, some methods encourage the model to effectively attend to visual components and infer visual relationships. The authors of [12] align gradient-based explanations with human attention annotations via a ranking loss to guide the network to focus on the correct image regions. The authors of [14] train an attention auxiliary model with ground truth human-labeled attention maps and consequently apply human-like attention supervision to an attention-based VQA model. Another work in this direction [18] proposes a method that automatically selects region and object annotations from Visual Genome [19] that serve as labels for implementing visual grounding as an auxiliary task for VQA. In contrast to these approaches, this work mitigates over-reliance on language priors without requiring annotated attention maps. We train our network instead, to look at the image and attend to meaningful visual evidence through reasoning supervision.
## 3 Methodology
**Problem statement.** A VQA model is tasked with answering natural language questions from the visual content of a scene. Given a dataset \(\mathcal{X}=\{u_{i},q_{i},a_{i},r_{i}\}_{i=1}^{N}\) of \(N\) images where \(u_{i}\in V\) is the visual input with question \(q_{i}\in Q\), reasoning \(r_{i}\in R\) and groundtruth answer \(a_{i}\in A\), our goal is to learn a function \(f:Q\times V\rightarrow\mathbb{R}^{A}\) that predicts a distribution \(P(A)\) over possible answers in \(A\). Our proposed pipeline consists of two parallel streams, a _language stream_ with model parameters \(\theta_{L}\) and a _visual attention stream_ with model parameters \(\theta_{V_{q}}\) and \(\theta_{V_{r}}\) (question and reasoning guided attention decoder that we will discuss next). During training, we will utilize reasoning supervision as an additional supervisory signal, thus modeling \(P(A|u_{i},q_{i},r_{i};(\theta_{L},\theta_{V_{q}},\theta_{V_{r}}))\) that simplifies to \(P(A|u_{i},q_{i};(\theta_{L},\theta_{V_{q}}))\) at test time.
**Language Stream.** The first stream is language-focused and aims to generate an informative representation of the input question and answer sentence pairs by modeling their relationship. The core of its architecture is a bi-directional 12-layer transformer initialized with weights from BERT [16]. It takes a sequence of word embeddings of the question and answer as input (separated by a separation element [SEP]) and adds a sequence positional embedding to each token. The final output feature \(x_{[CLS]}\) of the [CLS] element is used to obtain the final pooled linguistic representation.
**Visual Attention Stream.** The visual attention stream consists of two 9-layer transformer decoders. The first one generates an attention vector over the image features guided by the question, and the second an attention vector over the image features guided by the correct reasoning. We take advantage of the cross-attention module to perceive multimodal information and capture relationships between image features and word embeddings. The process is as follows: The image is first processed via the backbone of a ResNet-50-FPN to extract visual appearance features. The output is a feature map \(\mathcal{F}\in\mathbb{R}^{H\times W\times 256}\), which we treat as a sequence of 256-dimensional image features. Following [2], a visual geometric embedding is added to each input token to inject 2D awareness into the model. We also encode the question and correct reasoning language tokens via a pre-trained BERT model, which yields a 768-dim representation for each word.
Question and reasoning word embeddings are used as input to the corresponding question and reasoning transformer decoders (functioning as query tokens). The image visual features \(\mathcal{F}\) are used to generate the keys and values. Then, the attention weights are calculated based on the pairwise similarity of the query and key elements. The output of each decoder is an attention distribution over the image regions \(\alpha\in\mathbb{R}^{H\times W}\), conditioned on either the word embeddings of the question (referred to as \(\alpha^{Q}\)) or the word embeddings of the correct reasoning (referred to as \(\alpha^{R}\)). In practice, to obtain the final attention vectors \(\alpha^{Q}\) and \(\alpha^{R}\), we compute the average per-head attention of the last layer generated by the \([CLS]\) token over the image features.
The generated attention map \(\alpha^{Q}\), is then used to take the _weighted sum_ over the image features \(\mathcal{F}\), which is passed through a linear layer to obtain the final _attended-by-the-question_ representation of the image, \(V_{q}\). The same operation is performed to obtain the _attended-by-the-reasoning_ image representation \(V_{r}\). Formally,
\[\begin{split} V_{q}&=Linear(\alpha^{Q}\odot \mathcal{F})\\ V_{r}&=Linear(\alpha^{R}\odot\mathcal{F})\end{split} \tag{1}\]
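To make the attention stream more concrete, the PyTorch-style sketch below shows one guided cross-attention decoder producing an attention distribution over image regions and the corresponding attended visual representation of Eq. (1). The module sizes, the single decoder layer (standing in for the 9-layer stack), the assumption that the BERT word embeddings have already been projected to the decoder width, and the output projection to the language-stream feature size are all illustrative choices of ours, not the exact configuration used in the paper.

```python
import torch
import torch.nn as nn

class VisualAttentionDecoder(nn.Module):
    """One guided attention decoder (question- or reasoning-conditioned)."""
    def __init__(self, d_model=256, d_out=768, n_heads=8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.proj = nn.Linear(d_model, d_out)   # map to the language-stream feature size (assumed 768)

    def forward(self, word_emb, image_feats):
        # word_emb: (B, T, d_model) query tokens; image_feats: (B, H*W, d_model)
        # keys/values obtained by flattening the ResNet-50-FPN feature map over space.
        _, attn_w = self.cross_attn(word_emb, image_feats, image_feats)
        # Attention of the leading ([CLS]-like) query token over the image regions,
        # averaged over heads (the default behaviour of nn.MultiheadAttention).
        alpha = attn_w[:, 0, :]                 # (B, H*W), sums to 1 over regions
        # Attention-weighted sum of the image features followed by a linear layer, as in Eq. (1).
        v = self.proj((alpha.unsqueeze(-1) * image_feats).sum(dim=1))
        return alpha, v
```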
**Combining Language and Visual Streams.** The model outputs two separate predictions, one conditioned on the question \(y_{q}\) and the other on the correct reasoning \(y_{r}\). These are produced by fusing the outputs of the _language stream_ \(x_{[CLS]}\) and the _visual attention stream_, via Hadamard multiplication and then passing them through a softmax classifier \(s\), thus \(y^{q}=s(x_{[CLS]}\odot V_{q})\) and \(y^{r}=s(x_{[CLS]}\odot V_{r})\). At test time, \(y^{q}\) is used to provide predictions over possible answers.
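A minimal sketch of this fusion step, assuming a 768-dimensional pooled feature and a single linear layer as the softmax classifier over the four VCR answer candidates (both of these are our assumptions, not details stated by the paper):

```python
import torch
import torch.nn as nn

classifier = nn.Linear(768, 4)                        # 4 answer candidates per VCR question
x_cls = torch.randn(2, 768)                           # pooled [CLS] feature from the language stream
v_q, v_r = torch.randn(2, 768), torch.randn(2, 768)   # attended visual representations V_q, V_r
y_q = torch.softmax(classifier(x_cls * v_q), dim=-1)  # question-conditioned prediction (used at test time)
y_r = torch.softmax(classifier(x_cls * v_r), dim=-1)  # reasoning-conditioned prediction (training only)
```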
**Training.** Our training pipeline consists of 2 stages. In the first stage, we train with two cross-entropy loss terms \(\mathcal{L}_{q}\) and \(\mathcal{L}_{r}\) w.r.t. the ground truth answer \([a_{i}]\), or
\[\mathcal{L}_{stage_{1}}=-\frac{1}{N}\sum_{i}^{N}\log(y_{i}^{q})[a_{i}]-\frac{1} {N}\sum_{i}^{N}\log(y_{i}^{r})[a_{i}] \tag{2}\]
In the second stage, we distill knowledge from the reasoning decoder by aligning the attention distributions conditioned on the question \(\alpha^{Q}\) to the attention distributions conditioned on the correct reasoning \(\alpha^{R}\). To that end, we freeze the weights of the reasoning attention decoder and only fine-tune the question attention decoder (through \(\mathcal{L}_{q}\)) while also utilizing an attention similarity loss, formulated as the forward Kullback-Leibler divergence between attention maps \(\alpha^{Q}\) and \(\alpha^{R}\), or \(D_{KL}(\alpha^{Q}||\alpha^{R})\). The complete \(\mathcal{L}_{stage_{2}}\) loss is:
\[\mathcal{L}_{stage_{2}}=-\frac{1}{N}\sum_{i}^{N}\log(y_{i}^{q})[a_{i}]+\frac{1 }{N}\sum_{i}^{N}\alpha_{i}^{Q}\log(\frac{\alpha_{i}^{Q}}{\alpha_{i}^{R}}) \tag{3}\]
The whole process is illustrated in Figure 2. Our model is trained for 11 epochs for stage 1 and then finetuned for 5 more epochs in stage 2.
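The stage-2 objective of Eq. (3) can be sketched as follows; taking pre-softmax logits for the cross-entropy term and adding a small epsilon inside the logarithms are implementation conveniences of ours, not choices stated in the paper.

```python
import torch
import torch.nn.functional as F

def stage2_loss(logits_q, answer_idx, alpha_q, alpha_r, eps=1e-8):
    """Sketch of Eq. (3): cross-entropy on the question-conditioned prediction plus
    the forward KL divergence between the question-guided attention alpha_q and the
    reasoning-guided attention alpha_r (the latter comes from the frozen decoder).
    alpha_q, alpha_r: (B, H*W) distributions over image regions."""
    ce = F.cross_entropy(logits_q, answer_idx)
    kl = (alpha_q * ((alpha_q + eps).log() - (alpha_r + eps).log())).sum(-1).mean()
    return ce + kl
```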
## 4 Experiments
**Dataset.** We validate our VQA model on the Visual Commonsense Reasoning dataset [15], which consists of 290k QA problems derived from 110k movie scenes. Four possible answers and four rationales are provided for each question,
Figure 2: Proposed VQA architecture: Our model comprises 2 main streams that operate in parallel, a _language_ and a _visual_ stream. The output of the 2 streams is fused via Hadamard multiplication to obtain the final prediction. During training, we utilize a _reasoning_ attention decoder to distil reasoning information into the model, through a similarity loss between question- and reasoning-guided attention maps. Reasoning supervision leads to the formation of interpretable attention maps.
but we use only the correct rationale/reasoning. Note that reasoning is only used during training as additional supervision.
**Quantitative Evaluation.** Results in terms of model accuracy are reported in Table 1. Our baseline model (only the question decoder) achieves \(61.2\%\) accuracy on the validation set. Finetuning by aligning question and reasoning attention distributions yields \(63.9\%\), that is a \(2.7\%\) absolute improvement, thus demonstrating the benefit of reasoning supervision. We note that our main goal is to propose a novel training strategy for boosting a VQA model's visual explanatory strength by exploiting reasoning as an alternative supervisory signal. Thus, we do not directly compare to methods such as [2, 3, 20] that contain a larger number of parameters, leverage large-scale VL and video pretraining or ground-truth object bounding boxes. For comparison, the best performance reported in R2C [15] was \(63.8\%\).
To further investigate our model's ability to leverage the visual modality, we perform an ablation study where we mask the visual features of the objects/people referenced by the question at test time and measure the effect on accuracy. Results are reported in Table 2. We observe that the baseline VQA model (that does not fully alleviate the lack of visual grounding) suffers a smaller performance degradation of \(1.9\%\), compared to \(2.8\%\) for our model finetuned with reasoning supervision. This is a different manifestation of the fact that the baseline model is over-reliant on the language modality: its performance is penalized less when visual information is removed by object masking.
**Visual Results.** In Fig. 3, we visualize attention maps (\(\alpha^{Q}\)) for both the baseline model _(above)_ and finetuned model (with reasoning supervision) _(below)_. The correct reasoning can intuitively provide important guidance during training. For example, for the question (Q: Which person is the lead for this dance group?), the reasoning (R: [1] is in the middle, which is generally where the main dancer goes) clearly explains the dynamics of different elements of the scene. This information is distilled into our VQA model through our attention similarity loss. In Fig. 3, we observe that after fine-tuning, visual attention improves. Our method is able to produce interpretable, human-like attention maps, thus being able to predict the correct answer by perceiving relevant visual concepts.
## 5 Conclusion
In this work, we alleviate the lack of visual grounding through reasoning supervision. This additional supervision takes the form of textual justifications of the correct answer, which are already available in VCR datasets. We incorporate a similarity loss that encourages the alignment between the visual attention maps guided by the question and by the correct reasoning, thus improving the model's visual perception capability. We demonstrate qualitatively and quantitatively that reasoning information can lead to interpretable attention maps and a performance increase for visual question answering.
\begin{table}
\begin{tabular}{l|c} \hline \hline Model & Acc(\%) \\ \hline _Baseline model_ & 61.2 \\ _Reasoning Supervision_ & **63.9** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Accuracy of the baseline and our proposed model finetuned with _reasoning supervision_ on VCR.
\begin{table}
\begin{tabular}{l|c} \hline \hline Model & Acc. drop(\%) \\ \hline _Baseline model_ & 1.9 \\ _Reasoning Supervision_ & 2.8 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Performance drop on the VCR validation set due to object masking.
Figure 3: Comparison of question-guided attention maps (only the question attention decoder) before _(first row)_, and after fine-tuning with reasoning supervision _(second row)_. We observe that the finetuned model is able to attend to informative regions. |
2309.05431 | General theory for plane extensible elastica with arbitrary undeformed
shape | A general expression for the strain energy of a homogeneous, isotropic, plane
extensible elastica with an arbitrary undeformed configuration is derived. This
energy constitutes the correct expression for one-dimensional models of
polymers or vesicles, whose natural configuration is characterized by locally
changing curvature. We derive the macroscopic stress-strain relations,
providing a universal criterion for the neutral curve location. In this
respect, we demonstrate that the natural curve existence constitutes the
fundamental requirement for the conformational dynamics of any inextensible
biological filament. | Alessandro Taloni, Daniele Vilone, Giuseppe Ruta | 2023-09-11T13:07:04Z | http://arxiv.org/abs/2309.05431v1 | # General theory for plane extensible elastica with arbitrary undeformed shape
###### Abstract
A general expression for the strain energy of a homogeneous, isotropic, plane extensible elastica with an arbitrary undeformed configuration is derived. This energy constitutes the correct expression for one-dimensional models of polymers or vesicles, whose natural configuration is characterized by locally changing curvature. We derive the macroscopic stress-strain relations, providing a universal criterion for the neutral curve location. In this respect, we demonstrate that the natural curve existence constitutes the fundamental requirement for the conformational dynamics of any inextensible biological filament.
keywords: Elasticity, neutral fiber, linear deformations +
Footnote †: journal: International Journal of Engineering Science
## Introduction
The old theories of flexure were based on the assumption that the elastica strain consists of extension and contraction of longitudinal filaments, called fibers [1; 2; 3]. In this context, the reduction of the problem to the one-dimensional plane strain [4] was naturally taken as the compelling point of view. For instance, Lagrange writes about the elastica as a _fil flexible et en meme temps extensible et contractible_ (a flexible thread that is at the same time extensible and contractible) [5]. Yet, in Kirchhoff's theory the
three-dimensional deformation of a slender elastic rod is reduced to the bending deformation of a one-dimensional curve [6]. Moreover, the theory of the bending and twisting of thin rods and wires was for a long time developed independently of the general three-dimensional equations of elasticity, by one-dimensional methods akin to those employed by Bernoulli and Euler [7]. However, the problem of which, among the fibers composing the elastica, was eligible to be taken as the representative curve, remained unsettled until the profound works of Parent and Coulomb. Already in 1695, Jacob Bernoulli had argued that the bending moment had to be taken, at each cross-section, with respect to the point where it intersects the _line of fulcra_, i.e. the line which does not suffer any extension or contraction throughout the deformation: the neutral fiber. If the tensions vary linearly over the rod's rigid cross-section, the neutral line coincides with its central line. Although this fact was already known to Beeckman, Hooke, Huygens, Varignon and Mariotte [3, 8], it was only in 1713 that these observations took the mathematical form of the Parent criterion [9].
The issue of the existence and ensuing location of the neutral fiber, avoiding any specific elastic hypothesis about the law of variation of the tensions over the cross-section, remains today unjustly unattended by modern scientists. Indeed, what was considered as _the_ elastic problem for a hundred years has been regarded with condescension by later historians, mostly because it is wrongly assumed that later works on three-dimensional elasticity somehow fixed it. However, in modern theories of bending, such as Saint Venant's, the existence of the neutral line is implicitly postulated _ad hoc_, not demonstrated [10, 3]. Our purpose, in this paper, is to tackle a problem of a more limited reach: to identify a criterion for the existence and placement of the neutral fiber when the linearity of Hooke's law is assumed, but the starting configuration of the elastica is not necessarily straight, as in the classical slender beam theory, but arbitrary.
The problem of spontaneously curved bands was already known to Jacob Bernoulli, as testified by a posthumous fragment published in 1744 [10]. In the same year Euler's treatise on elastic curves also appeared, in which the law of an elastica endowed with a non-zero curvature was asserted [3]. A systematic theory of rods with a curved undeformed configuration is substantially due to Clebsch [11], although outlined by Kirchhoff [6], to whom the notion of "unstressed state" should probably be credited. Only in Timoshenko's theory of curved beams [12], however, is it shown that the neutral fiber does not correspond to the line of centroids. Surprisingly,
in the 20th century the problem of the elastica whose undeformed shapes are circular arcs or rings, in particular, has been taken up by several authors [13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24], seemingly neglecting Timoshenko's work. As a matter of fact, when the one-dimensional representation of the naturally skew elastica is provided, usually this corresponds to the line of centroids [14, 15], assumed, sometimes tacitly, to be the neutral fiber, which can be extensible [14, 15, 13, 16, 25, 26, 27], or inextensible [17, 18, 19, 20, 21, 22, 23, 24].
In this paper we consider the deformation of an elastica with a spontaneous curvature which is not constant, thus generalizing the theory of curved beams. Our work is motivated by the fact that natural biological filaments cannot be considered straight in their native state, nor endowed with a uniform curvature. To the contrary, biological materials develop into a variety of complex shapes, which can sense and respond to local curvature. This field of mechanobiology [28] poses fundamental questions about the function of geometry and, particularly, on the role that the local curvature plays in biological systems [29]. In particular, the curvature appears as a determining factor in important biological functional tasks executed by many bio-polymers inside the cell [30, 31, 32, 33, 34], as well as in the design of next-generation genomics technologies [35].
On top of that, none of the three major classes of cytoskeletal filaments, microtubules, actin filaments, and intermediate filaments, can be considered homogeneous nor isotropic, as each is formed from chains of discrete protein monomers [36]. At the typical level of description in usual molecular mechanics models, however, they are often treated as traditional engineering elements [37]: they are considered plane elasticae whose mechanical behavior is entirely dictated by the Bernoulli-Euler theory. Our model does not deviate from this conceptual framework; however, our construction can be easily generalized to realistically account for non-homogeneous filaments with varying geometrical and mechanical properties.
Our starting point is the derivation of the strain energy associated to any three-dimensional elastica deformation, as the energy cost of a shape fluctuation is the significant starting concept of any dynamical model in polymer physics [38, 39]. The one-dimensional representation of the elastica arises as the natural context for the strain energy formulation (Section 1). The macroscopic constitutive relations are obtained from the strain energy function (Section 2), and in Section 3 we demonstrate that the strain energy minimization furnishes the criterion for locating the neutral curve. We conclude
the paper in Section 4, discussing the implication of our findings in terms of polymer dynamics and statistics, providing the evidence that the developed framework yields the correct generalization of flexible and semiflexible polymers models, and of simplified one-dimensional models of fluctuating vesicles. We discuss the relevance that the old-fashioned concept of neutral fiber acquires in modern polymer physics. We show, indeed, that a renovated interest around it, is motivated by the fact that its existence is provided by the minimum energy principle at the base of any polymer conformational change. We discuss the correct and convenient way of implementing our model in computer simulations and, finally, the relevance of our model in describing real biological non-homogeneous filaments.
## 1 The Model
We consider the deformation of a homogeneous _plane extensible elastica_ [14]. This is defined as a hyperelastic three-dimensional body of rectangular section, thin in two of its dimensions (height \(h\) and depth \(b\)), but not in its length, and subject to the only restriction that its deformations take place in a plane, thus excluding twisting. According to the Cosserat theories [40; 41; 42; 43; 44], any elastic deformation stems from an undeformed body configuration or undeformed state, defined as the shape corresponding to the unstressed condition [7]. The work performed in passing from the undeformed to the deformed state is the strain or stored energy, which is the pivotal concept in our model.
We propose to compute the strain energy using a finite difference scheme, as displayed in Fig. 1. The undeformed body is schematically subdivided in \(N\) elementary blocks of arbitrary volumes \(\Delta V^{(i)}\) (\(i=[1,N]\)) (Fig. 1A). To each elementary volume is assigned a local reference system, where the \(\xi\) and \(\eta_{\alpha}\) axes define respectively the block's longitudinal and transverse directions, and the origin is arbitrarily placed at an height \(\alpha h\) (\(0\leq\alpha\leq 1\)) from the block's bottom surface, see Fig. 1A, and at the left side of the block as shown in figure. We now introduce a reference segment of length \(\Delta L_{\alpha}^{(i)}\), equivalent to the longitudinal dimension of the plane \(\eta_{\alpha}=0\) (Fig. 1A). Assuming the cross-sections to remain rigid, the body deformation necessarily involves the deformation of each block, passing from the volume \(\Delta V^{(i)}\) to \(\Delta v^{(i)}\). At the same time, the reference segment \(\Delta L_{\alpha}^{(i)}\) transforms into \(\Delta l_{\alpha}^{(i)}\). As in the old theories of elastic flexure, we assume that the stress is determined by the strain [45], and the strain consists of the displacements
of independent longitudinal filaments \(u^{(i)}(\eta_{\alpha})=\Delta l^{(i)}(\eta_{\alpha})-\Delta L^{(i)}(\eta_{\alpha})\). Embracing the microscopic validity of Hooke's law, we sum, _a la_ Leibniz, over the contributions of the fibers across the entire cross section. Thus, the elementary block's strain energy can be written as
\[\Delta E_{\alpha}^{(i)}=\frac{bY}{2}\int_{-\alpha h}^{(1-\alpha)h}\left[\frac{u ^{(i)}(\eta_{\alpha})}{\Delta L^{(i)}(\eta_{\alpha})}\right]^{2}\Delta L^{(i)} (\eta_{\alpha})\ d\eta_{\alpha}, \tag{1}\]
where \(Y\) is a parameter which has the dimensions of a stress and depends on the material properties. Therefore the strain energy (1) represents the work performed in deforming the elementary block from the unstressed configuration reported in Fig. 1A, to that in Fig. 1A\({}^{\prime}\). However, in continuity with the old theories, the strain energy (1) is associated with the deformation of the representative material segment. Nevertheless, \(\Delta E_{\alpha}^{(i)}\) is a scalar quantity and, as such, must be invariant under change of the local reference frame, i.e. change of material representative segment. As a matter of fact, by applying the transformation \(\eta_{\alpha}=\eta_{\beta}-(\alpha-\beta)h\), the property \(\Delta E_{\alpha}^{(i)}=\Delta E_{\beta}^{(i)}\) is always satisfied, as demonstrated in the Supplementary Information (SI). The analytical derivation of the energy (1) is fully developed in SI. We hereby report the final expression:
\[\Delta E_{\alpha}^{(i)}=\frac{Y}{2}\Delta L_{\alpha}^{(i)}\left\{{F_{\alpha}^ {(i)}\,{\varepsilon_{\alpha}^{(i)}}^{2}+2S_{\alpha}^{(i)}\,{\varepsilon_{ \alpha}^{(i)}}\mu_{\alpha}^{(i)}+{I_{\alpha}^{(i)}\,{\mu_{\alpha}^{(i)}}^{2}} }\right\}\, \tag{2}\]
where we have introduced the axial strain
\[\varepsilon_{\alpha}^{(i)}=\frac{\Delta l_{\alpha}^{(i)}-\Delta L_{\alpha}^{( i)}}{\Delta L_{\alpha}^{(i)}} \tag{3}\]
and the bending measure [14]
\[\mu_{\alpha}^{(i)}=\frac{1}{\Delta L_{\alpha}^{(i)}}\left(\frac{\Delta l_{ \alpha}^{(i)}}{r_{\alpha}^{(i)}}-\frac{\Delta L_{\alpha}^{(i)}}{R_{\alpha}^{( i)}}\right). \tag{4}\]
The quantities \(R_{\alpha}^{(i)}\) and \(r_{\alpha}^{(i)}\) are the radii connecting the intersection point between the extensions of the block limiting sections, with the origin of the block's local reference system: they have a sign corresponding to their orientation, as shown in the SI (see Fig. 1A,A\({}^{\prime}\) for the details). In the following we will show how, in the continuum limit, they correspond to the local radii of curvature of the undeformed and deformed configurations, respectively. For
simplicity, from now on we will refer to such quantities as curvature radii also in the discrete case. Likewise, we will define the block spontaneous curvature as \(K_{\alpha}^{(i)}=\frac{1}{\left|R_{\alpha}^{(i)}\right|}\), while \(k_{\alpha}^{(i)}=\frac{1}{\left|r_{\alpha}^{(i)}\right|}\).
Most importantly, according to Timoshenko's curved beam theory [12], \(F_{\alpha}^{(i)}\) represents the reduced area, \(I_{\alpha}^{(i)}\) the reduced moment of inertia and \(S_{\alpha}^{(i)}\) the reduced axial-bending coupling moment. They have a simple integral expression, furnished in SI, corresponding to the formula obtained by Kammel in the theory of a ring allowing axial compressibility and subjected to hydrostatic pressure [13], and subsequently used in the analysis of compressible rings [15]. Our derivation, however, sheds light on a fundamental aspect that seemingly remained unnoticed in past analyses: that is, \(F_{\alpha}^{(i)}\), \(S_{\alpha}^{(i)}\) and \(I_{\alpha}^{(i)}\) take different forms according to whether the intersection between the sidelines containing the block's sections lies above or below the elementary volume, i.e. they depend on the direction of \(R_{\alpha}^{(i)}\) (see SI). This dependence is compactly expressed as
\[F_{\alpha}^{(i)}=\frac{b}{K_{\alpha}^{(i)}}\ln\left(1+h\,\max[K_{0}^{(i)},K_{1 }^{(i)}]\right), \tag{5}\]
\[S_{\alpha}^{(i)}=\frac{\operatorname{sgn}\left(K_{0}^{(i)}-K_{1}^{(i)}\right) }{K_{\alpha}^{(i)}}\left[bh-F_{\alpha}^{(i)}\right], \tag{6}\]
\[I_{\alpha}^{(i)}=\frac{\operatorname{sgn}\left(K_{0}^{(i)}-K_{1}^{(i)}\right) }{K_{\alpha}^{(i)}}\left[b\left(\frac{1}{2}-\alpha\right)\,h^{2}-S_{\alpha}^{ (i)}\right]. \tag{7}\]
If the intersection point lies below the block's bottom line (\(\alpha=0\)), then \(K_{0}^{(i)}>K_{1}^{(i)}\) and the function \(\operatorname{sgn}\left(K_{0}^{(i)}-K_{1}^{(i)}\right)=1\). In the opposite case, the curvature of the upper fiber \(\alpha=1\) is such that \(K_{1}^{(i)}>K_{0}^{(i)}\), and \(\operatorname{sgn}\left(K_{0}^{(i)}-K_{1}^{(i)}\right)=-1\). By inspection of the expression (6), there exists one value of \(\alpha\) for which the stretching and bending terms are decoupled, i.e. \(S_{\alpha}^{(i)}=0\). This value corresponds to
\[\alpha_{U}=\begin{cases}\frac{1}{\ln(1+h\,K_{0})}-\frac{1}{hK_{0}}&K_{0}>K_{1 }\\ \\ 1-\left[\frac{1}{\ln(1+h\,K_{1})}-\frac{1}{hK_{1}}\right]&K_{1}>K_{0}\,\end{cases} \tag{8}\]
with \(S_{\alpha}>0\) for \(\alpha<\alpha_{U}\), and \(S_{\alpha}<0\) for \(\alpha>\alpha_{U}\). Moreover, it is worth noticing that
\[\lim_{K_{0/1}\to 0^{+}}\alpha_{U}=\frac{1}{2}.\]
This means that the fiber allowing the axial-bending uncoupling coincides with the line of centroids in the case of a flat undeformed configuration, recovering the result of the classical beam theories, as shown in the SI.
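For concreteness, the following sketch evaluates Eqs. (5)-(8) for a single curved block. Function and variable names are ours; the curvatures are assumed nonzero (and \(K_{0}\neq K_{1}\) for the section properties), while the flat limit of \(\alpha_{U}\) simply returns the line of centroids, consistently with the limit stated above.

```python
import math

def reduced_section_properties(K0, K1, K_alpha, alpha, h, b):
    """Reduced area F, coupling moment S and reduced moment of inertia I of
    Eqs. (5)-(7) for one curved block; K0, K1 are the bottom/top fibre
    curvatures and K_alpha the curvature of the chosen reference fibre."""
    sgn = 1.0 if K0 > K1 else -1.0
    F = (b / K_alpha) * math.log(1.0 + h * max(K0, K1))
    S = (sgn / K_alpha) * (b * h - F)
    I = (sgn / K_alpha) * (b * (0.5 - alpha) * h**2 - S)
    return F, S, I

def alpha_uncoupling(K0, K1, h):
    """Height fraction alpha_U of Eq. (8) at which S_alpha vanishes; the
    flat case returns 1/2 (the line of centroids), as in the stated limit."""
    if K0 > K1:
        return 1.0 / math.log(1.0 + h * K0) - 1.0 / (h * K0)
    if K1 > K0:
        return 1.0 - (1.0 / math.log(1.0 + h * K1) - 1.0 / (h * K1))
    return 0.5
```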
Formally, the full body strain energy is the sum of the elementary blocks strain energy, i.e. \(E_{\alpha}=\sum_{i=1}^{N}\Delta E_{\alpha}^{(i)}\). In our finite difference scheme, the ordered sequence of consecutive \(\Delta L_{\alpha}^{(i)}\) constitutes the representative undeformed material polygonal chain \(L_{\alpha}\) in Fig. 1B. Correspondingly, the ordered sequence of consecutive \(\Delta l_{\alpha}^{(i)}\) composes the representative deformed material polygonal chain \(l_{\alpha}\) in Fig. 1B\({}^{\prime}\). Now, adopting as unique reference system the laboratory frame as in Fig.1B,B\({}^{\prime}\), the strain energy \(E_{\alpha}\) is expressible in terms of the axial strain (3), and of the bending measure \(\mu_{\alpha}^{(i)}\), that assumes an alternative but equivalent expression to that in (4) [14, 13, 15, 46]:
\[\mu_{\alpha}^{(i)}=\frac{\Delta\Phi^{(i)}-\Delta\varphi^{(i)}}{\Delta L_{ \alpha}^{(i)}}, \tag{9}\]
where \(\Phi^{(i)}\) (\(\varphi^{(i)}\)) is the \((i)\)-th cross-sectional bending angle with respect the \(x\)-axis in the undeformed (deformed) configuration (see Fig. 1B and 1B\({}^{\prime}\)). Hence
\[E_{\alpha}=\frac{Y}{2}\sum_{i=1}^{N}\Delta L_{\alpha}^{(i)}\left\{F_{\alpha}^ {(i)}\,{\varepsilon_{\alpha}^{(i)}}^{2}+2S_{\alpha}^{(i)}\,\varepsilon_{ \alpha}^{(i)}\mu_{\alpha}^{(i)}+I_{\alpha}^{(i)}\,{\mu_{\alpha}^{(i)}}^{2} \right\}. \tag{10}\]
In our finite difference scheme, the blocks are assumed to be independent. Thus, nothing prevents us from assigning to each block a different local reference frame, i.e., a different \(\alpha\). As a matter of fact, thanks to the property of invariance of \(\Delta E_{\alpha}^{(i)}\) under a change of local reference frame, the total energy \(E_{\alpha}\) is also invariant. We will revisit this important consideration in Sec. 3 when we discuss the existence of the neutral fiber.
The continuum limit is taken by increasing \(N\) up to the point that the polygonal chain \(L_{\alpha}\) (\(l_{\alpha}\)) tends to a finite curve, namely the reference curve, or representative fiber \(\mathscr{L}_{\alpha}\) (\(\ell_{\alpha}\)) (Fig. 1C, 1C\({}^{\prime}\)). According to the older theories of flexure, a material fiber can be envisaged as a translation of one of the outermost curves (\(\alpha=0\) or \(\alpha=1\)) along the orthogonal cross sections of the
body. The Cartesian components of the material line, or fiber, in the laboratory frame are expressed as \(\mathbf{L}_{\alpha}(s)\left(X_{\alpha}(s),Y_{\alpha}(s)\right)\) and \(\mathbf{l}_{\alpha}(s)\left(x_{\alpha}(s),y_{\alpha}(s)\right)\), respectively, where \(s\) is the same internal parameter for both the undeformed reference fiber \(\mathscr{L}_{\alpha}:[s_{m},s_{M}]\rightarrow\mathbb{R}^{2}\) and deformed one \(\ell_{\alpha}:[s_{m},s_{M}]\rightarrow\mathbb{R}^{2}\). By introducing the tangents \(\mathbf{T}_{\alpha}=\frac{d\mathbf{L}_{\alpha}}{ds}\) and \(\mathbf{t}_{\alpha}=\frac{d\mathbf{l}_{\alpha}}{ds}\) to \(\mathscr{L}_{\alpha}\) and \(\ell_{\alpha}\), the continuum limit of the elastica strain energy is
\[\mathscr{E}_{\alpha}=\frac{Y}{2}\int_{s_{m}}^{s_{M}}ds\left\{\frac{F_{\alpha}(s)}{|\mathbf{T}_{\alpha}(s)|}\left[|\mathbf{t}_{\alpha}(s)|-|\mathbf{T}_{\alpha}(s)|\right]^{2}-\frac{2S_{\alpha}(s)}{|\mathbf{T}_{\alpha}(s)|}\left[|\mathbf{t}_{\alpha}(s)|-|\mathbf{T}_{\alpha}(s)|\right]\left[\varphi^{\prime}(s)-\Phi^{\prime}(s)\right]+\frac{I_{\alpha}(s)}{|\mathbf{T}_{\alpha}(s)|}\left[\varphi^{\prime}(s)-\Phi^{\prime}(s)\right]^{2}\right\} \tag{11}\]
where \((\cdot)^{\prime}\) denotes the derivative with respect to \(s\). The expression (11) constitutes one of the main results of our analysis, extending Timoshenko's linear theory of curved beams to the case of an elastica with a generic undeformed condition, identified by a varying local spontaneous curvature \(K(s)=|\Phi^{\prime}(s)|\). In SI the differential forms of the quantities in (5), (6) and (7) are furnished. In particular, it is worth noticing that the condition for the axial-bending uncoupling (8) must hold locally in the continuum limit, i.e. \(S_{\alpha}(s)=0\). This can be achieved only if \(\alpha_{U}\) in (8) changes with \(s\) according to
\[\alpha_{U}(s)=\frac{|\mathbf{T}_{0}(s)|}{h\Phi^{\prime}(s)}-\frac{|\Phi^{ \prime}(s)|}{\Phi^{\prime}(s)}\,\frac{1}{\ln\left(1+h\frac{|\Phi^{\prime}(s)| }{\min\left[|\mathbf{T}_{0}(s)|,|\mathbf{T}_{1}(s)|\right]}\right)}. \tag{12}\]
The classical case of a thin slender rod, beam or bar [3], is also derived in SI for the sake of completeness. It is shown how this constitutes the zero curvature limit of the general model represented by the energy expression (11).
## 2 Macroscopic constitutive relations
We now derive the macroscopic stress-strain relations for the elastica, given a generic undeformed configuration. Our starting point is the strain energy density, defined as \(W_{\alpha}=\frac{\Delta E_{\alpha}}{\Delta L_{\alpha}}\). Therefore, when the undeformed state has a constant curvature throughout the whole elastica, from Eq. (2) it turns out that for a generic choice of the representative fiber
\[W_{\alpha}=\frac{Y}{2}\left\{F_{\alpha}\,{\varepsilon_{\alpha}}^{2}+2S_{ \alpha}\,{\varepsilon_{\alpha}}\mu_{\alpha}+I_{\alpha}\,{\mu_{\alpha}}^{2} \right\}. \tag{13}\]
The macroscopic constitutive equations can be drawn from (13), according to the derivation furnished in [14], which employs the definition of the elastica as a three-dimensional body, or to an alternative variational approach in which the elastica is treated as a one-dimensional medium [25]:
\[\begin{cases}N_{\alpha}=\frac{\partial W_{\alpha}}{\partial\varepsilon_{\alpha} }=Y\left(F_{\alpha}\varepsilon_{\alpha}+S_{\alpha}\mu_{\alpha}\right)\\ \\ M_{\alpha}=\frac{\partial W_{\alpha}}{\partial\mu_{\alpha}}=Y\left(S_{\alpha} \varepsilon_{\alpha}+I_{\alpha}\mu_{\alpha}\right).\end{cases} \tag{14}\]
\(N_{\alpha}\) and \(M_{\alpha}\) are the elastica axial force and bending moment, respectively, where the one-dimensional representation of the elastica coincides with a generic fiber \(\alpha\). We recall that the bending measure enjoys two analogous definitions, Eq.s (4) and (9). From Eq.s (14), it is immediately verified that linear decoupled relations only hold for the choice of the material fiber allowing for the axial-bending uncoupling of the strain energy (2), namely \(\alpha=\alpha_{U}\) in Eq. (8). The same condition is valid in the classical case of a flat initial configuration, as shown in SI.
It is instructive to see how the constitutive relations transform under change of reference line, i.e. when shifting from the fiber \(\alpha\) to \(\beta\). To this end, we firstly need to express how the strain and the bending measure are modified if the material line changes:
\[\varepsilon_{\alpha}=\frac{\Delta L_{\beta}}{\Delta L_{\alpha}}\left[ \varepsilon_{\beta}+h(\alpha-\beta)\mu_{\beta}\right], \tag{15}\]
and
\[\mu_{\alpha}=\frac{\Delta L_{\beta}}{\Delta L_{\alpha}}\mu_{\beta}. \tag{16}\]
Therefore, plugging these relations into (14), it turns out that
\[\begin{cases}N_{\alpha}=N_{\beta}\\ \\ M_{\alpha}=M_{\beta}+h(\beta-\alpha)N_{\beta}.\end{cases} \tag{17}\]
The former transformations appear to be universal, in the sense that they are valid in the case of slender bars as well, as shown in SI. Importantly, while the axial force is invariant under change of representative fiber, the bending moment gains an axial force contribution which accounts for the different
curvatures of the fibers \(\alpha\) and \(\beta\). Moreover, when \(N_{\alpha}=0\), the bending moment is invariant under changes of representative fiber: \(M_{\alpha}=M_{\beta}\). Physically this corresponds to the fact that, for a pure bending transformation, the bending moment cannot depend on the fiber chosen as representative. For any other transformation, for which \(N_{\alpha}\neq 0\), the bending moment will depend on the fiber on which it is computed, and the axial force contribution has to be added to the total moment of the forces.
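As a quick illustration, the constitutive law (14) and the change-of-fiber rule (17) translate directly into the following minimal sketch (all names are ours):

```python
def constitutive_forces(eps_a, mu_a, F_a, S_a, I_a, Y):
    """Axial force and bending moment of Eq. (14) on a chosen fibre alpha,
    given its axial strain eps_a and bending measure mu_a."""
    N = Y * (F_a * eps_a + S_a * mu_a)
    M = Y * (S_a * eps_a + I_a * mu_a)
    return N, M

def change_of_fibre(N_b, M_b, alpha, beta, h):
    """Transformation law of Eq. (17): the axial force is invariant, while the
    bending moment acquires the lever-arm contribution h*(beta - alpha)*N."""
    return N_b, M_b + h * (beta - alpha) * N_b
```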
When the curvature is locally changing within the elastica, the continuum version of the strain energy density (13) is
\[\mathscr{W}_{\alpha}(s)=\frac{Y}{2\left|\mathbf{T}_{\alpha}(s) \right|^{2}}\left\{F_{\alpha}(s)\,\left[\left|\mathbf{t}_{\alpha}(s)\right|- \left|\mathbf{T}_{\alpha}(s)\right|\right]^{2}+\right.\] \[\left.-2S_{\alpha}(s)\,\left[\left|\mathbf{t}_{\alpha}(s)\right|- \left|\mathbf{T}_{\alpha}(s)\right|\right]\left[\varphi^{\prime}(s)-\Phi^{ \prime}(s)\right]+I_{\alpha}(s)\,\left[\varphi^{\prime}(s)-\Phi^{\prime}(s) \right]^{2}\right\}, \tag{18}\]
from which the constitutive equations in a local form can be defined as
\[\begin{cases}N_{\alpha}(s)=\frac{\partial\mathscr{W}_{\alpha}(s)}{\partial \left|\mathbf{t}_{\alpha}(s)\right|}=\frac{Y}{\left|\mathbf{T}_{\alpha}(s) \right|^{2}}\left\{F_{\alpha}(s)\left[\left|\mathbf{t}_{\alpha}(s)\right|- \left|\mathbf{T}_{\alpha}(s)\right|\right]-S_{\alpha}(s)\left[\varphi^{\prime} (s)-\Phi^{\prime}(s)\right]\right\}\\ M_{\alpha}(s)=\frac{\partial\mathscr{W}_{\alpha}(s)}{\partial\varphi^{\prime} (s)}=\frac{Y}{\left|\mathbf{T}_{\alpha}(s)\right|^{2}}\left\{-S_{\alpha}(s) \left[\left|\mathbf{t}_{\alpha}(s)\right|-\left|\mathbf{T}_{\alpha}(s)\right| \right]+I_{\alpha}(s)\left[\varphi^{\prime}(s)-\Phi^{\prime}(s)\right]\right\}. \end{cases} \tag{19}\]
Once again, the value of \(\alpha\) needs to be varied throughout the elastica according to the law (12), for the stress-strain relationships (19) to be locally decoupled.
## 3 Neutral fiber location
The neutral fiber is the locus of points which does not undergo any longitudinal extension or contraction during the elastica deformation [7]. If there is one, it does not necessarily coincide with the line of centroids [7], and its existence depends entirely on the transformation in use.
After Jacob Bernoulli's first erroneous attempt at establishing a general principle for the neutral fiber placement, based on the balance of bending moments between elongated and compressed fibers, the correct condition was enunciated by Parent in 1713 [9] and rediscovered by Coulomb 60 years later [47]: _the neutral fiber is the locus of points separating the region of extension from that of compression, such that the longitudinal forces are balanced_.
This criterion, however, is easy to implement in the case of a Hookean dependence of the tension over the cross-section, but is hard to assess for a generic (non-linear) stress-strain relation, as precisely sought by Jacob Bernoulli.
Let us see how Parent's principle applies to the case under investigation. We start by considering an elastica with uniform spontaneous curvature, a situation encompassing the cases of a slender bar, a circular arc or an hyperbola. The requirement of the longitudinal forces balance corresponds to \(N_{\alpha}=0\). However this constitutes just the necessary condition for the existence of the neutral line. As a matter of fact it does not furnish a precise answer about its placement because, in view of the first of Eq. (17), it does not depend on the specific choice of the representative fiber. However, among all the representative fibers, for rigid cross sections, only one satisfies the zero strain condition \(\varepsilon_{\alpha}=0\), i.e. the value of \(\alpha_{U}\) in (8). Therefore, as a general principle, we can conclude that the neutral fiber, when it exists, corresponds to the material representative line which ensures the axial-bending uncoupling of the strain energy. This is an universal property, valid for both flat as well as for non-flat initial configurations. Moreover, it appears that for any transformation guaranteeing the existence of the neutral fiber, the bending moment is independent of the representative fiber choice, as implied by Eq.s (17).
We now investigate how the neutral fiber is identified when the curvature is not constant throughout the elastica in the undeformed configuration. This is achieved by using the macroscopic constitutive relations in the local formulation (19). The condition implied by the Parent principle requires \(N_{\alpha}(s)=0\)\(\forall s\), accompanied by the zero strain condition \(|{\bf t}_{\alpha}(s)|=|{\bf T}_{\alpha}(s)|\). While in the case of elastica endowed with constant spontaneous curvature the neutral fiber is identified with a material line corresponding to a constant \(\alpha\), the situation is totally different for a generic undeformed configuration. As a matter of fact, in this case one cannot properly talk about neutral fiber, in the sense that the curve guaranteeing \(N_{\alpha}(s)=0\) together with the zero local strain conditions, does not overlap with one of the elastica material fibers. Rather, the concept of neutral fiber is replaced by that of neutral curve, defined as the locus of points identified by the value of \(\alpha_{U}(s)\) in Eq. (12). A typical situation is that depicted in Fig. 2C, where the initial curvature is a function of the internal parameter \(s\), \(K_{\alpha}(s)\), and the neutral curve is shown as a black dotted line, jiggling around the line of centroids (dashed red line), and approaching the side of the elastica with maximum value of the natural curvature. Notably, while any representative fiber is always orthogonal to
the cross-section, the neutral curve is not in general orthogonal.
Therefore, if the elastica transformation provides the existence of a neutral curve, it is possible to adopt a _neutral_ arc-length parametrization such that \(|\mathbf{T}_{\alpha_{U}}(s)|=1\). Adopting this parametrization, the energy assumes the following simple form (see SI)
\[\mathscr{E}_{\alpha_{U}}=-\frac{bYh^{2}}{2}\int_{s_{m}}^{s_{M}}ds\left[\frac{1 }{2}-\alpha_{U}(s)\right]\,\frac{\left[\varphi^{\prime}(s)-\Phi^{\prime}(s) \right]^{2}}{\Phi^{\prime}(s)}. \tag{20}\]
At the same time, the bending moment takes the form
\[M_{\alpha_{U}}(s)=-bYh^{2}\left[\frac{1}{2}-\alpha_{U}(s)\right]\,\frac{\left[ \varphi^{\prime}(s)-\Phi^{\prime}(s)\right]}{\Phi^{\prime}(s)}. \tag{21}\]
Let us take a moment to clarify what is implicit in the expression (20). The neutral curve requires that the value of \(\alpha\) changes with \(s\) according to Eq. (12). As a consequence, the neutral arc-length parameterization entails that the strain energy (20) interpolates the expressions (11), in the same fashion as \(\alpha_{U}(s)\) interpolates among the different \(\alpha\). However, for a deformation that preserves the neutral curve, the value of \(\mathscr{E}_{\alpha_{U}}\equiv\mathscr{E}_{\alpha}\), thanks to the property of invariance of \(\Delta E_{\alpha}\) under a change of reference frame, which holds locally also in the continuum limit.
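As an illustration, a minimal numerical sketch of Eq. (20) is given below, using finite-difference derivatives and a simple midpoint quadrature of our own choosing; it assumes a non-vanishing spontaneous curvature \(\Phi^{\prime}(s)\), as required by the expression itself, and all names are ours.

```python
import numpy as np

def neutral_curve_energy(s, phi, Phi, alpha_u, b, h, Y):
    """Discretized evaluation of Eq. (20): the strain energy in the neutral
    arc-length parametrization, given the deformed/undeformed tangent angles
    phi(s), Phi(s) and the local uncoupling height alpha_u(s) of Eq. (12),
    all sampled on the grid s."""
    ds = np.diff(s)
    dphi = np.diff(phi) / ds          # phi'(s) estimated on the midpoints
    dPhi = np.diff(Phi) / ds          # Phi'(s) estimated on the midpoints
    a_mid = 0.5 * (alpha_u[:-1] + alpha_u[1:])
    integrand = (0.5 - a_mid) * (dphi - dPhi) ** 2 / dPhi
    return -0.5 * b * Y * h**2 * np.sum(integrand * ds)
```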
## 4 Discussion
Our one-dimensional treatment of a three-dimensional elastica is in the wake of the old theories of flexure and bending. Nevertheless, the choice of a fiber \(\alpha\) as the representative one-dimensional medium is purely arbitrary. As a matter of fact, the linchpin of our analysis has been to disentangle the representative material fiber from the notion of neutral fiber. Indeed, contrary to the modern three-dimensional theories of bending, we do not postulate the existence of a neutral fiber, nor do we identify it with the line passing through the centroids. Our main achievement has been to furnish the exact criteria for its existence and its location. These criteria ultimately stem from the derivation of the strain energy, and are encapsulated in the following universal condition:
\[\left.\frac{\partial\Delta E_{\alpha}}{\partial\varepsilon_{\alpha}}\right|_{ \varepsilon_{\alpha}=0}=0. \tag{22}\]
Locating the neutral fiber does not only affect the stiffness of a given cross-sectional form: Eq. (22) implies that it becomes a crucial issue for any physical transformation aiming at minimizing the amount of work required for a given shape change. Let us explain this point with the help of Fig. 2: imagine a slender bar, such as that in panel A, transformed into the circle in panel A\({}^{\prime}\). Among all the possible deformations that bend the elastic bar into a ring, only one allows the presence of the neutral fiber, namely the one leaving the line of the centroids unvaried. In this case, Parent's principle is satisfied and the longitudinal forces are balanced, i.e. \(N_{\alpha}=0\ \forall\alpha\). We could imagine giving the bar a final circular shape preserving the length of a different fiber, say the bottom material line (\(\alpha=0\)), together with Jacob Bernoulli (in his first attempt, dating back to 1694) or Euler [3]. In this case, however, \(N_{\alpha}\neq 0\). Therefore the bottom line cannot be identified with the neutral fiber, although \(\varepsilon_{\alpha=0}=0\). The important point is that the amount of work spent in deforming the beam into a circle is minimal if the line of centroids maintains its length unvaried. This is plainly demonstrated in SI.
Now, it is also instructive to consider the opposite case, where the circular shape is the unstressed condition (Fig. 2B), and the flat configuration is the deformed one (Fig. 2B\({}^{\prime}\)). The neutral fiber in this case does not correspond to the line of centroids (red dashed line) but it is a circumference whose radius is smaller, according to the value of (8) (black dotted line). Thus the minimum amount of work required to "stretch" the circle into a bar is obtained if the corresponding transformation leaves the neutral fiber unvaried. As a corollary, the bar shown in panel B\({}^{\prime}\) of Fig. 2 is smaller than that in panel A, because the length of the extended beam in panel B\({}^{\prime}\) is \(l=\frac{2\pi h}{\ln\left(\frac{L+\pi h}{L-\pi h}\right)}<L\).
So far we have addressed the cases of an elastica endowed with a constant spontaneous curvature. We now turn to the elastica with intrinsic local curvature as in Fig. 2C. The variational version of the minimum principle in (22) is
\[\left.\frac{\delta\mathscr{E}_{\alpha}}{\delta\left|\mathbf{t}_{\alpha}(s) \right|}\right|_{\left|\mathbf{t}_{\alpha}(s)\right|=\left|\mathbf{T}_{\alpha }(s)\right|}=0. \tag{23}\]
Imagine an elastic filament undergoing a shape transformation. One is generally inclined to think that such a system would prefer to deform by minimizing its energy, skewing and swelling in the fashion that is the easiest of all. The expression (23) specifies that such a minimum-cost deformation is only obtainable by preserving the length of the neutral curve. However, due
to the immaterial nature of the neutral curve, its profile may differ considerably from the actual three-dimensional configuration. This is graphically represented in Fig. 2C\({}^{\prime}\) where, for the sake of simplicity, the elastica deformed configuration is set to be straight and the neutral curve is quite something else.
### The importance of the neutral curve for ideal chains
The above considerations have a tremendous impact on both non-equilibrium properties and equilibrium statistics of polymers or other macromolecules. Take for example the conformational dynamics of ideal chains [38; 39]. Ideal chains provide simplified one-dimensional models for real flexible and semiflexible biomolecules, being substantially the trace of their backbone, with negligible self-interactions or excluded volume effects. Among semiflexible models, certainly the most successful is the Worm-Like Chain (WLC) introduced by Kratky and Porod [48; 49] for inextensible stiff-rod polymers, for which the energy associated with conformational fluctuations may be captured by using merely linear elasticity. The effective WLC energy associated with the bending is [50; 51; 49]
\[\mathscr{E}_{WLC}=\frac{k_{B}T}{2}\int_{0}^{L}ds\,A\,k(s)^{2} \tag{24}\]
where \(k_{B}\) is the Boltzmann constant, \(T\) is the temperature and \(A\) is the persistence length, characterizing the bending stiffness of the polymer. As the expression (24) is derived from thin-rod linear elasticity, it entirely fits into the framework developed in this paper. Indeed the stretching energy of a straight ribbon with an inextensible neutral fiber, i.e. the line of centroids, can be expressed as
\[\mathscr{E}_{1/2}=\frac{bYh^{3}}{2}\int_{0}^{L}ds\,\frac{\varphi^{\prime}(s)^{ 2}}{12} \tag{25}\]
if the same line of centroids (\(\alpha=1/2\)) is assumed as the representative fiber (see SI). The analogy between the expression (24) and (25) is apparent if one recalls that \(i)\) both expressions are derived assuming the arc-length parametrization for which \(\big{|}\mathbf{t}_{(1/2)}(s)\big{|}=1\), \(ii)\) the local curvature \(k(s)\equiv|\varphi^{\prime}(s)|\), and \(iii)\) the Young's modulus \(Y\) exhibits a temperature dependence [52] (in this respect notice that also \(A\) may depend on \(T\)[53; 54]). At the same time, from the comparison of Eq.s (24) and (25) it emerges
that the natural configuration of a polymer is tacitly assumed to be a rigid rod [55; 56], not only at \(T=0\). This contrasts with real polymeric chains, whose microstructural shape is naturally coiled. Taking into account the polymers' natural curvature requires the formal extension of the ideal energy (24) along the line marked by our theoretical analysis. In particular, in view of Eq. (23), the inextensibility requirement is elevated to a principle of minimum energy. What is inextensible, however, is not any of the polymer material lines, but the neutral curve. In practice, this translates into (limited) fluctuations of the three-dimensional polymer conformation and of its contour length, in contrast with the classical WLC picture. Importantly, it elucidates the contour-length fluctuations observed in molecules of short to moderate length, where \(L/A\sim 6-20\)[57], and it may help to explain the extreme bendability of short DNA [58]. As a matter of fact, the difference between the length of a generic fiber and that of the natural curve becomes more and more apparent for short filaments, or for increasing values of the \(h/L\) ratio. Hence, assuming the neutral arc-length parametrization, the correct expression for the bending energy is Eq. (20), while any other material fiber representation based on constant \(\alpha\) requires the general expression (11). This contrasts with flexible polymers, for which the concept of neutral curve loses importance and, even assuming the neutral arc-length parametrization (12), the energy is given by
\[\mathscr{E}_{\alpha_{U}}=\frac{bY}{2}\int_{s_{m}}^{s_{M}}ds\,\left\{h\left[| \mathbf{t}_{\alpha_{U}}(s)|-1\right]^{2}-h^{2}\left[\frac{1}{2}-\alpha_{U}(s) \right]\,\frac{\left[\varphi^{\prime}(s)-\Phi^{\prime}(s)\right]^{2}}{\Phi^{ \prime}(s)}\right\}. \tag{26}\]
Finally, from a statistical-mechanics point of view, at equilibrium the Boltzmann distribution is \(e^{-\frac{\mathscr{E}_{\alpha_{U}}}{k_{B}T}}\) rather than \(e^{-\frac{\mathscr{E}_{WLC}}{k_{B}T}}\), weighting a statistical ensemble composed of polymer configurations satisfying the constraint of a constant neutral curve rather than a constant line of centroids.
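To make the correspondence between Eqs. (24) and (25) concrete, the following minimal Python sketch (not part of the original derivation; all numerical values are purely illustrative) maps the ribbon parameters onto the equivalent WLC persistence length through the identification \(k_{B}TA=Ybh^{3}/12\), and checks that the two discrete bending energies coincide for the same set of bending angles.

```python
import numpy as np

# Illustrative ribbon parameters (width b, thickness h, Young's modulus Y),
# discretized into N segments of length dL. None of these values is taken
# from the paper.
kB, T = 1.380649e-23, 300.0            # J/K, K
Y, b, h = 1.0e9, 2.0e-9, 2.0e-9        # Pa, m, m
N, dL = 200, 5.0e-9                    # number of segments, segment length (m)

# Comparing Eq. (24) with Eq. (25) gives kB*T*A = Y*b*h**3/12, i.e. the
# persistence length of the equivalent WLC is
A = Y * b * h**3 / (12.0 * kB * T)
print(f"equivalent persistence length A = {A*1e9:.1f} nm")

# Discrete bending energies for the same random set of bending-angle increments:
rng = np.random.default_rng(0)
dphi = rng.normal(0.0, 0.05, N)        # increments of phi(s) along the chain (rad)

E_wlc = 0.5 * kB * T * A * np.sum(dphi**2 / dL)              # discrete form of Eq. (24)
E_ribbon = 0.5 * b * Y * h**3 / 12.0 * np.sum(dphi**2 / dL)  # discrete form of Eq. (25)
print(np.isclose(E_wlc, E_ribbon))     # True: the two energies coincide by construction
```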
### Numerical model for vesicles and polymers
Models for polymers, flexible films, membranes or vesicles often require a numerical implementation, in order to study shapes, fluctuations and dynamics, complementing the analytical statistical-mechanics approach [59]. These models in general consist of \(N\) impenetrable circular beads connected by links, arranged in a configuration which requires a well-defined energy cost. A typical example is furnished by planar thermally fluctuating rings used as simplified models of fluctuating vesicles such as red blood cells. The Liebler-Singh-Fisher (LSF) model [60], for instance, considers planar closed chains including a pressure term and a curvature energy term such as \(\sum_{i=1}^{N}\frac{k}{\Delta L}\left(1-\cos\Delta\varphi^{(i)}\right)\), where \(k\) is the constant bending modulus. Many aspects and variants of the LSF model have been discussed in the physics literature using a wealth of analytical and computational techniques [61, 62, 63, 64, 65, 66, 67, 68, 69], allowing the stretching of the ring [70, 71], as well as a spontaneous curvature and a locally varying bending modulus [20].
In our theory the strain energy function (11) has been derived by a finite difference scheme, as the continuum limit of the sum of the blocks' strain energies (1). Therefore, the discrete form of the Eq. (11) constitutes, without any further approximation, the appropriate expression to be used in numerical simulations (see Fig. 1B,B\({}^{\prime}\) and SI). However, the fact that the uncoupling value of \(\alpha_{U}\) in (8) depends crucially on the local value of the curvature of each block makes it impossible to adopt a global neutral arc-length parametrization for discrete chains. This means that a discrete decoupled expression for the strain energy function is out of the question, because the discrete chain would result in a discontinuous polygonal chain when \(N\) is finite (green dashed lines in Fig. 1B,B\({}^{\prime}\)). From an energetic point of view, there would be nothing wrong, as the energy would remain the same due to the strain energy's invariance under a change of reference frame. However, from a graphical perspective, it would look awful! On the other hand, adopting the constant-\(\alpha\) description yields the correct and more convenient expression for the numerical implementation of our model, valid for any value of \(N\):
\[E_{\alpha}=\frac{Y}{2}\sum_{i=1}^{N}\Delta L_{\alpha}^{(i)}\left\{F_{\alpha}^{ (i)}\,{\varepsilon_{\alpha}^{(i)}}^{2}+2\frac{S_{\alpha}^{(i)}}{\Delta L_{ \alpha}^{(i)}}\varepsilon_{\alpha}^{(i)}\left[\Delta\Phi^{(i)}-\Delta\varphi^ {(i)}\right]+\right.\]
\[\left.+\frac{I_{\alpha}^{(i)}}{\Delta L_{\alpha}^{(i)2}}\left[\Delta\Phi^{(i) }-\Delta\varphi^{(i)}\right]^{2}\right\}\;. \tag{27}\]
We emphasize that this expression holds for any elastic chain, be it extensible or inextensible. The inextensibility condition should be enforced by keeping the lengths of the segments at the height \(\alpha_{U}^{(i)}\) constant in any conformational change, and by modifying the representative \(\alpha\) fiber accordingly. However, the detailed procedure for the numerical implementation of our model will be the subject of an upcoming publication.
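As a purely illustrative example of how the discrete energy (27) can be evaluated for a polygonal chain (a minimal sketch, not the detailed implementation referred to above; all parameter values and function names are assumptions), the per-block factors can be taken in the closed forms reported in the Supplementary Information:

```python
import numpy as np

def block_factors(dL_a, dPhi, h, b, alpha):
    """Reduced area F, coupling moment S and moment of inertia I of one block,
    in the closed forms of the SI (its Eqs. (28)-(30)), assuming dPhi != 0.
    dL_a : undeformed length of the representative segment of the block
    dPhi : spontaneous bending angle of the block
    """
    dL0 = dL_a + alpha * h * dPhi            # bottom-fiber length
    dL1 = dL_a - (1.0 - alpha) * h * dPhi    # top-fiber length
    F = b * dL_a / abs(dPhi) * np.log(1.0 + h * abs(dPhi) / min(dL0, dL1))
    S = -dL_a / dPhi * (b * h - F)
    I = -dL_a / dPhi * (b * (0.5 - alpha) * h**2 - S)
    return F, S, I

def chain_energy(dL, dPhi, dl, dphi, h, b, Y, alpha):
    """Discrete strain energy of Eq. (27) for a chain of N blocks.
    dL, dPhi : undeformed segment lengths and bending angles (arrays of size N)
    dl, dphi : deformed segment lengths and bending angles (arrays of size N)
    """
    E = 0.0
    for i in range(len(dL)):
        F, S, I = block_factors(dL[i], dPhi[i], h, b, alpha)
        eps = (dl[i] - dL[i]) / dL[i]        # axial strain of block i
        dmu = dPhi[i] - dphi[i]              # bending-angle mismatch of block i
        E += 0.5 * Y * dL[i] * (F * eps**2
                                + 2.0 * S / dL[i] * eps * dmu
                                + I / dL[i]**2 * dmu**2)
    return E

# Illustrative use: a slightly stretched and partially unbent arc of N blocks.
N, h, b, Y, alpha = 50, 0.01, 0.01, 1.0e6, 0.5
dL = np.full(N, 0.1); dPhi = np.full(N, 0.05)
dl = 1.001 * dL;      dphi = 0.8 * dPhi
print(chain_energy(dL, dPhi, dl, dphi, h, b, Y, alpha))
```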
### Real filaments
Real biological filaments are characterized by local properties that can vary smoothly or abruptly along their contour. Our analysis focused on what we believe to be the most prominent property: the spontaneous curvature. However, geometric and mechanical properties may also characterize different portions of cellular filaments. In reality, filaments can only be considered homogeneous on average, with different heights (\(h\)) and depths (\(b\)) identifying each discrete protein-based segment. Simultaneously, the intrinsic mechanical properties of each monomer can differ, as in the case of the elastic modulus (\(Y\)), which we assumed to be constant in our analysis.
We believe that the entire framework assembled in this paper can be adapted to handle and accurately describe realistic biological filaments. The reasons lie in two main ingredients of our theory: the discrete scheme and the strain energy invariance under a change of reference frame. The finite difference scheme allows for a more precise description of biological shapes than any continuum theory, such as Cosserats' theory [40; 43; 44; 41; 42] because any biological filament, whether it is a microtubule, actin, or intermediate filament, is ultimately composed of discrete units. Since the blocks are independent in our model, the geometrical and mechanical characteristics of biological filaments can be easily implemented locally, while preserving the general form of the strain energy as in (10).
Furthermore, while the concept of material fibers becomes meaningless for non-homogeneous filaments, or at least non-parallel ones, the parametrization in terms of a reference segment \(\alpha\in[0,1]\), inherent to any block, will always be possible, giving the concept of fiber a geometrical rather than material meaning. Most importantly, the existence of a neutral curve, allowing the axial-bending decoupled form of the strain energy, must remain valid in any context, highlighting once more the general value of our findings.
## Acknowledgements
We thank Stefano Zapperi, Lev Truskinovsky, Silas Alben, Umut Orguz Salman and Daniel Rayneau-Kirkhope for useful discussions, criticisms and suggestions.
## References
* (1) J. Bernoulli, Specimen alterum calculi differentialis in dimetienda spirali logarithmica, loxodromiis nautarum et areis triangulorum sphaericorum.
una cum additamento quodam ad problema funicularium, aliusque, Acta Eruditorum, Junii (1691) 282-290.
* (2) L. Euler, Genuina principia doctrinae de statu aequilibrii et motu corporum tam perfecte flexibilium quam elasticorum, Novi commentarii academiae scientiarum Petropolitanae (1771) 381-413.
* (3) L. Euler, The Rational Mechanics of Flexible Or Elastic Bodies 1638-1788: Introduction to Vol. X and XI, Springer Science & Business Media, 1960.
* (4) J. Michell, The theory of uniformly loaded beams, Quart. J. Math 32 (1901) 28-42.
* (5) J. L. de Lagrange, Mecanique analytique, Vol. 1, Mallet-Bachelier, 1853.
* (6) G. Kirchhoff, Ueber das gleichgewicht und die bewegung eines unendlich dunnen elastischen stabes, Journal fur die reine und angewandte Mathematik 1859 (56) (1859) 285-313.
* (7) A. E. H. Love, A treatise on the mathematical theory of elasticity, Cambridge university press, 2013.
* (8) S. Timoshenko, History of strength of materials: with a brief account of the history of theory of elasticity and theory of structures, Courier Corporation, 1983.
* (9) A. Parent, De la veritable mecanique de resistance des solides, Reflections sur la Systeme de M. Bernoulli de Bale, Essais et recherches des Mathematiques et des physiques 3 (1713) 187-201.
* (10) C. Truesdell, The works of James Bernoulli (1973), in: An Idiot's Fugitive Essays on Science, Springer, 1984, pp. 202-208.
* (11) R. F. A. Clebsch, Theorie der elasticitat fester korper, BG Teubner, 1862.
* (12) S. P. Timoshenko, Strength of materials, part 1: Elementary theory and problems (1955) 165-310.
* (13) G. Kammel, Der einfluss der langsdehnung auf die elastische stabilitat geschlossener kreisringe, Acta Mechanica 4 (1967) 34-42.
* (14) S. Antman, General solutions for plane extensible elasticae having nonlinear stress-strain laws, Quarterly of Applied Mathematics 26 (1) (1968) 35-47.
* (15) T. Atanackovic, Buckling of a compressible elastic ring, Acta mechanica 127 (1-4) (1998) 121-134.
* (16) R. Lagrange, F. L. Jimenez, D. Terwagne, M. Brojan, P. Reis, From wrinkling to global buckling of a ring on a curved substrate, Journal of the Mechanics and Physics of Solids 89 (2016) 77-95.
* (17) A. Vakakis, T. Atanackovic, Buckling of an elastic ring forced by a periodic array of compressive loads, Journal of Applied Mechanics, Transactions ASME 66 (2) (1999) 361-367.
* (18) R. Schmidt, et al., A critical study of postbuckling analyses of uniformly compressed rings (1979).
* (19) L. Fu, A. Waas, Initial post-buckling behavior of thick rings under uniform external hydrostatic pressure (1995).
* (20) E. Katifori, S. Alben, D. R. Nelson, Collapse and folding of pressurized rings in two dimensions, Physical Review E 79 (5) (2009) 056604.
* (21) U. Kosel, Biegelinie eines elastischen rings als beispiel einer verzweigungslosung, ZAMM-Journal of Applied Mathematics and Mechanics/Zeitschrift fur Angewandte Mathematik und Mechanik 64 (7) (1984) 316-319.
* (22) H. Troger, A. Steindl, Nonlinear stability and bifurcation theory: an introduction for engineers and applied scientists, Springer Science & Business Media, 2012.
* (23) J. Chaskalovic, S. Naili, Bifurcation theory applied to buckling states of a cylindrical shell, Zeitschrift fur angewandte Mathematik und Physik ZAMP 46 (1) (1995) 149-155.
* (24) R. Schmidt, Discussion:"initial post-buckling behavior of thick rings under uniform external hydrostatic pressure"(fu, lei and waas, am, 1995, asme j. appl. mech., 62, pp. 338-345) (1996).
* (25) I. Tadjbakhsh, The variational theory of the plane motion of the extensible elastica, International Journal of Engineering Science 4 (4) (1966) 433-450.
* (26) A. Magnusson, M. Ristinmaa, C. Ljung, Behaviour of the extensible elastica solution, International Journal of Solids and Structures 38 (46-47) (2001) 8441-8457.
* (27) O. Oshri, H. Diamant, Properties of compressible elastica from relativistic analogy, Soft matter 12 (3) (2016) 664-668.
* (28) I. Schoen, B. L. Pruitt, V. Vogel, The yin-yang of rigidity sensing: how forces and mechanical properties regulate the cellular response to materials, Annual Review of Materials Research 43 (2013) 589-618.
* (29) B. Schamberger, A. Roschger, R. Ziege, K. Anselme, M. B. Amar, M. Bykowski, A. P. Castro, A. Cipitria, R. Coles, R. Dimova, et al., Curvature in biological systems: its quantification, emergence and implications across the scales, Advanced Materials (2022) 2206110.
* (30) Y. Harada, A. Noguchi, A. Kishino, T. Yanagida, Sliding movement of single actin filaments on one-headed myosin filaments, Nature 326 (6115) (1987) 805-808.
* (31) A. Ghosh, N. Gov, Dynamics of active semiflexible polymers, Biophysical journal 107 (5) (2014) 1065-1073.
* (32) A. Bausch, K. Kroy, A bottom-up approach to cell mechanics, Nature physics 2 (4) (2006) 231-238.
* (33) C. P. Brangwynne, G. H. Koenderink, F. C. MacKintosh, D. A. Weitz, Cytoplasmic diffusion: molecular motors mix it up, The Journal of cell biology 183 (4) (2008) 583-587.
* (34) V. Schaller, C. Weber, C. Semmrich, E. Frey, A. R. Bausch, Polar patterns of driven filaments, Nature 467 (7311) (2010) 73-77.
* (35) K. D. Dorfman, The statistical segment length of dna: Opportunities for biomechanical modeling in polymer physics and next-generation genomics, Journal of biomechanical engineering 140 (2) (2018).
* (36) B. Alberts, Molecular biology of the cell, Garland science, 2017.
* (37) J. Howard, R. Clark, Mechanics of motor proteins and the cytoskeleton, Appl. Mech. Rev. 55 (2) (2002) B39-B39.
* (38) M. Rubinstein, R. H. Colby, et al., Polymer physics, Vol. 23, Oxford university press New York, 2003.
* (39) M. Doi, S. F. Edwards, S. F. Edwards, The theory of polymer dynamics, Vol. 73, oxford university press, 1988.
* (40) E. Cosserat, F. Cosserat, Theorie des corps deformables (1909).
* (41) A. E. Green, P. Naghdi, M. Wenner, On the theory of rods. i. derivations from the three-dimensional equations, Proceedings of the Royal Society of London. A. Mathematical and Physical Sciences 337 (1611) (1974) 451-483.
* (42) A. E. Green, P. Naghdi, M. Wenner, On the theory of rods ii. developments by direct approach, Proceedings of the Royal Society of London. A. Mathematical and Physical Sciences 337 (1611) (1974) 485-507.
* (43) M. Rubin, Cosserat theories: shells, Rods and Points.: Kluwer, The Netherlands (2000).
* (44) P. Naghdi, M. Rubin, Constrained theories of rods, Journal of Elasticity 14 (4) (1984) 343-361.
* (45) C. Truesdell, W. Noll, The non-linear field theories of mechanics, in: The non-linear field theories of mechanics, Springer, 2004, pp. 1-579.
* (46) J. M. Greenberg, On the equilibrium configurations of compressible slender bars, Archive for Rational Mechanics and Analysis 27 (3) (1967) 181-194.
* (47) A. Coulomb, Essay on the application of the rules of maxima and minima to certain statics problems relavant to architecture, Memoires presentes a l'Academie (1773) 343-384.
* (48) O. Kratky, G. Porod, Rontgenuntersuchung geloster fadenmolekule, Recueil des Travaux Chimiques des Pays-Bas 68 (12) (1949) 1106-1122.
* (49) J. F. Marko, E. D. Siggia, Stretching dna, Macromolecules 28 (26) (1995) 8759-8770.
* (50) M. Fixman, J. Kovac, Polymer conformational statistics. iii. modified gaussian models of stiff chains, The Journal of Chemical Physics 58 (4) (1973) 1564-1568.
* (51) H. Yamakawa, Statistical mechanics of wormlike chains, in: Macromolecular Chemistry-11, Elsevier, 1977, pp. 135-141.
* (52) Y. Varshni, Temperature dependence of the elastic constants, Physical Review B 2 (10) (1970) 3952.
* (53) J. Martin, E. C. Davidson, C. Greco, W. Xu, J. H. Bannock, A. Agirre, J. De Mello, R. A. Segalman, N. Stingelin, K. C. Daoulas, Temperature-dependence of persistence length affects phenomenological descriptions of aligning interactions in nematic semiconducting polymers, Chemistry of Materials 30 (3) (2018) 748-761.
* (54) S. Geggier, A. Kotlyar, A. Vologodskii, Temperature dependence of dna persistence length, Nucleic acids research 39 (4) (2011) 1419-1426.
* (55) F. B. Fuller, The writhing number of a space curve, Proceedings of the National Academy of Sciences 68 (4) (1971) 815-819.
* (56) F. Tanaka, H. Takahashi, Elastic theory of supercoiled dna, The Journal of chemical physics 83 (11) (1985) 6017-6026.
* (57) Y. Seol, J. Li, P. C. Nelson, T. T. Perkins, M. Betterton, Elasticity of short dna molecules: theory and experiment for contour lengths of 0.6-7 \(\mu\)m, Biophysical journal 93 (12) (2007) 4360-4373.
* (58) R. Vafabakhsh, T. Ha, Extreme bendability of dna less than 100 base pairs long revealed by single-molecule cyclization, Science 337 (6098) (2012) 1097-1101.
* (59) D. Nelson, T. Piran, S. Weinberg, Statistical Mechanics Of Membranes And Surfaces-Proceedings Of The 5th Jerusalem Winter School For Theoretical Physics, Vol. 5, World Scientific, 1989.
* (60) S. Leibler, R. R. Singh, M. E. Fisher, Thermodynamic behavior of two-dimensional vesicles, Physical review letters 59 (18) (1987) 1989.
* (61) E. Levinson, Asphericity of two-dimensional closed pressurized random walks, Physical Review A 45 (6) (1992) 3629.
* (62) G. Gaspari, J. Rudnick, A. Beldjenna, The shapes and sizes of two-dimensional pressurized self-intersecting rings, as models for two-dimensional vesicles, Journal of Physics A: Mathematical and General 26 (1) (1993) 1.
* (63) E. Haleva, H. Diamant, Smoothening transition of a two-dimensional pressurized polymer ring, The European Physical Journal E 19 (2006) 461-469.
* (64) M. K. Mitra, G. I. Menon, R. Rajesh, Phase transitions in pressurized semiflexible polymer rings, Physical Review E 77 (4) (2008) 041802.
* (65) A. C. Maggs, S. Leibler, M. E. Fisher, C. J. Camacho, Size of an inflated vesicle in two dimensions, Physical Review A 42 (2) (1990) 691.
* (66) C. J. Camacho, M. E. Fisher, R. R. Singh, Semiflexible planar polymeric loops, The Journal of chemical physics 94 (8) (1991) 5693-5700.
* (67) C. J. Camacho, M. E. Fisher, Tunable fractal shapes in self-avoiding polygons and planar vesicles, Physical Review Letters 65 (1) (1990) 9.
* (68) M. E. Fisher, Fractal and nonfractal shapes in two-dimensional vesicles, Physica D: Nonlinear Phenomena 38 (1-3) (1989) 112-118.
* (69) D. P. Landau, S. P. Lewis, H.-B. Schuttler, Computer Simulation Studies in Condensed-Matter Physics XII: Proceedings of the Twelfth Workshop, Athens, GA, USA, March 8-12, 1999, Vol. 85, Springer Science & Business Media, 2012.
* (70) A. Romero, A simple model for shapes of vesicles in two dimensions, Journal de Physique I 2 (1) (1992) 15-22.
* (71) U. M. B. Marconi, A. Maritan, Deflated regime for pressurized ring polymers with long-range interactions, Physical Review E 47 (5) (1993) 3795.
Figure 1: Derivation of the strain energy. **A:** The elementary block, with its integral system of reference \(\xi\)-0-\(\eta_{\alpha}\), has the origin placed at a height \(\alpha h\) from the bottom surface (\(\alpha\in[0,1]\)). The reference fiber (\(\eta_{\alpha}=0\)) is characterized by its length \(\Delta L_{\alpha}\) (dotted black line) and by \(R_{\alpha}\), with the corresponding spontaneous curvature \(K_{\alpha}=\frac{1}{|R_{\alpha}|}\). \(R_{\alpha}\) is the distance of the origin of the local reference from the intersection between the extended sides of the block, with a positive sign if its orientation is concordant with that of \(\eta_{\alpha}\). It becomes the curvature radius in the continuum limit. **A’:** The block undergoes a deformation to a new shape (dark blue), where the original configuration is in light blue. The shape change entails \(R_{\alpha}\to r_{\alpha}\). Any fiber of the block endures a deformation represented by the fiber displacement \(u(\eta_{\alpha})=\Delta l(\eta_{\alpha})-\Delta L(\eta_{\alpha})\) (green arrows). **B:** The undeformed elastica is displayed as a sequence of elementary identical blocks, each with its own shape. The sequence of the reference segments is shown as a dotted black polygonal chain \(L_{\alpha}\), while the bending angle \(\Phi^{(i)}\) of each block is in red. The green dashed lines correspond to the \(\alpha_{U}^{(i)}\) fibers. Conventionally, the construction assumes that the left section of each block remains straight-angled, so that the bending angles are defined at the left side of each block. **B’:** The deformation of the elastica involves the deformation of the reference polygonal chain, \(L_{\alpha}\to l_{\alpha}\) (dotted black line). The bending angles vary from \(\Phi^{(i)}\) to \(\varphi^{(i)}\). **C:** In the continuum limit, the material reference curve is represented by a solid black curve. The tangent to this curve, \({\bf T}_{\alpha}(s)\), forms with the \(x\) axis the spontaneous bending angle \(\Phi(s)\). **C’:** The deformed material curve (solid black line) stems from its undeformed configuration (dotted black line). The black arrow represents the new tangent \({\bf t}_{\alpha}(s)\), while its corresponding bending angle \(\varphi(s)\) is in red.
Figure 2: Neutral fiber location. **A:** The undeformed condition coincides with a straight bar of length \(L\). The line of centroids (\(\alpha=1/2\)) is depicted as a dotted black fiber: taking this as the representative one-dimensional medium would give the axial-bending uncoupling (see SI). **A’:** The beam in panel A is bent into a ring. If the line of centroids keeps its length constant, then it coincides with the neutral fiber and the axial forces are balanced (\(N_{\alpha}=0\), \(\forall\alpha\)). Among all possible deformations which transform a beam into a circle, the one allowing the existence of the neutral fiber costs the minimum amount of work. **B:** The undeformed condition is a circle. The line of centroids (\(\alpha=1/2\)) is depicted as a dashed red circle, while the material fiber granting the axial-bending uncoupling, defined by \(\alpha_{U}\) in Eq. (8), is shown as a dotted black line. **B’:** The circle in panel B is deformed into a straight beam. If the fiber identified by \(\alpha_{U}\) keeps its length constant during the circle extension, then it coincides with the neutral fiber and Parent’s principle is satisfied. Any other stretching of the circle into a bar will cost more energy than the one leaving the neutral fiber unchanged. Notice that, although the circles in panels A’ and B have the same dimensions, the absence of axial forces implies that the beam in A is longer than that in B’ (see SI). **C:** The generic undeformed configuration has a line of centroids shown as a dashed red line. The curve allowing the axial-bending uncoupling in the Eq.(11) is shown as a black dotted line. This curve does not overlap with any material fiber, according to (12). **C’:** Any transformation where the length of the black dotted line is left invariant fulfills Parent’s principle. Hence, the notion of neutral fiber shifts to that of _neutral curve_. Here the final deformed condition is flat, showing how the neutral curve is deformed differently from any other material fiber.
General theory for plane extensible elastica with arbitrary undeformed shape SUPPLEMENTARY INFORMATION
Alessandro Taloni1
CNR - Consiglio Nazionale delle Ricerche, Istituto dei Sistemi Complessi, via dei Taurini 19, 00185 Roma and Center for Complexity and Biosystems, Department of Physics, University of Milan, Milan, Italy
Daniele Vilone
[email protected] Laboratory of Agent Based Social Simulation, Institute of Cognitive Science and Technology, National Research Council, Via Palestro 32, 00185 Rome, Italy and Grupo Interdisciplinar de Sistemas Complejos, Departamento de Matematicas, Universidad Carlos III de Madrid, 28911 Leganes, Spain
Giuseppe Ruta
[email protected] Dipartimento di Ingegneria strutturale e geotecnica, Sapienza Universita di Roma, Via Eudossiana n. 18, 00184 Roma
## I Derivation of the strain energy from a generic undeformed configuration
We consider the elementary block depicted in Fig. 1. The undeformed condition is represented in panel A, whereas its deformed state is shown in panel A\({}^{\prime}\). The length of the representative fiber in the natural state is \(\Delta L_{\alpha}\), and that of a generic fiber at a height \(\eta_{\alpha}\) is
\[\Delta L(\eta_{\alpha})=\Delta L_{\alpha}+\Lambda_{\alpha}(\eta_{\alpha}). \tag{1}\]
The corresponding deformed quantities \(\Delta l_{\alpha}\) and \(\Delta l(\eta_{\alpha})\) are defined by
\[\Delta l(\eta_{\alpha})=\Delta l_{\alpha}+\lambda_{\alpha}(\eta_{\alpha}). \tag{2}\]
Hence, the deformation that a generic material segment experiences passing from the state A to the state A\({}^{\prime}\) is expressed as
\[u(\eta_{\alpha})=\Delta l_{\alpha}-\Delta L_{\alpha}+\Lambda_{\alpha}(\eta_{ \alpha})-\lambda_{\alpha}(\eta_{\alpha}). \tag{3}\]
The geometry of the undeformed shape satisfies the following equality [1; 2]
\[\frac{\Delta L_{\alpha}}{R_{\alpha}}=\frac{\Lambda_{\alpha}(\eta_{\alpha})}{ \eta_{\alpha}}, \tag{4}\]
Figure 1: Strain energy from a generic undeformed configuration. **A**: The block's undeformed configuration is such that the representative material line has a natural extension equal to \(\Delta L_{\alpha}\). Correspondingly, the size of any other fiber is expressible through the relation (1), where the quantities \(\Lambda_{\alpha}(\eta_{\alpha})\) are represented by horizontal thin red arrows. The spontaneous radius of curvature relative to the representative fiber, \(R_{\alpha}\), is shown by a vertical thick red arrow. **A**\({}^{\prime}\): In the deformed condition, the size of the representative fiber varies from \(\Delta L_{\alpha}\) to \(\Delta l_{\alpha}\) and its radius of curvature becomes \(r_{\alpha}\). In the reference frame integral with the block, any fiber undergoes a displacement \(u(\eta_{\alpha})\) (green arrows). At the same time, the size of any deformed material fiber can be expressed as \(\Delta l(\eta_{\alpha})=\Delta l_{\alpha}+\lambda_{\alpha}(\eta_{\alpha})\) (red arrows).
while for the deformed configuration we have
\[\frac{\Delta l_{\alpha}}{r_{\alpha}}=\frac{\lambda_{\alpha}(\eta_{\alpha})}{\eta_ {\alpha}}. \tag{5}\]
Thanks to Eqs. (4) and (5), the displacement (3) attains the final form
\[u(\eta_{\alpha})=\Delta l_{\alpha}-\Delta L_{\alpha}+\eta_{\alpha}\left(\frac {\Delta L_{\alpha}}{R_{\alpha}}-\frac{\Delta l_{\alpha}}{r_{\alpha}}\right). \tag{6}\]
By substitution of Eqs. (1) and (6) into
\[\Delta E_{\alpha}=\frac{bY}{2}\int_{-\alpha h}^{(1-\alpha)h}\left[\frac{u( \eta_{\alpha})}{\Delta L(\eta_{\alpha})}\right]^{2}\Delta L(\eta_{\alpha})\ d\eta_{ \alpha}, \tag{7}\]
we obtain the expression:
\[\Delta E_{\alpha}=\frac{Y}{2\,\Delta L_{\alpha}}\left[F_{\alpha}\left(\Delta l _{\alpha}-\Delta L_{\alpha}\right)^{2}+2S_{\alpha}\left(\Delta l_{\alpha}- \Delta L_{\alpha}\right)\left(\frac{\Delta l_{\alpha}}{r_{\alpha}}-\frac{ \Delta L_{\alpha}}{R_{\alpha}}\right)+I_{\alpha}\left(\frac{\Delta l_{\alpha }}{r_{\alpha}}-\frac{\Delta L_{\alpha}}{R_{\alpha}}\right)^{2}\right], \tag{8}\]
where
\[F_{\alpha}=\int_{-\alpha h}^{(1-\alpha)h}d\eta_{\alpha}\frac{b}{1+\frac{\eta_ {\alpha}}{R_{\alpha}}}, \tag{9}\]
\[S_{\alpha}=\int_{-\alpha h}^{(1-\alpha)h}d\eta_{\alpha}\frac{b\eta_{\alpha}}{ 1+\frac{\eta_{\alpha}}{R_{\alpha}}} \tag{10}\]
and
\[I_{\alpha}=\int_{-\alpha h}^{(1-\alpha)h}d\eta_{\alpha}\frac{b\eta_{\alpha}^{ 2}}{1+\frac{\eta_{\alpha}}{R_{\alpha}}} \tag{11}\]
These three factors have different functional forms according to whether \(R_{\alpha}\) is positive or negative (see Fig. 2); indeed, the quantity \(R_{\alpha}\) can take either sign, as required by the consistency of the Eq.(4). \(|R_{\alpha}|\) becomes the radius of the osculating circle which locally approximates the reference segment in the continuum limit, and the sign of \(R_{\alpha}\) is assigned in the following way. The intersection \(C\) between the sidelines containing the block's sections lies on the axis \(\xi=0\) of the local reference system \(\xi\)-0-\(\eta_{\alpha}\). If \(C\) lies below the bottom fiber \(\alpha=0\), then \(R_{\alpha}\) is positive (Fig. 2A); if, conversely, \(C\) is above the upper fiber \(\alpha=1\), then \(R_{\alpha}\) is negative (Fig. 2A\({}^{\prime}\)). It follows that \(C\) has coordinates \((0,-R_{\alpha})\) in the local reference system. The same prescription for the sign applies to \(r_{\alpha}\). If \(R_{\alpha}>0\), the solutions of (9), (10) and (11) read
\[F_{\alpha}=bR_{\alpha}\ln\left(1+\frac{h}{R_{0}}\right), \tag{12}\]
\[S_{\alpha}=bR_{\alpha}\left[h-R_{\alpha}\ln\left(1+\frac{h}{R_{0}}\right)\right] \tag{13}\]
and
\[I_{\alpha}=bR_{\alpha}\left[\left(\frac{1}{2}-\alpha\right)h^{2}-hR_{\alpha}+R_{ \alpha}^{2}\ln\left(1+\frac{h}{R_{0}}\right)\right], \tag{14}\]
with \(R_{\alpha}=R_{0}+\alpha h\). When we consider the case \(R_{\alpha}<0\), the three integrals (9)-(11) can be solved yielding
\[F_{\alpha}=-bR_{\alpha}\ln\left(1-\frac{h}{R_{1}}\right), \tag{15}\]
\[S_{\alpha}=bR_{\alpha}\left[h+R_{\alpha}\ln\left(1-\frac{h}{R_{1}}\right)\right] \tag{16}\]
and
\[I_{\alpha}=bR_{\alpha}\left[\left(\frac{1}{2}-\alpha\right)h^{2}-hR_{\alpha}- R_{\alpha}^{2}\ln\left(1-\frac{h}{R_{1}}\right)\right], \tag{17}\]
where \(R_{\alpha}=R_{1}-(1-\alpha)h\). The two opposite bending states in Fig. 2 are expressible in the compact forms provided in the main text, recalling that if \(R_{\alpha}>0\) then necessarily \(R_{0}<R_{1}\) (\(K_{0}>K_{1}\)), while for \(R_{\alpha}<0\) one has \(|R_{1}|<|R_{0}|\) and \(K_{1}>K_{0}\). We stress that the expressions of \(F_{\alpha}\), \(S_{\alpha}\) and \(I_{\alpha}\) are fully established once the values of \(\alpha\) and \(\max[K_{0},K_{1}]\) are furnished. In particular, the following identity holds if \(K_{0}>K_{1}\)
\[K_{\alpha}=\frac{K_{0}}{1+\alpha hK_{0}}, \tag{18}\]
Figure 2: Sign of the radius of curvature. The radius of curvature is assigned a sign according to whether its direction, connecting the intersection \(C\) of the sidelines containing the block’s sections with the beginning of the reference segment, coincides with that of the \(\eta_{\alpha}\) axis of the frame integral with the block (panel **A**), or is opposite to it (panel **A\({}^{\prime}\)**).
while for \(K_{1}>K_{0}\)
\[K_{\alpha}=\frac{K_{1}}{1+(1-\alpha)hK_{1}}. \tag{19}\]
According to the integral expressions (9) and (11), it is always \(F_{\alpha}>0\) and \(I_{\alpha}>0\), because the integrand functions are strictly positive in the integration interval. On the contrary, the sign of \(S_{\alpha}\) can vary according to the value of \(\alpha\) and to \(\max[K_{0},K_{1}]\) (\(\min[R_{0},R_{1}]\)): specifically, \(S_{\alpha}>0\) for \(\alpha<\alpha_{U}\), and \(S_{\alpha}<0\) for \(\alpha>\alpha_{U}\). The choice of \(\alpha_{U}\) ensuring the stretching-bending uncoupling guarantees that
\[F_{\alpha_{U}}=bh, \tag{20}\]
\[S_{\alpha_{U}}=0 \tag{21}\]
and
\[I_{\alpha_{U}}=\frac{b}{K_{\alpha_{U}}}\left(\frac{1}{2}-\alpha_{U}\right)h^{ 2}=\frac{\text{sgn}\left(K_{0}-K_{1}\right)bh^{3}}{\ln\left(1+h\text{ max}[K_{0},K_{1}]\right)}\left[\frac{1}{2}-\frac{1}{\ln\left(1+h \text{ max}[K_{0},K_{1}]\right)}+\frac{1}{h\text{ max}[K_{0},K_{1}]}\right] \tag{22}\]
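The vanishing of \(S_{\alpha}\) at \(\alpha=\alpha_{U}\), together with the reduction \(F_{\alpha_{U}}=bh\), can be checked numerically. The following short Python sketch (a verification aid only, with purely illustrative parameter values) evaluates the integrals (9) and (10) by quadrature for a block with \(R_{\alpha}>0\), locates \(\alpha_{U}\) as the root of \(S_{\alpha}=0\), and compares it with the closed form obtained by setting Eq. (13) to zero.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# Illustrative block parameters (not from the paper): width b, thickness h,
# bottom-fiber radius R0 > 0.
b, h, R0 = 1.0, 0.2, 1.0

def F(alpha):   # reduced area, Eq. (9), by quadrature
    R = R0 + alpha * h
    return quad(lambda eta: b / (1 + eta / R), -alpha * h, (1 - alpha) * h)[0]

def S(alpha):   # coupling moment, Eq. (10), by quadrature
    R = R0 + alpha * h
    return quad(lambda eta: b * eta / (1 + eta / R), -alpha * h, (1 - alpha) * h)[0]

# alpha_U is the root of S(alpha) = 0 (S > 0 below it, S < 0 above it) ...
alpha_U = brentq(S, 0.0, 1.0)
# ... and agrees with the closed form obtained by setting Eq. (13) to zero:
alpha_U_closed = (h / np.log(1 + h / R0) - R0) / h
print(alpha_U, alpha_U_closed)

# At alpha_U the reduced area reduces to the block cross section, Eq. (20):
print(np.isclose(F(alpha_U), b * h))   # True
```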
Figure 3: Beam discrete strain energy from a generic undeformed configuration. **A**: The polygonal chain, connecting the vertices of each undeformed representative segment, is represented by a dotted black thick line. In the laboratory frame, the vertices have planar coordinates \(\left(X_{\alpha}^{(i)},Y_{\alpha}^{(i)}\right)\), shown as green dots. The spontaneous radius of curvature can be positive (as \(R_{\alpha}^{1}\)) or negative (as \(R_{\alpha}^{2}\)). The connection between the size of the representative segment, \(\Delta L_{\alpha}^{(i)}\), and its spontaneous radius of curvature \(R_{\alpha}^{(i)}\) is given by the relation (23), where the bending angles \(\Phi^{(i)}\) define the deflection of the undeformed blocks from the external \(x\) axis. **A\({}^{\prime}\)**: each block composing the beam undergoes a deformation, such that the deformed polygonal chain \(l_{\alpha}\) is constructed by the sequence of the representative segments whose vertices are \(\left(x_{\alpha}^{(i)},y_{\alpha}^{(i)}\right)\) (green dots).
The finite difference scheme, outlined so far, requires the evaluation of the discrete strain energy \(E_{\alpha}=\sum_{i=1}^{N}\Delta E_{\alpha}^{(i)}\) needed to deform the elastica in Fig. 3 from A to A\({}^{\prime}\). Moving to the laboratory frame we find that the relation
\[\frac{\Delta L_{\alpha}^{(i)}}{R_{\alpha}^{(i)}}=-\tan\Delta\Phi^{(i)}. \tag{23}\]
is always satisfied [3], where \(\Delta\Phi^{(i)}=\Phi^{(i)}-\Phi^{(i-1)}\) and \(\Phi^{(i)}\) is the \(i\)-th cross-sectional bending angle with respect the \(x\) axis. Since we assume the limit of small deflections, we can approximate \(\tan\Delta\Phi^{(i)}\simeq\Delta\Phi^{(i)}\). The reference undeformed polygonal chain \(L_{\alpha}\) is specified by the series of points \(\mathbf{L}_{\alpha}^{(i)}=(X_{\alpha}^{(i)},Y_{\alpha}^{(i)})\), with line segments \(\Delta L_{\alpha}^{(i)}=\left|\mathbf{L}_{\alpha}^{(i)}-\mathbf{L}_{\alpha}^ {(i-1)}\right|\) (Fig.3A).
In the deformed state (Fig.3A\({}^{\prime}\)) we have
\[\frac{\Delta l_{\alpha}^{(i)}}{r_{\alpha}^{(i)}}=-\tan\Delta\varphi^{(i)} \tag{24}\]
where \(\Delta\varphi^{(i)}=\varphi^{(i)}-\varphi^{(i-1)}\), and \(\varphi^{(i)}\) corresponds to the bending angle between the \(i-\)th block and the \(x\) axis. Again we assume the small deflection limit, i.e. \(\tan\Delta\varphi^{(i)}\simeq\Delta\varphi^{(i)}\). The deformed chain \(l_{\alpha}\) has the points \(\mathbf{l}_{\alpha}^{(i)}=(x_{\alpha}^{(i)},y_{\alpha}^{(i)})\) as vertices, with \(\Delta l_{\alpha}^{(i)}=\left|\mathbf{l}_{\alpha}^{(i)}-\mathbf{l}_{\alpha}^ {(i-1)}\right|\). We introduce the strain measure as \(\varepsilon_{\alpha}^{(i)}=\frac{\Delta l_{\alpha}^{(i)}-\Delta L_{\alpha}^{( i)}}{\Delta L_{\alpha}^{(i)}}\), while the bending strain measure can be obtained by two different definitions: the first is due to Kammel [4]
\[\mu_{\alpha}^{(i)}=\frac{\Delta\Phi^{(i)}-\Delta\varphi^{(i)}}{\Delta L_{ \alpha}^{(i)}}, \tag{25}\]
and the second to Antman [5]
\[\mu_{\alpha}^{(i)}=\frac{\Delta l_{\alpha}^{(i)}}{\Delta L_{\alpha}^{(i)}}\frac{1}{r_{\alpha}^{(i)}}-\frac{1}{R_{\alpha}^{(i)}}. \tag{26}\]
One can easily see, by means of Eqs. (23) and (24) and of the construction in Fig. 6, that the two definitions are equivalent in the small deflection limit. Upon summation of the terms in Eq. (8), using the definition (26), we obtain the energy
\[E_{\alpha}=\frac{Y}{2}\sum_{i=1}^{N}\Delta L_{\alpha}^{(i)}\left\{F_{\alpha}^ {(i)}\,\varepsilon_{\alpha}^{(i)}{}^{2}+2S_{\alpha}^{(i)}\,\varepsilon_{ \alpha}^{(i)}\mu_{\alpha}^{(i)}+I_{\alpha}^{(i)}\,\mu_{\alpha}^{(i)}{}^{2} \right\}\;. \tag{27}\]
The expression of \(F_{\alpha}\), \(S_{\alpha}\) and \(I_{\alpha}\) in terms of \(\Delta\Phi\), \(\Delta L_{\alpha}\) and \(\alpha\), is achieved by inserting the relation (23) into the expressions (12)-(14) and (15)-(17):
\[F_{\alpha}^{(i)}=b\,\frac{\Delta L_{\alpha}^{(i)}}{\left|\Delta\Phi^{(i)}\right|}\ln\left(1+h\frac{\left|\Delta\Phi^{(i)}\right|}{\min[\Delta L_{0}^{(i)},\Delta L_{1}^{(i)}]}\right), \tag{28}\]
\[S_{\alpha}^{(i)}=-\frac{\Delta L_{\alpha}^{(i)}}{\Delta\Phi^{(i)}}\left(bh-F_{\alpha}^{(i)}\right) \tag{29}\]
and
\[I_{\alpha}^{(i)}=-\frac{\Delta L_{\alpha}^{(i)}}{\Delta\Phi^{(i)}}\left[b\left(\frac{1}{2}-\alpha\right)h^{2}-S_{\alpha}^{(i)}\right]. \tag{30}\]
We recall that it is convenient to take \(\Delta L_{\alpha}=\Delta L_{0}-\alpha h\Delta\Phi\) if \(\min[\Delta L_{0},\Delta L_{1}]=\Delta L_{0}\), and \(\Delta L_{\alpha}=\Delta L_{1}+(1-\alpha)h\Delta\Phi\) if \(\min[\Delta L_{0},\Delta L_{1}]=\Delta L_{1}\).
The differential strain energy \(\mathscr{E}_{\alpha}\) is derived by first introducing two parametric expressions for the undeformed and deformed reference material curves, \(\mathscr{L}_{\alpha}:[s_{m},s_{M}]\rightarrow\mathbb{R}^{2}\) and \(\ell_{\alpha}:[s_{m},s_{M}]\rightarrow\mathbb{R}^{2}\) respectively. Secondly, we take an arbitrary partition \(s_{m}=s_{0}<s_{1}<s_{2}<\cdots<s_{N}=s_{M}\) to which we connect the two polygonal chains \(L_{\alpha}\) and \(l_{\alpha}\), in such a way that the vertices satisfy \(\mathbf{L}_{\alpha}(s_{i})\equiv\mathbf{L}_{\alpha}^{(i)}\) and \(\mathbf{l}_{\alpha}(s_{i})\equiv\mathbf{l}_{\alpha}^{(i)}\). Moreover, we define the maps \(\Phi:[s_{m},s_{M}]\rightarrow\mathbb{R}\) and \(\varphi:[s_{m},s_{M}]\rightarrow\mathbb{R}\) with the properties \(\Phi(s_{i})\equiv\Phi^{(i)}\) and \(\varphi(s_{i})\equiv\varphi^{(i)}\). From the definitions of the two strain measures \(\varepsilon_{\alpha}^{(i)}\) and \(\mu_{\alpha}^{(i)}\), it follows that
\[\varepsilon_{\alpha}(s_{i})=\frac{\frac{|\Delta\mathbf{l}_{\alpha}(s_{i})|}{\Delta s_{i}}-\frac{|\Delta\mathbf{L}_{\alpha}(s_{i})|}{\Delta s_{i}}}{\frac{|\Delta\mathbf{L}_{\alpha}(s_{i})|}{\Delta s_{i}}}. \tag{31}\]
\[\mu_{\alpha}(s_{i})=\frac{\frac{\Delta\Phi(s_{i})}{\Delta s_{i}}-\frac{\Delta\varphi(s_{i})}{\Delta s_{i}}}{\frac{|\Delta\mathbf{L}_{\alpha}(s_{i})|}{\Delta s_{i}}}. \tag{32}\]
In the continuum limit, \(N\) is increased until the lengths of the polygonal chains \(L_{\alpha}\) and \(l_{\alpha}\) equal those of the curves \(\mathscr{L}_{\alpha}\) and \(\ell_{\alpha}\). This condition is mathematically enforced by the limiting relations
\[\frac{|\Delta\mathbf{L}_{\alpha}(s_{i})|}{\Delta s_{i}}\rightarrow|\mathscr{ L}_{\alpha}^{\prime}(s)|\,,\ \ \frac{|\Delta\mathbf{l}_{\alpha}(s_{i})|}{\Delta s_{i}}\rightarrow|\ell_{\alpha}^ {\prime}(s)| \tag{33}\]
as \(\Delta s_{i}\to 0\). Correspondingly, the two tangents to the curves are defined as \(\mathbf{T}_{\alpha}(s)=\frac{d\mathbf{L}_{\alpha}}{ds}\) and \(\mathbf{t}_{\alpha}(s)=\frac{d\mathbf{l}_{\alpha}}{ds}\). Likewise, the limit \(\Delta s_{i}\to 0\) entails
\[\frac{\Delta\Phi(s_{i})}{\Delta s_{i}}\rightarrow\Phi^{\prime}(s),\ \ \frac{\Delta\varphi(s_{i})}{\Delta s_{i}} \rightarrow\varphi^{\prime}(s). \tag{34}\]
The differential strain measures follow from the limit of Eq.s (31) and (32):
\[\varepsilon_{\alpha}(s)=\frac{|\mathbf{t}_{\alpha}(s)|-|\mathbf{T}_{\alpha}( s)|}{|\mathbf{T}_{\alpha}(s)|}. \tag{35}\]
\[\mu_{\alpha}(s)=\frac{\Phi^{\prime}(s)-\varphi^{\prime}(s)}{|\mathbf{T}_{ \alpha}(s)|}. \tag{36}\]
Now, if \(\sum_{i=1}^{N}\Delta L\rightarrow\int_{s_{m}}^{s_{M}}ds\), plugging the definitions of \(\mathbf{T}_{\alpha}(s)\), (35) and (36) into Eq.(27) we recover the energy reported in the main text:
\[\mathscr{E}_{\alpha}=\frac{Y}{2}\int_{s_{m}}^{s_{M}}ds\left\{\frac{F_{\alpha} (s)}{|\mathbf{T}_{\alpha}(s)|}\ \left[|\mathbf{t}_{\alpha}(s)|-|\mathbf{T}_{\alpha}(s)|\right]^{2}-\frac{2S_ {\alpha}(s)}{|\mathbf{T}_{\alpha}(s)|}\ \left[|\mathbf{t}_{\alpha}(s)|-|\mathbf{T}_{\alpha}(s)| \right]\left[\varphi^{\prime}(s)-\Phi^{\prime}(s)\right]+\frac{I_{\alpha}(s)} {|\mathbf{T}_{\alpha}(s)|}\ \left[\varphi^{\prime}(s)-\Phi^{\prime}(s)\right]^{2}\right\}, \tag{37}\]
The differential formulae for the three factors \(F_{\alpha}(s)\), \(S_{\alpha}(s)\) and \(I_{\alpha}(s)\) are obtainable from the Eq.s (28), (29) and (30):
\[F_{\alpha}(s)=b\left|\frac{\mathbf{T}_{\alpha}(s)}{\Phi^{\prime}(s)}\right|\ln\left(1+h\frac{|\Phi^{\prime}(s)|}{\min[|\mathbf{T}_{0}(s)|\,,|\mathbf{T}_{1}(s)|]}\right), \tag{38}\]
\[S_{\alpha}(s)=-\frac{|{\bf T}_{\alpha}(s)|}{\Phi^{\prime}(s)}\left(bh-F_{\alpha}(s)\right) \tag{39}\]
and
\[I_{\alpha}(s)=-\frac{|{\bf T}_{\alpha}(s)|}{\Phi^{\prime}(s)}\left[b\left(\frac{1}{2}-\alpha\right)h^{2}-S_{\alpha}(s)\right]. \tag{40}\]
It is clear that, when \(\min[|{\bf T}_{0}(s)|,|{\bf T}_{1}(s)|]=|{\bf T}_{0}(s)|\), we can express \(|{\bf T}_{\alpha}(s)|=|{\bf T}_{0}(s)|-\alpha h\Phi^{\prime}(s)\); conversely, when \(\min[|{\bf T}_{0}(s)|\,,|{\bf T}_{1}(s)|]=|{\bf T}_{1}(s)|\), then \(|{\bf T}_{\alpha}(s)|=|{\bf T}_{1}(s)|+(1-\alpha)h\Phi^{\prime}(s)\). Thus, the functional forms of \(F_{\alpha}(s)\), \(S_{\alpha}(s)\) and \(I_{\alpha}(s)\) in Eqs.(38), (39) and (40) highlight the local character of these quantities and the fact that they are fully established by the values of \((\alpha,\Phi^{\prime},\min[|{\bf T}_{0}|\,,|{\bf T}_{1}|])\).
To uncouple the bending and stretching contributions in the continuum energy expression \({\mathscr{E}}_{\alpha}\), the condition \(S_{\alpha}(s)=0\) in (39) requires that
\[\alpha_{U}(s)=\frac{|{\bf T}_{0}(s)|}{h\Phi^{\prime}(s)}-\frac{|\Phi^{\prime}( s)|}{\Phi^{\prime}(s)}\frac{1}{\ln\left(1+h\frac{|\Phi^{\prime}(s)|}{\min[|{\bf T }_{0}(s)|,|{\bf T}_{1}(s)|]}\right)}, \tag{41}\]
which is also expressible as
\[\alpha_{U}(s)=1-\left[\frac{|\Phi^{\prime}(s)|}{\Phi^{\prime}(s)}\frac{1}{\ln \left(1+h\frac{|\Phi^{\prime}(s)|}{\min[|{\bf T}_{0}(s)|,|{\bf T}_{1}(s)|]} \right)}-\frac{|{\bf T}_{1}(s)|}{h\Phi^{\prime}(s)}\right]. \tag{42}\]
The two formulae (41) and (42) are equivalent and can be obtained recalling that \(|{\bf T}_{1}(s)|=|{\bf T}_{0}(s)|-h\Phi^{\prime}(s)\). Therefore the uncoupling condition is a local property of the elastica and, for either choice of \(\alpha_{U}(s)\), the three factors (38)-(40) reduce to
\[F_{\alpha_{U}}(s)=bh, \tag{43}\]
\[S_{\alpha_{U}}(s)=0 \tag{44}\]
and
\[I_{\alpha_{U}}(s) = -\frac{|{\bf T}_{\alpha_{U}}(s)|}{\Phi^{\prime}(s)}\left[\frac{1 }{2}-\alpha_{U}(s)\right]bh^{2} \tag{45}\] \[= -bh^{3}\frac{|\Phi^{\prime}(s)|}{\Phi^{\prime}(s)}\frac{1}{\ln \left(1+h\frac{|\Phi^{\prime}(s)|}{\min[|{\bf T}_{0}(s)|,|{\bf T}_{1}(s)|]} \right)}\left[\frac{1}{2}-\frac{|{\bf T}_{0}(s)|}{h\Phi^{\prime}(s)}+\frac{| \Phi^{\prime}(s)|}{\Phi^{\prime}(s)}\frac{1}{\ln\left(1+h\frac{|\Phi^{\prime} (s)|}{\min[|{\bf T}_{0}(s)|,|{\bf T}_{1}(s)|]}\right)}\right]. \tag{46}\]
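As a quick consistency check of the two equivalent expressions (41) and (42), the following Python snippet (a minimal sketch; the local values of \(h\), \(|\mathbf{T}_{0}|\) and \(\Phi^{\prime}\) are purely illustrative assumptions) verifies that they return the same \(\alpha_{U}(s)\) and that the corresponding coupling moment (39) indeed vanishes.

```python
import numpy as np

# Illustrative local data at a given arc length s (not taken from the paper):
b, h, T0, dPhi = 1.0, 0.1, 1.2, 0.7
T1 = T0 - h * dPhi                           # |T1(s)| = |T0(s)| - h*Phi'(s)
L = np.log(1.0 + h * abs(dPhi) / min(T0, T1))

alpha_U_41 = T0 / (h * dPhi) - np.sign(dPhi) / L           # Eq. (41)
alpha_U_42 = 1.0 - (np.sign(dPhi) / L - T1 / (h * dPhi))   # Eq. (42)
print(np.isclose(alpha_U_41, alpha_U_42))    # True: the two expressions coincide

# With this alpha_U the coupling moment of Eq. (39) vanishes:
T_a = T0 - alpha_U_41 * h * dPhi             # |T_alpha(s)| = |T0(s)| - alpha*h*Phi'(s)
F = b * abs(T_a / dPhi) * L                  # Eq. (38)
S = -(abs(T_a) / dPhi) * (b * h - F)         # Eq. (39)
print(np.isclose(S, 0.0))                    # True: S_{alpha_U}(s) = 0, Eq. (44)
```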
The _neutral_ arc-length parametrization requires that
\[|{\bf T}_{\alpha_{U}}(s)|=1. \tag{47}\]
Therefore, adopting this parametrization and the neutral curve as representative of the whole elastica we have that for a generic transformation the energy is expressible as
\[{\mathscr{E}}_{\alpha_{U}}=\frac{bY}{2}\int_{s_{m}}^{s_{M}}ds\left\{h\,\left[| {\bf t}_{\alpha_{U}}(s)|-1\right]^{2}-\frac{h^{2}}{\Phi^{\prime}(s)}\left[ \frac{1}{2}-\alpha_{U}(s)\right]\,\left[\varphi^{\prime}(s)-\Phi^{\prime}(s) \right]^{2}\right\}. \tag{48}\]
## II Energy invariance under change of reference frame
Consider Fig. 4. The undeformed elementary block is represented on the left (A), and its shape upon deformation is displayed on the right (A\({}^{\prime}\)). Let us assign to the block a planar reference system integral with it, where the \(\xi\) and \(\eta_{\alpha}\) axes define respectively the block's longitudinal and transverse directions. The origin of such a reference system is placed at a height \(\alpha h\) (\(0\leq\alpha\leq 1\)) from the block's bottom surface, and on the left lateral block boundary. The quantity \(\Delta L(\eta_{\alpha})\) corresponds to the length of a generic material line placed at the height \(\eta_{\alpha}\), with \(-\alpha h\leq\eta_{\alpha}\leq(1-\alpha)h\). By definition, the value of \(\Delta L(\eta_{\alpha}=0)\) is the representative material line length \(\Delta L_{\alpha}\). Now, a translation of the reference system along the axis \(\xi=0\) is equivalent to a linear change of variables:
\[\eta_{\beta}=\eta_{\alpha}+(\alpha-\beta)h \tag{49}\]
where \(\eta_{\beta}\) is the new axis pointing along the block transverse direction. However, the material line lengths \(\Delta L(\eta_{\alpha})\) have to be invariant under the transformation (49):
\[\Delta L\left(\eta_{\beta}\right)=\Delta L\left(\eta_{\alpha}=\eta_{\beta}-( \alpha-\beta)h\right). \tag{50}\]
It is also clear that in the reference system \(\xi\)-0-\(\eta_{\beta}\), the representative material line has length \(\Delta L_{\beta}=\Delta L\left(\eta_{\beta}=0\right)=\Delta L\left(\eta_{ \alpha}=(\beta-\alpha)h\right)\) and \(-\beta h\leq\eta_{\beta}\leq(1-\beta)h\).
Let us turn to the deformed configuration A\({}^{\prime}\). The deformed longitudinal length \(\Delta l\) follows the same law (50) under the shift of the reference system, i.e.
\[\Delta l\left(\eta_{\beta}\right)=\Delta l\left(\eta_{\alpha}=\eta_{\beta}-( \alpha-\beta)h\right). \tag{51}\]
Therefore, since the extension is defined as \(u=\Delta l-\Delta L\), thanks to Eqs.(50) and (51) we have that the following equality holds
\[u\left(\eta_{\beta}\right)=u\left(\eta_{\alpha}=\eta_{\beta}-(\alpha-\beta)h \right). \tag{52}\]
The strain energy of the block calculated in the \(\xi\)-0-\(\eta_{\alpha}\) reference frame is
\[\Delta E_{\alpha}=\frac{bY}{2}\int_{-\alpha h}^{(1-\alpha)h}\frac{u(\eta_{ \alpha})^{2}}{\Delta L(\eta_{\alpha})}\ d\eta_{\alpha}. \tag{53}\]
By applying the change of variables (49) to the integral expression (53), the equality \(\Delta E_{\alpha}=\Delta E_{\beta}\) follows from the relations (50) and (52).
Figure 4: Block’s energy invariance under change of reference material line. **A:** In red is represented the reference frame \(\xi\)-0-\(\eta_{\alpha}\) integral with the block, when the reference material segment is placed at a height \(\alpha h\) from the block’s bottom surface. The length of the reference segment is \(\Delta L_{\alpha}\). When the reference material segment is placed at a different height \(\beta h\), the corresponding frame \(\xi\)-0-\(\eta_{\beta}\) is depicted in green and the segment length is \(\Delta L_{\beta}\). **A\({}^{\prime}\):** The block undergoes a deformation from its natural shape (light blue): the size of the reference fiber changes to \(\Delta l_{\alpha}\) or \(\Delta l_{\beta}\). The energy cost associated with this deformation is the same whether the reference segment is placed at \(\alpha h\) or \(\beta h\).
## III Derivation of the strain energy from a flat undeformed configuration
Let us consider the deformation of the elementary block presented in Fig. 5. The undeformed flat condition is depicted by a dotted black line, and it has the peculiarity that the longitudinal length is equal to \(\Delta L\) for any choice of the representative segment. The planar reference frame integral with the block is identified by the \(\xi\) and \(\eta_{\alpha}\) axes, pointing respectively towards the block's longitudinal and transverse directions. The origin is placed at a height \(\alpha h\) (\(0\leq\alpha\leq 1\)) from the block's bottom surface, and on the left lateral block boundary. Any fiber placed at a height \(\eta_{\alpha}\) attains a length \(\Delta l(\eta_{\alpha})\) upon deformation, with \(\Delta l(\eta_{\alpha}=0)=\Delta l_{\alpha}\). According to the geometrical construction in Fig. 5, the Eq.(2) is equivalent to
\[\Delta l(\eta_{\alpha})=\Delta L+u(\eta_{\alpha}). \tag{54}\]
From Eqs.(54), (2) and (5) the elongation of any fiber can be expressed as
\[u(\eta_{\alpha})=\Delta l_{\alpha}-\Delta L+\eta_{\alpha}\frac{\Delta l_{ \alpha}}{r_{\alpha}}. \tag{55}\]
In this condition, the strain energy of the block takes the form
\[\Delta E_{\alpha}=\frac{bY}{2}\int_{-\alpha h}^{(1-\alpha)h}\left[\frac{ \Delta l_{\alpha}-\Delta L}{\Delta L}+\frac{\eta_{\alpha}}{r_{\alpha}}\frac{ \Delta l_{\alpha}}{\Delta L}\right]^{2}\Delta L\ d\eta_{\alpha}, \tag{56}\]
where we have inserted the relation (55). Solving the integral and defining the strain as \(\varepsilon_{\alpha}=\frac{\Delta l_{\alpha}-\Delta L}{\Delta L}\), we arrive at the expression
\[\Delta E_{\alpha}=\frac{bY}{2}\left[h\,\Delta L\,\varepsilon_{\alpha}^{2}+h^{ 2}(1-2\alpha)\,\varepsilon_{\alpha}\frac{\Delta l_{\alpha}}{r_{\alpha}}+h^{3} \left(\frac{1}{3}-\alpha+\alpha^{2}\right)\frac{1}{\Delta L}\left(\frac{ \Delta l_{\alpha}}{r_{\alpha}}\right)^{2}\right]. \tag{57}\]
It is clear that the value of \(\alpha\) which guarantees the axial-bending uncoupling is the line of centroids, \(\alpha_{U}=1/2\).
Figure 5: Strain energy from a flat undeformed configuration. The block in its undeformed configuration has a longitudinal size of \(\Delta L\) (dotted black vertical line). When deformed, the size of the representative fiber varies from \(\Delta L\) to \(\Delta l_{\alpha}\). In the reference frame integral with the block, any fiber undergoes a displacement \(u(\eta_{\alpha})\) (green arrows). At the same time, the size of any deformed material fiber can be expressed as \(\Delta l(\eta_{\alpha})=\Delta l_{\alpha}+\lambda_{\alpha}(\eta_{\alpha})\), where \(\lambda_{\alpha}(\eta_{\alpha})\) are represented by red arrows.
The strain energy of the whole elastica is given by the sum over the block contributions, formally \(E_{\alpha}=\sum_{i=1}^{N}\Delta E_{\alpha}^{(i)}\). We aim, however, at furnishing its analytical expression in the laboratory frame (Fig. 6).
The block's representative segment length \(\Delta l_{\alpha}^{(i)}\) is a positive quantity, being \(\Delta l_{\alpha}^{(i)}=\left|\mathbf{l}_{\alpha}^{(i)}-\mathbf{l}_{\alpha}^{(i-1)}\right|\), where \(\mathbf{l}_{\alpha}^{(i)}\equiv\left(x_{\alpha}^{(i)},y_{\alpha}^{(i)}\right)\) are the vertices of the reference polygonal curve \(l_{\alpha}\) in the lab reference system (Fig. 6). The polygonal curve is defined as the ordered sequence of the representative segments \(\Delta l_{\alpha}^{(i)}\). The quantity \(r_{\alpha}^{(i)}\), on the other hand, can be positive or negative. This is required by the consistency of the Eq.(5). As a matter of fact, \(\left|r_{\alpha}^{(i)}\right|\) becomes the radius of the osculating circle which locally approximates the reference segment in the continuum limit, and the sign of \(r_{\alpha}^{(i)}\) is assigned in the following way. It is clear that the intersection between the sidelines containing the block's sections lies on the axis \(\xi=0\) of the local reference system integral with any block (see Fig. 2). Hence, in this reference system the coordinates of the circle's center are defined as \(\left(0,-r_{\alpha}^{(i)}\right)\): this establishes uniquely the sign of \(r_{\alpha}^{(i)}\). Now, moving to the laboratory frame we find that the relation (24) is always satisfied, with \(\Delta\varphi^{(i)}=\varphi^{(i)}-\varphi^{(i-1)}\), where \(\varphi^{(i)}\) corresponds to the bending angle between the \(i\)-th block and the \(x\) axis. In the small deflection limit, i.e. \(\tan\Delta\varphi^{(i)}\simeq\Delta\varphi^{(i)}\), the discrete strain energy for the entire slender beam is therefore framed as
\[E_{\alpha}=\frac{bY}{2}\sum_{i=1}^{N}\left[h\Delta L\,{\varepsilon_{\alpha}^{( i)}}^{2}-h^{2}(1-2\alpha)\,\varepsilon_{\alpha}^{(i)}\Delta\varphi^{(i)}+h^{3} \left(\frac{1}{3}-\alpha+\alpha^{2}\right)\,\frac{{\Delta\varphi^{(i)}}^{2}}{ \Delta L}\right]. \tag{58}\]
The bending measure is defined as
\[\mu^{(i)}=\frac{-\Delta\varphi^{(i)}}{\Delta L} \tag{59}\]
or, thanks to (24), as
Figure 6: Beam discrete strain energy from a flat undeformed configuration. The polygonal chain, connecting the vertices of the deformed representative segments, is represented by a dotted black thick line. In the laboratory frame, the vertices have planar coordinates \(\left(x_{\alpha}^{(i)},y_{\alpha}^{(i)}\right)\), shown as green dots. The morphological change of each block is determined by the radius of curvature, which can be positive (as \(r_{\alpha}^{1}\)) or negative (as \(r_{\alpha}^{3}\)). The connection between the local longitudinal deformation of the representative fiber, \(\Delta l_{\alpha}^{(i)}\), and its radius of curvature \(r_{\alpha}^{(i)}\) is encapsulated in the relation (24), where the bending angles \(\varphi^{(i)}\) define the deflection of the deformed blocks from the external \(x\) axis.
\[\mu^{(i)}=\frac{\Delta l_{\alpha}^{(i)}}{\Delta L}\frac{1}{r_{\alpha}^{(i)}}. \tag{60}\]
Using these definitions, the energy (58) takes the following form
\[E_{\alpha}=\frac{bY\Delta L}{2}\sum_{i=1}^{N}\left[h\,{\varepsilon_{\alpha}^{(i )}}^{2}+h^{2}(1-2\alpha)\varepsilon_{\alpha}^{(i)}\mu^{(i)}+h^{3}\left(\frac{1 }{3}-\alpha+\alpha^{2}\right)\mu^{(i)2}\right] \tag{61}\]
Let us introduce two parametric expressions for the undeformed and deformed reference plane curves as \(\mathscr{L}_{\alpha}:[0,L]\rightarrow\mathbb{R}^{2}\) and \(\ell_{\alpha}:[0,L]\rightarrow\mathbb{R}^{2}\) respectively. As a consequence, the Cartesian coordinates of the undeformed reference curve in the laboratory frame are
\[\mathbf{L}_{\alpha}(s)=\left\{\begin{array}{l}X_{\alpha}(s)\ =\ s\\ Y_{\alpha}(s)\ =\ 0,\end{array}\right. \tag{62}\]
and those of the deformed curve \(\ell_{\alpha}(s)\) are \(\mathbf{l}_{\alpha}(s)\equiv(x_{\alpha}(s),y_{\alpha}(s))\). Taking a uniform partition of \([0,L]\), i.e. \(0=s_{0}<s_{1}<s_{2}<\cdots<s_{N}=L\) such that \(s_{i}-s_{i-1}=\Delta s\equiv\Delta L\) for any \(i\), we obtain that the polygonal vertices are \(\mathbf{l}_{\alpha}(s_{i})=\mathbf{l}_{\alpha}^{(i)}\) and the local longitudinal strain \(\varepsilon_{\alpha}^{(i)}\) is given by
\[\varepsilon_{\alpha}(s_{i})=\frac{|\Delta\mathbf{l}_{\alpha}(s_{i})|}{\Delta s }-1. \tag{63}\]
Analogously, the bending angles at the polygonal vertices are \(\varphi(s_{i})=\varphi^{(i)}\). The continuum limit is taken by increasing \(N\) until the length of the polygonal chain \(l_{\alpha}\) approaches from below that of the curve \(\ell_{\alpha}\), i.e. \(\frac{|\Delta\mathbf{l}_{\alpha}(s_{i})|}{\Delta s}\rightarrow|\ell_{\alpha} ^{\prime}(s)|\) as \(\Delta s\to 0\). The curve derivative is defined as \(\ell_{\alpha}^{\prime}(s)\equiv\mathbf{t}_{\alpha}(s)\), where we have introduced the tangent of the curve \(\mathbf{t}_{\alpha}(s)=\frac{d\mathbf{l}_{\alpha}(s)}{ds}\). Finally, if the continuum limit entails that \(\frac{\Delta\varphi(s_{i})}{\Delta s}\rightarrow\varphi^{\prime}(s)\) and \(\sum_{i=1}^{N}\Delta L\rightarrow\int_{0}^{L}ds\), by substitution of \(\varepsilon_{\alpha}(s)=|\mathbf{t}_{\alpha}(s)|-1\) the energy (61) takes the form
\[\mathscr{E}_{\alpha}=\frac{bY}{2}\int_{0}^{L}ds\left\{h\,\left[|\mathbf{t}_{ \alpha}(s)|-1\right]^{2}-h^{2}(1-2\alpha)\,\left[|\mathbf{t}_{\alpha}(s)|-1 \right]\varphi^{\prime}(s)+h^{3}\left(\frac{1}{3}-\alpha+\alpha^{2}\right)\, \varphi^{\prime}(s)^{2}\right\}. \tag{64}\]
## IV Strain energy limiting cases: Regaining the flat undeformed condition
In the present section we show how to recover the straight-beam strain energy (57) from the energy (8) calculated from a generic undeformed configuration. To this aim, it will be sufficient to study the behaviour of \(F_{\alpha}\), \(S_{\alpha}\) and \(I_{\alpha}\) in the limit of \(\frac{h}{\min[|R_{0}|,|R_{1}|]}\to 0\).
Let us firstly express the relations (9)-(11) as
\[F_{\alpha}=b|R_{\alpha}|\ln\left(1+\frac{h}{\min[|R_{0}|,|R_{1}|]}\right), \tag{65}\]
\[S_{\alpha}=bR_{\alpha}\left[h-|R_{\alpha}|\ln\left(1+\frac{h}{\min[|R_{0}|,|R_{1}|]}\right)\right] \tag{66}\]
and
\[I_{\alpha}=bR_{\alpha}\left[\left(\frac{1}{2}-\alpha\right)h^{2}-hR_{\alpha}+R_{\alpha}|R_{\alpha}|\ln\left(1+\frac{h}{\min[|R_{0}|,|R_{1}|]}\right)\right], \tag{67}\]
Then we consider the condition \(\frac{h}{\min[|R_{0}|,|R_{1}|]}\ll 1\) and expand the logarithm to the third order:
\[\ln\left(1+\frac{h}{\min[|R_{0}|,|R_{1}|]}\right)\simeq\frac{h}{\min[|R_{0}|,|R_ {1}|]}-\frac{h^{2}}{2\min[|R_{0}|,|R_{1}|]^{2}}+\frac{h^{3}}{3\min[|R_{0}|,|R_{ 1}|]^{3}}. \tag{68}\]
Hence we get
\[F_{\alpha}=\left\{\begin{array}{cc}b\left[h+\frac{h^{2}}{R_{0}}\left(\alpha-\frac{1}{2}\right)+\frac{h^{3}}{R_{0}^{2}}\left(\frac{1}{3}-\frac{\alpha}{2}\right)\right]&\min[|R_{0}|,|R_{1}|]=|R_{0}|\\ b\left[h+\frac{h^{2}}{R_{1}}\left(\alpha-\frac{1}{2}\right)+\frac{h^{3}}{R_{1}^{2}}\left(\frac{1}{3}-\frac{1-\alpha}{2}\right)\right]&\min[|R_{0}|,|R_{1}|]=|R_{1}|\end{array}\right., \tag{69}\]
\[S_{\alpha}=\left\{\begin{array}{cc}b\left[h^{2}\left(\frac{1}{2}-\alpha\right)-\frac{h^{3}}{R_{0}}\left(\frac{1}{3}-\alpha+\alpha^{2}\right)\right]&\min[|R_{0}|,|R_{1}|]=|R_{0}|\\ b\left[h^{2}\left(\frac{1}{2}-\alpha\right)-\frac{h^{3}}{R_{1}}\left(\frac{1}{3}-\alpha+\alpha^{2}\right)\right]&\min[|R_{0}|,|R_{1}|]=|R_{1}|,\end{array}\right. \tag{70}\]
\[I_{\alpha}=bh^{3}\left(\frac{1}{3}-\alpha+\alpha^{2}\right). \tag{71}\]
By substitution of the former relations into (8), the Eq. (57) is correctly reestablished.
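The limiting expressions above can also be reproduced symbolically. The following sympy sketch (a verification aid only; it covers the \(R_{\alpha}>0\) branch) expands the closed forms (12)-(14) for \(h/R_{0}\to 0\) and confirms the expressions (69)-(71).

```python
import sympy as sp

b, h, R0, alpha = sp.symbols('b h R_0 alpha', positive=True)
R = R0 + alpha * h                          # R_alpha for the R_alpha > 0 branch

# Eqs. (12)-(14), expanded up to third order in h:
F = sp.series(b * R * sp.log(1 + h / R0), h, 0, 4).removeO()
S = sp.series(b * R * (h - R * sp.log(1 + h / R0)), h, 0, 4).removeO()
I = sp.series(b * R * ((sp.Rational(1, 2) - alpha) * h**2 - h * R
                       + R**2 * sp.log(1 + h / R0)), h, 0, 4).removeO()

# Differences with the limiting forms (69)-(71): all three print as 0.
print(sp.expand(F - b * (h + h**2 / R0 * (alpha - sp.Rational(1, 2))
                         + h**3 / R0**2 * (sp.Rational(1, 3) - alpha / 2))))
print(sp.expand(S - b * (h**2 * (sp.Rational(1, 2) - alpha)
                         - h**3 / R0 * (sp.Rational(1, 3) - alpha + alpha**2))))
print(sp.expand(I - b * h**3 * (sp.Rational(1, 3) - alpha + alpha**2)))
```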
## V Macroscopic constitutive equations under change of material curve
When the natural state is flat, the strain energy function is defined as \(W_{\alpha}=\frac{\Delta E_{\alpha}}{\Delta L}\), where \(\Delta E_{\alpha}\) is given in Eq. (57):
\[W_{\alpha}=\frac{bY}{2}\left[h\,\varepsilon_{\alpha}^{2}+h^{2}(1-2\alpha) \varepsilon_{\alpha}\mu+h^{3}\left(\frac{1}{3}-\alpha+\alpha^{2}\right)\mu^{ 2}\right]. \tag{72}\]
The usual choice of the middle fiber as the representative medium (\(\alpha=1/2\)) yields the expression commonly used in several contexts [5; 6; 7; 8]. However, for a generic choice of the representative fiber, the constitutive equations for the axial force and the bending moment are readily obtained:
\[\left\{\begin{array}{cc}N_{\alpha}=\frac{\partial W_{\alpha}}{\partial \varepsilon_{\alpha}}=bY\left[h\varepsilon_{\alpha}+h^{2}\mu\left(\frac{1}{2} -\alpha\right)\right]\\ M_{\alpha}=\frac{\partial W_{\alpha}}{\partial\mu}=bY\left[h^{2}\varepsilon_{ \alpha}\left(\frac{1}{2}-\alpha\right)+h^{3}\mu\left(\frac{1}{3}-\alpha+ \alpha^{2}\right)\right].\end{array}\right. \tag{73}\]
The axial force exerted on a material line \(\alpha\) in (73) can be transformed into \(N_{\beta}\) by applying the change of material line
\[\varepsilon_{\alpha}=\varepsilon_{\beta}+h(\alpha-\beta)\mu. \tag{74}\]
It immediately follows that \(N_{\alpha}=N_{\beta}\). The bending moment can be recast as
\[M_{\alpha}=bY\left[h^{2}\varepsilon_{\alpha}\left(\frac{1}{2}-\alpha\right)+h ^{3}\frac{\mu}{12}+h^{3}\mu\left(\frac{1}{2}-\alpha\right)^{2}\right]. \tag{75}\]
Therefore, from the expression of the axial force we have
\[M_{\alpha}=M_{1/2}-h\left(\alpha-\frac{1}{2}\right)N_{\alpha}. \tag{76}\]
Hence by subtracting the expression for \(M_{\beta}\) from (76) and recalling the axial force invariance we have
\[M_{\alpha}=M_{\beta}-h\left(\alpha-\beta\right)N_{\beta}. \tag{77}\]
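These transformation laws are simple polynomial identities and can be verified symbolically. The sympy sketch below (a verification aid only) builds the constitutive equations (73) from the strain energy function (72), applies the change of material line (74), and confirms the invariance of the axial force, the moment rule (77) and the invariance of the strain energy itself.

```python
import sympy as sp

b, Y, h, alpha, beta, eps, eps_b, mu = sp.symbols(
    'b Y h alpha beta epsilon epsilon_beta mu')

def W(a, e):
    # Strain energy function of Eq. (72) for a flat natural state and fiber a.
    return b * Y / 2 * (h * e**2 + h**2 * (1 - 2 * a) * e * mu
                        + h**3 * (sp.Rational(1, 3) - a + a**2) * mu**2)

def N(a, e):
    # Axial force, first of Eqs. (73): derivative of W with respect to the strain.
    return sp.diff(W(a, eps), eps).subs(eps, e)

def M(a, e):
    # Bending moment, second of Eqs. (73): derivative of W with respect to mu.
    return sp.diff(W(a, eps), mu).subs(eps, e)

# Change of material line, Eq. (74): eps_alpha = eps_beta + h*(alpha - beta)*mu
eps_a = eps_b + h * (alpha - beta) * mu

print(sp.simplify(N(alpha, eps_a) - N(beta, eps_b)))     # 0: N is invariant
print(sp.simplify(M(alpha, eps_a)
                  - (M(beta, eps_b) - h * (alpha - beta) * N(beta, eps_b))))  # 0: Eq. (77)
print(sp.simplify(W(alpha, eps_a) - W(beta, eps_b)))     # 0: the energy is invariant
```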
The case of a general undeformed condition can be determined as follows. First we notice how the strain transforms under change of material line
\[\varepsilon_{\alpha}=\frac{\Delta L_{\beta}}{\Delta L_{\alpha}}\left[ \varepsilon_{\beta}+h(\alpha-\beta)\mu_{\beta}\right]. \tag{78}\]
Moreover, the reduced area, the reduced axial-bending coupling moment and the reduced moment of inertia change under material line transformation as
\[F_{\alpha}=\frac{1}{\Delta L_{\beta}}\left[\Delta L_{\beta}-(\alpha-\beta)h \Delta\Phi\right]F_{\beta} \tag{79}\]
\[S_{\alpha}=\frac{1}{\Delta L_{\beta}}\left[\Delta L_{\beta}-(\alpha-\beta)h \Delta\Phi\right]\left[S_{\beta}-(\alpha-\beta)hF_{\beta}\right] \tag{80}\]
\[I_{\alpha}=\frac{1}{\Delta L_{\beta}}\left[\Delta L_{\beta}-(\alpha-\beta)h \Delta\Phi\right]\left[I_{\beta}-2(\alpha-\beta)hS_{\beta}+(\alpha-\beta)^{2 }h^{2}F_{\beta}\right]. \tag{81}\]
Inserting the previous relations into
\[\begin{cases}N_{\alpha}=\frac{\partial W_{\alpha}}{\partial\varepsilon_{ \alpha}}=Y\left(F_{\alpha}\varepsilon_{\alpha}+S_{\alpha}\mu_{\alpha}\right) \\ \\ M_{\alpha}=\frac{\partial W_{\alpha}}{\partial\mu_{\alpha}}=Y\left(S_{\alpha} \varepsilon_{\alpha}+I_{\alpha}\mu_{\alpha}\right),\end{cases} \tag{82}\]
it readily turns out that the equality \(N_{\alpha}=N_{\beta}\) also holds in this case. Moreover, plugging the same transformations into the second of Eqs. (82), we recover Eq. (77) also in the case of generic initial conditions.
## VI The neutral fiber: from flat to ring and viceversa
Let us consider the deformation depicted in Fig.7A-A\({}^{\prime}\), where a slender beam of length \(L\) is deformed into a circle. Let us take as the representative fiber the curve placed at a height \(\alpha h\) from the bottom surface, so that the equation representing the undeformed configuration of the elastica is
\[\mathbf{L}_{\alpha}(s)=\begin{cases}X_{\alpha}(s)=s\\ \\ Y_{\alpha}(s)=\alpha h,\end{cases} \tag{83}\]
with \(s\in[0,L]\). The tangent is expressed as
\[\mathbf{T}_{\alpha}(s)=\begin{cases}\frac{dX_{\alpha}(s)}{ds}=1\\ \\ \frac{dY_{\alpha}(s)}{ds}=0,\end{cases} \tag{84}\]
and \(\Phi(s)=0\). By deforming the representative fiber into a circle of radius \(r_{\alpha}\), we easily obtain
\[\mathbf{l}_{\alpha}(s)=\begin{cases}x_{\alpha}(s)=r_{\alpha}\cos\left(\frac{2\pi s }{L}\right)\\ y_{\alpha}(s)=r_{\alpha}\sin\left(\frac{2\pi s}{L}\right),\end{cases} \tag{85}\]
\[\mathbf{t}_{\alpha}(s)=\begin{cases}\frac{dx_{\alpha}(s)}{ds}=-\frac{2\pi r_{ \alpha}}{L}\sin\left(\frac{2\pi s}{L}\right)\\ \frac{dy_{\alpha}(s)}{ds}=\frac{2\pi r_{\alpha}}{L}\cos\left(\frac{2\pi s}{L} \right),\end{cases} \tag{86}\]
and \(\varphi(s)=\frac{\pi}{2}-\frac{2\pi s}{L}\). The energy necessary for the complete bending of the beam into the circle is given by (64)
\[\mathscr{E}_{\alpha}(L;r_{\alpha})=\frac{bY}{2L}\left[h\ (2\pi r_{\alpha}-L)^{2}+2 \pi h^{2}(1-2\alpha)\ (2\pi r_{\alpha}-L)+4\pi^{2}h^{3}\left(\frac{1}{3}-\alpha+\alpha^{2} \right)\right]. \tag{87}\]
Without loss of generality, let us adopt the line of centroids as the representative material line, namely \(\alpha=1/2\). We know that the only advantage of this choice is that it yields the axial-bending uncoupling in Eq. (87):
\[\mathscr{E}_{1/2}(L;r_{1/2})=\frac{bY}{2L}\left[h\ \left(2\pi r_{1/2}-L \right)^{2}+\frac{\pi^{2}h^{3}}{3}\right]. \tag{88}\]
Nonetheless, if the transformation is such that \(r_{1/2}=\frac{L}{2\pi}\), i.e. the middle fiber keeps its length constant (zero-strain condition), the energy attains a minimum. In other words, among all the possible deformations that transform a bar into a circle, the one that leaves the middle fiber (the neutral fiber) unstretched costs the minimum amount of work:
\[\mathscr{E}_{1/2}\left(L;r_{1/2}=\frac{L}{2\pi}\right)=\frac{bY\pi^{2}h^{3}}{ 6L}. \tag{89}\]
This minimum principle can be seen as the straightforward application of Parent's principle \(N_{1/2}=0\).
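This minimum property can also be checked numerically. The sketch below (Python/NumPy; the values of \(b\), \(Y\), \(L\), \(h\) are arbitrary illustrative choices) evaluates the bending energy of Eqs. (87)-(88) over a range of radii and confirms that the minimum is attained at \(r=L/(2\pi)\), with the value given in Eq. (89).

```python
import numpy as np

# Illustrative parameters (arbitrary units)
b, Y, L, h = 1.0, 1.0, 1.0, 0.05

def energy_circle(r, alpha=0.5):
    """Bending energy of Eq. (87); alpha = 1/2 reduces to Eq. (88)."""
    return (b * Y / (2 * L)) * (h * (2 * np.pi * r - L)**2
                                + 2 * np.pi * h**2 * (1 - 2 * alpha) * (2 * np.pi * r - L)
                                + 4 * np.pi**2 * h**3 * (1/3 - alpha + alpha**2))

r_grid = np.linspace(0.5, 1.5, 200001) * L / (2 * np.pi)
E_grid = energy_circle(r_grid)

print(r_grid[np.argmin(E_grid)], L / (2 * np.pi))          # numerical vs analytical minimiser
print(E_grid.min(), b * Y * np.pi**2 * h**3 / (6 * L))     # minimum value, Eq. (89)
```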
Figure 7: (A-A\({}^{\prime}\)) a straight slender beam of length \(L\) deformed into a circle; (B-B\({}^{\prime}\)) a naturally curved beam (ring) flattened into a straight bar.
Now let us consider the opposite situation, where a naturally curved beam is flattened into a bar as in Fig.7B-B\({}^{\prime}\). The undeformed configuration is given by
\[\mathbf{L}_{\alpha}(\theta)=\begin{cases}X_{\alpha}(\theta)=R_{\alpha}\cos( \theta)\\ Y_{\alpha}(\theta)=R_{\alpha}\sin(\theta),\end{cases} \tag{90}\]
with \(\theta\in[0,2\pi)\) being the internal parameter, which is now dimensionless rather than having the dimension of a length.
\[\mathbf{T}_{\alpha}(\theta)=\begin{cases}\frac{dX_{\alpha}(\theta)}{d\theta}=- R_{\alpha}\sin(\theta)\\ \\ \frac{dY_{\alpha}(\theta)}{d\theta}=R_{\alpha}\cos(\theta),\end{cases} \tag{91}\]
so that \(\Phi(\theta)=\frac{\pi}{2}-\theta\). On the other hand, the equation for the deformed bar of length \(l\) is
\[\mathbf{l}_{\alpha}(\theta)=\begin{cases}x_{\alpha}(\theta)=\frac{l\theta}{2 \pi}\\ \\ y_{\alpha}(\theta)=\alpha h,\end{cases} \tag{92}\]
\[\mathbf{t}_{\alpha}(\theta)=\begin{cases}\frac{dx_{\alpha}(\theta)}{d\theta}=\frac{l}{2\pi}\\ \\ \frac{dy_{\alpha}(\theta)}{d\theta}=0,\end{cases} \tag{93}\]
and \(\varphi(\theta)=0\). Hence, the energy cost connected to such a transformation is
\[\begin{split}\mathscr{E}_{\alpha}(R_{\alpha};l)=\pi bY\left\{ \left(\frac{l}{2\pi}-R_{\alpha}\right)^{2}\ln\left(1+\frac{h}{R_{0}}\right)-2 \left[h-R_{\alpha}\ln\left(1+\frac{h}{R_{0}}\right)\right]\left(\frac{l}{2\pi} -R_{\alpha}\right)+\right.\\ \left.+\left[\left(\frac{1}{2}-\alpha\right)h^{2}-hR_{\alpha}+R_{ \alpha}^{2}\ln\left(1+\frac{h}{R_{0}}\right)\right]\right\}.\end{split} \tag{94}\]
In analogy to the previous case, we choose the value of \(\alpha\) which entails the axial-bending uncoupling, namely, according to Eq.(41),
\[\alpha_{U}=\frac{1}{\ln\left(1+\frac{h}{R_{0}}\right)}-\frac{R_{0}}{h}. \tag{95}\]
Thanks to the fact that \(R_{\alpha}=R_{0}+\alpha h\), from (95) it follows that \(R_{\alpha_{U}}=\frac{h}{\ln\left(1+\frac{h}{R_{0}}\right)}\). Hence Eq. (94) becomes
\[\mathscr{E}_{\alpha_{U}}(R_{0};l)=\pi bY\left\{\left(\frac{l}{2\pi}-\frac{h}{\ln\left(1+\frac{h}{R_{0}}\right)}\right)^{2}\ln\left(1+\frac{h}{R_{0}}\right)+\left[\frac{1}{2}-\frac{1}{\ln\left(1+\frac{h}{R_{0}}\right)}\right]h^{2}+hR_{0}\right\}. \tag{96}\]
Thus, the minimum of the energy necessary to flatten the ring is achieved only if the chosen uncoupling representative fiber keeps its length constant, i.e. \(l=\frac{2\pi h}{\ln\left(1+\frac{h}{R_{0}}\right)}=2\pi R_{\alpha_{U}}\). This amount of energy turns out to be
\[\mathscr{E}_{\alpha_{U}}\left(R_{0};l=\frac{2\pi h}{\ln\left(1+\frac{h}{R_{0}} \right)}\right)=\pi bY\left\{R_{0}h+\left[\frac{1}{2}-\frac{1}{\ln\left(1+\frac {h}{R_{0}}\right)}\right]h^{2}\right\}. \tag{97}\]
Again, the minimum of the energy is consistently required by the validity of Parent's principle.
So far we have considered the generic situation where the circle in panel B of Fig.7 and that in panel A\({}^{\prime}\) are different. If we set the same dimensions for both of them, we have \(R_{0}=\frac{L}{2\pi}-\frac{h}{2}\) which, inserted into the energy expression (96), yields
\[\mathscr{E}_{\alpha_{U}}(L;l)=\pi bY\left\{\frac{l^{2}}{4\pi^{2}}\ln\left(\frac{L+\pi h}{L-\pi h}\right)+\frac{h}{\pi}\left(\frac{L}{2}-l\right)\right\}. \tag{98}\]
The amount of work needed to stretch the ring out, keeping constant the length of the neutral fiber, is
\[\mathscr{E}_{\alpha_{U}}\left(L;l=\frac{2\pi h}{\ln\left(\frac{L+\pi h}{L-\pi h }\right)}\right)=\pi bYh\left\{\frac{L}{2\pi}-\frac{h}{\ln\left(\frac{L+\pi h }{L-\pi h}\right)}\right\}. \tag{99}\]
Conversely, if we want to stretch the ring keeping the length of the line of centroids constant, it is sufficient to set \(l=L\) in expression (98):
\[\mathscr{E}_{\alpha_{U}}(L;l=L)=\pi bY\left\{\frac{L^{2}}{4\pi^{2}}\ln\left(\frac{L+\pi h}{L-\pi h}\right)-\frac{hL}{2\pi}\right\}. \tag{100}\]
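The comparison between Eqs. (99) and (100) can be verified numerically with the sketch below (Python/NumPy; \(b\), \(Y\), \(L\), \(h\) are arbitrary illustrative values). It evaluates the flattening energy of Eq. (96) with \(R_{0}=L/(2\pi)-h/2\), confirms that the minimiser over \(l\) is the length of the uncoupling (neutral) fiber, and shows that keeping the line of centroids unstretched costs more work.

```python
import numpy as np

b, Y, L, h = 1.0, 1.0, 1.0, 0.1          # illustrative values
R0 = L / (2 * np.pi) - h / 2             # same ring as in panel A' of Fig. 7
log = np.log(1 + h / R0)                 # equals ln[(L + pi h)/(L - pi h)]
R_u = h / log                            # radius of the uncoupling (neutral) fiber

def energy_flatten(l):
    """Flattening energy of Eq. (96) as a function of the final length l."""
    return np.pi * b * Y * ((l / (2 * np.pi) - R_u)**2 * log
                            + (0.5 - 1 / log) * h**2 + h * R0)

l_grid = np.linspace(0.5 * L, 1.5 * L, 200001)
E_grid = energy_flatten(l_grid)

print(l_grid[np.argmin(E_grid)], 2 * np.pi * R_u)   # minimiser vs neutral-fiber length
print(energy_flatten(2 * np.pi * R_u))              # Eq. (99)
print(energy_flatten(L))                            # Eq. (100): larger than the minimum
```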
|
2309.15622 | Pushing Alias Resolution to the Limit | In this paper, we show that utilizing multiple protocols offers a unique
opportunity to improve IP alias resolution and dual-stack inference
substantially. Our key observation is that prevalent protocols, e.g., SSH and
BGP, reply to unsolicited requests with a set of values that can be combined to
form a unique device identifier. More importantly, this is possible by just
completing the TCP hand-shake. Our empirical study shows that utilizing readily
available scans and our active measurements can double the discovered IPv4
alias sets and more than 30x the dual-stack sets compared to the
state-of-the-art techniques. We provide insights into our method's accuracy and
performance compared to popular techniques. | Taha Albakour, Oliver Gasser, Georgios Smaragdakis | 2023-09-27T12:42:11Z | http://arxiv.org/abs/2309.15622v1 | # Pushing Alias Resolution to the Limit
###### Abstract.
In this paper, we show that utilizing multiple protocols offers a unique opportunity to improve IP alias resolution and dual-stack inference substantially. Our key observation is that prevalent protocols, e.g., SSH and BGP, reply to unsolicited requests with a set of values that can be combined to form a unique device identifier. More importantly, this is possible by just completing the TCP hand-shake. Our empirical study shows that utilizing readily available scans and our active measurements can double the discovered IPv4 alias sets and more than 30x the dual-stack sets compared to the state-of-the-art techniques. We provide insights into our method's accuracy and performance compared to popular techniques.
Alias Resolution, Protocol Dual-Stack, Network Measurement
## 2. Methodology
Scanning for active services is a widely used technique in Internet measurement and security analysis (Berger et al., 2017; Berger et al., 2017). In this paper, we show that utilizing service scanning results for two popular protocols, namely, SSH and BGP, enables large-scale alias and dual-stack inference. By analyzing these protocols and their specifications (Kumar et al., 2019; Kumar et al., 2019), we identify unique host identifiers that can be used to group IP addresses belonging to the same host in both IPv4 and IPv6.
### Service Scan Data
We perform active service scans for SSH and BGP in two phases:
(1) An Internet-wide TCP scan sending a single SYN packet on port 22 and 179 using ZMap (Berger et al., 2017).
(2) A service scan using ZGrab2 (Zi et al., 2018) targeting IPs, which are responsive to the Internet-wide ZMap scan.
In the service scan, specifically for SSH, we complete the TCP handshake and subsequently send a protocol-specific payload to solicit banner information from the target IP. For BGP, the target IP sends an open message after we complete the TCP handshake without the need for any additional data exchange.
To complement our view of active services, we leverage the Censys dataset (Kumar et al., 2019), in addition to our own active measurements. Censys performs service scans on all 65k ports. However, we only consider hosts that are running SSH and BGP on the default ports, i.e., TCP/22 for SSH and TCP/179 for BGP.
### SSH Identifier
The Secure Shell (SSH) protocol, initially introduced in RFC 4253 (Kumar et al., 2019), provides a mechanism to establish a secure network connection. We utilize ZGrab2's SSH module, which handles the SSH handshake, to perform our service scan. Upon completion of the TCP handshake, the server and the client send their respective service string banner and then proceed to exchange a series of plain-text messages before transitioning to an encrypted session. During this exchange, both the server and client communicate their respective capabilities regarding encryption, authentication, and compression algorithms. This exchange enables both endpoints to convey to the other the algorithms they support. RFC 4253 (Kumar et al., 2019) states that each supported algorithm MUST be listed in order of preference, from most to least. This requirement results in a signature that can be used to identify the client and the server implementation (Kumar et al., 2019; Kumar et al., 2019). We use this information and the service banner as the first part of our SSH host identifier.
An SSH server requires a pair of host keys. These keys are typically generated during the service setup. The client and server exchange the public key components during the connection setup phase. We use the server public key as the second part of our SSH identifier. While the SSH public key itself is likely to be unique per host, our active scan shows that 0.4% of non-singleton hosts communicate different algorithmic capabilities. Therefore, combining the key with the host's algorithmic capabilities can enhance the uniqueness of the SSH identifier. We highlight (in blue) the various parts of our SSH identifier in a snippet of SSH connection setup in Figure 1.
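To make the construction concrete, the following sketch (Python) combines a server's banner, its ordered algorithm lists, and its host key into a single identifier by hashing. The function name, the argument layout, and the example values are illustrative assumptions and do not reflect the exact ZGrab2 output schema.

```python
import hashlib
import json

def ssh_identifier(banner, kex_algos, host_key_algos, ciphers, macs,
                   compression, host_key_b64):
    """Build an SSH host identifier from the banner, the algorithm preference
    lists (order matters, per RFC 4253), and the server host key."""
    material = json.dumps({
        "banner": banner,
        "kex": kex_algos,
        "host_key_algorithms": host_key_algos,
        "ciphers": ciphers,
        "macs": macs,
        "compression": compression,
        "host_key": host_key_b64,
    }, sort_keys=True)
    return hashlib.sha256(material.encode()).hexdigest()

# Hypothetical example record (values are illustrative, not real scan data)
ident = ssh_identifier(
    banner="SSH-2.0-OpenSSH_8.9",
    kex_algos=["curve25519-sha256", "ecdh-sha2-nistp256"],
    host_key_algos=["rsa-sha2-512", "ssh-ed25519"],
    ciphers=["chacha20-poly1305@openssh.com", "aes128-ctr"],
    macs=["hmac-sha2-256"],
    compression=["none"],
    host_key_b64="AAAAC3NzaC1lZDI1NTE5AAAAI...",  # truncated placeholder key
)
print(ident)
```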
### BGP Identifier
The BGP protocol is used to facilitate the exchange of routing information between BGP-speaking routers. To that end, BGP speakers establish and maintain a TCP session, typically over port 179. When scanning for hosts running BGP, we complete the TCP handshake and wait for data. We simply close the connection after a 2-second timeout, or after receiving any data. We find that more than 5.8M BGP speakers close the connection immediately after completing the TCP handshake. However, 364k IPs close the connection after sending an OPEN and a Notification message stating that the connection is rejected. Figure 2 shows an example of a dissected BGP OPEN message from our service scan.
The OPEN message of a BGP speaker contains multiple fields that, when combined, can serve as a globally unique identifier. The first notable field is the BGP identifier. The BGP identifier is used as part of a loop and collision prevention mechanism and defined in
Figure 1. Snippet of a Dissected SSH Connection Setup
Figure 2. A Dissected BGP OPEN Message
RFC 4271 (Wang et al., 2018) as a 4-octet unsigned integer that uniquely identifies a BGP speaker within an Autonomous System (AS). Moreover, it should have the same value for every local interface. The OPEN message also contains the Autonomous System Number (ASN) of a BGP speaker's network. The ASN is a globally unique number that is associated with a single AS (Kang et al., 2018). Some OPEN messages may contain an optional parameters field that indicates the supported capabilities (Beng et al., 2019). The additional fields within the OPEN message, such as Length, Version, and Hold Time, are host-wide and shared across all interfaces. Combining the values of those fields results in a unique identifier that we use to group alias and dual-stack addresses. We highlight (in blue) the relevant parts of the identifier in a dissected BGP message in Figure 2.
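As an illustration of how such an identifier can be derived, the sketch below (Python) parses the fixed fields of a BGP OPEN message, as laid out in RFC 4271, from the raw bytes returned by a speaker and combines the host-wide values into a grouping key. It assumes the buffer starts at the 16-octet BGP marker; error handling and decoding of individual capabilities are omitted, and the function names are ours.

```python
import struct

def parse_bgp_open(raw: bytes):
    """Extract the host-wide fields of a BGP OPEN message (RFC 4271)."""
    marker = raw[:16]                                  # all-ones marker, unused here
    length, msg_type = struct.unpack("!HB", raw[16:19])
    if msg_type != 1:
        raise ValueError("not an OPEN message")
    version, my_as, hold_time, bgp_id, opt_len = struct.unpack("!BHHIB", raw[19:29])
    opt_params = raw[29:29 + opt_len]                  # raw optional parameters
    return {
        "length": length,
        "version": version,
        "asn": my_as,              # two-octet field; larger ASNs appear via a capability
        "hold_time": hold_time,
        "bgp_identifier": bgp_id,  # 4-octet router ID, shared by all local interfaces
        "capabilities": opt_params.hex(),
    }

def bgp_identifier_key(fields):
    """Tuple used to group interfaces belonging to the same BGP speaker."""
    return (fields["asn"], fields["bgp_identifier"], fields["version"],
            fields["hold_time"], fields["length"], fields["capabilities"])
```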
### Alias and Dual-Stack Inference
For every IP that is responsive to the BGP and SSH service scan, we extract the respective identifier. We group IP addresses that share the same identifier into SSH and BGP alias sets, respectively. We group IPv4 and IPv6 addresses that share the same identifier into dual-stack sets.
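A minimal sketch of this grouping step is given below (Python). It assumes the scan results have already been reduced to (IP, identifier) pairs for one protocol at a time; everything else uses only the standard library.

```python
import ipaddress
from collections import defaultdict

def group_by_identifier(records):
    """records: iterable of (ip_string, identifier) pairs from SSH or BGP scans.
    Returns non-singleton alias sets per address family and dual-stack sets."""
    sets_by_id = defaultdict(set)
    for ip, ident in records:
        sets_by_id[ident].add(ip)

    alias_v4, alias_v6, dual_stack = [], [], []
    for ident, ips in sets_by_id.items():
        v4 = {ip for ip in ips if ipaddress.ip_address(ip).version == 4}
        v6 = ips - v4
        if len(v4) > 1:
            alias_v4.append(v4)          # non-singleton IPv4 alias set
        if len(v6) > 1:
            alias_v6.append(v6)          # non-singleton IPv6 alias set
        if v4 and v6:
            dual_stack.append((v4, v6))  # at least one address per family
    return alias_v4, alias_v6, dual_stack
```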
### Datasets
We leverage two different types of datasets. First, we use active measurement data in the IPv4 and IPv6 Internet. In IPv4, we perform Internet-wide scans for the SSH and BGP protocols using ZMap (Kang et al., 2018) and ZGrab2 (Kang et al., 2018). In IPv6, we use an IPv6 hitlist (Kang et al., 2018; Wang et al., 2018) to identify potentially active addresses in the vast IPv6 address space. The active measurement data was collected on April 18, 2023, utilizing a single vantage point located in a data center in Germany. Our dataset, including our analysis, is publicly available (Beng et al., 2019). Second, we use data obtained from Censys (Kang et al., 2018) to identify additional hosts responsive to SSH or BGP. We selected a Censys snapshot that closely matches the date of our active measurement, March 28, 2023.
In Table 1 we show an overview of these two datasets as well as the union, where applicable, of both sources. In IPv4, we find that both Censys as well as our active scans cover a similar number of ASes for both SSH and BGP. Censys does, however, find around 6M more IPs for SSH and 35k more IPs for BGP. This might be linked with Censys performing distributed measurements, which reduces the likelihood of triggering rate-limiting or intrusion detection system filters (Wang et al., 2018). Further, Censys also finds an additional 5.6M IPs running SSH on 60,806 different ports. We do not consider non-standard ports from Censys since our active scan only covers port 22. The union of both IPv4 data sources provides additional coverage compared to just a single source, both with respect to the number of covered IPs as well as ASes. Therefore, unless explicitly stated otherwise, we use the union of both data sources in the remainder of the paper for our IPv4 analysis.
In IPv6, our active scans find more than 1M SSH IPs and 67k BGP IPs. In contrast, Censys reports only 944 SSH IPs and no IPs for BGP. Further, these SSH IPs are running the service on non-standard ports, namely 80 and 443. We believe that this variation is attributable to the IPv6 hitlists used. Due to its limited coverage, we exclude Censys IPv6 data from our analysis. However, as of August 15, 2023, the Censys IPv6 snapshot reports more than 415k IPv6 addresses running SSH on port 22. We expect this number to increase over time as Censys scans for IPv6 more rigorously.
In addition to SSH and BGP services, we conduct an SNMPv3 scan for both IPv4 and IPv6. We utilize an already established methodology (Beng et al., 2019) to identify alias and dual-stack sets. We then use the results for validation purposes and as a supplement to our results. The SNMPv3 data also serve as a baseline for comparison. We note that Censys data primarily reports SNMPv2 hosts and does not seem to include any information on SNMPv3. Consequently, we do not include it as an additional source.
### Validation
We take a cross-protocol validation approach and compare sets derived from IP addresses responsive to different protocol pairs. We also utilize MIDAR (Kang et al., 2018) as an additional source for validation. Specifically, we test a random sample of 61k alias sets using MIDAR and check whether the resulting sets perfectly match the ones we identify with SSH. We ensure that each sampled set contains at most ten IPv4 addresses so that the MIDAR run completes in a time frame close to the SSH service scan. We provide a summary of our validation results in Table 2, where we report the test sample size, the number of sets that exactly match, and the number of sets with mismatching IPs.
In cross-protocol validation, we initially compare the alias sets obtained from SSH and BGP. Our active scan data contains a total of 7.8k responsive addresses, common to both protocols. We identify 1.34k alias sets using SSH and 1.35k alias sets using BGP. The validation between SSH and BGP protocols shows that 96% of the SSH sets have a perfect match with the BGP sets.
Next, we examine the results of SSH and SNMPv3 pairs. Our active scan data contains a total of 63k responsive addresses to both protocols, resulting in 13.6k alias sets using SSH and 14.5k alias sets using SNMPv3. The validation between SSH and SNMPv3 protocols shows a 97% agreement.
Finally, we compare the BGP and SNMPv3 pairs with 37k responsive addresses to both protocols. We identify 1.84k alias sets using BGP and 1.9k alias sets using SNMPv3. The validation between BGP and SNMPv3 shows a 95% agreement.
When comparing our results with MIDAR, we focus solely on SSH-based alias sets due to the time required to run MIDAR against all alias sets. We find that only 13% of the sampled sets can be verified with MIDAR. This low coverage can be attributed to two
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{2}{c}{Active measurements} & \multicolumn{2}{c}{Censys} & \multicolumn{2}{c}{Union} \\ \cline{3-7} Protocol & \# IPs & \# ASN & \# IPs & \# ASN & \# IPs & \# ASN \\ \hline SSH & 15.9M & 46.1k & 21.7M & 47.6k & 24.4M & 48.9k \\ BGP & 364k & 6.5k & 391k & 7k & 409k & 7.5k \\ SNMPv3 & 20.8M & 50.2k & n.a & n.a & n.a & n.a \\ \hline Union & 36.7M & 59.6k & 22.1M & 48.5k & 24.7M & 49.7k \\ \hline SSH ( IPv6) & 1.01M & 10.8k & n.a & n.a & n.a & n.a \\ BGP ( IPv6) & 67k & 3.1k & n.a & n.a & n.a & n.a \\ SNMPv3 ( IPv6) & 337k & 10.8k & n.a & n.a & n.a & n.a \\ \hline Union & 1.3M & 14.4k & & & & \\ \hline \hline \end{tabular}
\end{table}
Table 1. Service Scanning Dataset Overview
reasons: (a) the majority of these addresses do not utilize an incremental IPID counter, or (b) targets with large traffic volumes have a high-velocity IPID counter. MIDAR is able to verify 8.5k alias sets with a 96% agreement with our SSH results. The remaining 4% of alias sets are split into two or three alias sets by MIDAR, while SSH groups them into a single set. We suspect that the disagreement can be attributed to IP churn, given that the MIDAR run took three weeks to complete. It is also possible that some of these sets share the same host key.
In summary, the validation results confirm that our technique has at least a 95% agreement with state-of-the-art.
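The per-pair agreement reported above follows the simple comparison sketched below (Python). Here `sets_a` and `sets_b` stand for the alias sets produced by two different protocols, and a set counts as a perfect match when both protocols group exactly the same addresses, after restricting to the IPs responsive to both. This is our reading of the procedure, written out purely for illustration.

```python
def agreement(sets_a, sets_b):
    """Fraction of alias sets in sets_a reproduced exactly by sets_b,
    after restricting both to the addresses common to the two protocols."""
    common = set().union(*sets_a) & set().union(*sets_b)

    def restrict(sets):
        # Keep only the common addresses and drop singletons
        return {frozenset(s & common) for s in sets if len(s & common) > 1}

    a, b = restrict(sets_a), restrict(sets_b)
    matches = len(a & b)
    return matches, len(a), matches / len(a) if a else 0.0
```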
### Limitations
Our methodology provides the largest sets of alias and dual-stack addresses to date. However, we do note a few limitations:
* First, our methodology relies on application-level data. As such, it is only applicable to IPs responsive to SSH and BGP. Firewalls and access control may block or restrict access to these services, which can limit the alias inference.
* Second, in the case of BGP, BGP speakers can have a non-unique BGP identifier due to mis-configuration which can lead to incorrect inferences.
* Third, our defined SSH identifier might not be unique in all cases. It is in fact possible for multiple hosts to share the same identifier, e.g., SSH servers can be shipped with factory-default keys (Han et al., 2017; D'Amica et al., 2018). It is unlikely for two different hosts to generate the exact same host key, however, unless an administrator chose to use the same key pair across multiple hosts.
* Lastly, our validation is limited by the relatively small number of overlapping sets with other techniques, the responsiveness of a service on all IPs in a given set, and the possibility of IP churn.
## 3. Ethical Considerations
For our active experiments we do our best to minimize additional load or harm on the destination devices. BGP, SSH, and SNMPv3 load is very low (only a few packets per destination). Moreover, we randomly distribute our measurements over the address space for our experiment, ensuring that at most one packet reaches a target IP each second. Furthermore, we coordinate with local network administrators to ensure that our scanning efforts do not harm the local or upstream network. For the active scanning we use best current practices (Han et al., 2017; D'Amica et al., 2018; D'Amica et al., 2018) to ensure that our prober IP address has a meaningful DNS PTR record. Additionally, we show information about our measurements and opt-out possibilities on a website hosted on our scanning servers. During our active experiments, we did not receive any complaints or opt-out requests.
## 4. Analysis
In this section we present our results, consisting of alias resolution and dual-stack statistics as well as AS-level analyses.
### Alias Resolution
To identify alias sets, we group IP addresses with identical unique identifiers for SSH and BGP. We also supplement our findings with SNMPv3 as described in (Berger et al., 2017). In Table 3 we report the number of non-singleton alias sets and the contribution of each individual protocol, data source, and the union of all. In IPv4, the SSH active scan results in 505k alias sets, which cover over 3.2M unique IPv4 addresses. Similarly, the Censys dataset results in 699k alias sets, covering more than 4.6M IPv4 addresses. Censys data provide a notable increase of 70% and 80% in the number of IPv4 addresses and resulting alias sets compared to the active measurement alone.
With BGP, both Censys and the active scan produce similar results, with 12k alias sets covering 175k IPv4 addresses. In contrast, our SNMPv3 scan results in 557k alias sets covering 6.1M IPv4 addresses. By consolidating these findings, we can effectively cover more than 11.8M IPv4 addresses.
Interestingly, a substantial majority of 97% of these addresses only respond to a single service, while only 3% are responsive to two or three services. Consequently, this stark difference increases the resulting alias sets, exceeding 1.4M, of which 40% can only be identified with SNMPv3 and 60% (which is more than double what can be achieved by SNMPv3 alone) with SSH or BGP. We note, however, that the majority of these sets come from SSH. In Figure 3 we show the distribution of IPv4 addresses per alias set. We find that the majority of the sets contain less than 100 addresses. Additionally, more than 60% of SSH alias sets contain only two addresses compared to less than 30% for BGP and SNMPv3. BGP sets are also more likely to contain more addresses compared to sets derived from SSH and SNMPv3. We also note a similar set size regardless of the data source.
For IPv6, the active SSH scan results in 47k alias sets that cover 266k unique IPv6 addresses. Moreover, we find 8.3k and 16.7k alias sets, covering 48k and 71k IPv6 addresses with BGP and SNMPv3, respectively. Merging these results we obtain over 66k IPv6 alias sets, with a coverage of more than 340k unique IPv6 addresses. Similar to our IPv4 results, a majority of 94% of these addresses are only responsive to a single service, while 6% are responsive to two or three services. This results in 25% of the IPv6 alias sets being identifiable only with SNMPv3, while 75% can be identified with SSH and BGP. In Figure 4 we show the distribution of IPv6 addresses per alias set. Similar to IPv4, the majority of sets contain less than 100 addresses. Additionally, SSH sets are more likely to contain fewer IPv6 addresses compared to BGP and SNMPv3. We also note a similar set size for BGP and SNMPv3.
### Dual-Stack Inference
Next, we shift our attention to the results of dual-stack identification, as summarized in Table 4. We merge alias sets from IPv4 and IPv6, if they use the same unique identifier. The SSH active scan results in more than 634k dual-stack alias sets, which cover 1.05M IPv4 addresses and 771k IPv6 addresses. With BGP, we identify 4.2k dual-stack sets, covering 78k IPv4 addresses and 16.3k IPv6
\begin{table}
\begin{tabular}{l c c c} \hline \hline & Sample size & Agree & Disagree \\ \hline SSH-BGP & 1.34k & 1.29k & 53 \\ SSH-SNMPv3 & 13.6k & 13.2k & 398 \\ BGP-SNMPv3 & 1.84k & 1.76k & 87 \\ SSH-MIDAR & 8.5k & 8.1k & 366 \\ \hline \hline \end{tabular}
\end{table}
Table 2. Alias Sets Validation
addresses. Additionally, SNMPv3 discovers 21k dual-stack alias sets that cover 1.1M IPv4 addresses and 45k IPv6 addresses. Consolidating these findings results in a total of 650k dual-stack alias sets, of which 3% can only be identified with SNMPv3, while 97% (30x compared to SNMPv3 alone) can only be identified with SSH or BGP. Further, these sets cover a total of 2.2M IPv4 addresses and 830k IPv6 addresses. Notably, more than 88% of the dual-stack sets contain a single IPv4 and a single IPv6 address, 7% are sets with 2-10 addresses, and only 2% have more than 10 addresses. It is worth noting that our IPv6 sample size is relatively small compared to IPv4. Nonetheless, these results indicate that a substantial portion of known IPv6 addresses are exclusively IPv6-enabled, with just 64% of the IPv6 addresses having an IPv4 counterpart. However, it is also possible that some hosts are only responsive over IPv6 due to policy, as shown by previous work (Beng et al., 2018).
### AS-Level Analysis
Figure 5 shows the distribution of Autonomous System Numbers (ASNs) per IPv4 alias set. We find that less than 10% of SSH and SNMPv3 sets contain addresses associated with two or more ASes. In contrast, over 35% of BGP sets contain addresses associated with multiple ASes. This outcome aligns with expectations, as BGP alias sets typically consist of border routers that connect different ASes.
In Figure 6, we show the distribution of the number of alias and dual-stack sets per AS. We find that over 37k ASes contain at least one set. The majority of ASes have fewer than 100 sets, and only 3% of ASes have more than 100 alias sets.
To better understand the main contributors of alias sets, we now focus on the top 10 ASes. In Table 5, we report the largest AS based on different protocols as well as the union of all three protocols for IPv4. We expect SSH to be predominantly prevalent in cloud provider networks, whereas BGP and SNMPv3 to be more prevalent in ISP networks. Indeed, among the top 10 ASes for SSH, 8 are cloud service providers, including DigitalOcean (rank 1, AS14061), Amazon (rank 3, AS16509; rank 6, AS14618), and OVH (rank 4, AS16276). Surprisingly, however, we also observe two major ISPs:
\begin{table}
\begin{tabular}{l c c c} \hline \hline & IPv4 addr & IPv6 addr & Dual Stack Sets \\ \hline SSH & 1.05M & 771k & 634k \\ BGP & 78k & 16.3k & 4.2k \\ SNMPv3 & 1.1M & 45k & 21k \\ \hline Union & 2.2M & 830k & 650k \\ \hline \hline \end{tabular}
\end{table}
Table 4. Dual-Stack Sets
Figure 4. IPv6 addresses per alias sets
Figure 5. ASN per IPv4 Alias Set
Figure 3. IPv4 addresses per alias sets
\begin{table}
\begin{tabular}{l c c c} \hline \hline \multicolumn{1}{c}{Source} & Active (IPs) & \multicolumn{1}{c}{Censys (IPs)} & Union (IPs) \\ \hline \multirow{3}{*}{IPv4} & SSH & 505k (3.2M) & 699k (4.6M) & 926k (5.7M) \\ & BGP & 12k (175k) & 12k (175k) & 12k (175k) \\ & SNMPv3 & 557k (6.1M) & n.a & 557k (6.1M) \\ \cline{2-4} & Union & 1.04M & 708k & 1.4M (11.8M) \\ \hline \multirow{3}{*}{IPv6} & SSH & 47k (266k) & n.a & n.a \\ & BGP & 8.3k (48k) & n.a & n.a \\ \cline{1-1} & SNMPv3 & 16.7k (71k) & n.a & n.a \\ \cline{1-1} \cline{2-4} & Union & 66k & & \\ \hline \hline \end{tabular}
\end{table}
Table 3. Alias Sets Overview
Telefonica de Argentina (rank 2, AS22927) and China Telecom (rank 8, AS4134). Shifting our focus to the top 10 ASes in the BGP and SNMPv3 data, we find that 8 of them are ISPs, while the remaining 2 are cloud service providers. The top three ASes for BGP are Zenlayer (AS21859), Verizon (AS701), and Glide (AS42689); the top three for SNMPv3 are Telecom Italia (AS3269), Vodafone Italy (AS30722), and Deutsche Telekom (AS3320). Lastly, we consider the union of all data sources. We find this to be dominated by similar ASes as in the SSH data set, with a split of 6 cloud service providers and 4 ISPs.
We conclude our analysis by considering the largest 10 ASes with IPv6 alias sets and dual-stack alias sets. Table 6 shows the union results of all three protocols for IPv6 and IPv4-IPv6 dual-stack alias sets. The IPv6 alias sets spread over 7k ASes in total. The top 10 are split between 7 ISPs (e.g., Hurricane Electric, AS6939; China Unicom, AS4837; Chinanet, AS4134) and 3 cloud service providers (e.g., Akamai, AS6394; Dreamhost, AS26347). Finally, our dual-stack alias sets cover more than 9.5k ASes. Note that this includes sets with at least a single IPv4 and a single IPv6 address. We find that the top 3 ASes are cloud service providers (DigitalOcean, AS14061; Linode, AS63949; OVH, AS16276) and cover more than 54% of the total dual-stack sets. The remaining 7 are ISPs and cover only 10% of all dual-stack alias sets.
## 5. Conclusion
In this paper we introduced a multi-protocol approach to improve IP alias resolution and dual-stack identification. Our key observation is that a unique identifier for each protocol can be used to group different subsets of alias sets. We evaluated our method with two popular protocols, namely, SSH and BGP, and we showed that our technique substantially increases both the number of alias as well as dual-stack sets, compared to a similar protocol-centric technique such as SNMPv3. Our results showed that we can supplement previous work and identify up to 1.4 million non-singleton IPv4 alias sets, i.e., double compared to what can be achieved with previously known techniques. Our results also showed that we can identify more than 650 thousand dual-stack alias sets. By a large margin (30\(\times\)), this is the largest set reported to date.
As part of our future research agenda, we plan to investigate if other popular protocols are associated with unique identifiers that will further increase the IP coverage of alias and dual-stack sets. We also plan to inspect SSH identifiers more in-depth, specifically in terms of consistency and stability. Moreover, we plan to use updated IPv6 hitlists, as we were limited to those publicly available for this paper. Our initial results are very encouraging, and we plan to perform additional measurements from multiple vantage points (VPs) to understand the effect of geographical VP location.
## Acknowledgements
We would like to thank our shepherd, Liz Izhikevich, and the anonymous reviewers for their valuable comments. This work was supported in part by the European Research Council (ERC) under Starting Grant ResolutioNet (ERC-StG-679158).
|
2301.10178 | Methods in Econophysics: Estimating the Probability Density and
Volatility | We discuss and analyze some recent literature that introduced pioneering
methods in econophysics. In doing so, we review recent methods of estimating
the volatility, volatility of volatility, and probability densities. These
methods will have useful applications in econophysics and finance. | Moawia Alghalith | 2022-11-03T20:05:17Z | http://arxiv.org/abs/2301.10178v1 | # Methods in econophysics: Estimating the probability density and volatility
###### Abstract
We discuss and analyze some recent literature that introduced pioneering methods in econophysics. In doing so, we review recent methods of estimating the volatility, volatility of volatility, and probability densities. These methods will have useful applications in econophysics and finance.
Received 21 September 2022; accepted 10 October 2022; published 21 October 2022.
Citation: Alghalith M (2022), Methods in econophysics: Estimating the probability density and volatility. _Front. Phys._ 10:1050277. doi: 10.3389/fphy.2022.1050277
## 1 Introduction
The volatility estimation is a key topic in finance and econophysics. It is an indicator of the movement in the asset price. For example, see [1, 2]. Recently, the literature focused on the volatility of volatility. Examples include [3, 4]. Closely related to the volatility estimation is the probability density estimation. The density estimation can be used to estimate the volatility and volatility of volatility. Needless to say, the probability density has many other applications.
In this note, we briefly discuss recent methods in the estimation of the volatility, volatility of volatility, and probability densities.
## 2 Review
There are typically two methods of density estimation: parametric and non-parametric methods. For example, [5, 6] adopted the parametric method. [7, 8, 9, 10, 11, 12, 13] used the non-parametric approach. [14, 15, 16, 17] provided empirical estimation. [18] used copulas. [19] used histograms and numerical simulations. [20] employed orthogonal polynomials.
A limitation of the parametric method is that it requires knowing the marginal distributions [5, 21], while the bandwidth selection problem, the high computational cost, and the kernel specification are some of the limitations of the non-parametric approach.
In response to some of these limitations, [22] introduced non-parametric methods for estimating the marginal and joint probability densities. The advantage of these methods is their relative simplicity. In particular, it allows us to circumvent the bandwidth selection problem and the kernel specification. Accordingly, the joint density can be calculated as
\[f\left(x,y\right)=\frac{\triangle^{2}F\left(x,y\right)-\triangle f_{x}\left(x \right)\triangle x-\triangle f_{\gamma}\left(y\right)\triangle y}{2\triangle x \triangle y}, \tag{1}\]
where \(F\left(x,y\right)\) is the joint cumulative density, \(f\left(x,y\right)\) is the joint density, \(f_{x}\left(x\right)\) and \(f_{\gamma}\left(y\right)\) are the marginal densities, \(x\) is the outcome of \(X\), \(y\) is the outcome of \(Y\), and \(\triangle\) is the difference operator. The limitation of this method is that it requires high-frequency data for a high level of accuracy.
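For illustration, the sketch below (Python/NumPy) estimates a joint density by finite-differencing an empirical joint CDF on a regular grid, which is the basic idea behind Eq. (1). For simplicity it uses the plain mixed second difference of \(F\) rather than the exact correction terms of Eq. (1), and the sample, grid, and bin widths are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
# Illustrative data: correlated bivariate normal sample
sample = rng.multivariate_normal([0, 0], [[1.0, 0.5], [0.5, 1.0]], size=50_000)

dx = dy = 0.2
x = np.arange(-3, 3 + dx / 2, dx)
y = np.arange(-3, 3 + dy / 2, dy)

# Empirical joint CDF on the grid: F(x, y) = P(X <= x, Y <= y)
F = np.array([[np.mean((sample[:, 0] <= xi) & (sample[:, 1] <= yj)) for yj in y]
              for xi in x])

# Mixed finite difference of the CDF approximates the joint density per cell
f_hat = (F[1:, 1:] - F[:-1, 1:] - F[1:, :-1] + F[:-1, :-1]) / (dx * dy)

# Compare the estimate in the cell at the origin with the exact peak density
i, j = len(x) // 2, len(y) // 2
exact = 1 / (2 * np.pi * np.sqrt(1 - 0.5**2))
print(f_hat[i, j], exact)
```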
Using Taylor's expansions, [6, 23] introduced parametric methods for estimating the joint, marginal, conditional, and cumulative probability densities. In doing so, they relied on estimating regressions. For example, the joint density can be given by
\[f\left(x,y\right)=c_{1}+c_{2}x+c_{3}y+c_{4}x^{2}+c_{5}y^{2}+c_{6}x\,y, \tag{2}\]
where \(c_{i}\) is a constant. The marginal density can be obtained by integrating the above equation.
The advantage of this method is its simplicity and the fact that the marginal distributions need not be known. Moreover, the estimation accuracy can be improved by increasing the order of the Taylor expansion. The limitation of this method is that we need to ensure the goodness-of-fit of the regression.
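The least-squares machinery behind Eq. (2) can be sketched as follows (Python/NumPy). Here the regression targets are values of a known bivariate normal density, used purely to illustrate how the coefficients \(c_{1},\dots,c_{6}\) are fitted; in an actual application the targets would be constructed from the data as described in [6, 23].

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative regressand: a known bivariate normal density on random points
rho = 0.5
pts = rng.uniform(-1, 1, size=(5000, 2))
x, y = pts[:, 0], pts[:, 1]
target = (np.exp(-(x**2 - 2 * rho * x * y + y**2) / (2 * (1 - rho**2)))
          / (2 * np.pi * np.sqrt(1 - rho**2)))

# Design matrix of Eq. (2): f(x, y) = c1 + c2 x + c3 y + c4 x^2 + c5 y^2 + c6 x y
X = np.column_stack([np.ones_like(x), x, y, x**2, y**2, x * y])
coef, *_ = np.linalg.lstsq(X, target, rcond=None)
print(coef)                      # fitted c1 ... c6

# Evaluate the fitted quadratic approximation of the joint density
f_fit = X @ coef
print(f_fit[0], target[0])
```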
Previous literature on volatility typically considered time series, such as the generalized autoregressive conditional heteroskedasticity (GARCH) models. For example, see the excellent surveys by [1,2,24]. Asai and McAleer [25] adopted a Wishart stochastic volatility model. Bollerslev et al (2011) investigated asymmetry in volatility. Asai et al [26] assumed a noisy realized volatility. Muhle-Karbe et al [27] considered multivariate stochastic volatility. Sahiner [28] used the GARCH method. Mastroeni [29] considered vanishing stochastic volatility.
Alghalith [4] provided novel, parametric methods for estimating the volatility and volatility of volatility. Under this model, no volatility data are needed. Also, the method can be applied to cross-sectional data. Furthermore, estimating the volatility matrix can be avoided. The limitation of this model is that we need to ensure the validity of the non-linear regression results.
Alghalith et al [30] introduced a simple, non-parametric method to estimate both the volatility and volatility of volatility. Accordingly, the volatility of the asset returns and volatility of volatility can be estimated, respectively, as
\[v_{t}=\sqrt{\frac{\left(\Delta S_{t}\right)^{2}}{S_{t}^{2}}}, \tag{3}\]
where \(S_{t}\) is the price of the asset (typically a stock) at time \(t\) and \(v_{t}\) is the estimated volatility at time \(t\).
\[\gamma_{t}=\sqrt{\frac{\left(\Delta v_{t}^{2}\right)^{2}}{v_{t}^{2}}}, \tag{4}\]
where \(\gamma_{t}\) is the estimated volatility of volatility at time \(t\).
Also, [30] explored the possibility that the volatility of volatility is not constant. The advantage of this approach is its simplicity. Its limitation is that it requires high-frequency data for a high level of accuracy.
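A direct sketch of the estimators in Eqs. (3)-(4) is given below (Python/NumPy). The price path is synthetic and serves only to show the mechanics; as noted above, high-frequency data are needed for accurate estimates in practice.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic price path (geometric random walk), purely for illustration
S = 100 * np.exp(np.cumsum(rng.normal(0, 0.01, size=1000)))

# Eq. (3): v_t = sqrt( (Delta S_t)^2 / S_t^2 )
dS = np.diff(S)
v = np.sqrt(dS**2 / S[:-1]**2)

# Eq. (4): gamma_t = sqrt( (Delta v_t^2)^2 / v_t^2 )
dv2 = np.diff(v**2)
gamma = np.sqrt(dv2**2 / v[:-1]**2)

print(v[:5])       # estimated volatility series
print(gamma[:5])   # estimated volatility-of-volatility series
```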
In sum, these methods introduced by Alghalith and co-authors were reasonably accurate when they were applied to practical examples. In general, they were at least as accurate as other methods. However, the accuracy can be improved by increasing the frequency of the data or the order of the Taylor expansion.
## 3 Conclusion
We introduced simpler and less restrictive methods for estimating the volatility, volatility of volatility, and probability densities. In general, the other methods are more technical. Future research can utilize these methods of density estimation to estimate the volatility and volatility of volatility. Moreover, future research can apply these methods to other areas of econophysics.
## Data availability statement
The original contributions presented in the study are included in the article/supplementary material, further inquiries can be directed to the corresponding author.
## Author contributions
The author confirms being the sole contributor of this work and has approved it for publication.
## Acknowledgments
I'm very grateful to Editor SS and the reviewers for their excellent and fast comments.
## Conflict of interest
The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
## Publisher's note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated
organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
|
2301.13471 | What if planet 9 has satellites? | In the past decade, numerical simulations started to reveal the possible
existence of planet 9 in our solar system. The planet 9 scenario can provide an
excellent explanation to the clustering in orbital elements for Kuiper Belt
objects. However, no optical counterpart has been observed so far to verify the
planet 9 scenario. Therefore, some recent studies suggest that planet 9 could
be a dark object, such as a primordial black hole. In this article, we show
that the probability of capturing large trans-Neptunian objects (TNOs) by
planet 9 to form a satellite system in the scattered disk region (between the
inner Oort cloud and Kuiper Belt) is large. By adopting a benchmark model of
planet 9, we show that the tidal effect can heat up the satellites
significantly, which can give sufficient thermal radio flux for observations,
even if planet 9 is a dark object. This provides a new indirect way for
examining the planet 9 hypothesis and revealing the basic properties of planet
9. | Man Ho Chan | 2023-01-31T08:37:25Z | http://arxiv.org/abs/2301.13471v1 | # What if planet 9 has satellites?
###### Abstract
In the past decade, numerical simulations started to reveal the possible existence of planet 9 in our solar system. The planet 9 scenario can provide an excellent explanation to the clustering in orbital elements for Kuiper Belt objects. However, no optical counterpart has been observed so far to verify the planet 9 scenario. Therefore, some recent studies suggest that planet 9 could be a dark object, such as a primordial black hole. In this article, we show that the probability of capturing large trans-Neptunian objects (TNOs) by planet 9 to form a satellite system in the scattered disk region (between the inner Oort cloud and Kuiper Belt) is large. By adopting a benchmark model of planet 9, we show that the tidal effect can heat up the satellites significantly, which can give sufficient thermal radio flux for observations, even if planet 9 is a dark object. This provides a new indirect way for examining the planet 9 hypothesis and revealing the basic properties of planet 9.
Planet, Solar System
## 1 Introduction
Currently, there are 8 planets officially identified in our solar system. Most of the newly discovered large astronomical objects outside Neptune are dwarf planets or large asteroids called trans-Neptunian objects (TNOs). In view of the TNOs, the discovery of 2012 VP113 and some potential members of the inner Oort cloud has revealed a strange clustering in orbital elements (Trujillo & Sheppard, 2014). The objects with large perihelion distances have arguments of perihelia \(\omega\) clustered approximately around zero (Trujillo & Sheppard, 2014; Batygin & Brown, 2016). Later analysis shows that the chance of this strange clustering arising at random is just 0.0007% (Batygin & Brown, 2016). Therefore, a dynamical mechanism involving a new planet located at more than 100 AU has been suggested
(Batygin et al., 2019). Many studies have constrained the mass and the orbital properties of the hypothesized planet 9 (P9) (Batygin & Brown, 2016; Sheppard & Trujillo, 2016; Gomes, Deienno & Morbidelli, 2016; Becker et al., 2018; Sheppard et al., 2019). Current benchmark models suggest that P9 has mass \(M_{9}\sim 5-10M_{\oplus}\), orbital semi-major axis \(a_{9}\sim 400-800\) AU and eccentricity \(e_{9}\sim 0.2-0.5\)(Batygin et al., 2019). However, the in-situ formation of P9 is strongly disfavored so that P9 might be a captured planet from the free-floating objects nearby the solar system (Batygin et al., 2019; Kenyon & Bromley, 2016). A more detailed assessment of the probability of capture can be found in Li & Adams (2016).
Current benchmark models of P9 suggest that it has a temperature \(\sim 40\) K and a radius \(\sim 3-4R_{\oplus}\)(Batygin et al., 2019). The possible location of P9 in the celestial sphere is also constrained (Batygin et al., 2019; Fienga et al., 2016; Socas, 2022). Based on these properties, various observations, such as optical and microwave/infrared observations, have been deployed to observe the hypothesized P9 (Meisner et al., 2017, 2018; Naess et al., 2021). However, no electromagnetic wave signal has been detected for P9 (Meisner et al., 2017, 2018; Linder & Mordasini, 2016). Careful examinations based on previous optical surveys also do not reveal the existence of P9 (Linder & Mordasini, 2016). Therefore, these null results have made the P9 hypothesis more mysterious.
In view of these problems, some of the studies have suggested that P9 is a dark object (dark P9), such as a compact object made by dark matter (Wang et al., 2022) or a primordial black hole (PBH) (Scholtz & Unwin, 2020). In particular, the proposal of the PBH P9 has attracted many discussions because many studies beyond the standard models have already proposed the existence of PBHs with mass \(\sim M_{\oplus}\). There are various mechanisms which can generate PBHs in early universe (Carr et al., 2021). However, the direct signals emitted by the PBH P9 (e.g. Hawking radiations) are too small to detect (Arbey & Auffinger, 2020). Even if we assume dark matter can distribute around the PBH P9, the resulting gamma-ray signals might be smaller than the current observation limits (Scholtz & Unwin, 2020). Besides, a recent innovative proposal suggests that using a small laser-launched spacecraft with a velocity of order \(0.001c\) can reach the PBH P9 to detect its gravitational field, though we need to wait for the measurement after roughly a decade (Witten, 2020).
Nevertheless, there are a lot of TNOs orbiting about the sun inside the scattered disk region (\(\sim 100-1000\) AU), located between the inner Oort cloud and Kuiper Belt. These TNOs are also known as detached objects. Most of them are either scattered from the central solar system or Kuiper Belt region. In fact, we have already observed at least 47 large TNOs with orbital semi-major axis larger than 100 AU and size larger than 100 km. Therefore, it is possible that these large TNOs would be captured by P9 to become satellites of P9. Many
dwarf planets such as Pluto and TNOs outside Neptune have satellite systems (Brown et al., 2006; Grundy et al., 2019). If these small objects can have satellites, it can be conceived that the more massive P9 might also have a number of satellites. In this article, we discuss some important observable features if P9 has captured satellites. For large satellites with small orbital semi-major axis, the tidal heating effect due to P9 would be important. It can be shown that these satellites would give an observable standard thermal radio spectrum. If P9 is a dark object, observing the satellites would be another kind of investigation to examine the P9 hypothesis in the near future. In the followings, we assume that P9 is a dark object and we follow the benchmark model of P9 with mass \(M_{9}=5M_{\oplus}\), eccentricity \(e_{9}=0.2\), orbital inclination \(i=20^{\circ}\), and semi-major axis \(a_{9}=450\) AU (Batygin et al., 2019). We simply take the semi-major axis \(a_{9}=450\) AU as the average distance to the dark P9 from the Earth.
## 2 Capturing probability
There are many large TNOs moving in the scattered disk region (\(\sim 100-1000\) AU), such as 2018 AG37, 2018 VG18 and 2020 BE102. It is quite likely that some of the large TNOs (e.g. with size \(D\sim 100\) km) could be captured by the dark P9. In fact, many of the Kuiper Belt dwarf planets have at least one satellite. For example, the satellite of the dwarf planet Eris has radius \(R\sim 700\) km and semi-major axis \(a\sim 4\times 10^{4}\) km (Brown & Butler, 2018).
In general, when a TNO has a close encounter to a planet, energy will be lost in the capturing process due to the inverse of the gravitational slingshot mechanism (Napier, Adams & Batygin, 2021). The maximum capturing distance between the dark P9 and any TNOs can be characterized by the impact parameter \(b\)(Napier, Adams & Batygin, 2021):
\[b\sim\frac{M_{9}}{M_{\odot}}\left(\frac{GM_{\odot}}{a_{9}}\right)^{3/2}v^{-3}a _{9}, \tag{1}\]
where \(v\) is the incoming relative velocity between the dark P9 and any TNOs. Here, \(b\) can be regarded as the closest distance between the dark P9 and the TNOs for the capturing process. Therefore, the relative velocity between the dark P9 and the TNOs is given by
\[v\sim\sqrt{\frac{GM_{\odot}}{a_{9}}}-\sqrt{\frac{GM_{\odot}}{a_{9}\pm b}}\cos \Delta i, \tag{2}\]
where \(\Delta i\) is the orbital inclination difference between the dark P9 and the TNOs. As \(b\ll a_{9}\), the relative velocity is
\[v\sim\sqrt{\frac{GM_{\odot}}{a_{9}}}(1-\cos\Delta i). \tag{3}\]
Putting Eq. (3) into Eq. (1), we get
\[b\sim a_{9}(1-\cos\Delta i)^{-3}\left(\frac{M_{9}}{M_{\odot}}\right). \tag{4}\]
The benchmark orbital inclination of the dark P9 is \(i=20^{\circ}\)(Batygin et al., 2019). Based on the catalog compiled by the International Astronomical Union 1, the orbital inclinations of the TNOs (with semi-major axis \(a>100\) AU) are quite close to \(i=20^{\circ}\), except three with \(i>100^{\circ}\). The average difference between the orbital inclinations of P9 and the TNOs is about \(\Delta i=18^{\circ}\). Including the possible uncertainty of the benchmark orbital inclination of the dark P9 \(\delta i=5^{\circ}\)(Batygin et al., 2019), we take a conservative choice of \(\Delta i=25^{\circ}\), which gives \(b\sim 8.2\) AU.
Footnote 1: The catalog compiled by the International Astronomical Union can be found in [https://minorplanetcenter.net/iau/lists/TNOs.html](https://minorplanetcenter.net/iau/lists/TNOs.html)
On the other hand, we can also apply the radius of influence \(R_{\rm in}\) discussed in Bate (1971) to characterize the value of the impact parameter (i.e. \(b\approx R_{\rm in}\)). The radius of influence defines the region where the incoming TNO switches from a two-body problem with central mass \(M_{\odot}\) to a two-body problem with central mass \(M_{9}\) in the matched conics approximation (Napier, Adams & Batygin, 2021). Based on this approximation, the impact parameter is given by (Bate, 1971)
\[b=R_{\rm in}=a_{9}\left(\frac{M_{9}}{M_{\odot}}\right)^{2/5}. \tag{5}\]
Using our benchmark parameters, the dark P9 can capture any TNO moving within a distance of \(b\sim 5.3\) AU. To obtain a more conservative estimate, in the following we adopt the value of \(b=5.3\) AU as the impact parameter. In view of this, the dark P9 sweeps out a 'capturing volume' as it orbits the sun. All of the TNOs inside this capturing volume would likely be captured by the dark P9. The capturing volume is given by
\[V=\left(2\pi a_{9}\sqrt{1-\frac{e_{9}^{2}}{2}}\right)(\pi b^{2})=2\pi^{2}b^{2} a_{9}\sqrt{1-\frac{e_{9}^{2}}{2}}\approx 2.5\times 10^{5}\ {\rm AU}^{3}. \tag{6}\]
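The same kind of check applied to Eqs. (5) and (6) recovers the adopted impact parameter and the capturing volume:

```python
import math

M_sun, M_earth = 1.989e30, 5.972e24    # kg
M9, a9, e9 = 5 * M_earth, 450.0, 0.2   # benchmark P9 parameters (a9 in AU)

b = a9 * (M9 / M_sun)**0.4                                 # Eq. (5)
V = 2 * math.pi**2 * b**2 * a9 * math.sqrt(1 - e9**2 / 2)  # Eq. (6)
print(f"b = {b:.1f} AU, V = {V:.2e} AU^3")  # ~5.3 AU, ~2.5e5 AU^3
```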
Generally speaking, very large TNOs (with size \(\geq 500\) km) would be easier for us to identify. Based on the catalog compiled by the International Astronomical Union, there are four TNOs with size \(\geq 500\) km (assuming a standard asteroid albedo \(p=0.1\)) and orbital semi-major axis \(a=100-1000\) AU. The number of very large TNOs can provide a standard reference for estimating the amount of TNOs with different sizes inside the scattered disk region.
Consider the region of the scattered disk for \(a=100-1000\) AU. Based on the TNO catalog, all of the reported TNOs with \(a\leq 1000\) AU are located within a scale disk thickness of \(72.5\) AU above and below the P9 orbital plane. We therefore consider the volume of the scattered disk \(V_{d}\sim(2\times 72.5)\pi(1000^{2}-100^{2})\approx 4.5\times 10^{8}\) AU\({}^{3}\). We assume the asteroid size distribution is the same as that in the Kuiper Belt, \(dN/dD\propto D^{-q}\) (Fraser et al., 2014). This size distribution in the Kuiper Belt is well represented by a broken power law in \(D\) for large and small Kuiper Belt objects. For cold Kuiper Belt objects, the slope \(q\) for large objects (with size \(D\geq 140\) km) is \(q=8.2\pm 1.5\) while \(q=2.9\pm 0.3\) for \(D<140\) km (Fraser et al., 2014). Since there are four TNOs with size \(\geq 500\) km, taking \(q=8.2\), the average number density of TNOs with size \(D\geq 140\) km inside \(V_{d}\) is \(8.5\times 10^{-5}\) AU\({}^{-3}\).
Since the capturing volume is \(2.5\times 10^{5}\) AU\({}^{3}\), the average number of TNOs with size \(D\geq 140\) km that are captured is about 20. Note that this number is close to the typical number of satellites found in Jovian planets. In fact, the Jovian planets are somewhat close to each other, so the gravitational perturbation effect is significant. This would reduce the capturing volume and the number of satellites. However, there is almost no massive perturber for P9. The closest massive object, Sedna (semi-major axis \(a\sim 500\) AU), has a relatively small mass of only \(\sim 10^{-3}M_{\oplus}\), which cannot affect the capturing volume significantly. Therefore, we expect that there is a considerable number of captured TNOs forming a satellite system for P9, like the satellite systems of the Jovian planets.
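The capture estimate follows from the cumulative form of the size distribution, \(N(>D)\propto D^{-(q-1)}\), anchored to the four known TNOs with \(D\geq 500\) km; a minimal check of the arithmetic (our reconstruction of the intermediate step) is:

```python
V_d = 4.5e8    # scattered-disk volume, AU^3
V_cap = 2.5e5  # capturing volume, AU^3
q = 8.2        # slope of dN/dD for large objects (Fraser et al. 2014)

N_500 = 4                                 # observed TNOs with D >= 500 km
N_140 = N_500 * (500.0 / 140.0)**(q - 1)  # extrapolated cumulative count
density = N_140 / V_d
print(f"n ~ {density:.1e} AU^-3, captured ~ {density * V_cap:.0f}")
# ~8.5e-5 AU^-3 and ~20 captured TNOs
```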
## 3 The tidal heating model
Consider a fiducial radius of the satellite \(R=D/2=100\) km. For simplicity, let us assume that the satellite is spherical in shape. The tidal force on the satellite is large when the satellite is close to P9. The Roche limit is \(\sim 2\times 10^{4}\) km if we assume the density of the satellite to be \(\rho=1\) g/cm\({}^{3}\). For Uranus and Neptune, which have masses similar to the dark P9, the range of the orbital semi-major axes of the satellites is \(a_{s}\sim 5\times 10^{4}-5\times 10^{7}\) km. In the following, we will mainly consider the range of the orbital semi-major axis \(a_{s}=10^{5}-10^{6}\) km. Note that captured objects generally have large semi-major axes and eccentricities initially (Goulinski & Ribak, 2018; Napier, Adams & Batygin, 2021). However, orbital evolution through tidal effects would further decrease the values of the semi-major axis and eccentricity (see the discussion below).
The equilibrium temperature due to solar luminosity is approximately given by
\[T\approx 54.8\sqrt{\frac{26}{a_{9}}}\;{\rm K}, \tag{7}\]
where we have neglected the albedo and the phase integral (Stansberry et al., 2008). For
\(a_{9}=450\) AU, we get \(T=13\) K. However, if the satellite is very close to P9, the tidal heating effect would be very significant. The tidal heating model has been discussed for more than 50 years (Goldreich & Soter, 1966). In general, the tidal heating rate can be calculated by (Segatz et al., 1988; Lainey et al., 2009; Renaud & Henning, 2018)
\[\dot{E}=\frac{21C}{2}\frac{(Rn)^{5}e_{s}^{2}}{G}, \tag{8}\]
where \(n=\sqrt{GM_{9}/a_{s}^{3}}\) is the mean orbital motion, and \(e_{s}\) is the eccentricity of the satellite orbit (Segatz et al., 1988). Here, the constant \(C\) is related to the Love number \(k_{2}\) and the quality factor \(Q\), which reflect the physical properties (e.g. elastic rigidity) of the satellite (Segatz et al., 1988; Lainey et al., 2009; Hussmann et al., 2010). However, the value of \(C\) for the satellite is uncertain. Theoretical prediction shows that the value of \(C\) should be lower than 0.06 for a high-density satellite core (Kervazo et al., 2022). We adopt the value revealed from the observational data of Jupiter's moon Io, \(C\approx 0.02\) (Lainey et al., 2009). In equilibrium, the tidal heating rate equals the radiative cooling rate. Therefore, we have
\[T=\left(\frac{\dot{E}}{4\pi\sigma_{s}\epsilon_{\nu}R^{2}}\right)^{1/4}, \tag{9}\]
where \(\sigma_{s}\) is the Stefan-Boltzmann constant and \(\epsilon_{\nu}\) is the gray-emissivity. For simplicity, we assume \(\epsilon_{\nu}=1\) here.
In Fig. 1 and Fig. 2, we plot the equilibrium temperature as a function of \(a_{s}\), for different values of \(R\) and \(e_{s}\), respectively. We can see that the temperature can be quite high for some values of \(a_{s}\), \(R\) and \(e_{s}\). Generally speaking, a smaller value of \(a_{s}\) and larger values of \(R\) and \(e_{s}\) give a higher equilibrium temperature. For the fiducial values of \(a_{s}=10^{5}\) km, \(R=100\) km and \(e_{s}=0.5\), we get \(\dot{E}=1.4\times 10^{12}\) W. The equilibrium temperature of the satellite is about 119 K, so it can emit a significant amount of radio radiation at frequencies \(\nu>100\) GHz. Besides, we can estimate the time required for the satellite to heat up from 10 K to 100 K. Assuming a typical specific heat capacity for the satellite \(c_{s}=1000\) J kg\({}^{-1}\) K\({}^{-1}\), the time required is \(\sim 10^{4}\) yrs for the fiducial parameters used.
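A minimal numerical check of Eqs. (8)-(9) for the fiducial parameters (\(R=100\) km, \(a_{s}=10^{5}\) km, \(e_{s}=0.5\), \(C=0.02\), \(\rho=1\) g/cm\({}^{3}\)) reproduces the quoted heating rate, equilibrium temperature, and heat-up timescale:

```python
import math

G, sigma = 6.674e-11, 5.670e-8        # SI units
M9 = 5 * 5.972e24                     # P9 mass, kg
R, a_s, e_s, C = 1e5, 1e8, 0.5, 0.02  # satellite radius and orbit, m

n = math.sqrt(G * M9 / a_s**3)                    # mean orbital motion
E_dot = 21 * C / 2 * (R * n)**5 * e_s**2 / G      # Eq. (8)
T = (E_dot / (4 * math.pi * sigma * R**2))**0.25  # Eq. (9), emissivity = 1

m = 4 / 3 * math.pi * R**3 * 1000.0               # satellite mass, rho = 1 g/cm^3
t_heat = m * 1000.0 * 90.0 / E_dot / 3.15e7       # 10 K -> 100 K, c_s = 1000 J/kg/K
print(f"E_dot ~ {E_dot:.1e} W, T ~ {T:.0f} K, t_heat ~ {t_heat:.0e} yr")
# ~1.4e12 W, ~119 K, ~1e4 yr
```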
In the following, we estimate the thermal radio flux emitted by the satellite with the fiducial parameters. The thermal radio flux density is given by
\[S_{\nu}=\int\frac{2h\nu^{3}}{c^{2}(e^{h\nu/kT}-1)}d\Omega\approx\frac{2\pi h \nu^{3}}{c^{2}(e^{h\nu/kT}-1)}\left(\frac{R}{a_{9}}\right)^{2}. \tag{10}\]
Therefore, we can get the expected thermal radio flux density as a function of \(\nu\) for the fiducial parameters (see Fig. 3). The radio flux density is \(\sim 2\)\(\mu\)Jy for \(\nu=300\) GHz. The observable limit for the most sensitive sub-mm interferometer (e.g. Atacama Large Millimeter Array
ALMA) is around 1 \(\mu\)Jy at \(\nu=100-300\) GHz. Hence, it is feasible to observe this small flux using current observational technologies. For lower frequencies, the expected radio flux density is \(S_{\nu}\approx 10\) nJy at \(\nu=20\) GHz. This could be observable by the future SKA radio interferometer.
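Evaluating Eq. (10) at the fiducial equilibrium temperature \(T\approx 119\) K confirms the flux-density levels quoted above (1 Jy \(=10^{-26}\) W m\(^{-2}\) Hz\(^{-1}\)):

```python
import math

h, k, c, AU = 6.626e-34, 1.381e-23, 2.998e8, 1.496e11
T, R, a9 = 119.0, 1e5, 450 * AU  # fiducial satellite, SI units

def S_nu(nu):
    """Thermal flux density of Eq. (10), in Jy."""
    B = 2 * h * nu**3 / (c**2 * (math.exp(h * nu / (k * T)) - 1))
    return math.pi * B * (R / a9)**2 / 1e-26

print(f"S(300 GHz) = {S_nu(300e9) * 1e6:.1f} uJy")  # ~2 uJy
print(f"S(20 GHz)  = {S_nu(20e9) * 1e9:.0f} nJy")   # ~10 nJy
```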
Moreover, the thermal radio flux density \(S_{\nu}\) is proportional to \(\nu^{2}\). This can be distinguished from the normal background radio flux, which is usually modelled by \(S_{\nu}\propto\nu^{-\alpha}\) with \(\alpha>0\). In other words, by obtaining the radio spectrum emitted from the region of the dark P9, if we can detect a relatively strong thermal radio spectrum (\(S_{\nu}\propto\nu^{2}\)), this would be solid evidence to verify the P9 hypothesis, because there is no other astrophysical mechanism that can increase the temperature of a distant object to more than 50 K. For the conventional P9 model (not a dark object), the expected radio flux emitted by P9 should be \(\sim\) mJy at 200 GHz (Naess et al., 2021), which is 1000 times larger than that of a satellite. In any case, whether we detect an mJy signal from P9 or a \(\mu\)Jy signal from the satellite, the P9 hypothesis can be verified. Besides, if there is any potential signal received from P9 or the satellites, we can track the source for a couple of years to see whether the signal follows a nearly Keplerian orbit over time. This can further provide smoking-gun evidence to verify the P9 hypothesis.
Previous studies have constrained the possible range of location for P9 (Batygin et al., 2019; Fienga et al., 2016; Socas, 2022). A recent study has further constrained the exact location of P9 to R.A. \((48.2\pm 4)^{\circ}\) and DEC \((10.3\pm 1.8)^{\circ}\)(Socas, 2022). Such a small constrained region can make the observation much easier. The telescopes or interferometers used can focus on the target region for a very long exposure time to gain enough sensitivity to detect the potential thermal signals.
Note that the tidal heating rate gained by the satellite originates from the loss rate of the gravitational potential energy of the P9-satellite system. The eccentricity would gradually decrease so that the tidal heating rate would also decrease. The eccentricity fractional change rate is given by
\[\frac{|\dot{e}_{s}|}{e_{s}}=\left(\frac{e_{s}^{2}-1}{2e_{s}^{2}}\right)\frac{ \dot{E}}{E}. \tag{11}\]
The time scale for the eccentricity shrinking is \(\tau\sim|e_{s}/\dot{e}_{s}|\), which is about 0.6 Myr for the fiducial parameters. This timescale is short compared to the age of the solar system. In fact, there is a trade-off between choosing orbital parameters for which the radio emission is detectable (e.g. small \(a_{s}\)) and parameters for which the emission is sufficiently long-lived to give a higher detection probability (e.g. large \(a_{s}\)). Here, the range of \(a_{s}\) we considered (\(a_{s}=10^{5}-10^{6}\) km) is nearly optimal in this respect. Nevertheless, the relatively short eccentricity-shrinking timescale would not be a big problem if the satellite capture event is
a recent event. Also, as we have shown, satellite capture is not a rare event, so there could be more than one satellite with size \(>140\) km at \(a_{s}\sim 10^{5}\) km. Therefore, we expect that such a thermal radio signal from a satellite may still be observed.
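The \(\tau\sim 0.6\) Myr damping timescale can also be checked by taking \(E\) in Eq. (11) to be the magnitude of the orbital energy, \(GM_{9}m/2a_{s}\) (our assumption, since \(E\) is not written out explicitly), with the satellite mass evaluated for \(\rho=1\) g/cm\({}^{3}\):

```python
import math

G = 6.674e-11
M9 = 5 * 5.972e24                         # P9 mass, kg
R, a_s, e_s, rho = 1e5, 1e8, 0.5, 1000.0  # SI units
E_dot = 1.4e12                            # tidal heating rate from Eq. (8), W

m = 4 / 3 * math.pi * R**3 * rho
E_orb = G * M9 * m / (2 * a_s)            # |orbital energy| (assumed for Eq. 11)
tau = (2 * e_s**2 / (1 - e_s**2)) * E_orb / E_dot
print(f"tau ~ {tau / 3.15e13:.1f} Myr")   # ~0.6 Myr
```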
## 4 Discussion
In this article, we have demonstrated a theoretical framework to predict the possible observable signal from the P9-satellite system. If the dark P9 has a satellite system, the only currently feasible observation is to detect the possible signals from the satellites. We have shown that if a satellite with a typical size of \(\sim 100\) km orbits the dark P9 at an average orbital radius of \(a_{s}\sim 10^{5}\) km, its temperature can be as high as \(\sim 100\) K due to the tidal heating effect. At such a high temperature, the satellite can emit a thermal radio flux (\(\sim 1\)\(\mu\)Jy at 100-300 GHz) strong enough to be observed by ALMA. Moreover, the specific thermal radio spectrum \(S_{\nu}\propto\nu^{2}\) could be easily distinguished from the background radio flux, so it can provide smoking-gun evidence for the P9 hypothesis. The only possible reason for the existence of a \(\sim 100\) K object at \(\sim 450\) AU from the sun is that it is a satellite of a host planet, because a host dwarf planet or minor planet does not have enough mass to heat up a satellite to \(\sim 100\) K.
As we have shown above, there are many TNOs with size \(>140\) km in the scattered disk region. Therefore, the chance that these large TNOs (with \(R\sim 100\) km) are captured by P9 is not low. Besides, based on the example of Uranus (\(\approx 14M_{\oplus}\)), at least 13 satellites are located within \(10^{5}\) km, which suggests that our fiducial value of \(a_{s}=10^{5}\) km is a reasonable choice. For the eccentricity, simulations show that most captured objects would orbit with a very high eccentricity \(\approx 1\) (Goulinski & Ribak, 2018). Therefore, our fiducial value \(e_{s}=0.5\) is a conservative estimate.
Since no optical and radio signals have been detected so far for P9, the suggestion that P9 is a PBH has become a hot topic recently. There are some suggestions to send detectors to visit the alleged PBH P9 (Witten, 2020; Hibberd, Lingam & Hein, 2022). It would be very exciting because this may be our only chance to visit a black hole within a reachable distance. Nevertheless, we would need to wait at least 10 years for the detectors to arrive at the PBH P9. Some other studies have proposed to detect P9 by gravitational lensing (Philippov & Chobanu, 2016; Schneider, 2017; Domenech & Pi, 2022). However, the mass of P9 is very small, so a very sensitive measurement of the short-lived lensing event would be required, and a good confirmation may not be easy to obtain. A recent study has proposed a narrow range of possible locations for P9 (Socas, 2022). If P9 is a dark object and it has a satellite system, our proposal can directly observe the potential thermal signals emitted by
the satellites now. Therefore, this would be a timely and effective method to confirm the P9 hypothesis and verify whether P9 is a dark object or not.
## 5 Acknowledgements
The work described in this paper was partially supported by a grant from the Research Grants Council of the Hong Kong Special Administrative Region, China (Project No. EdUHK 18300922).
|
2309.08115 | REEF: A Framework for Collecting Real-World Vulnerabilities and Fixes | Software plays a crucial role in our daily lives, and therefore the quality
and security of software systems have become increasingly important. However,
vulnerabilities in software still pose a significant threat, as they can have
serious consequences. Recent advances in automated program repair have sought
to automatically detect and fix bugs using data-driven techniques.
Sophisticated deep learning methods have been applied to this area and have
achieved promising results. However, existing benchmarks for training and
evaluating these techniques remain limited, as they tend to focus on a single
programming language and have relatively small datasets. Moreover, many
benchmarks tend to be outdated and lack diversity, focusing on a specific
codebase. Worse still, the quality of bug explanations in existing datasets is
low, as they typically use imprecise and uninformative commit messages as
explanations.
To address these issues, we propose an automated collecting framework REEF to
collect REal-world vulnErabilities and Fixes from open-source repositories. We
develop a multi-language crawler to collect vulnerabilities and their fixes,
and design metrics to filter for high-quality vulnerability-fix pairs.
Furthermore, we propose a neural language model-based approach to generate
high-quality vulnerability explanations, which is key to producing informative
fix messages. Through extensive experiments, we demonstrate that our approach
can collect high-quality vulnerability-fix pairs and generate strong
explanations. The dataset we collect contains 4,466 CVEs with 30,987 patches
(including 236 CWE) across 7 programming languages with detailed related
information, which is superior to existing benchmarks in scale, coverage, and
quality. Evaluations by human experts further confirm that our framework
produces high-quality vulnerability explanations. | Chaozheng Wang, Zongjie Li, Yun Peng, Shuzheng Gao, Sirong Chen, Shuai Wang, Cuiyun Gao, Michael R. Lyu | 2023-09-15T02:50:08Z | http://arxiv.org/abs/2309.08115v1 | # REEF: A Framework for Collecting Real-World Vulnerabilities and Fixes
###### Abstract
Software plays a crucial role in our daily lives, and therefore the quality and security of software systems have become increasingly important. However, vulnerabilities in software still pose a significant threat, as they can have serious consequences. Recent advances in automated program repair have sought to automatically detect and fix bugs using data-driven techniques. Sophisticated deep learning methods have been applied to this area and have achieved promising results. However, existing benchmarks for training and evaluating these techniques remain limited, as they tend to focus on a single programming language and have relatively small datasets. Moreover, many benchmarks tend to be outdated and lack diversity, focusing on a specific codebase. Worse still, the quality of bug explanations in existing datasets is low, as they typically use imprecise and uninformative commit messages as explanations.
To address these issues, we propose an automated collecting framework REEF to collect REal-world vulnErabilities and Fixes from open-source repositories. We focus on vulnerabilities since they are exploitable and have serious consequences. We develop a multi-language crawler to collect vulnerabilities and their fixes, and design metrics to filter for high-quality vulnerability-fix pairs. Furthermore, we propose a neural language model-based approach to generate high-quality vulnerability explanations, which is key to producing informative fix messages. Through extensive experiments, we demonstrate that our approach can collect high-quality vulnerability-fix pairs and generate strong explanations. The dataset we collect contains 4,466 CVEs with 30,987 patches (including 236 CWE) across 7 programming languages with detailed related information, which is superior to existing benchmarks in scale, coverage, and quality. Evaluations by human experts further confirm that our framework produces high-quality vulnerability explanations.
Vulnerability, Data collection, Bug fix
## I Introduction
Software powers an increasing number of critical systems in the modern world, from people's daily life applications [1] to critical industrial scenarios [2] or even military systems [3]. However, software often contains bugs that can cause substantial losses [4]. In this paper, we focus on vulnerabilities since they are a more severe type of bug related to software security that may have serious consequences. The Common Vulnerabilities and Exposures (CVE) database tracks publicly disclosed cybersecurity vulnerabilities and the number of reported CVEs has been growing rapidly over the past decade [5], highlighting the scale and significance of this problem; this growth is largely attributed to the increasing number of open-source software applications, as well as the increasing sophistication of cyber-attacks and vulnerability discovery methods. To combat the potential risk in real-world software, various approaches have been proposed to detect and fix bugs.
Traditionally, static analysis tools have been used to analyze source code and detect potential bugs. A large number of mature tools such as Coverity [6] have been maintained and improved for several decades, applying to real-world projects to discover potential flaws. More recently, query-based code search tools like CodeQL [7] and Semgrep [8] have enabled developers to detect bugs by writing semantic queries to explore code bases. The core idea behind the tools is to provide basic analysis capabilities to developers and let them write queries to detect bugs. The effectiveness of these tools depends on the quality of the queries [9], which keeps the detection process both flexible and scalable.
Besides the traditional static analysis and query-based code search tools, data-driven approaches [10, 11, 12, 13, 14, 15, 16, 17, 18, 19] are also promising for detecting and fixing bugs. Different from analyzing the source code with predefined rules along with complex analysis techniques (e.g., dataflow analysis and control flow analysis), data-driven approaches leverage large-scale data from open-source communities such as GitHub to learn the patterns of bugs and fixes. Techniques such as zero-day detector [20] generate candidate patches and validate them to find viable fixes. Other data-driven methods learn developer-written patches from open-source repositories and apply them to repair new bugs.
The advantages of the data-driven approaches are two-fold. First, they are easier to implement than traditional static analysis tools. The traditional tools require complex analysis techniques to analyze the source code, which is time-consuming and hard to scale. For example, when facing a wide variety of software vulnerabilities, professional developers must carefully design appropriate detection rules or techniques [6, 7]. In contrast, the data-driven approaches only require large-scale data from open-source repositories, which is easy to obtain
and scale. Second, they are more flexible than the traditional tools. The traditional tools are usually language-specific and designed for particular types of bugs, which makes them hard to extend to other programming languages or other types of bugs [21, 22, 23]. The data-driven approaches are more flexible, as they can be easily extended to other types of bugs by learning the patterns of bugs and fixes from the large-scale data.
Data-driven approaches for automated bug detection and repair rely on the availability of large-scale, high-quality datasets. The efficacy of these techniques is directly dependent on the precision and comprehensiveness of the data used to train machine learning models. Specifically, accurately identifying bug locations and patches can bolster the precision of the trained models, improving the accuracy of bug detection and fixing. Furthermore, incorporating metadata on bug types and commit messages provides models with more granular information about bugs, which can enhance their performance on detection and repair tasks [24]. This additional context may also aid developers during debugging and code review by giving them more targeted insights.
However, current datasets for data-driven software analysis have several key limitations that hamper their effectiveness. First, the granularity of most datasets is at the function level, lacking precise records of bug locations and fixes. In reality, bugs often span multiple levels of abstraction and a single CVE may impact various, disparate parts of a codebase [25]. Thus, there is a shortage of systematically curated, real-world vulnerability data. Second, metadata about bug types is often inaccurate or imprecise. Currently, bug types are typically inferred from commit messages, which can be inaccurate and even sometimes wrong [26]. Such an error-prone process would possibly yield incorrect or misleading labels. Third, most current datasets [27] are outdated, failing to capture the latest state of constantly evolving software systems. Bugs that were previously patched may be reintroduced, tending to render existing records obsolete.
**Technical Challenges and Solutions.** We aim to develop a framework for automatically collecting and curating high-quality code snippets containing vulnerabilities, fixes, locations, their types, and messages from open-source repositories. It is thus required to prepare a large dataset incorporating this information to gain insights into real-world bugs and fixes, facilitating further research and applications. To achieve the above goals, our approach comprises three steps:
* Newest CVE capture: current datasets focus on simple and outdated vulnerability patterns without clear location information or patches. To develop a comprehensive dataset that is close to real-world situations, we primarily focus on recently revealed CVEs that have clear fix logs and location information. This results in a dataset of about eighteen thousand recently revealed CVEs with documented fixes.
* Large Language Model (LLM)-based explanation with human agreement: current datasets adopt the commit information as the comment for the vulnerability content, which has been shown to be unreliable [28, 29]. Considering the powerful capability of LLMs in code understanding [30, 31], we leverage a large language model to automatically comprehend the bug patterns and fixes from the CVEs and use its output as an additional message. To ensure the quality of the mined results, we carefully design the prompt system with a pilot study. Moreover, we conduct a human agreement study to evaluate the mined results.
* Analysis of the collected dataset: we conduct further analysis of the dataset to understand the characteristics of real-world bugs and fixes, providing details on each data point to benefit future studies or applications.
Our contributions are summarized as follows:
* We propose REEF, a framework to mine up-to-date, real-world vulnerabilities automatically. Incorporated with the corresponding fix patches from CVEs, we have collected a large-scale dataset with 30,987 bug location, type, and fix information. Our dataset consists of a wide variety of vulnerabilities across various languages, platforms, and granularity.
* We employ large language models to generate explanatory messages for the CVEs, supplementing unreliable commit information. We carefully design prompts and conduct a human evaluation to ensure message quality.
* We conduct an extensive analysis of the collected dataset and provide detailed insights into real-world vulnerabilities and fix characteristics to guide future research and applications.
* We have publicly released the code for our REEF tool on GitHub at [https://github.com/ASE-REEF/REEF-script](https://github.com/ASE-REEF/REEF-script), along with the vulnerability data we have collected, which is also available on GitHub at [https://github.com/ASE-REEF/REEF-data](https://github.com/ASE-REEF/REEF-data).
## II Related work
In this section, we introduce the related work from three threads including automated program repair, static analysis tools, and large language models for code, respectively.
### _Automated Program Repair_
Automatic program repair (APR) has garnered significant attention in recent years as a crucial approach to enhancing software reliability. Various techniques have been developed within the APR domain, including template-based [32, 33], search-based [34, 35], constraint-based [36, 37], and learning-based approaches [10, 11, 12, 13, 14, 15, 16, 17, 18, 19]. Among these categories, learning-based methods have achieved the greatest success and become the most popular in recent years. SequenceR [10] combines LSTM encoder-decoder architecture with copy mechanism for program repair. DLFix [13] uses a tree-based RNN to capture the structure of the source code and learn code transformations. CoCoNuT [14] uses ensemble learning on the combination of different networks to automatically fix bugs in multiple programming languages, separating the context and buggy lines in NMT-based APR. CURE [15] integrates pre-trained programming language models and significantly improves repair quality. Recoder [11] uses a syntax-guided edit decoder to guide the generation of syntactically correct repair patches. RewardRepair [16] employs execution-based
backpropagation to enhance the compilation rate of patches generated by NMT-based APR approaches. DEAR [17] generate multi-hunk, multi-statement fix patches with a divide-and-conquer strategy. AlphaRepair [12] utilizes a large pre-trained code model and generates patches in a fill-in-the-blank way. Zhong et al. [38] build a standard benchmark dataset and an extensive framework tool to mitigate threats for comparison in program repair. Xia et al. [39] evaluate the effectiveness of LLMs on program repair. KNOD [18] incorporates domain knowledge to guide patch generation in a direct and comprehensive way. TypeFix [19] is a prompt-based approach with fix templates incorporated for repairing Python type errors.
### _Static Analysis Tools_
Static analysis tools analyze source code without executing the program to detect potential bugs and vulnerabilities. They codify definitions of unsafe coding patterns and scan codebases to identify matches. Over the past decade, static analysis has become a popular approach for detecting vulnerabilities in software [40], and techniques including data flow analysis [41, 42], typestate analysis [43], type inference [44] and specific pointer analysis [45] have been developed to improve the precision and recall of bug detection.
Recently, query-based static analysis tools like CodeQL [7] have gained increasing attention from both academia and industry. These tools codify vulnerability patterns as SQL-like queries, facilitating knowledge sharing and reuse across entities and software systems. Software is treated as data [46], with programs parsed into hierarchical representations, often stored in databases. Unlike traditional static analysis tools [6, 47], query-based tools primarily focus on parsing software into rich, query-friendly representations and rely on crowdsourced communities to continually develop queries addressing newly discovered vulnerabilities. Established query-based tools cultivate active communities and offer bounty programs [48] to encourage query contribution and improvement. In turn, these communities help enrich and refine queries to target vulnerabilities proliferating in real-world software.
There is a huge effort to establish comprehensive benchmarks to evaluate the quality of analysis tools, which further helps find real-world vulnerabilities. However, existing benchmarks for evaluating static analysis tools typically use synthetic datasets. For instance, the Juliet [21] benchmark for C/C++ and the Defects4J [22] in Java language. These follow prescribed patterns and quickly become outdated, unable to represent the complexity of real-world CVEs. Although built around the Common Weakness Enumeration (CWE) to provide reasonable, well-defined examples, they cannot capture the nuances of most vulnerabilities. Some works collect datasets from real CVEs and make complex processes to filter the suitable ones. For instance, Ruohonen [49] targets Python and collects samples from popular repositories and Linares et al. [50] analyze Android apps. These narrow scopes limit the types of vulnerabilities and coding patterns addressed, impeding holistic analysis. However, these works usually focus on particular programming languages or codebases, lacking diversity.
While static analysis shows promise for detecting vulnerabilities at scale, evaluating tools remains challenging due to the lack of comprehensive benchmarks reflecting the diversity of real-world bugs. Real CVEs offer a rich source for dataset generation but are difficult and time-consuming to gather and curate. Progress in static analysis thus depends on developing datasets that mirror the heterogeneity of vulnerabilities in real code. Automated or semi-automated methods for collecting and labeling examples from a wide range of open-source repositories directly show potential for advancing research and practice in this crucial area of software security, which is the focus of our work.
### _Large Language Models for Code_
Recently, significant advancements in SE research have been brought by Large Language Models (LLMs), which brought impressive improvements in a wide range of code-related tasks. One notable model is Incoder [51], which employs a causal masking training objective to excel in code infilling and synthesis. Another popular model is Codex [52], a sizable pre-trained code model introduced by OpenAI, which supports the Copilot service on various code-related tasks [53]. The models recently released by OpenAI, such as ChatGPT [54] and GPT-4 [55], are also pre-trained with source code data and show remarkable programming capabilities. AlphaCode [56] has been specifically trained for generating code for programming competitions like Codeforces. CodeCMR [57] and IRGEN [58] are pre-trained models designed for low-level code on various code-related tasks. CodeGen [59] is a large pre-trained model for multi-turn program synthesis with more than 16B parameters, while CodeGeeX [60] is a recently proposed open-source multilingual code generation model with 13 billion parameters. BigCode Project has developed and open sourced StarCoder [61] which contains 15.5B parameter. A recent work WizardCoder [62] is fine-tuned with Evol-Instruct and achieves state-of-the-art performance surpassing all existing open-source Code LLMs.
In-context learning (ICL) [63, 64] is a recent paradigm that enables LLMs to learn from just a few examples without fine-tuning. It concatenates a few input-output examples with the query question to form an input for the language model and obtain the prediction. Recently, there has been increasing interest in applying in-context learning to code-related tasks [65, 66, 39, 67]. CEDAR [66] retrieves similar examples and constructs the demonstrations for assert generation and program repair. Synchromesh [68] retrieves few-shot examples by Target Similarity Tuning and samples programs using Constrained Semantic Decoding. A recent work [69] empirically studies the impact of three demonstration construction factors on in-context learning in code intelligence tasks. Geng et al. [70] enhance in-context learning for multi-intent code comment generation by selecting similar examples and re-ranking the output candidates. Ahmed et al. [71] propose to incorporate
static analysis results into the in-context prompt for code summarization.
## III Workflow
Fig. 1 presents our pipeline for collecting real-world vulnerabilities and constructing the dataset. Our pipeline consists of three steps: \(\blacklozenge\) CVE capturing and collection. To address the limitations of current benchmarks, we gather recent real-world vulnerabilities from CVEs that have been newly disclosed, including related bug reports and source code. We use an automated crawler to collect the latest CVEs and then filter out irrelevant ones based on metrics like CVSS scores. \(\blacklozenge\) LLM message supplementation. To account for uneven commit message quality and potential bias, we leverage the ability of large language models [72] to generate vulnerability explanations for each commit. Following standard bug description guidelines [73], we design a prompt for the vulnerability explanations using advanced LLM, enabling the model to formulate consistent, unified messages through contextual learning. \(\blacklozenge\) Dataset analysis. We introduce metrics to assess our dataset's quality and compare it with current benchmarks to determine effectiveness in supporting existing tools. We analyze the generated messages and compare them with committed information to evaluate our approach. Finally, we conduct human studies to assess the quality of the generated messages.
To evaluate the effectiveness of our approach to collect code vulnerability and the quality of our proposed dataset, we investigate the following three research questions (RQs):
* What is the advantage of our dataset compared to existing benchmarks?
* To what extent the prompt design affects the generated message?
* How are the generated messages in alignment with experts?
Specifically, we analyze our collected dataset and compare it with current benchmarks to explore whether it is effective and diverse in RQ1 (in Section IV-A). As we use the code understanding ability of LLM, in RQ2 we further study how the prompt would affect the performance of generated bug explanation compared to the commit information in Section IV-B. Finally in RQ3, by comparing the patches we collected as well as the generated message, we study to what extent humans are in agreement with the generated code explanation as discussed in Section IV-C.
Notably, we focus on a wide variety of vulnerabilities across various languages and platforms, including not only the commonly seen pattern bugs that occur at the function or statement level, but also complex vulnerabilities spanning multiple files and functions, which we believe are more challenging for current tools to detect and can better benefit the community.
### _Data Collection_
Creating an exhaustive dataset including real-world code snippets with vulnerabilities, fixes, locations, and types is challenging, let alone including vulnerability explanations. To develop a dataset reflecting real-world scenarios, we first gather real-world vulnerabilities from multiple sources, including the NVD database and CVE list maintained by Mend [74], which is a comprehensive open-source vulnerabilities database from hundreds of both popular and under-the-radar community resources. Users can also specify additional sources as needed. If a vulnerability has a clear report, proof-of-concept, and publicly available source code before and after fixing, we collect the related bug reports and source code, and store them in our raw dataset. Notably, it would be possible that a single vulnerability may be linked to multiple commits and files; we gather all related commits and files accordingly.
As shown in Fig. 1, we design a filter to remove less severe vulnerabilities from the raw dataset. We first eliminate those with a low CVSS score, which indicates relatively low impact and damage potential. We then assign each vulnerability a "fix score" based on the number of related commits, where each commit receives a weight based on the number of files it modifies. Vulnerabilities with low "fix scores" are excluded from the final dataset.
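A minimal sketch of this filtering step is given below; the thresholds, weights, and record fields are illustrative assumptions rather than the exact values used by REEF.

```python
def fix_score(commits, file_weight=0.5):
    """Illustrative 'fix score': more related commits and more touched files
    indicate a better-documented, more complete fix."""
    return sum(1.0 + file_weight * len(c["files"]) for c in commits)

def keep(vuln, cvss_min=4.0, fix_min=2.0):
    # Drop low-severity CVEs first, then entries with a weak fix record.
    if vuln["cvss"] < cvss_min:
        return False
    return fix_score(vuln["commits"]) >= fix_min

raw = [
    {"cve": "CVE-XXXX-0001", "cvss": 9.8, "commits": [{"files": ["a.c", "b.c"]}]},
    {"cve": "CVE-XXXX-0002", "cvss": 2.1, "commits": [{"files": ["x.py"]}]},
]
dataset = [v for v in raw if keep(v)]
print([v["cve"] for v in dataset])  # only the first entry survives
```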
To expand the dataset's potential applications and adapt it for various downstream tasks, we incorporate disclosure date information for each item. Specifically, we keep the complete CVE name indicating when each vulnerability was disclosed and assigned a unique number. Users preferring more recent data can easily filter out vulnerabilities disclosed during a given time period.
### _LLM Message Supplementation_
To construct a comprehensive, informative dataset, vulnerability descriptions are crucial since they provide rationale and fix details, helping downstream tools better understand vulnerabilities. This is especially useful for data-driven methods, as natural language is easier to comprehend than code [75]. However, commit messages are not always available and may be uninformative, biased [76], or misleading [26] in explaining vulnerabilities. Worse yet, the existing commit messages could be missing, impairing downstream approaches. To address this, we leverage large language models to generate vulnerability explanations for each commit. Following standard bug description guidelines [73], we empirically design a prompt for the vulnerability explanations using advanced LLM, enabling the model to formulate consistent, unified messages through in-context learning.
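The sketch below illustrates how such a one-shot, in-context prompt could be assembled from a patch diff; the wording, the worked example, and the `query_llm` helper are hypothetical placeholders rather than REEF's exact prompt or API.

```python
EXAMPLE = (
    "Patch:\n- strcpy(buf, input);\n+ strncpy(buf, input, sizeof(buf) - 1);\n"
    "Explanation: The unbounded strcpy allowed a buffer overflow; the fix "
    "bounds the copy to the destination size.\n"
)

def build_prompt(diff, max_chars=4000):
    """One-shot prompt: task description + a worked example + the new patch.
    Long diffs are truncated to respect the model's input limit."""
    task = ("You are a security expert. Explain the vulnerability fixed by "
            "the following patch and how the patch removes it.\n")
    return task + EXAMPLE + "Patch:\n" + diff[:max_chars] + "\nExplanation:"

# query_llm is a hypothetical wrapper around whichever LLM API is used:
# message = query_llm(build_prompt(diff), max_new_tokens=256)
```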
After collecting the responses from the large language model, we further conduct a human inspection to evaluate whether the generated message is in alignment with experts and whether it is informative enough to explain the vulnerability. Only the generated message that is in agreement with the experts would be included in the final dataset.
We provide a complete list of field names for our collected dataset in Table I. The fields can be categorized into four groups: (1) metadata, (2) vulnerability information, (3) LLM-enhanced information, and (4) project information. The metadata describes the programming language and index for each data item. The vulnerability information contains CVE details documenting the real-world impact of the bugs. The
LLM-enhanced messages are generated descriptions of the vulnerabilities.
The project information stores data on the code repositories mined for bugs, including website details. In summary, our dataset incorporates comprehensive metadata, security details, generated explanatory messages, and provenance information to enable in-depth analysis of real-world vulnerabilities. The multi-faceted data provides insights into vulnerability properties, remediation, language model performance, and codebase characteristics.
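To make the field grouping concrete, a single record might look like the following; the field names are illustrative stand-ins for those listed in Table I, not the exact schema.

```python
record = {
    # metadata
    "index": 42, "language": "C",
    # vulnerability information
    "cve_id": "CVE-XXXX-0001", "cwe_id": "CWE-787", "cvss": 9.8,
    "patches": [{"file": "src/parse.c", "diff": "<unified diff>", "commit": "<sha>"}],
    # LLM-enhanced information
    "generated_message": "Out-of-bounds write when parsing oversized input ...",
    # project information
    "repo_url": "https://github.com/<owner>/<project>",
}
```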
### _Dataset Analysis_
To rigorously evaluate our dataset's quality, we conduct an exhaustive analysis. Conceptually, we compare our dataset to current benchmarks across several axes: supported programming languages, fixing information integrity, granularity, data source, corpus size, and CWE coverage, respectively. A comparative analysis along these dimensions provides a holistic sense of relative strengths. A systematic assessment of vulnerability type diversity examines coverage of the Common Weakness Enumeration (CWE). We posit that a dataset exhibiting a wider range of CWE types affords more comprehensive vulnerability modeling, with greater potential to generalize across systems. By comparing the number of CWE types, we gain quantitative insight into our dataset's diversity. We further evaluate the challenge to current tools using Semgrep [8], a ubiquitous query-based static analysis tool, to detect vulnerabilities in our dataset. A higher proportion of vulnerabilities evading detection suggests greater resilience against existing methods, indicating the dataset poses a more formidable evaluation benchmark. Missed vulnerabilities point to remaining gaps in vulnerability modeling and detection that the dataset could help address through continued research. Finally, we conduct an in-depth statistical analysis of our dataset itself to glean qualitative details that inform future work. Summary statistics on parameters like vulnerabilities' average severity and exploitability, bug fix length, and explanation shine a light on real-world characteristics underrepresented in synthetic data. A granular exploration of attributes exposes new problem dimensions beyond the capabilities of simplified synthetic benchmarks.
These complementary analyses, systematically connected through a logical pipeline, provide empirical evidence and qualitative characterization to demonstrate our dataset's diversity, challenge, and fidelity in emulating real-world scenarios. The rigor and depth of our evaluative approach underline the dataset's potential to serve as a foundation for future research advancing the state of the art in software vulnerability detection and automatic program repair.
Fig. 1: The pipeline of our REEF for gathering vulnerabilities, enriching data, and analyzing the dataset.
Fig. 2: Example of data instance with enhanced patch messages.
## IV Evaluation
### _RQ1: What is the advantage of our dataset compared to existing benchmarks?_
**Conceptual comparison.** We first conduct a conceptual comparison between our selected dataset and current datasets, in which we compare the differences during the dataset collection process, including the multi-language support, fix information, location, related message, granularity, source, and size.
As shown in Table II, our dataset has the following advantages compared to current benchmarks: (1) Detailed fix information. Compared to other benchmarks that only include fix patterns, our dataset contains detailed fix information, including the bug location and fix information. (2) Multi-language support. Different from the existing datasets that focus on specific languages, ours incorporates vulnerabilities in multiple languages, including C/C++, Java, C#, and Python. (3) Multi-level granularity. Our dataset contains vulnerabilities at multiple levels, including function-level, statement-level, and expression-level. (4) Real-world CVE. The dataset we collected contains real-world CVEs, which are more representative than synthetic vulnerabilities. (5) Large-scale. A vast volume of 30,987 vulnerability patches enables robust statistics and enhances machine learning via increased instances, far exceeding the limited samples of other benchmarks.
In summary, at the conceptual level, our dataset is more comprehensive and representative than existing ones.
**Dataset coverage.** As discussed in Sec. III-C, we further analyze the coverage of Common Weakness Enumeration (CWE) types in our dataset compared to other benchmarks. Our hypothesis is that a dataset exhibiting a wider range of CWE types will enable more comprehensive vulnerability modeling with greater potential for generalization across systems.
As shown in Table II, our dataset covers more CWE types than all other benchmarks. Due to the limitation of space, we only show the total number of all CWE types across languages, but our dataset covers more CWE types even when focusing on one specific language. For example, our dataset covers 134 CWE types in C/C++, while Juliet-C++ benchmark only covers 118 CWE types. Notably, it would be hard to estimate a clear number of potential CWEs in a specific language, since some of the CWEs are not language-specific. However, we can still observe that our dataset covers more CWE types than other benchmarks, which indicates that our dataset provides more comprehensive vulnerability coverage compared to existing benchmarks.
Beyond CWE coverage analysis, we also evaluate detected CWE coverage using a static analysis tool. The intuition is that a lower proportion of successfully detected vulnerabilities suggests a more challenging dataset, as existing flaws are harder to discover with standard tools. Such difficulty highlights the potential utility of advanced models. Specifically, we use Semgrep [8], a popular query-based static analysis tool, to detect potential vulnerabilities. We use its default ruleset, which contains 1,088 rules for Java, 655 rules for Python, and 133 rules for JS, to detect vulnerabilities in our dataset and other benchmarks. Defects4J only provides code patches, which makes analyzing it difficult. Consequently, we do not compare our benchmark with Defects4J. Juliet-C++ has a notably high detection rate at 35.7%, presumably because it is a synthetic dataset. LinuxFlaw and FUNDED, the benchmarks containing real-world CVEs, have substantially lower detection rates of 2.2% and 4.0%, respectively. In contrast, our dataset has an even lower detection rate of 1.2%, indicating that our dataset poses a more challenging problem. This is likely due to the fact that our dataset comprises more recent and intricate real-world vulnerabilities mined from up-to-date CVEs. Consequently, defects in our dataset may be more difficult to be detected using current approaches.
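A sketch of how such a detection rate can be computed from per-sample findings is shown below; the `findings` mapping is assumed to be produced by running the analyzer (e.g. Semgrep with its default rules) on each vulnerable sample and recording which rules fire.

```python
def detection_rate(samples, findings):
    """Fraction of known-vulnerable samples flagged by the analyzer.

    samples:  list of sample ids (one per vulnerability instance)
    findings: dict mapping sample id -> list of matched rule ids
    """
    detected = sum(1 for s in samples if findings.get(s))
    return detected / len(samples) if samples else 0.0

samples = ["vuln-001", "vuln-002", "vuln-003"]
findings = {"vuln-001": ["rule.tainted-sql"], "vuln-002": [], "vuln-003": []}
print(f"{detection_rate(samples, findings):.1%}")  # 33.3%
```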
In summary, our collected dataset covers more CWE types and is more challenging for standard static analysis tools to assess, suggesting it is more comprehensive and representative than existing benchmarks. The broader range of hard-to-detect vulnerabilities in our dataset could support more robust vulnerability modeling and lead to repair systems with stronger generalization ability.
**Dataset statistics.** We first analyze the statistics of our collected dataset. As shown in Table III, our dataset contains 4,466 vulnerabilities across seven languages. For each vulnerability, we report the average number of changed files, the number of patches, and the changed lines of code, with results shown in the table. These results demonstrate that Java and C# vulnerabilities are substantially more complex than those in the other languages, with average values for all metrics nearly double those of the other languages. This aligns with our expectation, as both are typically used as object-oriented (OO) languages and follow more compact coding conventions. Moreover, we observe that the C language still accounts for a large proportion of vulnerabilities, which indicates that (1) C is still widely used in practice, and (2) unfamiliarity with C's low-level details leads to ongoing vulnerability discovery. Compared to a newer language such as Python, C offers a relatively low level of abstraction, and developers need to manage memory manually, which can lead to errors such as buffer overflows and memory leaks. This feature makes C more error-prone and leads to a large number of vulnerabilities.
Beyond analysis at the source code level, we also examine our dataset in terms of vulnerability types. As shown in Fig. 3, the top 15 CWE types constitute over 55% of our dataset, indicating it is comprehensive and representative. We also observe that Java and Python have similar CWE type distributions, while C/C++ has a distinct distribution. The top five CWE types are CWE-79 (Cross-Site Scripting), CWE-125 (Out-of-bounds Read), CWE-787 (Out-of-bounds Write), CWE-20 (Improper Input Validation), and CWE-119 (Improper Restriction of Operations within the Bounds of a Memory Buffer). It is unsurprising that the dataset over-represents these CWE types, which are archetypical examples of memory-related vulnerabilities. Overall, these statistics suggest that our dataset effectively samples from the space of vulnerabilities and contains a diversity of complexity levels and types, especially for C/C++ and low-level languages.
**Finding 1:** REEF, the framework we propose to collect the code vulnerabilities, considers more aspects than existing benchmarks, which in turn makes the dataset more practical to the real world. Through comprehensive analysis, we demonstrate that our dataset covers more CWE types and is more challenging than other benchmarks, indicating its comprehensiveness and representativity than other benchmarks.
### _RQ2: To what extent does the prompt design affect the generated message?_
We answer RQ2 from two aspects. First, we conduct a pilot study on C/C++ projects to analyze the generated message from the perspective of their basic understanding of vulnerabilities and fix records. Second, we generate the message by using three commonly-used prompt patterns and compare them with the original message.
**Pilot Study.** Inspired by [78], we list the investigated LLMs in Table IV. We use GPT-Neo [79], Llama-7B, and Llama-13B [80] fine-tuned on Alpaca [81] and released by LMFlow [82]. Vicuna is Llama-based and fine-tuned on user-shared conversations. For the ChatGLM [83] and Vicuna [84] models, we use the official code. We also include the commercial LLMs ChatGPT [54] and GPT-4 [55]. To make a fair comparison, we use the same prompt for all models. Moreover, we randomly select 15 CVE cases from the C/C++ vulnerabilities. After generating the messages, we manually label each message as "completely traceable", "somewhat traceable", or "non-traceable": if the message fully incorporates the information in the CVE patch or the vulnerability rationale, we label it as "completely traceable"; if it only partially incorporates this information, we label it as "somewhat traceable"; otherwise, we label it as "non-traceable".
From the results, we observe that, besides GPT-4, ChatGPT and Tongyi achieve competitive performance in generating messages that are traceable to the CVE patch or the vulnerability rationale. Considering the limitations of API accessibility and academic resources, we use Tongyi as the baseline model in the following experiments.
**Prompt pattern comparison.** Notably, we generate only one summarization message for each CVE, regardless of the number of commits and files. A CVE may contain multiple commits and files, in which case the generated message summarizes all of them. However, since the input token length is limited, we set the maximum number of generated explanation tokens to 256. Once the input surpasses the limit, we truncate it and generate the message based on the truncated input. Users can increase this number to generate more detailed explanations if needed.
Fig. 3: Top-15 CWE types of our dataset.

Besides the token length, the prompt pattern is also a key point. It has been demonstrated that the output performance for code-related tasks is largely influenced by the type of prompt pattern [86]. Specifically, we select 20 cases to generate messages based on the three prompt patterns, named "Zero-shot", "One-shot" and "Few-shot".
The prompts used to generate patch messages in the zero-shot setting are presented in Figure 4. For one-shot and few-shot prompts, we give the large language model one or a few human-authored examples to guide the model in generating high-quality patch messages. Here we do not consider the Chain-of-thought [87] pattern since no suitable rationale for code is available. For each prompt pattern, we generate the message based on the CVE patch and the vulnerability rationale. We then invite five experts to evaluate the generated message based on the following criteria:
* **Comprehensiveness**: whether the message is comprehensive enough to explain the vulnerability and fix.
* **Consistency**: whether the message is consistent with the CVE patch and the vulnerability rationale.
* **Traceability**: whether the message is traceable to the CVE patch and the vulnerability rationale.
Each expert is asked to give a score in the range of 0 to 1 to rate the generated message with respect to the corresponding criterion. After collecting the evaluation results, we calculate the average score for each prompt pattern. The results are shown in Table V. From the results, we observe that the "Few-shot" pattern achieves the highest score on all three criteria, while the "One-shot" pattern achieves competitive results. As a trade-off between performance and computation cost, we adopt "One-shot" as the default prompt pattern in this work.
**Finding 2:** Large language models differ in their ability to generate vulnerability explanations traceable to source code and rationale. Tongyi and GPT models excelled in a pilot study, while a "one-shot learning" prompt achieved a satisfying performance for generating comprehensive, consistent, and traceable explanations in a prompt pattern comparison with an affordable query budget.
### _RQ3: How are generated messages in alignment with experts?_
To answer RQ3, we first present statistics comparing our generated messages to the original commit messages. We then conduct a human evaluation to assess the quality of our generated messages relative to the corresponding commit messages.
**Statistics comparison** We report statistics for our generated messages and the original commit messages in Table VI. On average, our generated messages contain 397.08 characters, 1.92 times more than the original commit messages. Moreover, the median length of our generated messages is 356, also higher than for the original commit messages. We attribute this to the fact that original commit messages are written by developers with varying perspectives and goals, yielding greater diversity than our generated messages.
Notably, some original commit messages are of low quality for two reasons: (1) they are auto-filled by the GitHub platform, or (2) they are too short (less than 20 characters). We count the number of these low-quality commit messages and report them in the "Lcmsg" column of Table III. Of the original commit messages in our dataset, nearly 5% are considered low quality. They are unsuitable for providing informative vulnerability explanations and are thus excluded from our analyses, further motivating our work to generate comprehensive, high-quality vulnerability explanations.
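The length statistics and the low-quality filter can be reproduced with a few lines; the 20-character threshold follows the text, while the auto-fill check shown here is an illustrative heuristic.

```python
from statistics import mean, median

def is_low_quality(msg):
    # Too short, or an auto-filled GitHub default such as "Update <file>".
    text = msg.strip()
    return len(text) < 20 or text.lower().startswith("update ")

commit_msgs = ["Update parser.c", "Fix out-of-bounds write in the header parser"]
generated_msgs = ["The parser wrote past the end of the header buffer ..."]

kept = [m for m in commit_msgs if not is_low_quality(m)]
print(f"low quality: {len(commit_msgs) - len(kept)} of {len(commit_msgs)}")
print(f"mean/median generated length: "
      f"{mean(map(len, generated_msgs)):.0f}/{median(map(len, generated_msgs)):.0f}")
```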
The statistics demonstrate that our generated messages are substantially more detailed and consistent than the original commit messages. The greater length suggests our messages provide more structured and in-depth vulnerability explanations overall compared to the original commit messages, which are often quite brief and arbitrary. Filtering out low-quality messages is also prudent, as their inclusion could skew the statistics and make the dataset unsuitable for modeling. These results thus indicate we achieve our aim of generating high-quality, comprehensive vulnerability explanations.
Fig. 4: Example of the prompt we used to query large language models.

**Human Study.** We conduct a human evaluation to assess the quality of the generated messages. We recruit five experts, including two industrial developers and three academic researchers with expertise in software vulnerability detection, as participants. We randomly select 40 samples and create an online questionnaire for them. For each sample, we provide two messages without specifying their origin. On a scale of 1 to 5 (1 being completely unsatisfactory, 5 being fully satisfactory), participants score each message. To ensure participants understand the task, we include five sanity check (SC) test items, considering only participants who answer all SC items correctly. Each participant evaluates 35 real samples and five SC items; we assign each sample to five participants. All participants pass the SC, taking an average of 35 minutes.
The human evaluation finds an average score of 3.05 for original messages and 3.70 for generated messages, a 21.31% relative gain. Analysis of all responses shows that in 7.14% of cases the generated message seems worse, while in the remaining 92.86% the generated message is equal to or better than the original. The Fleiss' Kappa [88] of 0.92 indicates "almost perfect agreement" between participants. These results suggest the generated messages are well-aligned with expert assessments and higher in quality than the original commit messages. The small minority of cases where the generated message seems inferior could be anomalous or could reflect subtle aspects not captured in our message-generation approach. However, the overwhelming expert preference for the generated messages, further evidenced by strong inter-rater agreement, indicates that their quality is superior overall.
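The agreement statistic can be reproduced with the standard Fleiss' kappa formula, as sketched below. The input is assumed to be a count matrix recording, for each evaluated sample, how many of the five participants assigned each score from 1 to 5; the example ratings are toy values, not the study data.

```python
# Minimal sketch of the Fleiss' kappa computation used to quantify
# inter-rater agreement in the human study.
import numpy as np

def fleiss_kappa(counts: np.ndarray) -> float:
    """counts: (n_samples, n_categories) rating counts, same number of raters per sample."""
    n_raters = counts.sum(axis=1)[0]
    p_j = counts.sum(axis=0) / counts.sum()            # overall category proportions
    P_i = (np.sum(counts ** 2, axis=1) - n_raters) / (n_raters * (n_raters - 1))
    P_bar, P_e = P_i.mean(), np.sum(p_j ** 2)          # observed vs. chance agreement
    return (P_bar - P_e) / (1 - P_e)

if __name__ == "__main__":
    # 4 samples, scores 1-5, 5 raters each (toy data)
    toy = np.array([[0, 0, 0, 1, 4],
                    [0, 0, 5, 0, 0],
                    [0, 0, 0, 5, 0],
                    [0, 1, 4, 0, 0]])
    print(round(fleiss_kappa(toy), 3))
```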
**Finding 3: Our generated vulnerability explanations are of superior quality to original commit messages according to both expert human evaluations and quantitative message statistics. Though a small fraction of generated messages are inferior, experts overwhelmingly prefer our generated explanations, indicating they are well-aligned with human assessments of explanation quality.**
## V Discussion
**Limitations and Threats to Validity.** We now discuss the validity and limitations of this work. We collect the dataset from real-world CVEs whose affected open-source projects are hosted on GitHub. However, not all open-source projects are hosted on GitHub, and we may miss projects hosted on other platforms. Moreover, since we only collect CVEs reported from 2016 onward, earlier CVEs are not included in our dataset. This may introduce bias, as some critical vulnerabilities may have been discovered and fixed in earlier years. In addition, we only collect CVEs that have been fixed by the developers. However, some CVEs are never fixed, or the developers fixed the vulnerabilities but did not report or confirm the corresponding CVEs. In summary, our collection framework tries to collect as many CVEs as possible, but it is still possible that some CVEs are missing from our dataset, which future work can improve upon.
**Message bias.** In this work, we leverage the code-understanding ability of large language models to generate additional messages for the collected vulnerabilities. Although we specify strict rules and a carefully designed prompt system, and our human evaluation shows that the generated messages are of high quality, some generated messages may still be biased or unsatisfactory. For example, when the input code snippet is either too long or too short, the generated message may not be as good as in other cases. However, their superior ratings compared to the original commit messages, as assessed by experts, indicate that they still achieve the aim of producing informative vulnerability explanations, even if imperfect. Moreover, the quality of the generated messages can be further improved with the help of more advanced language models.
## VI Conclusion
In this paper, we propose a novel and practical approach to automatically collect real-world CVEs with detailed information, and we leverage the code-understanding ability of large language models to generate additional messages for the collected vulnerabilities. Our framework collects 4,466 CVEs from 2016 to 2023 and incorporates 30,987 messages for the collected patches. The collected dataset has been evaluated by experts, and the results show that the generated messages are of high quality and can help developers understand the vulnerabilities. This work serves as a roadmap for researchers to construct better data-driven bug detection and auto-fix techniques.
|
2309.07226 | Energy-resolved spin correlation measurements: Decoding transverse spin
dynamics in weakly interacting Fermi gases | We study transverse spin dynamics on a microscopic level by measuring
energy-resolved spin correlations in weakly interacting Fermi gases (WIFGs).
The trapped cloud behaves as a many-body spin-lattice in energy space with
effective long-range interactions, simulating a collective Heisenberg model. We
observe the flow of correlations in energy space in this quasi-continuous
system, revealing the connection between the evolution of the magnetization and
the localization or spread of correlations. This work highlights energy-space
correlation as a new observable in quantum phase transition studies of WIFGs,
decoding system features that are hidden in macroscopic measurements. | J. Huang, J. E. Thomas | 2023-09-13T18:00:55Z | http://arxiv.org/abs/2309.07226v2 | # Decoding transverse spin dynamics by energy-resolved correlation measurement
###### Abstract
We study transverse spin dynamics on a microscopic level by measuring energy-resolved spin correlations in a weakly-interacting Fermi gas. The trapped cloud behaves as a many-body spin-lattice in energy space with effective long-range interactions, simulating a collective Heisenberg model. We observe the flow of correlations in energy space in this quasi-continuous system, revealing the connection between the evolution of the magnetization and the localization or spread of correlations. This work highlights energy-space correlation as a new observable in quantum phase transition studies, decoding system features that are hidden in macroscopic measurements.
Collective spin dynamics plays a central role in spin-lattice models, such as Heisenberg models of quantum magnetism [1], Anderson pseudo-spin models of superconductivity [2], and Richardson-Gaudin models of pairing [3]. These models have been simulated in discrete systems, including ion traps [4; 5; 6], quantum gas microscopes [7], and cavity-QED experiments [8], which achieve single-site resolution. In contrast, a weakly interacting Fermi gas provides a powerful many-body platform for realizing spin lattice models in a quasi-continuous system. In the nearly collisionless regime, the energy states of the individual atoms are preserved over experimental time scales, creating a long-lived synthetic lattice [9] in energy space that simulates a collective Heisenberg Hamiltonian with tunable long-range interactions [10; 11; 12; 13; 14; 15; 16; 17].
In this work, the energy-space spin-lattice is studied at a microscopic level using energy-resolved spin correlation measurements, which enable a deeper look into the signatures of macroscopic "phase transitions" and the origins of the macroscopic properties, such as the magnetization, which are usually measured. In a many-body spin lattice with a collective Heisenberg Hamiltonian, the interplay between the site-dependent energy and site-to-site interactions leads to a transition to a spin-locked state as the interaction strength is increased, producing a large total transverse spin. This transition has been observed in a weakly-interacting Fermi gas of \({}^{40}\)K [16], using the total transverse magnetization as the order parameter. A deeper picture of the spin-locking transition is provided by our energy-resolved measurements, which illustrate the emergence of strong correlations between transverse spin components in localized low-energy and high-energy subgroups and the spread of these correlations throughout the energy lattice as interaction strength increases.
The observation of energy-resolved transverse correlations is implemented in a degenerate Fermi gas consisting of \(6.2\times 10^{4}\) Li atoms. The cloud is confined in an optical trap and cooled to a temperature \(T=0.29\,T_{F}\), where the Fermi temperature is \(T_{F}\approx 0.73\,\mu\)K. The ratio between the radial and axial trap frequencies is \(\sim 27\), allowing a quasi-1D approximation in the modeling. A superposition of the two lowest hyperfine-Zeeman states, denoted by \(|1\rangle\equiv|\uparrow_{z}\rangle\) and \(|2\rangle\equiv|\downarrow_{z}\rangle\), is prepared by a coherent RF (radio-frequency) excitation pulse at the beginning of each experimental cycle.
The collision rate is controlled to be negligible during each cycle by tuning the bias magnetic field \(B\), providing a sufficiently small scattering length \(a(B)\). Therefore, in such a weakly interacting regime, the energy and the energy state of each particle are conserved, allowing us to simulate the system as a 1D lattice in energy space. Each lattice site "\(i\)" represents the \(i^{th}\) harmonic oscillator state along the axial direction of the sample, with an energy \(E_{i}\!=\!(n_{i}\!+\!1/2)\,h\nu_{x}\) and dimensionless collective spin vector \(\vec{s}\,(E_{i})\equiv\vec{s}_{i}\). Hence this synthetic lattice can be described by a Heisenberg Hamiltonian:
\[\frac{H(a)}{\hbar}=\sum_{i,j\neq i}\!g_{ij}(a)\,\vec{s}_{i}\cdot\vec{s}_{j}+ \sum_{i}\Omega^{\prime}E_{i}\,s_{zi}. \tag{1}\]
The first term represents the effective long-range interactions between energy lattice site \(i\) and \(j\) due to the overlap of probability densities in real space for the energy states \(i\) and \(j\). \(g_{ij}(a)\) is the coupling parameter, scaling linearly with scattering length \(a\). In our system, the average \(g_{ij}(a)\) is \(\bar{g}(a=5.2\,a_{0})\approx 1.7\) Hz \(\times 2\pi\).
The second term arises from the magnetic field variation along the axial direction of the cloud, resulting in an effective spin-dependent harmonic potential. \(\Omega^{\prime}=-\delta\omega_{x}/(\hbar\omega_{x})\), with \(\delta\omega_{x}/2\pi=14.06\) mHz for our trap. For the mean energy \(\bar{E}_{x}\simeq k_{B}T_{F}/4\), \(\Omega^{\prime}\,\bar{E}_{x}\simeq 2.0\) Hz \(\times 2\pi\). The statistical standard deviation of \(\Omega^{\prime}\,E_{x}\) is calculated to be \(\sigma_{\Omega_{z}}\approx 1.4\) Hz and determines the spread in the spin-precession rates.
The ratio of these two terms in Eq. 1 determines the behavior of the system during evolution. For this reason, we define the dimensionless interaction strength \(\zeta\equiv\bar{g}(a)/(\sqrt{2}\sigma_{\Omega_{z}})\). Here, larger \(\zeta\) represents a stronger mean-field interaction, and for small \(\zeta\), the system is dominated by the spread in Zeeman precession.
To predict the dynamics of the system, a mean-field approximation is applied. Collective spin vectors are obtained by neglecting quantum correlations in the Heisenberg equations: \(\dot{\vec{s}}_{i}(t)=\vec{\omega}_{i}(t)\times\vec{s}_{i}(t)\)[12; 19]. The components of collective spin vector for different energy groups \(s_{\sigma}(E_{i})\) are obtained by numerical integration.
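A numerical sketch of this quasi-classical evolution is given below. The effective precession field \(\vec{\omega}_{i}=\Omega^{\prime}E_{i}\hat{z}+\sum_{j\neq i}g_{ij}\vec{s}_{j}\) is an assumed form consistent with Eq. (1) up to numerical factors, and the couplings, energies, and spin normalization are toy values rather than the experimental parameters.

```python
# Illustrative sketch of the mean-field spin-lattice model:
# each energy-lattice spin obeys ds_i/dt = w_i x s_i.
import numpy as np
from scipy.integrate import solve_ivp

n_sites = 40
E = np.linspace(0.0, 1.0, n_sites)                       # toy energies (arb. units)
g = 1.7 * 2 * np.pi * np.ones((n_sites, n_sites))        # toy coupling g_ij (rad/s)
np.fill_diagonal(g, 0.0)
omega_prime = -2.0 * 2 * np.pi                           # toy Zeeman tuning rate (rad/s)

def rhs(t, y):
    s = y.reshape(n_sites, 3)
    w = g @ s                                            # mean-field part of w_i
    w[:, 2] += omega_prime * E                           # site-dependent Zeeman part
    return np.cross(w, s).ravel()

s0 = np.tile([1.0, 0.0, 0.0], n_sites)                   # x-polarized state after (pi/2)_y pulse
sol = solve_ivp(rhs, (0.0, 0.2), s0, max_step=1e-3)
s_final = sol.y[:, -1].reshape(n_sites, 3)
Mperp2 = np.sum(s_final[:, 0])**2 + np.sum(s_final[:, 1])**2
print(Mperp2)                                            # toy transverse magnetization squared
```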
To observe the transverse component of the spin vector, a Ramsey sequence is applied. Starting from an initially \(z\)-polarized state, the first excitation \((\frac{\pi}{2})_{y}\) RF pulse produces an \(x\)-polarized sample. After that, the system is allowed to evolve for a period \(\tau\) at the scattering length \(a\) of interest. Then, a second \((\frac{\pi}{2})_{y}\) RF pulse is applied to collectively rotate the spin vectors about the \(y\)-axis, projecting the \(x\)-component onto the measurement \(z\)-axis, ideally. Immediately after the last RF pulse, two spin states \(|\uparrow_{z}\rangle\) and \(|\downarrow_{z}\rangle\) are imaged. In reality, as discussed below, \(s_{z}(x)=(n_{\uparrow}(x)-n_{\downarrow}(x))/2\equiv s^{meas}(x)\) measures a combination of transverse components of the spin vector in the Bloch resonant frame, \(\tilde{s}_{x}\) and \(\tilde{s}_{y}\), just prior to imaging. Abel inversion is applied to \(s^{meas}(x)\) to obtain the energy-resolved spin density \(s^{meas}(E)\) with a bin width of \(\Delta E=E_{F}/50\)[18; 20]. Note that even though the energy bin size is finite, the system itself is quasi-continuous because of the large atom number and closely spaced energy levels.
During the experimental cycle, magnetic field fluctuations, even at the \(10^{-4}\) G level, cause an imperfectly controlled RF detuning and a subsequent accumulated phase \(\varphi\), changing the relative contribution of the \(x\) and \(y\) components of the spin vectors in the measurement, \(s^{meas}=\cos(\varphi)\tilde{s}_{x}+\sin(\varphi)\tilde{s}_{y}\). With a broad spread \(\varphi\in[0,2\pi]\), a multi-shot average \(\langle s^{meas}\rangle\) tends to vanish. As the \(\varphi\) distribution for each data set is usually irreproducible, the contribution of the \(x\) and \(y\) components in \(\langle s^{meas}\rangle\) cannot be controlled in an efficient and reliable way, even with data selection [18]. This problem is circumvented by analyzing the correlation between measured operators with energies \(E_{i}\) and \(E_{j}\), which has the form [18]:
\[\begin{split}\mathcal{C}_{ij}^{\perp}\equiv\langle s_{i}^{meas}s_{j}^{meas}\rangle&=\frac{1}{2}\langle\tilde{s}_{xi}\tilde{s}_{xj}+\tilde{s}_{yi}\tilde{s}_{yj}\rangle\\ &+\frac{1}{2}\langle\cos(2\varphi)\rangle\langle\tilde{s}_{xi}\tilde{s}_{xj}-\tilde{s}_{yi}\tilde{s}_{yj}\rangle\\ &-\frac{1}{2}\langle\sin(2\varphi)\rangle\langle\tilde{s}_{xi}\tilde{s}_{yj}+\tilde{s}_{yi}\tilde{s}_{xj}\rangle,\end{split}\tag{2}\]
where \(\langle\cdots\rangle\) denotes an average over multiple shots, and \(\tilde{s}_{\sigma i}\) is the \(\sigma\) component of the spin vector in the Bloch frame before the last \((\frac{\pi}{2})_{y}\) pulse. In the data analysis, a data group is selected with a specific phase distribution [18] to enforce \(\langle\cos(2\varphi)\rangle=\langle\sin(2\varphi)\rangle=0\), estimated using the quasi-classical spin model. This method ensures that the correlation obtained by averaging the selected single shots is \(\mathcal{C}_{ij}^{\perp}=\frac{1}{2}\langle\tilde{s}_{xi}\tilde{s}_{xj}+\tilde{s}_{yi}\tilde{s}_{yj}\rangle\), without making assumptions about the \(\varphi\) distribution for the whole data set.
In contrast, both \(\langle\tilde{s}_{zi}\rangle\) and \(\langle\tilde{s}_{zi}\tilde{s}_{zj}\rangle\) can be measured easily without data selection, as this measurement does not require the last \((\frac{\pi}{2})_{y}\) RF pulse and is therefore insensitive to the RF detuning. We have conducted ensemble-averaged \(\tilde{s}_{z}\) measurements and found that \((\langle\tilde{s}_{zi}\tilde{s}_{zj}\rangle-\langle\tilde{s}_{zi}\rangle\langle\tilde{s}_{zj}\rangle)/(N_{i}N_{j}/4)\) has a value of \(\sim 5\times 10^{-3}\), which is comparable to the spin projection noise, indicating that the system is not quantum correlated. In addition, as our previous single-shot measurements showed, this large spin system can be well explained by a quasi-classical model [19]. Therefore, we expect this system to evolve classically, where the classical correlation \(\mathcal{C}_{ij}^{\perp}\) is of interest. By construction, \(\mathcal{C}_{ij}^{\perp}\) also detects quantum correlations when they are present.
Fig. 1 shows the evolution of \(\mathcal{C}_{ij}^{\perp}\) with interaction strengths \(\zeta=1.2\) (\(a=5.19\,a_{0}\)) (top row (a-e)) and \(\zeta=1.8\) (\(a=8.05\,a_{0}\)) (bottom row (f-j)), normalized by the product of atom numbers in the corresponding energy partitions \(i\) and \(j\). We define \(\mathbf{c}_{ij}^{\perp}\equiv\mathcal{C}_{ij}^{\perp}/(N_{i}N_{j}/4)\) for convenience. Therefore, each pixel represents \(\mathbf{c}_{ij}^{\perp}\), the correlation between one pair of particles in energy groups \(E_{i}\) and \(E_{j}\), with maximum and minimum possible values \(\pm\frac{1}{2}\) by construction from Eq. 2. It is observed that the system evolves in a qualitatively different way as the interaction strength increases. At \(\zeta=1.2\), the single particle-pair correlation tends to be localized between multiple specific energy subgroups. At \(\zeta=1.8\), the correlation tends to become uniform across all pairs of energy groups. This qualitatively distinct behavior of the microscopic correlations reveals the source of the transition in macroscopic quantities such as the magnetization.

Figure 1: Correlation function \(\mathbf{c}_{ij}^{\perp}\), ensemble-averaged over 30 shots with a selected \(\varphi\) distribution, at different evolution times with interaction strength \(\zeta=1.2\) (top row (a-e)) and \(\zeta=1.8\) (bottom row (f-j)). Figures in the same row share the same color bar on the right. In this figure, only the lowest 70% of energy bins are adopted in the data analysis, as higher energy groups contain very few particles. \(E_{i}\) and \(E_{j}\) are in units of the effective Fermi energy \(E_{F}\). The \(\mathbf{c}_{ij}^{\perp}\) values shown here and the correlation plots (b-d,f-h) in Fig. 2 are amplified by dividing by an energy-dependent attenuation coefficient \(\Gamma(E_{i}|\sigma_{E_{F}},\alpha_{r})\) arising from the finite energy resolution (\(\lesssim 0.08\sqrt{E_{i}}\)) to restore the amplitudes to their correct values [18].
The system magnetization is related to the ensemble-averaged correlation functions by definition. The square of total transverse magnetization \(\mathcal{M}_{\perp}^{2}=S_{x}^{2}+S_{y}^{2}\) is the double summation of the perpendicular correlation in energy space: \(\frac{1}{2}\mathcal{M}_{\perp}^{2}=\sum_{i,j}\mathcal{C}_{ij}^{\perp}\).
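The relation \(\frac{1}{2}\mathcal{M}_{\perp}^{2}=\sum_{i,j}\mathcal{C}_{ij}^{\perp}\) translates directly into a simple post-processing step on the selected single-shot spin profiles, as sketched below. The array `shots` holding the energy-resolved measured spin densities is an assumed input; the random numbers only stand in for real data.

```python
# Sketch of how the transverse correlation matrix and the squared
# transverse magnetization follow from repeated single-shot measurements.
import numpy as np

def transverse_correlation(shots: np.ndarray) -> np.ndarray:
    """shots: (n_shots, n_energy_bins) of s^meas(E_i) for the phase-selected group."""
    return np.einsum('ki,kj->ij', shots, shots) / shots.shape[0]   # <s_i^meas s_j^meas>

def magnetization_squared(C: np.ndarray) -> float:
    """Return M_perp^2 = 2 * sum_ij C_ij."""
    return 2.0 * C.sum()

rng = np.random.default_rng(0)
shots = rng.normal(0.0, 0.1, size=(30, 35))   # toy stand-in for 30 selected shots, 35 energy bins
C = transverse_correlation(shots)
print(C.shape, magnetization_squared(C))
```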
Fig. 2 shows the time-evolution of \(\mathcal{M}_{\perp}^{2}\) with different interaction strengths. The qualitative change in the behavior of the magnetization is also observed in the microscopic correlation \(\mathbf{c}_{ij}^{\perp}\) between energy subgroups, which are shown as correlation plots in (b-d,f-h). For all pairs of correlation plots, the left one corresponds to \(\tau=80\) ms and the right one corresponds to \(\tau=200\) ms. For the lighter blue data in (a), where the interaction strength is very small \(\zeta=0,\,0.6\), \(\frac{1}{2}\mathcal{M}_{\perp}^{2}(t\rightarrow\infty)=0\). For the darkest blue data in (a), where \(\zeta=1.2\), \(\frac{1}{2}\mathcal{M}_{\perp}^{2}(t\rightarrow\infty)\) asymptotes to a non-zero, small value. For small scattering lengths, correlation figures in (b-d) show that as time evolves, the largest correlations \(|\mathbf{c}_{ij}^{\perp}|\) (either positive or negative) arise between certain localized energy groups, either forming thin stripes or forming islands. Note that, with the absence of mean-field interaction, i.e., at \(0\,a_{0}\), the uniform stripes shown in (b) are the Ramsey fringes in energy space. By analyzing the width of the Ramsey fringes, the Zeeman tuning rate \(\Omega^{\prime}\) can be tested [18]. For stronger interactions shown in (e), where \(\zeta\geq 1.8\), \(\frac{1}{2}\mathcal{M}_{\perp}^{2}(t\rightarrow\infty)\) tends to oscillate relative to a larger static level as \(\zeta\) increases. In addition, the behavior in pair correlation shown in (f-h) is totally different from that for small \(\zeta\) (b-d). With strong interactions, high correlation regions tend to spread over all pairs of energy groups \(i\) and \(j\).
The transition in behavior of \(\frac{1}{2}\mathcal{M}_{\perp}^{2}\), which occurs between Fig. 2(a-d) and (f-h) matches the one observed between Fig. 1 top row (a-e) and bottom row (f-j). From the measured energy-space correlation function \(\mathbf{c}_{ij}^{\perp}\), we conclude that a system with a more localized transverse correlation between multiple specific energy group pairs tends to be demagnetized as time evolves (Fig. 2(a)). In contrast, a system with the transverse correlation spread over most energy group pairs maintains the high initial magnetization (Fig. 2(e)).
Furthermore, even when \(\mathcal{M}_{\perp}^{2}\) has the same value at two different times, it is observed that the corresponding correlation plots can have totally different structures. As shown in Fig. 2(e), \(\mathcal{M}_{\perp}^{2}(80\,\mathrm{ms})=\mathcal{M}_{\perp}^{2}(200\,\mathrm{ms})\) for \(\zeta=1.8\), but the corresponding \(\mathbf{c}_{ij}^{\perp}\) (Fig. 2(f)) shows different features for these two times. Similarly, for \(\zeta=1.2\) in Fig. 2(a), \(\mathcal{M}_{\perp}^{2}(200\,\mathrm{ms})=\mathcal{M}_{\perp}^{2}(280\,\mathrm{ms})\), but Fig. 1(e)(f) show different behaviors of \(\mathbf{c}_{ij}^{\perp}\). Therefore, the observation of energy-resolved correlations provides a new probe to characterize the spin dynamics more deeply than simply measuring macroscopic quantities.

Figure 2: Time-dependent magnetization with different interaction strengths and corresponding \(\mathbf{c}_{ij}^{\perp}\) correlation plots. Blue circles in (a) and (e) are data averaged over multiple shots with the desired \(\varphi\) distribution. Darker blue corresponds to stronger interaction. A detailed description of the data selection and error bar calculation is in Supplement [18] § A.4. Dashed lines are predictions from the quasi-classical spin model. Correlation plots (b-d) and (f-h) show \(\mathbf{c}_{ij}^{\perp}\) at \(\tau=80\) ms (left of each pair) and \(200\) ms (right of each pair). (b) \(\zeta=0\,(a=0\,a_{0})\), (c) \(\zeta=0.6\,(a=2.62\,a_{0})\), (d) \(\zeta=1.2\,(a=5.19\,a_{0})\), (f) \(\zeta=1.8\,(a=8.05\,a_{0})\), (g) \(\zeta=2.3\,(a=10.54\,a_{0})\), (h) \(\zeta=2.9\,(a=13.20\,a_{0})\). As in Fig. 1, only \(E_{i},E_{j}\in[0,0.7]E_{F}\) are shown in these correlation plots.
Fig. 3 shows the emergence of the \(xy\)-plane magnetization versus interaction strength at four evolution times. Blue circles are the measured \(\mathcal{M}_{\perp}^{2}\), obtained with the same sample-selection method described above. Predictions of \(\mathcal{M}_{\perp}^{2}\) (red curves) are obtained using the quasi-classical model. We find that, as the interaction strength increases, \(\mathcal{M}_{\perp}^{2}\) surges, simulating a transition to a ferromagnetic state.
A sharp rise in \(|\mathcal{M}_{\perp}|\) has been observed and interpreted as a transition between dynamical states [16]. The spin vector picture provides a physical illustration of this transition. Recall that, by definition, \(\mathcal{M}_{\perp}^{2}=S_{x}^{2}+S_{y}^{2}\). Thus, the magnetization is related to the dispersion of the spin vectors in the \(xy\)-plane: the more the spins cluster, the larger \(\mathcal{M}_{\perp}^{2}\) becomes. This can be considered a spin-locking effect. Fig. 4 depicts this phenomenon using the quasi-classical spin model. (a1,a2) show the spin vectors with different energies after evolving for 200 ms with a small interaction strength \(\zeta=1.2\). (a2) is the top view of (a1) and clearly shows that spin vectors in different energy partitions are largely spread out over all four quadrants in the \(xy\)-plane. In the microscopic correlation picture, spins with the same or opposite azimuthal angles are strongly correlated, and positive and negative single-pair correlations tend to cancel each other, leaving a weak magnetization after the double summation over all energy partitions, corresponding to the low \(\mathcal{M}_{\perp}^{2}\) value in Fig. 3(d). In contrast, (c1,c2) demonstrate a spin-locked state, with \(\zeta=4.1\) after evolving for 80 ms (green), 140 ms (blue), and 200 ms (red). For all three evolution times, the spin vectors in all energy partitions tend to congregate. In this situation, spins in all energy partitions are strongly and positively correlated, resulting in a highly magnetized state, in agreement with Fig. 3(a-d) for \(\zeta=4.1\). (b1,b2) show an intermediate stage between (c1,c2) and (a1,a2): the spin vectors have not formed a bundle at \(\tau=80\) ms (green), but start showing this trend at \(\tau=140\) ms (blue) and 200 ms (red). Further, as the interaction strength increases, \(s_{zi}\) also tends to cluster, with \(\langle S_{z}^{2}\rangle\) becoming small as \(\langle S_{x}^{2}+S_{y}^{2}\rangle\) increases.
In summary, we have developed energy-space spin correlation measurement as a method for characterizing the spin dynamics of quasi-continuous, weakly interacting quantum gases, which simulate a synthetic lattice of spins pinned in energy space. This method enables a full view of how correlations develop between the extensive subsets of spins in energy space on a microscopic level, associating the evolution of the macroscopic properties with the local correlation behavior. Utilizing this idea, we connect the spread and localization of correlations to the system magnetization and demagnetization by observing the correlation distribution as a function of time and interaction strength. This energy-resolved probe can be exploited in studies of macroscopic out-of-equilibrium dynamics and critical dynamics across quantum phase transitions.

Figure 3: Observing the emergence of spin locking by measuring \(\frac{1}{2}\mathcal{M}_{\perp}^{2}\) for various interaction strengths \(\zeta\) (top axis) and corresponding scattering lengths \(a\) (bottom axis) at (a) 80 ms, (b) 120 ms, (c) 160 ms, and (d) 200 ms. Blue circles are data averaged over multiple shots with the same averaging and error bar calculation as for Fig. 2. Bright red curves are predictions of the quasi-classical spin evolution model, and the pink bands correspond to a 2% standard deviation in the cloud size \(\sigma_{Fx}\).

Figure 4: Modeled spin vectors for different energy partitions (longer segments represent spin vectors with lower energy and vice versa). (a1,a2) are spin vectors with \(\zeta=1.2\) at 200 ms. (b1,b2,c1,c2) are spin vectors at different \(\tau\) with \(\zeta=2.3\) and 4.1, respectively. (a2,b2,c2) are the top views of (a1,b1,c1). Red, blue, and green segments are spins at 200, 140, and 80 ms, respectively.
We thank Ilya Arakelyan for helpful discussions. Primary support for this research is provided by the Air Force Office of Scientific Research (FA9550-22-1-0329). Additional support is provided by the National Science Foundation (PHY-2006234 and PHY-2307107).
\({}^{*}\)Corresponding authors: [email protected], [email protected]
|
2307.16687 | DiffPose: SpatioTemporal Diffusion Model for Video-Based Human Pose
Estimation | Denoising diffusion probabilistic models that were initially proposed for
realistic image generation have recently shown success in various perception
tasks (e.g., object detection and image segmentation) and are increasingly
gaining attention in computer vision. However, extending such models to
multi-frame human pose estimation is non-trivial due to the presence of the
additional temporal dimension in videos. More importantly, learning
representations that focus on keypoint regions is crucial for accurate
localization of human joints. Nevertheless, the adaptation of the
diffusion-based methods remains unclear on how to achieve such objective. In
this paper, we present DiffPose, a novel diffusion architecture that formulates
video-based human pose estimation as a conditional heatmap generation problem.
First, to better leverage temporal information, we propose SpatioTemporal
Representation Learner which aggregates visual evidences across frames and uses
the resulting features in each denoising step as a condition. In addition, we
present a mechanism called Lookup-based MultiScale Feature Interaction that
determines the correlations between local joints and global contexts across
multiple scales. This mechanism generates delicate representations that focus
on keypoint regions. Altogether, by extending diffusion models, we show two
unique characteristics from DiffPose on pose estimation task: (i) the ability
to combine multiple sets of pose estimates to improve prediction accuracy,
particularly for challenging joints, and (ii) the ability to adjust the number
of iterative steps for feature refinement without retraining the model.
DiffPose sets new state-of-the-art results on three benchmarks: PoseTrack2017,
PoseTrack2018, and PoseTrack21. | Runyang Feng, Yixing Gao, Tze Ho Elden Tse, Xueqing Ma, Hyung Jin Chang | 2023-07-31T14:00:23Z | http://arxiv.org/abs/2307.16687v2 | # DiffPose: SpatioTemporal Diffusion Model for Video-Based
###### Abstract
Denoising diffusion probabilistic models that were initially proposed for realistic image generation have recently shown success in various perception tasks (e.g., object detection and image segmentation) and are increasingly gaining attention in computer vision. However, extending such models to multi-frame human pose estimation is non-trivial due to the presence of the additional temporal dimension in videos. More importantly, learning representations that focus on keypoint regions is crucial for accurate localization of human joints. Nevertheless, the adaptation of the diffusion-based methods remains unclear on how to achieve such objective. In this paper, we present DiffPose, a novel diffusion architecture that formulates video-based human pose estimation as a conditional heatmap generation problem. First, to better leverage temporal information, we propose SpatioTemporal Representation Learner which aggregates visual evidences across frames and uses the resulting features in each denoising step as a condition. In addition, we present a mechanism called Lookup-based Multi-Scale Feature Interaction that determines the correlations between local joints and global contexts across multiple scales. This mechanism generates delicate representations that focus on keypoint regions. Altogether, by extending diffusion models, we show two unique characteristics from DiffPose on pose estimation task: (i) the ability to combine multiple sets of pose estimates to improve prediction accuracy, particularly for challenging joints, and (ii) the ability to adjust the number of iterative steps for feature refinement without retraining the model. DiffPose sets new state-of-the-art results on three benchmarks: PoseTrack2017, PoseTrack2018, and PoseTrack21.
## 1 Introduction
Human pose estimation has been extensively studied in computer vision, with the aim of detecting all instances of people in images and localizing anatomical keypoints for each individual [20, 55, 61, 66]. It finds numerous applications ranging from human-computer interaction and augmented reality to behavior analysis and surveillance tracking [34, 38, 54, 62, 63, 64]. Conventional approaches [53, 67, 80] mainly employ the probabilistic graphical model or the pictorial structure model. Fueled by the explosion of deep learning, methods based on _Convolutional Neural Networks_ [8, 37, 38, 61] and _Vision Transformers_ [36, 73, 78] have driven significant progress on this task.
Until recently, denoising diffusion probabilistic models [28, 56], which are a type of generative model, have received much research attention for surpassing other methods such as GANs and achieving state-of-the-art generative results [6, 14]. The superior performance of diffusion models has facilitated their expansion into diverse applications, such as super-resolution [52], inpainting [40], and image deblurring [50]. Following the demonstration of the effectiveness of diffusion models as representation learners for discriminative computer vision problems [6], several contemporary approaches have successfully employed the diffusion model for perception tasks, including object detection [10] and image segmentation [1, 6, 24].

Figure 1: **(a)** Illustration of the original diffusion model, where \(q\) and \(p_{\theta}\) refer to the diffusion and denoising process, respectively. **(b)** In this work, we propose a novel framework named DiffPose which formulates video-based human pose estimation as a generative process of keypoint heatmaps.
Despite the considerable attention that diffusion models have gained following their achievements, their adaptation for video-based human pose estimation has significantly trailed that of other vision tasks, such as segmentation and object detection. We conjecture two primary reasons that underlie this disparity: **(i)** Effectively leveraging temporal information is crucial for video-based human pose estimation [37]. However, despite the success of various diffusion architectures in perception tasks, they are primarily designed for static images and are incapable of capturing temporal dependencies across frames. **(ii)** Real-world images typically contain many task-irrelevant cues, and accurately estimating human poses requires focusing on specific body joint regions [21]. However, it is still an open question how to guide the diffusion model to filter out unnecessary details and attend only to the keypoint regions.
In this paper, we present a novel architecture, termed SpatioTemporal **Diff**usion Model for **Pose** Estimation (DiffPose). By extending the framework of diffusion models, DiffPose presents a new approach to video-based human pose estimation. Specifically, it reformulates this problem as a conditional generative task of keypoint-wise heatmaps, as illustrated in Fig. 1. DiffPose consists of two primary stages: a forward diffusion stage that gradually introduces Gaussian noise to the ground truth heatmaps, and a reverse denoising stage that utilizes a Pose-Decoder to progressively recover the original heatmap from the noisy input.
Unlike the vanilla diffusion model [28], which simply uses U-Net [51] for denoising, we propose two novel designs that enhance the capabilities of the Pose-Decoder. These modifications enable the Pose-Decoder to better _utilize temporal information_ and _focus on joint regions_. **(i)** We design a SpatioTemporal Representation Learner (STRL) which sequentially performs spatial information extraction within each frame and integrates cross-frame knowledge through cascaded Transformers. The resulting features, which contain rich temporal priors, are subsequently utilized as a fixed condition at each denoising step by the Pose-Decoder. **(ii)** In addition, we propose a Lookup-based MultiScale Feature Interaction mechanism (LMSFI), which guides the Pose-Decoder to learn intricate representations for pose prediction by inductively leveraging information from both noisy heatmaps and spatiotemporal features. To be specific, we first construct probabilistic joint fields based on the noisy heatmaps, and perform lookups over spatiotemporal features accordingly to activate keypoint region features. Then, we model fine-grained correlations between the retrieved local joint features and original global contexts over multiple scales to produce the final representations. By conducting feature interaction through LMSFI, we can explicitly reason about the relationships between joints and global contexts. As shown in Figure 4, our proposed method is able to learn representations that converge around keypoint regions consistently.
An important feature of the diffusion-based framework is the separation of model training and evaluation. To provide more context, DiffPose is trained to reverse the forward diffusion process (_i.e._, predict ground truth heatmaps from noise) and, at inference, performs multi-step denoising to generate predictions from randomly sampled noisy heatmaps. Benefiting from such a framework, we demonstrate two distinct properties that appeal to the human pose estimation task. **(i)** As DiffPose can generate multiple plausible pose estimates by sampling random noise, these can be combined to improve the prediction robustness, especially for challenging joints such as wrists and ankles. **(ii)** In contrast to existing methods that adopt a fixed iterative refinement structure [9, 69, 43], DiffPose can adaptively vary the number of denoising steps without retraining the model. Through extensive experiments, we show that DiffPose consistently outperforms existing well-established approaches on three benchmark datasets. Furthermore, each of our proposed design choices is verified through ablation studies.
The key contributions of this work are summarized as follows: (1) To the best of our knowledge, we are the first to investigate video-based human pose estimation through the lens of generative modeling. In particular, we propose DiffPose, the first model that applies a diffusion model to multi-frame human pose estimation. (2) We demonstrate two properties of DiffPose that are effective for pose estimation: the ability to enhance performance by aggregating multiple pose estimates and to perform flexible iterative refinement without model retraining. (3) We show that DiffPose delivers state-of-the-art results on three benchmark datasets, PoseTrack2017, PoseTrack2018, and PoseTrack21.
## 2 Related Work
**Human pose estimation.** Early efforts on human pose estimation focus on static images, starting from building probabilistic graphical structures [33, 26] to model the relations between body joints. With the advancement of deep learning [27, 65] and the availability of large-scale benchmark datasets [31, 2, 15], methods based on various deep architectures (_e.g._, CNNs and Transformers) are currently the dominant solutions [70, 71, 61, 36, 78, 75, 73]. There are two mainstream paradigms: i) regressing the positions of keypoints directly from the image, and ii) estimating probability heatmaps to represent keypoint locations. The heatmap representation has gained more popularity due to the performance benefits derived from faster optimization convergence.
Conversely, various studies [58, 45, 37, 38, 68] have attempted to estimate human poses in videos. [37] merges heatmaps of consecutive frames and computes their residuals to obtain joint-level features for pose estimation. [38] performs implicit motion compensation using deformable
convolutions for better feature aggregation and heatmap prediction. These approaches usually predict a single deterministic pose solution for each frame and lack effective re-calibration, which can lead to localized detection failures, especially for challenging joints. In contrast, our approach, benefiting from probabilistic diffusion models, is able to naturally combine multiple pose solutions to provide more robust estimates.
**Diffusion model.** Diffusion models [28, 56] are a type of deep generative model that utilizes the final state of a Markov chain originating from a standard Gaussian distribution to approximate the distribution of natural images. A neural network is typically trained to reverse this diffusion process for each Markov step. Within this framework, diffusion models have recently demonstrated remarkable results in a wide spectrum of generative tasks, from visual images [52, 4, 24, 44, 18, 42] to natural language [3, 23, 35, 29, 46, 74]. Diffusion models have also proven to be useful in various discriminative computer vision problems [10, 1, 6, 24]. The pioneering work [1] presents a diffusion model conditioned on an input image for image segmentation. [10] proposes DiffusionDet, which formulates object detection as a generative denoising process from noisy boxes to object boxes. [24] further extends [10] to perform instance segmentation. To the best of our knowledge, there have been no previous successful attempts to adapt diffusion models for multi-frame human pose estimation. This paper introduces DiffPose, which explores the potential of diffusion models in video-based human pose estimation and is the first diffusion model to achieve state-of-the-art performance for this task.
## 3 Our Approach
### Preliminaries
**Problem Formulation.** Following the top-down pose estimation approach, we first obtain all human bounding boxes per frame \(I_{t}\) using an off-the-shelf object detector. Each bounding box is then enlarged by \(25\%\) to crop the same individual in consecutive frames \(\mathbf{\mathcal{I}_{t}}=\langle I_{t-\delta},...,I_{t},...,I_{t+\delta}\rangle\) with \(\delta\) being a predefined temporal interval. In this way, we obtain the cropped video segment \(\mathbf{\mathcal{I}_{t}^{i}}=\left\langle I_{t-\delta}^{i},...,I_{t}^{i},...,I_{t+ \delta}^{i}\right\rangle\) for person \(i\). Given an image sequence \(\mathbf{\mathcal{I}_{t}^{i}}\) centered on the key frame \(I_{t}^{i}\), our goal is to estimate the keypoint heatmaps for \(I_{t}^{i}\).
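A minimal sketch of this pre-processing step is given below: each detected box is enlarged by 25% about its center and the same region is cropped from the neighbouring frames to form the input sequence. The function and variable names are illustrative, and frames are assumed to be HxWx3 arrays.

```python
# Sketch of the person-sequence cropping described above.
import numpy as np

def enlarge_box(box, scale=1.25, img_w=1920, img_h=1080):
    """box = (x1, y1, x2, y2); returns the box enlarged about its center."""
    x1, y1, x2, y2 = box
    cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
    w, h = (x2 - x1) * scale, (y2 - y1) * scale
    return (max(0, int(cx - w / 2)), max(0, int(cy - h / 2)),
            min(img_w, int(cx + w / 2)), min(img_h, int(cy + h / 2)))

def crop_sequence(frames, box, t, delta=2):
    """Crop the enlarged box of one person from frames t-delta .. t+delta."""
    x1, y1, x2, y2 = box
    idx = range(max(0, t - delta), min(len(frames), t + delta + 1))
    return [frames[k][y1:y2, x1:x2] for k in idx]

frames = [np.zeros((1080, 1920, 3), dtype=np.uint8) for _ in range(5)]
seq = crop_sequence(frames, enlarge_box((600, 200, 800, 700)), t=2)
print(len(seq), seq[0].shape)   # 5 crops of the same enlarged region
```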
**Diffusion Model.** Inspired by non-equilibrium thermodynamics [59, 60], diffusion models are under the category of latent variable models which aim to reconstruct a task-specific distribution that starts from random noise. These models typically consist of two basic processes: 1) a forward process that gradually adds Gaussian noise to sample data, and 2) a reverse process that learns to invert the forward diffusion. To be specific, the forward diffusion process is defined as:
\[\begin{split}& q\left(\mathbf{x}_{t}|\mathbf{x}_{0}\right):= \mathcal{N}\left(\mathbf{x}_{t};\sqrt{\bar{\alpha_{t}}}\mathbf{x}_{0},\left( 1-\bar{\alpha_{t}}\right)\mathbf{I}\right),\\ &\mathbf{x}_{t}=\sqrt{\bar{\alpha_{t}}}\mathbf{x}_{0}+\sqrt{1- \bar{\alpha_{t}}}\epsilon,\epsilon\sim\mathcal{N}\left(0,1\right),\end{split} \tag{1}\]
where \(\bar{\alpha_{t}}:=\prod_{s=1}^{t}\alpha_{s}=\prod_{s=1}^{t}(1-\beta_{s})\) and \(\beta_{s}\) denotes the noise variance schedule [28]. The operation in Eq. 1 adds noise to the original data sample \(\mathbf{x}_{0}\) and transforms it into a latent noisy sample \(\mathbf{x}_{t}\) at an arbitrary sampling step \(t\in\{0,1,...,T\}\). During training, a neural network \(f_{\theta}(\mathbf{x}_{t},t)\) is trained to perform the denoising task either by predicting \(\mathbf{x}_{0}\) or \(\epsilon\) (we choose the former as done in [10, 11]), with the constraint of \(L_{2}\) loss. This process is expressed as:
\[\mathcal{L}_{\mathbf{x}_{0}}=\left\|f_{\theta}(\mathbf{x}_{t},t)-\mathbf{x}_ {0}\right\|^{2}. \tag{2}\]
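The training objective above reduces to a short routine: sample a step \(t\), corrupt the ground-truth heatmap according to the cumulative schedule, and regress \(\mathbf{x}_{0}\) with an L2 loss. The sketch below assumes a cosine schedule as in [28]; `pose_decoder` stands in for \(f_{\theta}\) and is an assumed callable, not the actual network definition.

```python
# Sketch of the training-time noising and loss (Eqs. (1)-(2)).
import torch

T = 1000
s = 0.008                                          # cosine-schedule offset (assumed, as in [28])
steps = torch.arange(T + 1, dtype=torch.float64)
f = torch.cos(((steps / T) + s) / (1 + s) * torch.pi / 2) ** 2
alpha_bar = (f / f[0])[1:].float()                 # cumulative alpha_bar_t, t = 1..T

def training_step(pose_decoder, heatmap_gt, feat_st):
    """heatmap_gt: (B, K, H, W) ground-truth heatmaps; feat_st: conditioning feature."""
    B = heatmap_gt.shape[0]
    t = torch.randint(0, T, (B,))
    ab = alpha_bar[t].view(B, 1, 1, 1)
    eps = torch.randn_like(heatmap_gt)
    x_t = ab.sqrt() * heatmap_gt + (1 - ab).sqrt() * eps       # forward diffusion, Eq. (1)
    x0_pred = pose_decoder(x_t, feat_st, t)                    # network predicts x_0
    return torch.nn.functional.mse_loss(x0_pred, heatmap_gt)   # L2 loss, Eq. (2)

if __name__ == "__main__":
    dummy = lambda x_t, feat, t: x_t                           # placeholder decoder
    print(training_step(dummy, torch.rand(2, 15, 96, 72), feat_st=None).item())
```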
In inference, the learned denoising (reverse) function \(f_{\theta}\) is applied to a random noise sample \(\mathbf{x}_{T}\) along with a preset updating rule [28, 57], to reconstruct the data sample \(\mathbf{x}_{0}\) in an iterative way \(\mathbf{x}_{T}\rightarrow\mathbf{x}_{T-\Delta}\rightarrow\cdots\rightarrow \mathbf{x}_{0}\).
In this paper, we propose a novel framework that enables the diffusion model to better process dynamic contexts for video-based human pose estimation. Specifically, we present DiffPose which modulates the vanilla diffusion model to _incorporate temporal information_ and _attend to keypoint region cues_, resulting in a paradigm more aligned with multi-frame human pose estimation. Our proposed framework is illustrated in Fig. 2. In our problem setting, the original data sample is the ground truth heatmap \(\mathbf{x}_{0}=\mathbf{H}_{t}^{i}\). This heatmap is generated using a 2D Gaussian centered at the annotated joint location. We train a Pose-Decoder \(f_{\theta}(\mathbf{x}_{t},\mathbb{F}_{t}^{i},t)\) to recover \(\mathbf{x}_{0}\) from the noisy heatmap \(\mathbf{x}_{t}\) by conditioning on the spatiotemporal feature of the input sequence \(\mathbb{F}_{t}^{i}\) which is derived by the SpatioTemporal Representation Learner (STRL).
In the following, we first detail the architecture of STRL (Sec. 3.2) and Pose-Decoder (Sec. 3.3). Then, we explain the training and inference algorithms (Sec. 3.4) as well as providing discussions on the favorable properties of DiffPose for pose estimation in Sec. 3.5.
### Spatiotemporal Representation Learner
Inspired by the success of Vision Transformers [17, 73, 39], we employ cascaded Transformers to capture the spatial-temporal dependencies among video frames. Given the sequence data \(\mathbf{\mathcal{I}_{t}^{i}}=\langle I_{t-\delta}^{i},...,I_{t}^{i},...,I_{t+ \delta}^{i}\rangle\) as input, we first employ a plain Vision Transformer [17, 73] pretrained on ImageNet [13] as the backbone network to extract spatial features \(\left\langle F_{t-\delta}^{i},...,F_{t}^{i},...,F_{t+\delta}^{i}\right\rangle\) for each frame. Subsequently, each frame feature is spatially rearranged and fed into a patch embedding layer, which embeds the feature into tokens \(\left\langle\bar{F}_{t-\delta}^{i},...,\bar{F}_{t}^{i},...,\bar{F}_{t+\delta}^{ i}\right\rangle\). Then, we concatenate all embedded tokens, retain their spatial information through a learnable position embedding \(E_{POS}\), and feed them into
cascaded Transformer encoders where each encoder consists of a Multi-Head Self-Attention (MHSA) layer and a feed-forward neural network (FFN). Finally, the encoded deep features of all frames are aggregated via a Multilayer Perceptron (MLP) to produce the spatiotemporal feature \(\mathbb{F}_{t}^{i}\). The above procedures can be formulated as:
\[\begin{split}\tilde{F}_{t}^{0}&=\operatorname{Concat}\left(\bar{F}_{t-\delta}^{i}+E_{POS}^{t-\delta},\cdots,\bar{F}_{t+\delta}^{i}+E_{POS}^{t+\delta}\right),\\ \tilde{F}_{t}^{\prime l}&=\tilde{F}_{t}^{l-1}+\operatorname{MHSA}(\operatorname{LN}(\tilde{F}_{t}^{l-1})),\\ \tilde{F}_{t}^{l}&=\tilde{F}_{t}^{\prime l}+\operatorname{FFN}(\operatorname{LN}(\tilde{F}_{t}^{\prime l})),\\ &\qquad\qquad\vdots\\ \mathbb{F}_{t}^{i}&=\operatorname{MLP}(\operatorname{LN}(\tilde{F}_{t}^{L})),\end{split}\tag{3}\]
where the superscript \(l\in[1,2,...,L]\) denotes the output of \(l\)-\(th\) Transformer layer and \(\tilde{F}_{t}^{0}\) represents the initial feature. The function \(\operatorname{LN}(\cdot)\) indicates the LayerNorm layer. Note that the spatial (_i.e._, the number of tokens) and channel dimensions within each transformer layer remain constant.
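For illustration, a condensed PyTorch sketch of this spatiotemporal learner is shown below. Dimensions, the frame-aggregation via a mean followed by an MLP, and all module names are simplifying assumptions intended only to mirror the structure of Eq. (3), not the exact implementation.

```python
# Illustrative sketch of the SpatioTemporal Representation Learner (STRL).
import torch
import torch.nn as nn

class STRL(nn.Module):
    def __init__(self, n_frames=5, n_tokens=192, dim=256, n_layers=4):
        super().__init__()
        self.embed = nn.Linear(dim, dim)                          # patch embedding
        self.pos = nn.Parameter(torch.zeros(1, n_frames * n_tokens, dim))  # E_POS
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)   # cascaded MHSA/FFN
        self.mlp = nn.Sequential(nn.LayerNorm(dim), nn.Linear(dim, dim))
        self.n_frames, self.n_tokens = n_frames, n_tokens

    def forward(self, frame_feats):
        # frame_feats: (B, n_frames, n_tokens, dim) backbone tokens per frame
        B = frame_feats.shape[0]
        x = self.embed(frame_feats).flatten(1, 2) + self.pos      # concat tokens + position
        x = self.encoder(x)                                       # spatiotemporal attention
        x = x.view(B, self.n_frames, self.n_tokens, -1).mean(dim=1)  # aggregate frames
        return self.mlp(x)                                        # spatiotemporal feature F_t^i

if __name__ == "__main__":
    print(STRL()(torch.rand(1, 5, 192, 256)).shape)               # (1, 192, 256)
```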
### Pose-Decoder
After obtaining the spatiotemporal feature \(\mathbb{F}_{t}^{i}\), the Pose-Decoder denoises the heatmap \(\mathbf{x}_{t}\) by taking \(\mathbb{F}_{t}^{i}\) together with the sampling step \(t\) as conditions, and output the predicted heatmap \(\hat{\mathbf{x}}_{0}=\hat{\mathbf{H}}_{t}^{i}\). Specifically, we first project the step index \(t\) into an embedding and utilize the embedding to rescale the initial noisy heatmap \(\mathbf{x}_{t}\), attaining the step-adaptive version \(\bar{\mathbf{x}}_{t}\). Then, we model the global correlations between \(\mathbb{F}_{t}^{i}\) and \(\bar{\mathbf{x}}_{t}\) across multiple scales via Transformer or convolutional structures, and obtain multi-scale representations \(\mathcal{F}_{t}^{i}=\{\mathcal{F}_{t}^{i,1},\mathcal{F}_{t}^{i,2},\mathcal{F }_{t}^{i,3}\}\). Finally, these features are integrated and passed to a detection head to predict the pose heatmap \(\hat{\mathbf{H}}_{t}^{i}\).
In order to encourage the representations \(\mathcal{F}_{t}^{i}\) to focus on keypoint regions, we propose a Lookup-based Multi-Scale Feature Interaction mechanism (LMSFI) to inductively model correlations between \(\mathbb{F}_{t}^{i}\) and \(\bar{\mathbf{x}}_{t}\). It consists of two procedures: pairwise size-matched feature generation and lookup-based feature interaction.
**Pairwise size-matched feature generation.** Given \(\mathbb{F}_{t}^{i}\in\mathbb{R}^{C\times H\times W}\) and \(\bar{\mathbf{x}}_{t}\in\mathbb{R}^{c\times 4H\times 4W}\) with different spatial dimensions, we perform upsampling and downsampling separately to construct size-matched feature pairs \(\langle\mathbb{F}_{t}^{i},\bar{\mathbf{x}}_{t}\rangle\). Specifically, we adopt several deconvolution layers to perform \(1\times\), \(2\times\), and \(4\times\) upsampling of resolution over \(\mathbb{F}_{t}^{i}\), and obtain corresponding features \(\{\mathbb{F}_{t}^{i,1},\mathbb{F}_{t}^{i,2},\mathbb{F}_{t}^{i,3}\}\). Similarly, stride convolutions are used to downsample on \(\bar{\mathbf{x}}_{t}\) to produce \(\{\bar{\mathbf{x}}_{t}^{1},\bar{\mathbf{x}}_{t}^{2},\bar{\mathbf{x}}_{t}^{3}\}\) (\(\bar{\mathbf{x}}_{t}=\bar{\mathbf{x}}_{t}^{3}\)). With the above process, we attain multi-scale size-matched feature pairs \(\langle\mathbb{F}_{t}^{i,J},\bar{\mathbf{x}}_{t}^{J}\rangle\). The superscript \(J=\{1,2,3\}\) refers to resolutions of different levels from low to high.
**Lookup-based feature interaction.** Upon constructing multi-scale feature pairs \(\langle\mathbb{F}_{t}^{i,J},\bar{\mathbf{x}}_{t}^{J}\rangle\), we model interactions between the spatiotemporal feature \(\mathbb{F}_{t}^{i,j}\) and the noisy heatmap \(\bar{\mathbf{x}}_{t}^{j}\) at each resolution \(j\) individually to obtain the corresponding feature representation \(\mathcal{F}_{t}^{i,j}\). A naive approach would be to _directly_ concatenate and aggregate \(\mathbb{F}_{t}^{i,j}\) and \(\bar{\mathbf{x}}_{t}^{j}\). In our experiments, we show that the learned keypoint features of this scheme are scattered across large areas (see Fig. 4), resulting in a performance reduction, as shown in Table 4. In practice, heatmaps reveal the likelihood of locations containing joints, whereas the noisy heatmap \(\bar{\mathbf{x}}_{t}^{j}\) is corrupted and can provide only a negligible amount of valid real-valued information [12]. As a result, directly modeling correlations of \(\bar{\mathbf{x}}_{t}^{j}\) and \(\mathbb{F}_{t}^{i,j}\) is extremely challenging. Therefore, we adopt an inductive modeling strategy which first performs lookups over the spatiotemporal feature \(\mathbb{F}_{t}^{i,j}\) according to the heatmap \(\bar{\mathbf{x}}_{t}^{j}\) to retrieve the local joint feature \(\bar{\mathbb{F}}_{t}^{i,j}\), and then models correlations between the local feature \(\bar{\mathbb{F}}_{t}^{i,j}\) and the vanilla global context \(\mathbb{F}_{t}^{i,j}\).

Figure 2: Overall pipeline of the proposed DiffPose framework. The goal is to detect the human pose in the keyframe \(I_{t}^{i}\). Given an input sequence, our SpatioTemporal Representation Learner (STRL) extracts the spatiotemporal feature \(\mathbb{F}_{t}^{i}\). The feature \(\mathbb{F}_{t}^{i}\), the noisy heatmap \(\mathbf{x}_{t}\), and the sampling step \(t\) are then passed to the Pose-Decoder, which performs lookup-based multi-scale feature interaction to obtain representations \(\mathcal{F}_{t}^{i}=\{\mathcal{F}_{t}^{i,1},\mathcal{F}_{t}^{i,2},\mathcal{F}_{t}^{i,3}\}\). Finally, these features are aggregated to attain the final pose estimation \(\hat{\mathbf{H}}_{t}^{i}\) (_i.e._, \(\mathbf{x}_{0}\)).
More specifically, considering that the computational complexity of self-attention increases quadratically with input resolution, we adopt a composite structure that uses Transformers and convolutions to capture feature interactions at low and high resolutions, respectively. **(i)** For the low resolution feature pair \(\langle\mathbb{F}_{t}^{i,1},\bar{\mathbf{x}}_{t}^{1}\rangle\), the noisy heatmap \(\bar{\mathbf{x}}_{t}^{1}\) is first embedded to feature tokens, and a Transformer encoder is leveraged to perform self-refinement to yield \(\bar{\mathbf{x}}_{t}^{1}\). Then, we take the maximum activations along the depth dimension over \(\bar{\mathbf{x}}_{t}^{1}\) to squeeze the global channel information into a single-channel descriptor, followed by a sigmoid function to obtain an attention mask \(A^{1}\) that indicates possible keypoint fields. Then, the mask \(A^{1}\) is used to retrieve corresponding spatiotemporal features \(\bar{\mathbb{F}}_{t}^{i,1}\). Finally, \(\bar{\mathbb{F}}_{t}^{i,1}\) and \(\mathbb{F}_{t}^{i,1}\) are concatenated and processed by cascaded Transformers, followed by upsampling of resolution to output feature \(\mathcal{F}_{t}^{i,1}\). The above process can be described as:
\[\bar{\mathbf{x}}_{t}^{1}=\mathrm{SeRef}(\bar{\mathbf{x}}_{t}^{1}), \quad A^{1}=\mathrm{Sigmoid}(\mathrm{S}(\bar{\mathbf{x}}_{t}^{1})), \tag{4}\] \[\bar{\mathbb{F}}_{t}^{i,1}=A^{1}\odot\mathbb{F}_{t}^{i,1}, \quad\mathcal{F}_{t}^{i,1}=\mathrm{Up}(\mathrm{Trans}(\bar{\mathbb{F}}_{t}^{ i,1}\oplus\mathbb{F}_{t}^{i,1})),\]
where \(\mathrm{SeRef}(\cdot)\), \(\mathrm{S}(\cdot)\), \(\odot\), \(\oplus\), and \(\mathrm{Up}(\cdot)\) denote the operations of self-refinement, squeezing, spatial-wise multiplication, concatenation, and upsampling, respectively. **(ii)** For high-resolution feature pairs \(\langle\mathbb{F}_{t}^{i,j},\bar{\mathbf{x}}_{t}^{j}\rangle\) with \(j=2,3\), an analogical procedure is executed using convolutions:
\[\bar{\mathbf{x}}_{t}^{j}=\mathrm{Conv}(\bar{\mathbf{x}}_{t}^{j}), \quad A^{j}=\mathrm{Sigmoid}(\mathrm{S}(\bar{\mathbf{x}}_{t}^{j})), \tag{5}\] \[\bar{\mathbb{F}}_{t}^{i,j}=A^{j}\odot\mathbb{F}_{t}^{i,j}, \quad\mathcal{F}_{t}^{i,j}=\mathrm{Up}(\mathrm{Conv}(\bar{\mathbb{F}}_{t }^{i,j}\oplus\mathbb{F}_{t}^{i,j})).\]
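A minimal sketch of one such high-resolution convolutional branch is given below: the noisy heatmap is self-refined, squeezed to a single channel by a max, turned into a sigmoid attention mask, used to retrieve keypoint-region features, and fused with the global context. Channel sizes and layer choices are illustrative assumptions that mirror Eq. (5) rather than the exact architecture.

```python
# Illustrative sketch of the lookup-based feature interaction (high-resolution branch).
import torch
import torch.nn as nn

class LookupInteraction(nn.Module):
    def __init__(self, feat_ch=256, hm_ch=15):
        super().__init__()
        self.refine = nn.Conv2d(hm_ch, hm_ch, 3, padding=1)       # self-refinement Conv(x_t)
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * feat_ch, feat_ch, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False))

    def forward(self, feat, noisy_hm):
        x = self.refine(noisy_hm)                                 # refined heatmap
        mask = torch.sigmoid(x.max(dim=1, keepdim=True).values)   # attention mask A^j
        local = mask * feat                                       # retrieved joint-region features
        return self.fuse(torch.cat([local, feat], dim=1))         # fused representation F_t^{i,j}

if __name__ == "__main__":
    m = LookupInteraction()
    print(m(torch.rand(1, 256, 64, 48), torch.rand(1, 15, 64, 48)).shape)
```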
**Heatmap generation.** Ultimately, we integrate feature representations across all scales \(\{\mathcal{F}_{t}^{i,1},\mathcal{F}_{t}^{i,2},\mathcal{F}_{t}^{i,3}\}\) via element-wise addition, and employ a detection head (_i.e._, a \(3\times 3\) convolution) to yield the predicted heatmap \(\hat{\mathbf{H}}_{t}^{i}\). By inductively modeling multi-scale feature interactions, our Pose-Decoder is able to reason about the fine-grained relations of keypoints and global contexts, thereby producing more tailored representations that attend to joint areas.
### Overall Training and Inference Algorithms
**Training.** We perform the forward diffusion process that corrupts ground-truth heatmaps into noisy heatmaps, and train the Pose-Decoder to reverse this process by denoising the heatmaps. The overall training procedure of our DiffPose is provided in Algorithm 1 in the Appendix. Specifically, we sample Gaussian noise according to \(\alpha_{t}\) in Eq. 1 and add it to the ground-truth heatmaps to obtain the noisy samples. The parameter \(\alpha_{t}\) at each sampling step \(t\) is predefined by a monotonically decreasing cosine scheme, as adopted in [28]. We employ the standard pose estimation loss (mean squared error) to supervise the model training:
\[\mathcal{L}=\left\|\mathbf{H}_{t}^{i}-\hat{\mathbf{H}}_{t}^{i}\right\|_{2}^{2}, \tag{6}\]
where \(\mathbf{H}_{t}^{i}\) and \(\hat{\mathbf{H}}_{t}^{i}\) denote the ground truth and predicted pose heatmaps, respectively.
**Inference.** The proposed DiffPose performs denoising on noisy heatmaps sampled from a Gaussian distribution, progressively refining its predictions over multiple sampling steps. At each sampling step, the Pose-Decoder takes the random noisy heatmaps or the heatmaps predicted at the previous sampling step as input, and outputs the estimated heatmaps of the current step. Then, we adopt DDIM [57] to update the heatmaps for the next step. The detailed inference procedure is provided in Algorithm 2.
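The sampling loop can be sketched as a deterministic DDIM-style update built on the \(\mathbf{x}_{0}\) prediction of the Pose-Decoder. `pose_decoder` and `alpha_bar` are assumed to be defined as in the training sketch above; the four sub-sampled steps mirror the setting used in this work, and this is an illustrative update rule rather than the exact Algorithm 2.

```python
# Sketch of iterative DDIM-style inference from random noisy heatmaps.
import torch

@torch.no_grad()
def ddim_sample(pose_decoder, feat_st, alpha_bar, shape, n_steps=4, T=1000):
    ts = torch.linspace(T - 1, 0, n_steps + 1).long()      # sub-sampled step schedule
    x = torch.randn(shape)                                 # x_T ~ N(0, I)
    x0 = x
    for t, t_prev in zip(ts[:-1], ts[1:]):
        ab_t, ab_prev = alpha_bar[t], alpha_bar[t_prev]
        t_batch = torch.full((shape[0],), int(t))
        x0 = pose_decoder(x, feat_st, t_batch)             # predict x_0 at this step
        eps = (x - ab_t.sqrt() * x0) / (1 - ab_t).sqrt()   # implied noise
        x = ab_prev.sqrt() * x0 + (1 - ab_prev).sqrt() * eps   # DDIM update (eta = 0)
    return x0                                              # final heatmap estimate
```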
### Discussion
Building on a diffusion-based architecture, our proposed framework DiffPose is able to decouple the training and testing stages, which enables a more adaptable inference process. Extending this concept, we investigate further and demonstrate the unique benefits of DiffPose for pose estimation, specifically in the areas of _flexible pose ensemble_ and _flexible iterative refinement_.
**Flexible pose ensemble.** In common practices, one usually performs inference starting with a single initial sample. However, the diffusion model is intrinsically probabilistic [1] and can generate diverse outputs for different noise inputs. Correspondingly, taking different noisy heatmaps as input, the DiffPose can yield different plausible pose predictions that possess respective keypoint recognition preferences. Ensembling these complementary pose solutions can enhance the robustness and stability of model predictions especially for challenging joints. To exploit this phenomenon, we initialize \(N\) groups of noisy heatmaps for inference and subsequently average their predictions. Experimental results show that the complementary pose ensemble brings significant performance improvement (see Table 6).
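In practice, the ensemble amounts to running the same sampler from several independent noise initializations and averaging the resulting heatmaps before decoding joint locations. The sketch below builds on the `ddim_sample` sketch above; the names remain illustrative assumptions.

```python
# Sketch of the flexible pose ensemble over N noise initializations.
import torch

@torch.no_grad()
def ensemble_pose(pose_decoder, feat_st, alpha_bar, shape, n_init=10):
    heatmaps = torch.stack([ddim_sample(pose_decoder, feat_st, alpha_bar, shape)
                            for _ in range(n_init)])        # N plausible pose solutions
    avg = heatmaps.mean(dim=0)                              # (B, K, H, W) averaged heatmaps
    B, K, H, W = avg.shape
    flat_idx = avg.view(B, K, -1).argmax(dim=-1)
    rows = torch.div(flat_idx, W, rounding_mode='floor')
    return torch.stack([rows, flat_idx % W], dim=-1)        # (B, K, 2) joint coordinates
```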
**Flexible iterative refinement.** After training the model, the DiffPose performs multi-step refinement (sampling) progressively to yield the final pose prediction. In practice, the number of sampling steps can be adjusted flexibly without retraining the model, which is preferable to the prior approaches that adopt a fixed structure of iterative refinement [9, 69, 41, 43]. By increasing iterative sampling steps,
the resulting representations would be more delicate, which fosters accurate pose estimation (Fig. 4).
## 4 Experiments
### Experimental Settings
**Datasets.** We benchmark the proposed DiffPose on three widely used benchmark datasets for video-based human pose estimation: PoseTrack2017 [31], PoseTrack2018 [2], and PoseTrack21 [15]. These datasets contain video sequences of challenging scenarios in which crowded people perform rapid movements. Specifically, **PoseTrack2017** includes \(250\) video clips for training and \(50\) videos for validation (split according to the official protocol), with a total of \(80,144\) pose annotations. **PoseTrack2018** considerably increases the number of clips, containing \(593\) videos for training, \(170\) videos for validation, and a total of \(153,615\) pose annotations. Both datasets annotate \(15\) keypoints, with an additional label for joint visibility. The training videos are densely annotated in the center \(30\) frames, and validation videos are additionally labeled every four frames. **PoseTrack21** further enriches and refines PoseTrack2018, especially for annotations of small persons and persons in crowds, and includes \(177,164\) human pose annotations.
**Evaluation metric.** The performance of the proposed method is evaluated with the widely adopted [68, 61, 8, 37] human pose estimation metric, namely average precision (**AP**). We compute the AP for each joint and then average over all joints to obtain the final performance (**mAP**).
**Implementation details.** Our DiffPose is implemented with PyTorch. The input image size is fixed to \(256\times 192\). We incorporate data augmentation including random rotation \([-45^{\circ},45^{\circ}]\), random scale \([0.65,1.35]\), truncation (half body), and flipping during training. The time interval \(\delta\) is set to \(2\). We set the total number of diffusion steps to \(T=1000\). We adopt the AdamW [49] optimizer with a base learning rate of \(5e-4\) (decayed to \(5e-5\) and \(5e-6\) at the \(20^{th}\) and \(40^{th}\) epochs, respectively). We train the model using \(4\) TITAN RTX GPUs. The whole training process terminates within \(60\) epochs. During inference, we initialize \(N=10\) groups of noise and set the iterative denoising steps to \(4\).
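For concreteness, the stated optimizer and learning-rate schedule correspond to a standard PyTorch setup; the sketch below uses a stand-in model and omits the data pipeline and loss, so it only illustrates the schedule (base lr \(5e-4\), 10x decays at epochs 20 and 40, 60 epochs in total).

```python
# Minimal sketch of the stated training schedule; `model` is a placeholder module
# and the per-epoch training loop (augmentation, loss, backward) is omitted.
import torch
import torch.nn as nn

model = nn.Conv2d(3, 15, kernel_size=3, padding=1)    # stand-in network
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-4)
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[20, 40], gamma=0.1)         # 5e-4 -> 5e-5 -> 5e-6

for epoch in range(60):
    # ... one epoch of training would go here ...
    optimizer.step()        # would normally follow loss.backward()
    scheduler.step()
```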
### Comparison with State-of-the-art Approaches
**Results on the PoseTrack2017 dataset.** We first benchmark our model on the PoseTrack2017 dataset. A total of \(17\) methods are compared, and their performances on the validation set are summarized in Table 1. We can observe that our DiffPose delivers state-of-the-art pose estimation performance compared to existing well-established approaches, by adopting a generative paradigm for the first time. DiffPose attains the final performance of \(86.4\) mAP, and provides a \(1.6\) mAP gain over the previous best-performing method FAMI-Pose [38]. The performance boost for challenging joints (_e.g._, wrist, ankle) is also encouraging: we achieve an mAP of \(83.5\) (\(\uparrow 3.5\)) for wrists and an mAP of \(80.2\) (\(\uparrow 3.2\)) for ankles. Such consistent performance improvements suggest the great potential of diffusion models in pose estimation. Another observation is that pose estimation methods that integrate temporal information (such as DetTrack and FAMI-Pose) indeed surpass approaches that use only the single keyframe. This corroborates the importance of our design that injects spatiotemporal features into the DiffPose model. Furthermore, we show example visualization results in Fig. 3, which are indicative of the robustness of our method in tricky scenes.
**Results on the PoseTrack2018 dataset.** We further evaluate the proposed DiffPose on the PoseTrack2018 dataset, and report the detailed results on the validation set in Table 2. As shown in this table, our DiffPose once again outperforms all other approaches and delivers the best results. We obtain the final performance of \(83.0\) mAP, with an mAP of \(84.3\), \(81.5\), \(82.9\), and \(77.6\) for the elbow, wrist, knee, and ankle joints.
\begin{table}
\begin{tabular}{l|c c c c c c|c|c} \hline Method & Head & Shoulder & Elow & Wrist & Hip & Knee & Ankle & Mean \\ \hline PoseTrack21 [22] & 67.5 & 70.2 & 62.0 & 51.7 & 60.7 & 38.7 & 49.8 & 60.6 \\ PoseFlow [72] & 66.7 & 73.3 & 68.3 & 61.1 & 67.5 & 67.0 & 61.3 & 66.5 \\ JointFlow [16] & - & - & - & - & - & - & - & 60.3 \\ FastFlow [79] & 80.0 & 80.3 & 69.5 & 59.1 & 71.4 & 67.5 & 59.4 & 70.3 \\ TML+[50] & - & - & - & - & - & - & - & 71.5 \\ Simple (R-50) [71] & 79.1 & 80.5 & 75.5 & 66.0 & 70.8 & 70.0 & 61.7 & 72.4 \\ Simple (R-15) [71] & 81.7 & 83.4 & 80.0 & 72.4 & 75.3 & 74.8 & 67.1 & 76.7 \\ STFImdbding [32] & 83.8 & 81.6 & 77.1 & 70.0 & 77.4 & 74.5 & 70.8 & 77.0 \\ HRNet [61] & 82.1 & 83.6 & 80.4 & 73.3 & 75.5 & 75.3 & 68.5 & 77.3 \\ MDPS [25] & 85.2 & 85.5 & 83.9 & 77.5 & 79.0 & 77.0 & 71.4 & 80.7 \\ CorrTrack [48] & 86.1 & 87.0 & 83.4 & 76.4 & 77.3 & 79.2 & 72.3 & 80.8 \\ Dynamic-GAN (70) & 86.4 & 88.4 & 82.0 & 74.5 & 79.1 & 78.3 & 73.1 & 81.1 \\ PoseWurper [8] & 81.4 & 88.3 & 83.9 & 78.0 & 82.4 & 80.5 & 73.6 & 81.2 \\ DCPo [57] & 88.0 & 88.7 & 81.4 & 78.0 & 82.0 & 84.1 & 74.2 & 82.8 \\ DeTrack [68] & 89.4 & 89.7 & 85.5 & 79.5 & 82.4 & 80.8 & 76.4 & 83.8 \\ FAMI Pose [38] & **89.6** & 90.1 & 86.3 & 80.0 & 84.6 & 83.4 & 77.0 & 84.8 \\ \hline
**DiffPose (Ours)** & 88.0 & **91.2** & **87.4** & **83.5** & **85.5** & **87.2** & **80.2** & **86.4** \\ \hline \end{tabular}
\end{table}
Table 1: Quantitative results on the **PoseTrack2017** validation set.
\begin{table}
\begin{tabular}{l|c c c c c c|c} \hline Method & Head & Shoulder & Elow & Wrist & Hip & Knee & Ankle & Mean \\ \hline STAF [47] & - & - & - & 64.7 & - & - & 62.0 & 70.4 \\ AlphaPose [19] & 63.9 & 78.7 & 77.4 & 71.0 & 73.7 & 73.0 & 69.7 & 71.9 \\ TML+[50] & - & - & - & - & - & 74.6 \\ MDPN [25] & 75.4 & 81.2 & 79.0 & 74.1 & 72.4 & 73.0 & 69.9 & 75.0 \\ PGPT [5] & - & - & - & 72.3 & - & - & 72.2 & 76.8 \\ Dynamic-GAN (70) & 80.6 & 84.5 & 80.6 & 74.4 & 75.0 & 76.7 & 71.8 & 77.9 \\ PoseWurper [8] & 79.9 & 86.3 & 82.4 & 77.5 & 79.8 & 78.8 & 73.2 & 79.7 \\ PT-CNN+[77] & 82.4 & 88.8 & 86.2 & 79.4 & 72.0 & 80.6 & 76.2 & 80.9 \\ DCPo [57] & 84.0 & 86.6 & 82.7 & 78.0 & 80.4 & 79.3 & 73.8 & 80.9 \\ DeTrack [68] & 84.9 & 87.4 & 84.8 & 79.2 & 77.6 & 77.9 & 81.5 \\ FAMI-Pose [38] & **85.5** & **87.7** & 84.2 & 79.2 & **81.4** & 81.1 & 74.9 & 82.2 \\ \hline
**DiffPose (Ours)** & 85.0 & **87.7** & **84.3** & **81.5** & **81.4** & **82.9** & **77.6** & **83.0** \\ \hline \end{tabular}
\end{table}
Table 2: Quantitative results on the **PoseTrack2018** validation set.
\begin{table}
\begin{tabular}{l|c c c c c c|c} \hline Method & Head & Shoulder & Elow & Wrist & Hip & Knee & Ankle & Mean \\ \hline Track2017 [31] & PoseTrack2018 [2] & - & - & - & - & - & - & 71.4 \\ ConTrack [38, 15] & - & - & - & - & - & - & 72.3 \\ ConfTrack [38, 15] & - & - & - & - & - & - & - & 72.7 \\ Track2018 [39] & - & - & - & - & - & - & - & 75.6 \\ DCPo [57] & 83.2 & 84.7 & 82.3 & 78.1 & 80.3 & 79.2 & 78.5 & 80.5 \\ FAMI-Pose [38] & 83.3 & 85.4 & 82.9 & 78.6 & 81.3
**Results on the PoseTrack21 dataset.** Performance comparisons of our model and previous state-of-the-art methods on the PoseTrack21 dataset are provided in Table 3. We observe that existing method FAMI-Pose [38] has already achieved an impressive performance of \(81.2\) mAP. In contrast, our DiffPose is able to achieve \(82.9\) (\(\uparrow 1.7\)) mAP. We also obtain an mAP of \(80.8\) for the wrist joint and \(80.0\) for the ankle joint.
### Ablation Study
We perform ablation experiments to investigate the contribution of each component in our DiffPose, including SpatioTemporal Representation Learner (STRL) as well as the Lookup-based MultiScale Feature Interaction mechanism (LMSFI). We also examine the efficacy of various design choices in LMSFI. Finally, we study the influence of modifying the number of initial noise heatmaps and iterative steps (during inference) on the final performance.
**Study on components of DiffPose.** We empirically evaluate the effectiveness of each proposed component, and report the results in Table 4. We first construct a simple baseline namely Direct-single, which takes the single keyframe as condition and directly concatenates and aggregates noisy heatmap with image features to model their interactions. This straightforward scheme produces a severely degraded pose estimation performance of \(52.5\) mAP. This is in line with our intuitions, _i.e_., the noisy heatmap is corrupted and usually contains distracting information, which incurs inherent difficulties in directly learning the correlations between noisy heatmaps and image features. **(a)** For this setting, we introduce the proposed LMSFI into the baseline Direct-single. Remarkably, the LMSFI inductively models feature interactions and improves performance from an mAP of \(52.5\) to \(82.4\). This significant performance boost (\(\uparrow 29.9\) mAP) corroborates the importance of our LMSFI in guiding the model to focus on specific joint regions. **(b)** The final setting further incorporates spatiotemporal cues as the sampling condition and corresponds to our full DiffPose. The performance improvement of \(4.0\) mAP demonstrates the effectiveness of our DiffPose in introducing temporal information to facilitate video-based pose estimation.
**Study on Lookup-based MultiScale Feature Interaction.** We further explore the influence of various designs within LMSFI, and tabulate the results in Table 5. We model feature interactions in a single low resolution to form the baseline Low-resolution. **(a)** We introduce multi-scale fusion to the baseline method (_i.e_., the complete DiffPose) and produce the final performance of \(86.4\) (\(\uparrow 2.0\)) mAP. This significant performance improvement upon incorporation of the multi-scale fusion highlights its effectiveness in learning informative representations for better performance. **(b)** We also examine the impact of fusing multi-scale features by concatenation and aggregation. The result in mAP (\(86.2\)) changes marginally.
**Study on initial noises.** As discussed in Sec. 3.5, we propose a pose ensemble strategy that randomly initializes \(N\) groups of Gaussian noises during inference and averages their predictions. Table 6 shows the effects of adopting different \(N\), where \(N\) is set to \(1\), \(5\), and \(10\). The quantitative results in mAP reflect a gradual performance improvement with increasing initial noises, from \(80.4\to 84.0\to 86.4\). This phenomenon can be attributed to the probabilistic nature of the diffusion model, whereby DiffPose is able to forecast diverse plausible poses from different noises. Ensembling such pose solutions enhances the robustness of model predictions and significantly boosts the pose estimation performance. Another observation is that the improvement in mAP with increasing \(N\) mainly stems from
\begin{table}
\begin{tabular}{c|c c|c} \hline \hline Method & Spatiotemporal. (STRL) & Lookup. (LMSFI) & Mean \\ \hline Direct-single & & & \(52.5\) \\ (a) & & ✓ & \(82.4\) \\ (b) & ✓ & ✓ & \(\mathbf{86.4}\) \\ \hline \hline \end{tabular}
\end{table}
Table 4: Ablation of different components in DiffPose.
Figure 3: Visual results of our DiffPose on benchmarks. Challenging scenarios including fast motion and mutual occlusion are involved.
\begin{table}
\begin{tabular}{c|c c|c} \hline \hline Method & Multi-scale feature & Aggregation & Mean \\ \hline Low-resolution & & & \(84.4\) \\ (a) & ✓ & ✓ & \(\mathbf{86.4}\) \\ (b) & ✓ & ✓ \(\star\) & \(86.2\) \\ \hline \hline \end{tabular}
\end{table}
Table 5: Ablation of various designs in LMSFI. \(\star\) denotes fusing multi-scale features with concatenation and aggregation.
challenging joints such as wrists (\(\uparrow 8.9\) mAP) and ankles (\(\uparrow 6.1\) mAP), and this fact still remains valid when compared to FAMI-Pose [38]. This suggests that the proposed pose ensemble strategy derived from the diffusion-based architecture can potentially facilitate the pose detection in intractable scenes (_e.g._, occlusions, blur).
**Study on denoising steps.** As discussed in Sec. 3.5, DiffPose can adopt an arbitrary number of iterative sampling steps. To investigate how the number of iterative steps affects the final performance, we experiment with \(Steps\in\{1,2,4\}\) and report the results in Table 7. It is clear that more iteration steps result in better performance. This is in accordance with our expectations, _i.e._, the captured features are progressively refined to focus on keypoint regions over multiple iterations, leading to better results.
### Qualitative Analyses on DiffPose
**Representation visualization.** In addition to the quantitative results, we also provide qualitative analyses to better understand the mechanism behind DiffPose. Fig. 4 displays the intermediate activations of the Direct-multi baseline (incorporating STRL into Direct-single) as well as our DiffPose. We observe that DiffPose produces compact representations (b) and (c) that attend to local keypoint regions, while the features derived from the baseline (a) spread across salient areas. This provides empirical evidence that our LMSFI is effective in learning tailored representations for pose estimation. Moreover, the features after multi-step refinement are more attentive and encompass less task-irrelevant information.
**Visual comparisons.** We further examine the ability of our model to deal with challenging scenarios such as mutual occlusion and fast motion. We depict in Fig. 5 the side-by-side comparisons of a) our DiffPose against the state-of-the-art methods b) FAMI-Pose [38] and c) HRNet [61]. It is observed that our DiffPose consistently produces more robust pose predictions for various challenging scenes. HRNet [61] is designed for static images and does not incorporate temporal cues, leading to suboptimal results in degraded frames. On the other hand, FAMI-Pose [38] adopts a deterministic pose estimation paradigm, yielding a single pose solution for each person. Through the principled design of the model architecture (_i.e._, STRL, LMSFI) as well as the flexible training and testing pipeline of the diffusion model, our DiffPose is more adept at handling such tricky cases.
## 5 Conclusion and Future Works
In this paper, we explore the video-based human pose estimation task from the perspective of generative modeling. We present a novel framework termed DiffPose which treats multi-frame human pose estimation as a conditional generative process of keypoint heatmaps. We design a SpatioTemporal Representation Learner (STRL) to integrate temporal clues into the diffusion model, as well as a Lookup-based MultiScale Feature Interaction mechanism (LMSFI) for inducing the model to attend to keypoint regions. Furthermore, we show two attractive properties of DiffPose for pose estimation including flexible pose ensemble and iterative refinement, which enable enhanced performance without retraining the model. Empirical evaluations on three standard benchmark datasets, PoseTrack2017, PoseTrack2018, and PoseTrack21 demonstrate that DiffPose achieves state-of-the-art performance. Future works include applying DiffPose to other vision tasks such as 3D human pose estimation and pose tracking, and refining the pipeline for accelerated inference.
\begin{table}
\begin{tabular}{l|c c c c c c c|c} \hline Method & Head & Shoulder & Elbow & Wrist & Hip & Knee & Ankle & Mean \\ \hline FAMI-Pose [38] & 89.6 & 90.1 & 86.3 & 80.0 & 84.6 & 83.4 & 77.0 & 84.8 \\ \hline \(N=1\) & 85.4 & 87.0 & 80.1 & 74.6 & 78.1 & 82.6 & 74.1 & 80.4 \\ \(N=5\) & 87.4 & 88.8 & 83.7 & 79.6 & 83.7 & 84.8 & 78.1 & 84.0 \\ \(N=10\) (DiffPose) & 88.0 & 91.2 & 87.4 & 83.5 & 85.5 & 87.2 & 80.2 & **86.4** \\ \hline \end{tabular}
\end{table}
Table 6: Ablation of modifying the number of initial noises \(N\).
Figure 4: Visualizations of intermediate activations of the Direct-multi baseline (a) and our DiffPose at different denoising steps (b) \(Steps=1\) and (c) \(Steps=4\).
Figure 5: Visual comparisons of the pose estimation results of our DiffPose (a), FAMI-Pose (b), and HRNet-W48 (c) on the challenging cases from PoseTrack dataset. Inaccurate detections are highlighted by the red circles.
## 6 Acknowledgements
This work is supported in part by the National Natural Science Foundation of China under grant No. 62203184 and the International Cooperation Project under grant No. 20220402009GH. This work is also supported in part by the MSIT (Ministry of Science and ICT), Korea, under the ITRC (Information Technology Research Center) support program (IITP-2023-2020-0-01789), supervised by the IITP (Institute for Information & Communications Technology Planning & Evaluation).
|
2309.05425 | On optimal recovering high order partial derivatives of bivariate
functions | The problem of recovering partial derivatives of high orders of bivariate
functions with finite smoothness is studied. Based on the truncation method, a
numerical differentiation algorithm was constructed, which is optimal by the
order, both in the sense of accuracy and in the sense of the amount of Galerkin
information involved. Numerical demonstrations are provided to illustrate that
the proposed method can be implemented successfully. | Y. V. Semenova, S. G. Solodky | 2023-09-11T12:51:59Z | http://arxiv.org/abs/2309.05425v1 | # On optimal recovering high order partial derivatives of bivariate functions
###### Abstract
The problem of recovering partial derivatives of high orders of bivariate functions with finite smoothness is studied. Based on the truncation method, a numerical differentiation algorithm was constructed, which is optimal by the order both in the sense of accuracy and in the sense of the amount of Galerkin information involved. Numerical demonstrations are provided to illustrate that the proposed method can be implemented successfully.
Keywords.Numerical differentiation, Legendre polynomials, truncation method, minimal radius of Galerkin information
## 1 Introduction
The present work is devoted to the study and improvement of the efficiency of numerical differentiation methods. As is known, the study of numerical differentiation from the standpoint of the theory of ill-posed problems originates from the work [12], and by now there are many different methods for the stable recovery of derivatives from perturbed data (see, for example, [24], [32], [4], [6], [1], [8], [9], [10], [11], [7], [26]). With respect to how the stability of the approximation is provided, all the mentioned papers fall into two directions.
The standard approach to ensuring the stability of a numerical differentiation problem is to apply the classical Tikhonov regularization methods with an appropriate selection of the regularization parameter (see, in particular, [12], [32], [2], [11]). The main efforts of researchers in this direction are focused on determining the optimal value of the regularization and discretization parameters, which is often a non-trivial task.
However, in some cases, there is an alternative approach to achieving stability. This approach is called self-regularization and consists in choosing the appropriate discretization level depending on the noise level of the input data. Here, the discretization level acts as a regularization parameter, due to which stability is ensured. Examples of using self-regularization to solve some classes of ill-posed problems can be found in [31], [27], [5]. As for the numerical differentiation problem, the idea of using self-regularization to develop stable algorithms was earlier proposed in [3] and [25]. In this paper, we continue to study the approximation properties of self-regularization and focus on recovering the partial derivatives of bivariate functions by finite Fourier sums (the spectral truncation method), providing a stable approximation with an appropriate choice of the summation limit. The method was first applied to the problem of numerical differentiation in [8]. Subsequently, the effectiveness of this approach was confirmed by the results of [19], [21], [26]. Continuing the series of studies devoted to the spectral truncation method, in this paper a modification of the spectral truncation method for the approximation of partial derivatives is presented, in combination with an a priori choice of the discretization level depending on the noise. At the same time, the approach under consideration is expected not only to ensure the optimal accuracy of the approximations but also the efficient use of computing resources.
The article is organized as follows. In Section 2, the necessary definitions are introduced and the problem statement for optimizing numerical differentiation methods in the sense of the minimal radius of Galerkin information is given. In Sections 3 and 4, error estimates for the proposed version of the spectral truncation method are established in the quadratic and uniform metrics, respectively. Section 5 is devoted to finding order estimates for the minimal radius of Galerkin information.
2309.13247 | Multi-modal Domain Adaptation for REG via Relation Transfer | Domain adaptation, which aims to transfer knowledge between domains, has been
well studied in many areas such as image classification and object detection.
However, for multi-modal tasks, conventional approaches rely on large-scale
pre-training. But due to the difficulty of acquiring multi-modal data,
large-scale pre-training is often impractical. Therefore, domain adaptation,
which can efficiently utilize the knowledge from different datasets (domains),
is crucial for multi-modal tasks. In this paper, we focus on the Referring
Expression Grounding (REG) task, which is to localize an image region described
by a natural language expression. Specifically, we propose a novel approach to
effectively transfer multi-modal knowledge through a specially
relation-tailored approach for the REG problem. Our approach tackles the
multi-modal domain adaptation problem by simultaneously enriching inter-domain
relations and transferring relations between domains. Experiments show that our
proposed approach significantly improves the transferability of multi-modal
domains and enhances adaptation performance in the REG problem. | Yifan Ding, Liqiang Wang, Boqing Gong | 2023-09-23T04:02:06Z | http://arxiv.org/abs/2309.13247v1 | # Multi-modal Domain Adaptation for REG via Relation Transfer
###### Abstract
Domain adaptation, which aims to transfer knowledge between domains, has been well studied in many areas such as image classification and object detection. However, for multi-modal tasks, conventional approaches rely on large-scale pre-training. But due to the difficulty of acquiring multi-modal data, large-scale pre-training is often impractical. Therefore, domain adaptation, which can efficiently utilize the knowledge from different datasets (domains), is crucial for multi-modal tasks. In this paper, we focus on the Referring Expression Grounding (REG) task, which is to localize an image region described by a natural language expression. Specifically, we propose a novel approach to effectively transfer multi-modal knowledge through a specially relation-tailored approach for the REG problem. Our approach tackles the multi-modal domain adaptation problem by simultaneously enriching inter-domain relations and transferring relations between domains. Experiments show that our proposed approach significantly improves the transferability of multi-modal domains and enhances adaptation performance in the REG problem.
## 1 Introduction
Domain adaptation aims to mitigate the discrepancy between domains such that a model learned from the source domain can be well generalized to the target domain. Recent methods in domain adaptation mainly study single-modality tasks, such as image classification and object detection.
For multi-modal tasks such as REG, the prevailing solution instead follows recent large-scale vision-and-language pre-training studies [Chen et al.(2020)Chen, Li, Yu, El Kholy, Ahmed, Gan, Cheng and Liu, Lu et al.(2019)Lu, Batra, Parikh and Lee, Su et al.(2019)Su, Zhu, Cao, Li, Lu, Wei and Dai]. It mainly relies on a "vision-and-language" feature extractor, which is often pre-trained on large-scale and usually roughly annotated "vision-and-language" datasets (_e.g._, the Conceptual Captions dataset [Sharma et al.(2018)Sharma, Ding, Goodman and Soricut]) and then fine-tuned together with the downstream classifier. However, there are several limitations in such an approach. First, there is a gap between the general "vision-and-language" feature extractor and the REG task, especially since the most important comprehension part is missing in pre-training due to annotation limits. Secondly, the feature extractor (_e.g._, transformer-based) is usually heavyweight, which results in a large model size, a big pre-training dataset, and a long pre-training time. It also causes a burdensome and slow fine-tuning process.
The goal of this paper is to effectively transfer knowledge between REG datasets to avoid the above problems. However, it is not an easy task. First, to capture the variance among different types of referring expressions, a widely-used approach is modular networks, which contain different visual modules processing different information such as name, attributes, location, or relations to other objects in the referring expression [Yu et al.(2018)Yu, Lin, Shen, Yang, Lu, Bansal and Berg]. Such modular networks lack a joint vision-language embedding [Yu et al.(2018)Yu, Lin, Shen, Yang, Lu, Bansal and Berg, Liu et al.(2019)Liu, Wang, Shao, Wang and Li, Gupta et al.(2020)Gupta, Vahdat, Chechik, Yang, Kautz and Hoiem]. Thus regular domain adaptation methods that focus on aligning features in two domains [Ganin and Lempitsky(2015), Long et al.(2017)Long, Cao, Wang and Jordan, Cai et al.(2019)Cai, Wang and He] cannot be applied. Moreover, enforcing domain-invariance for each modality (language or vision) individually fails to capture their interplay. Besides, the domain gap between two REG domains stems from multiple factors, such as visual styles, biases in the bounding box annotations, and textual mismatches in styles and dictionaries, making it difficult to straightforwardly extend existing single modality adaptation methods to the REG task.
To address the above problems, we propose to extract and transfer language/object features in a relation-based manner rather than aligning the features. Figure 1 shows an example. In this example, relations between words/objects in the source domain are learned during pre-training. These learned relations are universal and transferable (_e.g._, a computer always sits on a table in both domains). Besides, they also help uncover underlying relations in the target task. For instance, in the language modality, we learn the relation ("computer", "table") from the source domain; in the target domain, "mouse" is only connected with "table", but an underlying connection ("mouse", "computer") can be discovered. Similar cases exist in the visual modality. Although our relation-based domain adaptation approach is specially tailored for the REG problem, it could be extended to other multi-modal tasks. To the best of our knowledge, we are the first to solve the problem from this angle. Our approach aims to discover and maintain the relations between words/objects and to mitigate the multi-modal discrepancy between the source and target domains.
From the language aspect, the discrepancy pivots on the difference between the source and target vocabularies. Since the source and target usually share parts of their vocabularies and each has its own domain-specific words, our task is to transfer the learned embeddings of the source-private and shared vocabularies to the target-private vocabulary. To achieve this, we design a cross-attention embedding strategy, through which the two domains share common knowledge in the shared vocabulary while maintaining sufficient expressive power to learn domain-specific knowledge.
On the visual side of the REG model, the relation module that handles the surrounding objects of a candidate object plays an important role in comprehension. Through the relation module, the REG model can distinguish objects of the same category that have different surroundings (_e.g._, "man in the middle of three men"). Previous approaches use geographical distance to define the surrounding objects and pick the one with the highest matching score to represent the environment of the candidate object [Yu et al.(2018)Yu, Lin, Shen, Yang, Lu, Bansal and Berg, Liu et al.(2019)Liu, Wang, Shao, Wang and Li]. In our approach, however, we propose to rebuild the relation module with Graph Neural Networks (GNN) [Scarselli et al.(2008)Scarselli, Gori, Tsoi, Hagenbuchner and Monfardini]. GNNs model the relations between objects in terms of graphs, which better describe the surrounding environment of a candidate object. Besides, GNNs can utilize the co-occurrences between objects in the source dataset, which also improves the transferred features in the target dataset. In addition, we extend the scene graph [Xu et al.(2017)Xu, Zhu, Choy and Fei-Fei] to enrich the graphs with semantic relations between objects.
Our main contributions are as follows. First, we formulate domain adaptation for the REG problem as a multi-modal transfer learning task, which enriches the transfer learning techniques in vision-language research. Secondly, we design a new relation-based adaptation method for the REG problem, which enables better knowledge transfer to new REG datasets. Our method results in a fast and lightweight transfer compared with the traditional, expensive _pretrain-then-transfer_ learning scheme. Thirdly, the proposed multi-modal relation-based domain adaptation method significantly outperforms other approaches for the REG problem.
## 2 Related Work
Our work is at the intersection of several subareas in the framework of deep learning. In this section, we discuss related work on REG, domain adaptation, and multi-modal adaptation for REG.
**Referring expression grounding.** Like most vision-language tasks [1][1][2], Bata, Zitnick and Parikh,Wang et al.(2016)[12], Li and Lazebnik], REG also relies on multi-modal embeddings to bridge the semantic gap between visual and textual contents. However, when data distributions embody complex multi-modal structures, general domain adaptation methods may fail to capture such multi-modal structures for a discriminative alignment of distributions. Therefore, in most cases, either training from scratch [21][22], Lin, Shen, Yang, Lu, Bansal and Berg,Liu et al.(2019)[11], Wang, Shao, Wang and Li] or large-scale pre-trained models are used to transfer knowledge from large-scale vision-language dataset to specific vision-language tasks [3][4][1], Li, Yu, El Kholy, Ahmed, Gan, Cheng and Liu, Lu et al.(2019)[10], Batra, Parikh and Lee, Su et al.(2019)[11], Zhu, Cao, Li, Lu, Wei and Dai]. In this paper, we adopt a modular structure proposed in MAttNet [21][22][23]. It has a language module and three visual modules that process the visual feature, location feature and the surrounding object feature of the candidate object, respectively. We reformulate the language module to process the relations between words and the visual relation module to model the visual object relations.
**Domain adaptation.** Domain adaptation aims to mitigate the discrepancy of distributions between source domain and target domain data. The main methods in domain adaptation are to minimize the statistical distances between source and target feature distributions. These methods usually take advantage of adversarial training [13] and include a domain classifier that discriminates between the source and the target domains during training [1, 10][13][14]. In that direction, some recent methods condition the domain discriminator to facilitate more accurate feature adaptation within each category [15][15], Cao, Wang and Jordan, Hu et al.(2020)[10], Kan, Shan and Chen]. However, the adversarial training procedure tends to be unstable and may cause performance drop. Strengthening Lipschitz continuity in the target distribution is a method that avoids adversarial training yet still supports domain adaptation [24][11][12], Wu, Narui and Ermon, Mao et al.(2019)[11][13], Ma, Yang, Chen and Li, Cai et al.(2019)[10], Wang and He].
**Domain adaptation for referring expression grounding.** Since general domain adaptation methods cannot be applied to the multi-modal problem directly, several algorithms have been proposed to align distributions of features and classes via separate domain discriminators [12][12][10], Chen, Chen, Tsai, Frank Wang and Sun, Tsai et al.(2018)[10][11], Hung, Schulter, Sohn, Yang and Chandraker]. In particular for the REG task, Liu et al. propose to transfer concept from auxiliary classification data (images from new categories) and context inheritance from REG data to ground new objects [11][12], Li, Wang, Zha, Meng and Huang]. In our approach, Graph Convolutional Network (GCN) [13] is used to transfer knowledge from source to target domains. Our approach is different from the transfer learning of graph network [3][3][10], Kim, Lee and Yoon, Zhu et al.(2020)[11], Xu, Wang, Zhang, Han and Yang], as our main task is to transfer the relations, not the graph itself. Besides, pivot-based methods [1][1], McDonald and Pereira,Ziser and Reichart(2016)[1], Ziser and Reichart(2018)[1], which recognize the frequent features in the source and target domains, also inspire our cross-attention dictionary method.
## 3 Multi-modal Domain Adaptation for REG
Our main approach is described in this section. The framework is within supervised domain adaptation, where the model has access to a small amount of labeled data from the source and target domains. This allows us to pre-train the model in the source domain and then fine-tune it in the target domain.
Different from previous REG models [21][22], Lin, Shen, Yang, Lu, Bansal and Berg,Liu et al.(2019)[11], Wang, Shao, Wang and Li, Gupta et al.(2020)[10], Vahdat, Chechik, Yang, Kautz and Hoiem], our model transfers language and visual features based on the word relations and visual object relations. Our proposed model results in less discrepancy and preserves more target domain oriented information during the transfer.
### Cross-attention Word Embedding
In the language module, the first step is to encode the words in the expression sentence into fixed-length feature vectors. Under the domain adaptation scenario, the source and target domains are likely to have different vocabularies. For example, the RefCOCO dataset [Yu et al.(2016)Yu, Poirson, Yang, Berg and Berg] has 1999 words, while the RefCOCOg dataset [Mao et al.(2016)Mao, Huang, Toshev, Camburu, Yuille and Murphy] uses a partly different vocabulary, so the two domains share some words while each also has its own domain-specific words.
Although the shared words are pivotal for transferring as much knowledge to the target domain as possible, relying on them alone is not sufficient. We conjecture that not only can the shared words be transferred between domains, but the correlations among words also transfer. We therefore propose an attention mechanism to model the correlations between words, in which the embeddings of the domain-private vocabularies are learned from both the training data and the attended shared-vocabulary embeddings. Through this attention mechanism, we build the relationship between the shared vocabulary and the private vocabularies, which also connects the source and target private words indirectly. The relation between shared and private vocabularies brings two benefits: 1) words in the shared vocabulary are usually more widely used, point to more general objects, and represent more common semantics, which enriches the representation of private words with more general and recognizable features; and 2) the embeddings of the shared vocabulary become more domain-invariant as they learn to represent private words from both domains.
In the cross-attention embedding, we denote the embedding matrix by \(D\in\mathbb{R}^{d\times n}\) for a dictionary of \(n\) words, each of which is embedded to a \(d\)-dimensional vector. We denote the source and target vocabulary embeddings by \(D_{S}\) and \(D_{T}\), the shared and two private vocabulary embeddings by \(D_{S\cap T},D_{S\setminus T}\) and \(D_{T\setminus S}\), respectively. Thus, \(D=D_{S\cap T}\cup D_{S\setminus T}\cup D_{T\setminus S}\). We also have \(D_{S}=D_{S\cap T}\cup D_{S\setminus T}\) and \(D_{T}=D_{S\cap T}\cup D_{T\setminus S}\). \(D_{S\cap T}\) is learnt from the dataset, while \(D_{S\setminus T}\) and \(D_{T\setminus S}\) are attended by \(D_{S\cap T}\) with learnable matrices \(W_{S\cap T\to S\setminus T}\) and \(W_{S\cap T\to T\setminus S}\), which model the relations between the shared vocabulary and the source and target private vocabulary, respectively. These embedding matrices \(D\) and attention matrices \(W\) are trained together with the language attention module in the REG model, as shown in Figure 2.
During pretraining, the update during forward pass is:
\[\begin{split}\overline{D_{S\setminus T}}&=D_{S \setminus T}+D_{S\cap T}\cdot\text{softmax}(W_{S\cap T\to S \setminus T}),\\ \overline{D_{S\cap T}}&=D_{S\cap T},\end{split} \tag{1}\]
where \(\cdot\) denotes matrix multiplication. \(W_{S\cap T\to S\setminus T}\) is a learnable weight matrix of shape \(n_{S\cap T}\times n_{S\setminus T}\). In every forward pass, the private vocabulary embedding \(D_{S\setminus T}\) is updated (denoted as \(\overline{D_{S\setminus T}}\)) with the sum of the original embedding \(D_{S\setminus T}\) and \(D_{S\cap T}\) weighted by a softmax version of \(W_{S\cap T\to S\setminus T}\). After back-propagation, we update \(D\), which contains the trainable
Figure 3: Illustration of the workflow of cross-attention word embedding. During the pre-training, the attention (denoted by \(\oplus\)) is learnt from shared vocabulary \(D_{S\cap T}\) to source-private vocabulary \(D_{S\setminus T}\). In the fine-tuning, the attention is learnt from shared vocabulary \(D_{S\cap T}\) to target-private vocabulary \(D_{T\setminus S}\).
Figure 2: The architecture of our proposed approach. The model is composed of a language module in which the cross-attention embedding is applied and three visual modules (_i.e._, location, subject, and relation modules). The final matching score \(S\) between expression and visual modules is the weighted sum of three modules \(S_{loc}\), \(S_{sub}\) and \(S_{rel}\).
weights, using learning rate \(\eta\):
\[D_{S\setminus T}\gets D_{S\setminus T}-\eta\frac{\partial Loss}{\partial D_{S \setminus T}},\;\;D_{S\cap T}\gets D_{S\cap T}-\eta\frac{\partial Loss}{ \partial D_{S\cap T}}\]
During finetuning, \(D_{T\setminus S}\) is updated with the target data and the learned embeddings \(D_{S\cap T}\) from the source domain:
\[\overline{D_{T\setminus S}} =D_{T\setminus S}+D_{S\cap T}\cdot\text{softmax}(W_{S\cap T \to T\setminus S}), \tag{2}\] \[\overline{D_{S\cap T}} =D_{S\cap T},\]
\(W_{S\cap T\to T\setminus S}\) is of shape \(n_{S\cap T}\times n_{T\setminus S}\). \(D_{S\setminus T}\) and \(D_{T\setminus S}\) are also updated through back-propagation.
Since private words are usually less common and have fewer training samples, this mechanism improves the embedding quality of both \(D_{S\setminus T}\) and \(D_{T\setminus S}\). At the same time, the embeddings in the shared vocabulary also become more domain-invariant by learning to represent private words from both the source and target domains.
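A minimal PyTorch sketch of the update in Eqs. (1)-(2) is given below. The class name, the embedding dimension, and the choice of softmax axis (over the shared words) are our assumptions for illustration; this is not the authors' implementation.

```python
# Sketch of the cross-attention embedding (Eqs. (1)-(2)): private-word embeddings
# are augmented with shared-word embeddings weighted by a learnable attention
# matrix. The softmax axis (over shared words) is an assumption.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossAttentionEmbedding(nn.Module):
    def __init__(self, n_shared, n_private, dim=512):
        super().__init__()
        self.D_shared = nn.Parameter(0.01 * torch.randn(dim, n_shared))
        self.D_private = nn.Parameter(0.01 * torch.randn(dim, n_private))
        self.W = nn.Parameter(torch.zeros(n_shared, n_private))  # shared -> private

    def forward(self):
        attn = F.softmax(self.W, dim=0)                    # weights over shared words
        D_private_bar = self.D_private + self.D_shared @ attn
        return torch.cat([self.D_shared, D_private_bar], dim=1)  # full dictionary

# During fine-tuning, a new module would be built for the target-private vocabulary
# while D_shared is initialized from the pre-trained source model.
```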
With the embedding matrix \(D\), we obtain domain-invariant word embeddings in both pre-training and fine-tuning. After that, we apply a bi-directional LSTM [Graves et al.(2005)Graves, Fernandez and Schmidhuber] to encode the whole expression \(R=\{u_{t}\}_{t=1}^{T}\):
\[e_{t} =D(u_{t}), \tag{3}\] \[h_{t} =\text{Bi-LSTM}(e_{t},h_{t-1}),\]
\(H=\{h_{t}\}_{t=1}^{T}\) is the expression feature.
### Object Relation Transfer via GNNs
In the visual modules, we also conduct domain adaptation by discovering transferable relationships. For REG models, pairwise data are used in training, where each data pair consists of an expression sentence (processed in the language module) and a candidate object. Usually, a Convolutional Neural Network [Krizhevsky et al.(2012)Krizhevsky, Sutskever and Hinton] is used to process the visual feature (subject module), while a multi-layer perceptron without convolutions processes the location feature (location module) of the object. However, these two modules are not enough to locate a target object. For example, for the expression "first giraffe on left", we have to locate the surrounding objects of each giraffe, and only the giraffe with no other giraffe on its left is the correct one. Here, the co-occurrence of objects also helps locate the target object, even when the expression does not mention them.
Based on the above observations, we propose to describe the relations between objects in an image with graphs. Furthermore, we merge graphs from the source and target domains into one during learning via Graph Neural Networks (GNN) [Scarselli et al.(2008)Scarselli, Gori, Tsoi, Hagenbuchner and Monfardini]. GNNs typically aggregate information from neighbor nodes to embed information into a hidden space, regardless of which domain a node comes from. This utilizes the co-occurrence information between objects: when transferring to a new dataset, the model still benefits from the learned connections between objects in the source dataset, while the GNN is also updated with the new data coming from the target dataset.
#### 3.2.1 Relation Graph Generalization
Our first step is to create the relation graph of candidate object \(v\) by \(\mathcal{G}_{v}=(\mathcal{V}_{v},\mathcal{A}_{v})\), where \(\mathcal{V}_{v}\) is the set of nodes and \(\mathcal{A}_{v}\) denotes the set of edges. Since geographical distance is one of the essential factors to locate neighbor objects, we first create the geometrical relation graph \(\mathcal{G}_{v}^{geo}\) from images by setting edges between an object \(v\) and its \(M\) nearest objects via \(\mathcal{G}_{v}^{geo}=(\mathcal{V}_{v}^{geo},\mathcal{A}_{v}^{geo})\), \(\mathcal{V}_{v}^{geo}=\{v_{1},...,v_{M}\}\) and \(\mathcal{A}_{v}^{geo}=\{(v,u)|u\in\mathcal{V}_{v}^{geo},u\neq v\}\). Figure 4 (b) shows an example of the geometrical graph.
Besides the geographical distance, we further explore the relationships between objects via scene graph [Johnson et al.(2015)Johnson, Krishna, Stark, Li, Shamma, Bernstein and Fei-Fei]. Scene graph constitutes the semantic relationships between objects. It is a visually-grounded graph over the objects in an image. The nodes denote visual objects and the directed edges depict their pairwise relations. Through the relations excavated via scene
Figure 4: An example of relation graph generalization.
graph, we can discover semantic relations between objects in an image that might be physically far apart from each other, which is a necessary complement to the geographical distance. A scene graph uses triplets to represent the relation between two nodes. We denote the relation as a directed edge \(a_{s\to o}=<v_{s},r,v_{o}>\), where \(v_{s}\) denotes the subject object, \(v_{o}\) denotes the target object, and \(r\) is the relation between them. For example, in the edge \(<``man"\), \(``playing"\), \(``Frisbee">\), we have \(v_{s}=``man"\), \(r=``playing"\) and \(v_{o}=``Frisbee"\). Figure 4 (a) shows an example of the scene graph generated from an image.
We use the scene graph to complement the geographical distance graph with semantic relations. Specifically, we first use a scene graph extractor [Tang et al.(2020)Tang, Niu, Huang, Shi and Zhang] to generate the scene graph \(\mathcal{G}_{I}=(\mathcal{V}_{I},\mathcal{A}_{I})\) of image \(I\), which contains all semantic relationships within the image. Then we generate the semantic graph \(\mathcal{G}_{v}^{sem}\) from \(\mathcal{G}_{I}\) by selecting the edges and nodes of the scene graph that are directly or indirectly connected to \(v\), as shown in Figure 4 (a)-(c). Note that the edges in a semantic graph do not have directions. The resulting semantic graph is denoted as \(\mathcal{G}_{v}^{sem}=(\mathcal{V}_{v}^{sem},\mathcal{A}_{v}^{sem})\).
Finally, the relation graph is the union of geographical distance graph and scene graph, which is demonstrated in Figure 4 (d).
\[\mathcal{G}_{v}=(\mathcal{V}_{v},\mathcal{A}_{v})=(\mathcal{V}_{v}^{geo}\cup \mathcal{V}_{v}^{sem},\mathcal{A}_{v}^{geo}\cup\mathcal{A}_{v}^{sem}) \tag{4}\]
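The construction of \(\mathcal{G}_{v}\) in Eq. (4), together with the edge weights later used in Eq. (6), can be sketched as follows. The data structures (object centers, scene-graph triplets) and the restriction to scene-graph edges directly touching \(v\) are simplifying assumptions, not the authors' implementation.

```python
# Sketch of building the relation graph of Eq. (4) and the edge weights of Eq. (6).
# `centers` maps object ids to (x, y) centers; `scene_edges` is a set of
# (subject, relation, object) triplets. Only scene-graph edges directly touching v
# are kept here, which is a simplification.
import math

def relation_graph(v, centers, scene_edges, M=5):
    others = [u for u in centers if u != v]
    dist = {u: math.hypot(centers[u][0] - centers[v][0],
                          centers[u][1] - centers[v][1]) for u in others}
    geo_nodes = sorted(others, key=dist.get)[:M]
    geo_edges = {(v, u) for u in geo_nodes}
    sem_edges = {(s, o) if s == v else (o, s)
                 for (s, r, o) in scene_edges if v in (s, o)}
    edges = geo_edges | sem_edges
    nodes = {v} | {u for (_, u) in edges}
    # Eq. (6): semantic edges get weight 1, geometric edges scale with distance
    d_min = min(dist[u] for (_, u) in geo_edges) if geo_edges else 1.0
    weights = {(a, b): 1.0 if (a, b) in sem_edges else d_min / max(dist[b], 1e-6)
               for (a, b) in edges}
    return nodes, edges, weights
```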
#### 3.2.2 Relation Transfer via GNNs
By aggregating information from neighbor nodes in both source and target domains, Graph Neural Networks (GNN)s can jointly learn an embedding by combing information from the two domains. As shown in Figure 5, the relations between source objects are learned in pre-training, and they are then transferred to the target domain in fine-tuning.
In the GNN model, each node is naturally defined by its features and the connected nodes. GNN can learn a state embedding \(h_{v}\in\mathbb{R}^{d}\) that contains the information of neighborhood for each node. The state embedding \(h_{v}\) is a \(d\)-dimension vector of node \(v\). \(\overline{h_{v}}\) (the next state of \(h_{v}\)) is computed via:
\[\overline{h_{v}}=\sigma(\frac{1}{\sum_{u\in N_{v}}w_{u\to v}}\sum_{u\in N_{v} }w_{u\to v}h_{u}\theta+b), \tag{5}\]
where \(N_{v}\) is the neighborhood set of node \(v\), \(h_{u}\) is the state embedding of node \(u\). \(w_{u\to v}\) is the weighted edge from node \(u\) to node \(v\). For un-directed graphs, \(w_{u\to v}=w_{v\to u}\). \(\theta\) and \(b\) are the weight matrix and bias of the GNN layer, respectively. \(\sigma\) is the activation function.
We use both the geometrical distance and semantic relation to define the weight matrix \(W\). Suppose \(D\) denotes the geometrical distance metric, we have:
\[\begin{split}& W=[w_{v\to u}],\ \ (v,u)\in\mathcal{A}_{v}\\ & w_{v\to u}=\begin{cases}1,(v,u)\in\mathcal{A}_{v}^{sem}\\ \frac{\min D(\mathcal{A}_{v}^{geo})}{D(v,u)},otherwise\end{cases}\end{split} \tag{6}\]
where the edges in \(\mathcal{A}_{v}^{sem}\) are weighted by \(1\) and edges in \(\mathcal{A}_{v}^{geo}\) are weighted by their geometrical distance. If an edge exists in both semantic graph and geometrical graph, its weight is calculated based on the semantic graph.
In the training, we use the C4 feature (the last convolutional output of the 4th stage) of Faster R-CNN [Ren et al.(2015)Ren, He, Girshick and Sun] as the node feature. Besides, we also encode the location of each object in the relation graph with a 5-d vector \(l_{u}=[\frac{x_{tl}}{W},\frac{y_{tl}}{H},\frac{x_{br}}{W},\frac{y_{br}}{H},\frac{w_{u}\cdot h_{u}}{W\cdot H}]\), where \((x_{tl},y_{tl})\) and \((x_{br},y_{br})\) are the top-left and bottom-right corners of the bounding box of \(u\), \(w_{u}\) and \(h_{u}\) are its width and height, and \(W,H\) are the image width and height, respectively. Therefore, the final feature of node \(u\) is the concatenation of the GNN feature \(h_{u}\) and the location feature \(l_{u}\). We then apply a fully connected layer to the concatenated feature:
\[r_{u}^{rel}=W_{r}[h_{u};l_{u}]+b_{r} \tag{7}\]
\(r_{u}^{rel}\) is the relation feature of node \(u\) in graph \(\mathcal{G}_{v}\).
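The aggregation of Eq. (5) and the relation feature of Eq. (7) can be sketched with a dense adjacency matrix as follows; the shapes and the choice of ReLU for \(\sigma\) are illustrative assumptions.

```python
# Sketch of Eq. (5) (weighted-mean neighbor aggregation) and Eq. (7) (concatenating
# the GNN state with the 5-d location vector). Shapes are illustrative.
import torch
import torch.nn as nn

class RelationGNNLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.linear = nn.Linear(dim, dim)   # theta and b in Eq. (5)

    def forward(self, h, W):
        # h: (num_nodes, dim) node states; W: (num_nodes, num_nodes) edge weights,
        # zero where no edge exists (row v holds w_{u->v} for its neighbors u).
        norm = W.sum(dim=1, keepdim=True).clamp(min=1e-6)
        agg = (W @ h) / norm                 # (1 / sum_u w) * sum_u w h_u
        return torch.relu(self.linear(agg))  # sigma(... theta + b)

gnn = RelationGNNLayer(dim=1024)
fc_rel = nn.Linear(1024 + 5, 1024)           # W_r, b_r in Eq. (7)
h = torch.randn(6, 1024); W = torch.rand(6, 6); loc = torch.rand(6, 5)
r_rel = fc_rel(torch.cat([gnn(h, W), loc], dim=1))   # r_u^{rel} for each node
```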
### Domain Invariant Modular Learning
The subject and location modules process the candidate object features, which can be transferred between domains more easily. In this paper, we adopt the same structures used in
Figure 5: Object relation transfer from source domain to target domain via GNNs. Blue denotes the source domain, orange denotes the target domain and brown denotes that the weights update is affected by both source and target domains. In the GNN, features of each node are updated by aggregating information from connected neighbor nodes. node \(D\) and node \(E\) are shared by two domains thus the relations between \(D\), \(E\) and other objects could be transferred to the target domain.
MAttNet [Yu et al.(2018)Yu, Lin, Shen, Yang, Lu, Bansal and Berg] for the two modules.
**Subject module**. The subject module processes the visual feature of the candidate object. Specifically, it takes the C3 and C4 features from Faster R-CNN as input. C3 feature includes rich space information therefore an attentional pooling [Yu et al.(2018)Yu, Lin, Shen, Yang, Lu, Bansal and Berg] is applied to the C3 feature thus the salient feature is emphasized. We denote the subject feature of object \(v\) as \(r_{v}^{sub}\).
**Location module**. Similar to the relation module, a 5-d vector is used to encode the location of the candidate object feature. Then a FC layer is used to encode the 5-d vector. The final location feature of object \(v\) is \(r_{v}^{loc}\).
**Attended language embeddings for visual modules**. For each visual module (subject, location and relation), a separate attention is applied to the language expression feature \(H\) (see Section 3.1) for an attended embedding:
\[q^{m}=\sum_{t=1}^{T}\frac{\exp(f_{m}^{T}h_{t})e_{t}}{\sum_{k=1}^{T}\exp(f_{m}^ {T}h_{k})}, \tag{8}\]
where \(m\in\{sub,loc,rel\}\) indexes the three visual modules and \(f_{m}\) is a learnable weight vector for each module. The attended embedding \(q^{m}\) is used to attend the features in the corresponding visual module. Besides, the visual module weights are also learned through a FC layer:
\[[w^{sub},w^{loc},w^{rel}]=\text{softmax}(W_{m}^{T}[h_{0},h_{T}]+b_{m}) \tag{9}\]
**Matching score**. For subject and location modules, matching scores \(S^{sub}\) and \(S^{loc}\) are calculated using cosine similarity (denoted by \(M\)) between the visual and the expression feature, _i.e_., \(S_{v}^{sub}=M(r_{v}^{sub},q^{sub})\), and \(S_{v}^{loc}=M(r_{v}^{loc},q^{loc})\), where \(v\) is a candidate object. For the relation module, its matching score is the maximum among all nodes in the Relation Graph:
\[S_{v}^{rel}=max_{u\in\mathcal{G}_{v},u\neq v}M(r_{u}^{rel},q^{rel}) \tag{10}\]
The overall matching score is the weighted average of three visual modules:
\[S_{v}=\sum_{m\in\{sub,loc,rel\}}w^{m}S_{v}^{m} \tag{11}\]
Finally, expression and candidate object pairs (positive and negative) are sampled for a contrastive learning strategy. Hinge loss is applied for optimization.
### Revision of Domain Adaptation Approaches
Most existing domain adaptation algorithms cannot be directly applied to the multi-modal REG problem. We therefore modify several representative domain adaptation methods for the REG problem, which conduct domain adaptation from different perspectives. Notably, some recent works, such as ensemble-based approaches [Zhou et al.(2020)Zhou, Yang, Qiao and Xiang], are not applicable and thus are not included in this paper.
**Unsupervised Domain Adaptation by Backpropagation (DANN) [Ganin and Lempitsky(2015)]**. The idea of DANN is to minimize the loss of the label classifier and to maximize the loss of the domain classifier. The latter encourages domain-invariant features to emerge in the course of the optimization. Suppose \(\theta_{G}\) is the parameters in the feature extractor \(G\), \(\theta_{D}\) is the parameters in the domain classifier \(D\), and \(\theta_{C}\) is the parameters in the matching score module \(C\). The overall optimization functional is
\[\mathcal{E}(\theta_{G},\theta_{C},\theta_{D})=\sum_{i=1..N,d_{i}=0}L_{y}^{i}( \theta_{G},\theta_{C})-\lambda\sum_{i=1..N}L_{d}^{i}(\theta_{G},\theta_{D}), \tag{12}\]
where \(L_{y}^{i}\) and \(L_{d}^{i}\) denote the classification and domain loss, respectively, evaluated at the \(i^{th}\) training example. During training, the parameters \(\theta_{D}\) of the domain classifier and the parameters \(\theta_{G}\) and \(\theta_{C}\) are optimized alternately to minimize the hinge loss and to maximize the domain loss. In our experiments, the DANN algorithm is applied to either or both of the visual and language modules.
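For concreteness, a minimal PyTorch sketch of the gradient-reversal formulation that realises this minimax objective is given below; the helper names and the assumption that \(D\) outputs two-class logits are ours, not part of the original implementation.

```python
import torch
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; multiplies the gradient by -lambda backward."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

def dann_step(G, C, D, x_src, y_src, x_tgt, lam, task_loss):
    f_src, f_tgt = G(x_src), G(x_tgt)
    L_y = task_loss(C(f_src), y_src)              # matching (hinge) loss, source only
    feats = torch.cat([GradReverse.apply(f_src, lam),
                       GradReverse.apply(f_tgt, lam)])
    domains = torch.cat([torch.zeros(len(f_src)), torch.ones(len(f_tgt))]).long()
    L_d = F.cross_entropy(D(feats), domains)      # D outputs 2-class domain logits
    return L_y + L_d                              # backward() realises Eq. (12)
```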
**Conditional Domain Adversarial Network (CDAN) [Long et al.(2017)Long, Cao, Wang and Jordan]**. Like recent popular domain adaptation algorithms [You et al.(2019)You, Long, Cao, Wang and Jordan, Hu et al.(2020)Hu, Kan, Shan and Chen], CDAN conditions the feature alignment between the source and target domains on the classification outputs. In MAttNet, there is an auxiliary attribute module \(A\) that classifies attributes of the target object. We therefore use the output of the attribute module to condition the domain classifier. The CDAN error terms are:
\[\mathcal{E}(G,C)=\sum_{i=1..N,d_{i}=0}L_{y}^{i}(C(G(x_{i}^{s})),y_{i}^{s}) \tag{13}\]
\[\mathcal{E}(D,G,C)=-\mathbb{E}_{x_{i}^{s}\sim D_{s}}\log[D(h(G(x_{i}^{s}),A(G(x_{i}^{s}))))]-\mathbb{E}_{x_{j}^{t}\sim D_{t}}\log[1-D(h(G(x_{j}^{t}),A(G(x_{j}^{t}))))] \tag{14}\]
where \(L(\cdot,\cdot)\) is the cross-entropy loss, and \(h=G(x)\otimes A(G(x))\) is the joint variable of the feature representation \(G(x)\) and the attribute classifier prediction \(A(G(x))\); \(\otimes\) denotes the outer product. The method also utilizes a minimax game as the optimization strategy, similar to DANN.
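A minimal PyTorch sketch of the conditional domain loss in Eq. (14) is given below; \(D\) is assumed to be a binary discriminator returning a logit, and the softmax applied to the attribute outputs is our assumption.

```python
import torch
import torch.nn.functional as F

def cdan_domain_loss(D, feat_src, attr_src, feat_tgt, attr_tgt):
    """Eq. (14): the discriminator sees h = G(x) (outer product) A(G(x))."""
    def joint(feat, attr):
        h = torch.bmm(attr.unsqueeze(2), feat.unsqueeze(1))  # (B, n_attr, d_feat)
        return h.flatten(1)
    d_src = D(joint(feat_src, F.softmax(attr_src, dim=1)))
    d_tgt = D(joint(feat_tgt, F.softmax(attr_tgt, dim=1)))
    return -(torch.log(torch.sigmoid(d_src) + 1e-8).mean()
             + torch.log(1.0 - torch.sigmoid(d_tgt) + 1e-8).mean())
```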
**Smooth Representation for unsupervised Domain Adaptation (SRDA) [Cai et al.(2019)Cai, Wang and He]**. Beyond reducing the divergence between source and target domains, it has been shown that satisfying Lipschitz continuity also guarantees an error bound for the target distribution [Ben-David and Urner(2014)]. SRDA proposes the local smooth discrepancy (LSD) term to measure the degree to which a sample \(x\) violates the local Lipschitz property [Grandvalet et al.(2005)Grandvalet, Bengio et al.]. LSD is defined as:
\[LSD(x,\theta)=d(C(G(x)+r),C(G(x))),||r||\leq\epsilon, \tag{15}\]
where \(d(\cdot,\cdot)\) is a discrepancy function that measures the divergence between two outputs, \(r\) is the noise, and \(\epsilon\) denotes the maximum norm of \(r\). For the choice of \(d(\cdot,\cdot)\), we employ an MSE loss instead of the cross-entropy loss used in the original paper, since our problem is not a classification problem.
During the training, all networks are first pre-trained. Then sensitive samples \(G(x)+r\) are generated in the feature space of \(G\). Finally, \(G\) is optimized to minimize LSD for target samples.
\[\min_{G}\sum_{i=1\dots N}LSD(x_{i}^{t},\theta_{G}) \tag{16}\]
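A minimal PyTorch sketch of this LSD computation is shown below. For brevity it uses a random perturbation direction with \(\|r\|\leq\epsilon\) (the original work constructs specifically "sensitive" samples), and the function name is our own.

```python
import torch
import torch.nn.functional as F

def local_smooth_discrepancy(G, C, x, eps=1e-2):
    """Eq. (15) with the MSE discrepancy d(.,.) used in this work."""
    feat = G(x)
    r = torch.randn_like(feat)
    r = eps * r / r.norm(dim=-1, keepdim=True).clamp_min(1e-12)  # enforce ||r|| <= eps
    return F.mse_loss(C(feat + r), C(feat).detach())

# Eq. (16): minimise the LSD of target samples with respect to G only, e.g.
#   loss = local_smooth_discrepancy(G, C, x_target); loss.backward(); opt_G.step()
```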
## 4 Experiments
### Datasets and Experiment Settings
The proposed approach is compared with the adapted models presented in Section 3.4 on four benchmark datasets: RefClef [Kazemzadeh et al.(2014)Kazemzadeh, Ordonez, Matten and Berg], RefCOCO [Yu et al.(2016)Yu, Poirson, Yang, Berg and Berg], RefCOCO+ [Yu et al.(2016)Yu, Poirson, Yang, Berg and Berg] and RefCOCOg [Mao et al.(2016)Mao, Huang, Toshev, Camburu, Yuille and Murphy]. The main differences between the four datasets are summarised in Table 1. We use RefCOCO as the source domain dataset because it has the largest number of expressions and differs from the other three datasets in multiple respects, such as annotation style and image base.
In the experiments, each of the models is first trained with the full RefCOCO data and then finetuned with 10% of the target data. The evaluation is conducted on the full test data. We implement two baseline models without domain adaptation algorithms: one is directly trained on the 10% target data, and the other is trained on the full RefCOCO data and finetuned on 10% of the target data. These two baselines act as the lower bounds (rows 3-4 in Table 2). We also implement two upper bounds (rows 1-2 in Table 2), in which we train/finetune on the full target dataset rather than 10%. For the graph embedding model, we adopt the two-layer GCN of [Kipf and Welling(2016)]. We use the pretrained scene graph extractor of [Xu et al.(2017)Xu, Zhu, Choy and Fei-Fei]. For hyper-parameters, we follow the official implementations of these algorithms. An NVIDIA XP GPU is used for all experiments. The best results are recorded among three different finetuning learning rates, equal to, half of, or 10% of the pre-training learning rate, respectively.
### Results
Table 2 shows the results of domain adaptation from RefCOCO to RefCOCO+, RefCOCOg and RefClef. Each of the datasets has different splits; details about each split can be found in [Kazemzadeh et al.(2014)Kazemzadeh, Ordonez, Matten and Berg] and [Yu et al.(2016)Yu, Poirson, Yang, Berg and Berg]. The first two rows show the upper bounds. For DANN [Ganin and Lempitsky(2015)], we consider three scenarios in which the domain classifier is implemented only on the visual branch (-visual), only on the language branch (-language), or on both the visual and language branches (-language&visual). We also implement one model where we concatenate the features from the visual and language branches and apply the domain classifier proposed in [Tzeng et al.(2017)Tzeng, Hoffman, Saenko and Darrell] on top of the concatenated features (adversarial-cat); for this model, a generative adversarial network (GAN) [Mirza and Osindero(2014)] training strategy is applied. In addition, we include the results of replacing the embedding layer with a pre-trained Bert model [Devlin et al.(2018)Devlin, Chang, Lee and Toutanova] as the language feature extractor (row 14 in Table 2) and of using the language and visual relation-based methods separately (Table 2, rows 11-12).
Our approach achieves the best results compared with all baselines. Among the three target domains, we obtain larger improvements on RefCOCO+ and RefCOCOg than on RefClef. The main difference is that RefCOCO shares the same image base with RefCOCO+ and RefCOCOg but uses different images from RefClef. This shows that our approach works better when the image base is consistent, since it focuses on relation-based transfer, which could potentially be combined with image-feature transfer methods to boost the results further.
**Bert embedding.** Bert model [Devlin et al.(2018)Devlin, Chang, Lee and Toutanova] has been widely used in natural language processing tasks.
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline
 & **RefClef** & **RefCOCO** & **RefCOCO+** & **RefCOCOg** \\ \hline
image from & ImageCLEF & MSCOCO & MSCOCO & MSCOCO \\ \hline
expression style & concise and simple & concise and simple & no location words & complex and long \\ \hline
expressions/objects/images & 130525/96654/19894 & 142209/50000/19994 & 141564/49856/19992 & 85474/54822/26711 \\ \hline
\end{tabular}
\end{table}
Table 1: The differences between the REG datasets
Since the Bert model is pre-trained on large language corpora, it improves the quality of the language embedding and provides the model with more domain-invariant features. We therefore investigate replacing the embedding layer in the REG model with the pre-trained Bert model. The results show that it achieves improvements on almost all splits, although not significant ones. Moreover, for RefClef on the TestC split, the results with the Bert embedding are slightly worse than without it. This may be because TestC contains objects sampled from images that contain at least two objects of the same category [Kazemzadeh et al.(2014)Kazemzadeh, Ordonez, Matten and Berg]; the Bert embedding hardly helps to distinguish them.
**Performance improvement in each modality.** The three proposed relation-based REG domain adaptation optimization methods can also be applied separately. We show these results in Table 2, rows 11-12. We find that when used alone, each of them improves the transferability of the REG model and achieves better results than the baselines, but the best performance is still obtained by stacking them together. In addition, when used separately, the visual-side relation transfer method (GCN relation encoding) achieves slightly better performance than the language-side optimization method (cross-attention word embedding).
## 5 Conclusions
In this paper, we investigate the domain adaptation problem for the REG task, which is still not well studied yet widely needed in real-world applications. We propose several optimization methods based on the relations in the visual and language modalities, which efficiently transfer knowledge learnt in the source domain to the target domain for the REG problem. Experimental results show that our proposed approach outperforms other domain adaptation methods. We expect that our relation-based perspective, along with the sound experimental results, can further facilitate future research in this area.
## 6 Acknowledgement
This material is based upon work supported by the National Science Foundation under Grant No 1952792.
|
2309.13521 | JWST MIRI and NIRCam Unveil Previously Unseen Infrared Stellar
Populations in NGC 6822 | NGC 6822 is a nearby (~490 kpc) non-interacting low-metallicity (0.2 Zsolar)
dwarf galaxy which hosts several prominent H ii regions, including sites of
highly embedded active star formation. In this work, we present an imaging
survey of NGC 6822 conducted with the NIRCam and MIRI instruments onboard JWST.
We describe the data reduction, source extraction, and stellar population
identifications from combined near- and mid-infrared (IR) photometry. Our
NIRCam observations reach seven magnitudes deeper than previous JHKs surveys of
this galaxy, which were sensitive to just below the tip of the red giant branch
(TRGB). These JWST observations thus reveal for the first time in the near-IR
the red clump stellar population and extend nearly three magnitudes deeper. In
the mid-IR, we observe roughly two magnitudes below the TRGB with the MIRI
F770W and F1000W filters. With these improvements in sensitivity, we produce a
catalogue of ~900,000 point sources over an area of ~ 6.0 x 4.3 arcmin2. We
present several NIRCam and MIRI colour-magnitude diagrams and discuss which
colour combinations provide useful separations of various stellar populations
to aid in future JWST observation planning. Finally, we find populations of
carbon- and oxygen-rich asymptotic giant branch stars which will assist in
improving our understanding of dust production in low-metallicity, early
Universe analogue galaxies | Conor Nally, Olivia C. Jones, Laura Lenkić, Nolan Habel, Alec S. Hirschauer, Margaret Meixner, P. J. Kavanagh, Martha L. Boyer, Annette M. N. Ferguson, B. A. Sargent, Omnarayani Nayak, Tea Temim | 2023-09-24T01:29:13Z | http://arxiv.org/abs/2309.13521v2 | # JWST MIRI and NIRCam Unveil Previously Unseen Infrared Stellar Populations in NGC 6822
###### Abstract
NGC 6822 is a nearby (\(\sim\)490 kpc) non-interacting low-metallicity (0.2 \(Z_{\odot}\)) dwarf galaxy which hosts several prominent H ii regions, including sites of highly embedded active star formation. In this work, we present an imaging survey of NGC 6822 conducted with the NIRCam and MIRI instruments onboard JWST. We provide a description of the data reduction, source extraction, and stellar population identifications from combined near- and mid-infrared (IR) photometry. Our NIRCam observations reach seven magnitudes deeper than previous \(JHK_{s}\) surveys of this galaxy, which were sensitive to just below the tip of the red giant branch (TRGB). These JWST observations thus reveal for the first time in the near-IR the red clump stellar population and extend nearly three magnitudes deeper. In the mid-IR, we observe roughly two magnitudes below the TRGB with the MIRI F770W and F1000W filters. With these improvements in sensitivity, we produce a catalogue of \(\sim\)900,000 point sources over an area of \(\sim 6.0\times 4.3\) arcmin\({}^{2}\). We present several NIRCam and MIRI colour-magnitude diagrams and discuss which colour combinations provide useful separations of various stellar populations to aid in future JWST observation planning. Finally, we find populations of carbon- and oxygen-rich asymptotic giant branch stars which will assist in improving our understanding of dust production in low-metallicity, early Universe analogue galaxies.
keywords: galaxies: dwarf - galaxies: irregular - galaxies: individual (NGC 6822) - infrared: galaxies - infrared: stars - stars: AGB and post-AGB
## 1 Introduction
The dwarf irregular galaxy NGC 6822 is one of the closest (d \(\sim 490\pm 40\) kpc; Sibbons et al. 2012; Fusco et al. 2012) in the Local Group. It is famously home to some of the largest, brightest star-forming regions in the local universe (Kennicutt 1979; Hodge et al. 1989; O'Dell et al. 1999; Jones et al. 2019) with active star formation throughout its disk and central bar. Its low metallicity ([Fe/H] \(\approx-1.2\); \(\sim\)30% \(Z_{\odot}\); Skillman et al. 1989; Lee et al. 2006), elevated star formation rate, and overall youth make it a nearby analogue of galaxies at the universal epoch of peak star formation (\(z\sim 1.5-2\); Madau & Dickinson 2014), at which point a majority of the universe's star formation and chemical enrichment is expected to have taken place. We note that while NGC 6822 is generally considered to be isolated (de Blok & Walter 2000; Battinelli et al. 2006) without any detectable satellites, Zhang et al. (2021) postulate a 200 kpc close passage within the virial radius of the Milky Way \(3-4\) Gyr ago. Table 1 lists properties of NGC 6822 that have been adopted for this study.
As a target of many stellar populations studies, NGC 6822 has been extensively surveyed in both the optical and infrared (IR). An old (\(\sim\)11 Gyr) stellar component within the galaxy was inferred with the first discovery of RR-Lyrae stars (Clementini et al. 2003; Baldacci et al. 2003), while several localised bright H ii regions in
the central bar (Cannon et al., 2006) show that the galaxy is still actively forming stars (Jones et al., 2019). After steady star formation over its evolution, NGC 6822 began a burst of star formation \(\sim\)3 Gyr ago (Tolstoy et al., 2001), and its star formation rate has continued to increase over the last \(100-200\) Myr (Gallart et al., 1996). Optical surveys with the _Hubble Space Telescope_ (HST; Wyder, 2001) and _Subaru_ (Tantalo et al., 2022) detect intermediate-age red clump (RC) and red giant branch (RGB) stars. The asymptotic giant branch (AGB) stars within the galaxy trace the intermediate to old stellar populations and have been extensively studied in the IR (see e.g., Cioni & Habing, 2005; Sibbons et al., 2012; Hirschauer et al., 2020). These cool luminous stars evolve from low- to intermediate-mass (\(\sim 1.5-10\)\(M_{\odot}\)) main sequence (MS) progenitors and may play a vital role in the dust evolution of galaxies.
The dust contribution from evolved stars in metal-poor environments and at high redshift is not currently well defined. In particular, for young galaxies inhabiting the early universe, it can be expected that insufficient time has elapsed for MS stars to reach the AGB phase, which suggests that they do not contribute significantly to the overall dust budget. Observations have thus far shown, however, that large reservoirs of dust appear to exist at redshifts out to \(z\sim 6\)(Bertoldi et al., 2003; Robson et al., 2004; Beelen et al., 2006; Algera et al., 2023). Studies of nearby metal-poor systems such as dwarf galaxies and globular clusters have shown that dust can originate from AGB stars in these environments (McDonald et al., 2010; Whitelock et al., 2018; Jones et al., 2018), however the effects of metallicity on AGB star dust production are disputed (van Loon et al., 2005; McDonald et al., 2011; Sloan et al., 2012, 2016). The Surveying the Agents of a Galaxy's Evolution (SAGE) surveys (Meixner et al., 2006; Gordon et al., 2011) mapped the Magellanic Clouds with _Spitzer_ from 3.6 to 160 \(\mu\)m. These data enabled detailed studies characterising the evolved stellar populations within the Magellanic Clouds and their contributions to dust production (Blum et al., 2006; Srinivasan et al., 2009; Boyer et al., 2011). Boyer et al. (2011) find that the very dusty "extreme" carbon-rich AGB stars dominate the return of dust into the interstellar medium (ISM) in both galaxies (\(\sim\)90% of the dust input from evolved stars; Boyer et al., 2012), while supergiant (SG) stars do not contribute significantly (\(<\)4%). The authors also find that in the higher metallicity Large Magellanic Cloud (LMC), oxygen-rich AGBs produce overall more dust than the lower-mass carbon-rich AGB stars, while in the Small Magellanic Cloud they contribute dust roughly equally.
While studies such as SAGE have made great progress towards understanding dust production mechanisms at low metallicity, it is now possible to extend this work to more distant systems with JWST. The JWST imaging program of NGC 6822 (program ID: 1234, PI: M. Meixner) that we present in this work allows us to investigate the evolved star population out to \(\sim\)500 kpc at the resolution and depth that was achieved by _Spitzer_ in the Magellanic Clouds. This work builds upon the studies of Jones et al. (2017), Hirschauer et al. (2020), and Kinson et al. (2021) that characterised the stellar populations of NGC 6822.
This paper presents the overview of the NGC 6822 JWST imaging program, including the observation details, data reduction, photometric extraction, and combined NIRCam and MIRI colour-magnitude diagrams (CMDs) used to identify its various stellar populations, with a focus on the evolved stars. Section 2 presents the observational strategy in detail. We describe the data reduction and photometric extractions in Section 3. In Section 4, we present and discuss our resulting images, stellar classifications, and luminosity functions. Finally, we summarise the results in Section 5.
wavelength detector subarrays in each module are filled, as is the \(\sim\)43\({}^{\prime\prime}\) gap between the A and B modules. We positioned the east and west portions of our mosaic such that there is a 10.0% overlap in both the rows and columns of the resultant 2\(\times\)1 mosaic. The bright2 readout pattern was selected to optimize the signal-to-noise ratio (S/N), with one integration per exposure and seven groups per integration. With a three-point sub-pixel dither pattern to sample the point spread function (PSF), this resulted in 12 total dithers, with an overall total exposure time of 1803.8 seconds for each of the short- + long-wavelength filter combinations (i.e., F115W + F356W and F200W + F444W). See also Table 2 for filter properties and appendix Table A1 for a complete summary of exposure parameters.
Coverage of NGC 6822's central stellar bar was achieved by restricting the aperture position angle (PA) of the JWST spacecraft to between 92.0\({}^{\circ}\) and 93.0\({}^{\circ}\). Finally, observations of the east and west halves of the mosaic were grouped to be non-interruptible, ensuring a consistent relative orientation. The NIRCam mosaic was centred at RA = 19:44:56.1990, Dec. = -14:47:51.29; the tile with a MIRI coordinated parallel is centred at RA = 19:45:00.2644, Dec. = -14:47:55.23.
Our selection of NIRCam filters was based upon prior work on stellar populations at similar wavelengths. The F115W and F200W filters are the closest equivalent to standard Johnson \(J\) and \(K_{s}\) filters, respectively, while the F356W and F444W filters are equivalent to _Spitzer_ IRAC [3.6] and [4.5], respectively. With these filter selections, we are able to construct a standard collection of diagnostic near-IR CMDs, such as F115W - F200W vs. F200W (equivalent to \(J-K_{s}\) vs. \(K_{s}\)). This allows for comparative study with previous observations made of other galaxies, including the SAGE studies of the Magellanic Clouds (e.g., Blum et al., 2006; Meixner et al., 2006; Whitney et al., 2008; Gordon et al., 2011), the DUST in Nearby Galaxies with Spitzer (DUSTINGS) project which surveyed local group dwarf galaxies in the mid-IR (e.g., Boyer et al., 2015a; McQuinn et al., 2017; Goldman et al., 2019), and others (e.g., Jones et al., 2015, 2018, 2019; Hirschauer et al., 2020).
### MIRI
MIRI observations totalling \(\sim\)15.12 hours were obtained between 2022 September 4 and 2022 September 15. The 1\(\times\)6 mosaic was restricted to a PA of 93\({}^{\circ}\) and positioned to run along the central stellar bar of NGC 6822. Each pointing in the mosaic included a 10.0% overlap between the rows to ensure smooth background matching between the fields. Due to an error in the program implementation, only five of the six MIRI tiles in the mosaic were observed on 2022 September 4. The final MIRI tile (and its coordinated NIRCam parallel observations) was subsequently obtained on 2022 September 15. Our program employed the F770W, F1000W, F1500W, and F2100W filters using a small four-point cycling dither pattern, the full subarray, and the fastr1 readout pattern. A list of filter properties can be found in Table 3, and a list of exposure parameters, including groups per integration and integrations per exposure for each filter, is provided in appendix Table A2. The MIRI mosaic was centred at RA = 19:44:58.0949, Dec. = -14:48:20.62; the MIRI tile with a NIRCam coordinated parallel was centred at RA = 19:44:58.4923, Dec. = -14:46:40.85.
Our selection of MIRI filters was guided by the predicted MIRI colours of mid-IR bright stellar populations from Jones et al. (2017): CMDs constructed from these wavelengths reveal the reddest and dustiest sources, distinguishing evolved (SGs and AGB stars) and young (young stellar objects; YSOs) stellar populations. In addition, the F770W filter traces polycyclic aromatic hydrocarbon (PAH) emission which is expected to be prevalent in star-forming regions, while the F1000W filter is sensitive to the 10 \(\mu\)m silicate absorption present in the spectral signatures of dusty, embedded YSOs.
### Parallel Imaging
Accompanying our primary pointings, we obtained parallel observations in both NIRCam and MIRI. Only one of the two 2\(\times\)1 mosaic tiles in our primary NIRCam observations has an associated MIRI coordinated parallel (see Figure 1). This parallel observation was designed to provide background comparison images by observing a field offset from the primary target by the intrinsic physical separation between NIRCam and MIRI on the JWST focal plane.
Figure 1: _Spitzer_ three-colour image of NGC 6822 from Cannon et al. (2006) with the JWST NIRCam and MIRI survey regions superimposed (solid lines). The left image shows the NIRCam coverage and the right image the MIRI coverage. The NIRCam tile with an associated MIRI parallel is shown in green, while the MIRI tile with its associated NIRCam parallel is in red with detector modules A and B labelled accordingly. North is up and east is to the left.
We were able to position our pointing such that the eastern tile of our primary NIRCam mosaic was placed along the galaxy's main body, and the coordinated MIRI parallel was located north of the disk, offset from the Hubble X star-forming region. The MIRI parallel images were obtained at this location in the F1000W and F1500W filters using the full subarray and a slowr1 readout pattern, with five groups per integration and one integration per exposure over the 12 total dithers, resulting in a total exposure time of 1433.4 seconds per filter (appendix Table A2). The total footprint encompasses 74''\(\times\)113''. This readout pattern was selected, in contrast with the MIRI prime observations which used fastr1 (see subsection 2.2), due to data rate limits.
One of our six mosaic tiles in the primary MIRI observation has a NIRCam coordinated parallel (second from the top; see Figure 1). This tile was selected due to the advantageous position in the instrument focal plane of the Hubble IV star-forming region, which falls into NIRCam's Module B. Module A is situated further from the galaxy and is thus a useful pointing for obtaining information on a region of the galaxy that is not actively forming stars. The two regions each image an area of 132''\(\times\)132''. These parallels were imaged in the F140M, F150W, F277W, and F335M bands. A complete summary of the NIRCam parallel observing parameters, including read mode, groups, integrations, and total exposure time, is given in appendix Table A1.
## 3 Data reduction and photometry
### NIRCam Image Processing
The uncalibrated NIRCam images were processed through all three stages of the JWST pipeline using version 1.9.6 and Calibration Reference Data System (CRDS) version 11.16.20. For Stage-1 processing, we used CRDS context 1063 (jwst_1063.pmap). We implemented the frame0 correction to recover stars that saturated in the first group (\(\sim\)21 s), but were unsaturated in the first 10.7 seconds comprising frame0 (ramp_fit.suppress_one_group=False). Stage-2 processing was run with CRDS context 1075 (jwst_1075.pmap). The outputs of Stage-2 (*_cal files) were corrected for 1/f noise using the image1overf.py tool from Willott (2022). The resulting files were then aligned to _Gaia_ DR3 using the _JWST/Hubble_ Alignment Tool (JHAT; Rest 2023). Stage-3 mosaics were created with CRDS context 1077 (jwst_1077.pmap). We skipped the tweakreg step in Stage-3 since the WCS alignment was already provided by JHAT. The difference in pmap for each stage of the pipeline occurred due to a rapid succession of reference file deliveries while we were processing the data. The only update relevant to this dataset between the 1063 and 1075 pmaps is an update to the snowball correction step. Snowballs are rare in this dataset, so we did not apply that correction.
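For reference, a minimal sketch of invoking these Stage-1 and Stage-2 steps with the jwst package is shown below; the input file name is hypothetical, and only the ramp_fit override described above is set explicitly.

```python
import os
from jwst.pipeline import Detector1Pipeline, Image2Pipeline

# Pin the CRDS context used for Stage-1 in this work
os.environ["CRDS_CONTEXT"] = "jwst_1063.pmap"

# frame0 recovery: keep ramps whose only unsaturated read is the first group
Detector1Pipeline.call(
    "jw01234001001_02101_00001_nrca1_uncal.fits",   # hypothetical file name
    steps={"ramp_fit": {"suppress_one_group": False}},
    save_results=True,
)

# Stage-2 then produces the *_cal files that are 1/f-corrected and aligned to Gaia
Image2Pipeline.call("jw01234001001_02101_00001_nrca1_rate.fits",
                    save_results=True)
```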
### MIRI Image Processing
A description of the MIRI processing for this observational program is given in Lenkic et al. (2023) but we give a short overview here and describe the treatment of the background in more detail. To create calibrated MIRI images we used JWST pipeline version 1.9.5 with CRDS version 11.16.21 and context jwst_1084.pmap. Each of the raw MIRI files was processed through Detector1Pipeline and the output through Image2Pipeline with default parameters. The resulting images were then aligned to _Gaia_ DR3 using the tweakreg step in the pipeline.
We generated instrumental backgrounds from Visits 001 and 005 of Observation 007 (i.e., the two tiles at the mosaic edges), as these are least affected by real diffuse emission. While this is not the ideal strategy, it helps to mitigate detector effects and reveal fainter sources in the galaxy. To generate an instrumental background image free of point sources and diffuse emission, we median combined all dithers of a given filter in both tiles. However, in the case of the F770W and F1000W filters, significant structure due to the extent of real diffuse emission in these tiles rendered these backgrounds unusable. For these filters we instead created a model instrumental background by taking comparatively unaffected detector rows and columns in the median combined background in order to create a representative median row and column. These were then mapped onto the detector plane to produce the model background which also accounts for row and column calibration artefacts (Dickel et al., in prep.). An example of this is shown in Figure 2.
For the F1500W and F2100W filters, only minimal structure around the brightest sources remained in the median background. Even though the residual contamination is low, it can still manifest in background subtracted images as faint shadows dispersed on the image. To remove the residuals we manually masked their location and applied an approximate 'filling' of the gaps using the ndimage.distance_transform_edt module in SciPy. Once backgrounds were created for all filters, we subtracted the instrumental background images from each dither in the mosaic tiles. Since Visit 001 of Observation 009 was performed 10 days after the other visits, we found that the background levels had increased slightly and corrected for this by determining a simple background offset from regions on the detector free of diffuse emission. Overall we found that our background subtraction methods performed well in reducing the background level across all mosaic tiles in all filters. We then constructed mosaics from the background subtracted images using Image3Pipeline.
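A minimal sketch of the two background operations described above (median-combining source-masked dithers, and filling masked residuals from neighbouring pixels with SciPy's distance transform) is given below; the array and function names are our own.

```python
import numpy as np
from scipy import ndimage

def median_background(dither_stack, mask_stack):
    """Median-combine the dithers of one filter, ignoring masked (source) pixels."""
    stack = np.where(mask_stack, np.nan, dither_stack.astype(float))
    return np.nanmedian(stack, axis=0)

def fill_masked(image, mask):
    """Approximate 'filling' of masked residuals: each masked pixel takes the
    value of its nearest unmasked pixel, via ndimage.distance_transform_edt."""
    idx = ndimage.distance_transform_edt(mask, return_distances=False,
                                         return_indices=True)
    return image[tuple(idx)]
```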
### Source Detection
Point-source photometry was extracted using the starbug2 (Nally & Jones, 2022) photometric tool and pipeline, which is optimised for JWST observations of crowded stellar populations within complex environments (e.g. Jones et al., 2023). This pipeline performs point-source extraction and band merging across multiple observations and wavelengths utilising core functions from the python photutils (Bradley et al., 2022) package. A complete set of relevant starbug2 parameters and the values adopted for our photometric extractions are listed in appendix Table A3.
The individual exposures from the _Gaia_-aligned Stage-2 images are used for source detection with starbug2 --detect. Sources with a 5\(\sigma\) detection above the sky, which is initially estimated locally within an annulus, are first located by centroid fitting. The geometry of each point source is calculated and _sharpness_ and _roundness_ values are assigned. We remove cosmic rays from the catalogue by setting an upper limit on _sharpness_ and faint peaks in the dust structure by setting a lower limit. The source symmetry is measured using _roundness_; we are able to remove most resolved background galaxies and further spurious detections within the dust structure by limiting the allowed level of asymmetry.
The NIRCam fullbox 4tight dither mode results in four sets of three overlapping exposures for a given pointing. Only sources detected in at least two frames of each set are retained in the resulting catalogue; any sporadic source not meeting this threshold is
discarded as a likely cosmic ray or other detector artefact while retaining true sources at the detector edges. For the MIRI images, the full cycling, small mode results in four dithers, so we stipulate that a source must be detected in three or more exposures. The flux distribution between the exposures is examined and sources with asymmetric distributions (the mean and median differ by more than 5%) are assigned the flag SRC_VAR. Matching between frames in the image stack is done with a nearest neighbour calculation with a threshold separation of 0.1 arcseconds.
We conduct aperture photometry on these sources in each individual frame. A fixed aperture radius of 1.5 pixels and a sky annulus with an inner and outer radius of 3.0 and 4.5 pixels, respectively, is used for all NIRCam bands, and the aperture correction is interpolated between values given in CRDS jwst_nircam_apcorr_0004.fits. The MIRI images are treated in the same way, but we increase the aperture radius to 2.5 pixels for F770W and F1000W, and 3.0 pixels for F1500W and F2100W, to account for the larger PSFs at longer wavelengths. The backgrounds are calculated within 4.0-5.5 and 4.5-6.0 pixel annuli, respectively, and aperture corrections are calculated from jwst_miri_apcorr_0005.fits. The data quality array within the aperture of each source is inspected, and we flag sources containing saturated or DO_NOT_USE pixels with the starbug2 flag SRC_BAD, and sources with pixels that contained a jump during detection with SRC_JMP.
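As an illustration of these settings, a minimal photutils sketch of the NIRCam aperture and sky-annulus measurement is shown below; the CRDS aperture correction is deliberately omitted and the function name is our own.

```python
from photutils.aperture import (CircularAperture, CircularAnnulus,
                                ApertureStats, aperture_photometry)

def nircam_aperture_flux(data, xy):
    """Aperture photometry with the NIRCam radii quoted above:
    r = 1.5 px aperture and a 3.0-4.5 px sky annulus (the aperture correction
    from jwst_nircam_apcorr_0004.fits is not applied in this sketch)."""
    aper = CircularAperture(xy, r=1.5)
    annulus = CircularAnnulus(xy, r_in=3.0, r_out=4.5)
    sky_per_pix = ApertureStats(data, annulus).median   # local background level
    phot = aperture_photometry(data, aper)
    return phot["aperture_sum"] - sky_per_pix * aper.area
```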
Table 4 lists the source counts for every filter together with estimated sensitivity values.
### PSF Photometry
The nebulous background emission underlying our images of NGC 6822 is modelled using starbug2 --background. This routine places masking apertures of varying sizes, calculated on the flux of each source, and fills them with the median pixel value within a local annulus. For every pixel in the image, the background is measured by averaging all the local pixels within a set box size. This process creates an effective representation of the nebulous emission; by subtracting this diffuse emission background from a single exposure, we create a clean image of our field with nebulous emission removed onto which PSFs can be accurately fit.
For each detector subarray within NIRCam, we generated a five arcsecond PSF with webbpsf (Perrin et al., 2014) version 1.1.1. The single exposures have their nebulous background estimated and subtracted, and we run PSF photometry using starbug2 --psf on the residual image at the source positions from the combined and cleaned source list generated in our aperture photometry step. The centroid position is left as a free parameter, allowing both flux and position to be fit during the routine. If the newly fitted centroid position differs from the initial guess by more than 0.1 arcseconds, we refit the flux but hold the position fixed at the initial guess. This results in a poorer fit and therefore we flag the source with SRC_FIX.
Unfortunately, the MIRI PSFs simulated by webbpsf do not currently include the cruciform structure which is known to contain up to \(\sim\)26% of the flux in the F770W band (Gaspar et al., 2021), resulting in poor PSF fits and some unaccounted-for flux. In this work, we therefore use aperture photometry for all the MIRI detections in our catalogue, rather than PSF photometry.
### Photometric Corrections
We calculate and apply instrumental zero point magnitudes to calibrate the PSF photometry, because the PSF fluxes are not normalised to physical units. For each filter, a cleaned aperture photometry catalogue which retains only the most reliable point sources is used as the base to determine the zero point. To produce the clean source list from the main catalogue, both the faintest and brightest sources are removed to limit sources with low S/N, partially-saturated objects, and any potential remaining detector artefacts. Sources must have a photometric error of less than 0.1 mag and must not have any poor-quality data flags. These cuts eliminate over 80% of the sources in the main catalogue. Finally, these sources are matched to the equivalent source in the PSF catalogue. We use starbug2 --calc-instr-zp to calculate the median difference in the source magnitudes measured by aperture photometry and PSF fitting to obtain instrumental zero point magnitudes, which are subsequently used to calibrate the PSF photometry from starbug2 to the AB magnitude system.
Figure 2: Example of the background model applied to an F770W dither. The uncorrected image, the model instrumental background, and the background subtracted image are shown on the left, middle, and right, respectively. Only the main imager field is shown as we exclude the Lyot coronagraph from our mosaics.
For easier comparison with past works, we convert these magnitudes to the Vega system using the reference files jwst_nircam_abvegaoffset_0001.asdf and jwst_miri_abvegaoffset_0001.asdf.
Finally, as NGC 6822 is located at a low Galactic latitude, it is necessary to correct the NIRCam photometry for foreground reddening. We adopt the value E(B\(-\)V) = 0.36 (Tantalo et al., 2022) to correct for the moderate Galactic foreground extinction and apply the extinction curve of Cardelli et al. (1989) assuming \(R_{V}=3.1\). No extinction corrections were applied to the MIRI data, as foreground reddening at mid-IR wavelengths is negligible. Differential extinction internal to NGC 6822 was also not accounted for, as it is assumed to be insignificant compared to the photometric uncertainties due to crowding.
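A minimal sketch of this dereddening step, implementing the Cardelli et al. (1989) infrared law with the adopted E(B\(-\)V) and \(R_{V}\), is shown below; the filter pivot wavelengths are approximate and the commented catalogue update is illustrative only.

```python
import numpy as np

EBV, RV = 0.36, 3.1          # Tantalo et al. (2022); Cardelli et al. (1989)
A_V = RV * EBV

def ccm89_ir(wave_um, rv=RV):
    """A(lambda)/A(V) from the Cardelli et al. (1989) infrared law,
    valid for roughly 0.9-3.3 micron (x = 1/lambda between 0.3 and 1.1)."""
    x = 1.0 / np.asarray(wave_um)
    a = 0.574 * x**1.61
    b = -0.527 * x**1.61
    return a + b / rv

# Approximate pivot wavelengths (micron) of the short-wavelength NIRCam bands
for band, pivot in {"F115W": 1.154, "F200W": 1.990}.items():
    A_band = ccm89_ir(pivot) * A_V
    # catalogue[band + "_dered"] = catalogue[band] - A_band
```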
### Catalogue
Individual filter band catalogues are merged together to form a combined NIRCam and MIRI point-source catalogue for NGC 6822 with starbug2-match --band. Prior to any matching, the photometric uncertainty of each source is assessed, and anything with a magnitude error greater than 10% is removed from the individual catalogues. This reduces the likelihood of mismatching high-quality sources with any remaining spurious sources, producing a reliable band-matched catalogue at the possible cost of completeness.
Initially, we treat the NIRCam and MIRI catalogues separately, applying the same methodology to the band matching within each instrument. The matching routine starts with the shortest-wavelength catalogue, as the smaller PSF full width at half maximum (FWHM) leads to the highest astrometric certainty. This is matched with a nearest neighbours method to the next-shortest wavelength catalogue, with any unmatched sources above the separation threshold appended to the end. The position used for each source is taken from the shortest-wavelength catalogue in which it initially appeared, allowing for faint red objects such as highly dust-enshrouded AGB stars that are not visible in the near-IR to be retained. We used a separation threshold that increases as the PSF size increases, as using a single threshold larger than the astrometric uncertainty of the longest MIRI wavelengths would cause mismatching in the shortest NIRCam filters. We match short wavelength NIRCam filters using a threshold of 0.06\({}^{\prime\prime}\) and long wavelength NIRCam filters with 0.1\({}^{\prime\prime}\). Within MIRI we adopt 0.15\({}^{\prime\prime}\) for F770W and F1000W, 0.2\({}^{\prime\prime}\) for F1500W, and 0.25\({}^{\prime\prime}\) for F2100W.
Finally, we merge the NIRCam and MIRI catalogues with a separation threshold of 0.3\({}^{\prime\prime}\). We expect many of the reddest and most dust-enshrouded sources detected in the long-wavelength MIRI data to be missing from the shortest-wavelength NIRCam images. However, the depth and sensitivity of F115W and F200W result in a catalogue populated by many faint blue sources. Consequently, a simple positional matching approach between a long-wavelength MIRI band and a short-wavelength NIRCam band would likely result in a significant number of mismatches because of the many sources present at short wavelengths which lie along similar lines of sight. To combat this issue we require any sources matched between NIRCam and MIRI to have an F444W detection. In other words, a source in our MIRI catalogue will be compared with all sources in the NIRCam catalogue within the matching radius and paired with the nearest source for which a detection in the F444W band exists. If no NIRCam sources within the matching radius are detected in the F444W band, the MIRI source will be assumed to have no NIRCam counterpart and will subsequently be appended to the catalogue as a new object. Thus both blue and red objects in the catalogue will be retained and the chance of a mismatch between co-located sources reduced.
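A simplified sketch of one step of this nearest-neighbour band merging, using astropy coordinate matching, is given below; the wavelength-dependent threshold is passed in explicitly and the column names are assumptions.

```python
import numpy as np
import astropy.units as u
from astropy.coordinates import SkyCoord

def merge_step(short_cat, long_cat, threshold_arcsec):
    """Match a longer-wavelength catalogue onto a shorter-wavelength one.
    Matched sources keep the shorter-wavelength position; unmatched ones
    are later appended to the catalogue as new objects."""
    c_short = SkyCoord(short_cat["ra"], short_cat["dec"], unit="deg")
    c_long = SkyCoord(long_cat["ra"], long_cat["dec"], unit="deg")
    idx, sep, _ = c_long.match_to_catalog_sky(c_short)
    matched = sep < threshold_arcsec * u.arcsec
    return idx[matched], np.where(~matched)[0]   # (matched pairs, new sources)

# e.g. merge_step(f115w_cat, f200w_cat, 0.06) for the short-wavelength NIRCam bands
```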
### Foreground Contamination
To remove foreground Milky Way stars from our JWST NGC 6822 source list we examine _Gaia_ Data Release 3 (DR3; Gaia Collaboration et al., 2021) for foreground contamination. First, the _Gaia_ catalogue is cleaned of sources with poor astrometry and possible non-single objects within our JWST FoV, as outlined by Fabricius et al. (2021). This involves keeping sources with: RUWE\(\leq\)1.4, astrometric_excess_noise_sig\(\leq\)2.0, visibility_periods_used\(\geq\)9 and ipd_gof_harmonic_amplitude\(\leq\)0.1. The cleaned _Gaia_ catalogue is matched to our NGC 6822 photometry, resulting in 656 positive matches within the main field. Sources that exhibit a significant (5\(\sigma\)) parallax or proper motion are considered to be foreground sources and are thus removed from our catalogue. Due to the relative proximity of NGC 6822, we check that the proper motion of these sources does not deviate significantly from the global proper motion of the galaxy, as outlined by Dimitrova et al. (2021).
In total, we find that 179 sources exhibit proper motion above the threshold, with all but one also deviating significantly from that of the galaxy. We also find that every source with significant parallax also exhibits significant proper motion. This allows 178 sources to be removed from the catalogue as foreground contaminants. Despite NGC 6822's proximity to the Galactic plane, we see a low number of foreground stars in our data set. Our detection routine is unable to recover many of the bright foreground stars due to the saturation limits of our data. Furthermore, _Gaia_ only detects the brightest IR sources in NGC 6822; this, paired with the small FoV of our observations, means that the number of foreground contaminating stars in our catalogue is low, as expected.
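A minimal sketch of these quality and foreground cuts on a Gaia DR3 table is shown below; the column names follow the Gaia archive, while the function names and the 5\(\sigma\) threshold argument are our own.

```python
import numpy as np

def gaia_quality_mask(g):
    """Astrometric-quality cuts following Fabricius et al. (2021)."""
    return ((g["ruwe"] <= 1.4)
            & (g["astrometric_excess_noise_sig"] <= 2.0)
            & (g["visibility_periods_used"] >= 9)
            & (g["ipd_gof_harmonic_amplitude"] <= 0.1))

def foreground_mask(g, n_sigma=5.0):
    """Sources with a significant (5-sigma) parallax or proper motion."""
    plx = np.abs(g["parallax"]) / g["parallax_error"] > n_sigma
    pm = (np.abs(g["pmra"]) / g["pmra_error"] > n_sigma) | \
         (np.abs(g["pmdec"]) / g["pmdec_error"] > n_sigma)
    return plx | pm
```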
## 4 Results and Discussion
### Mosaic Images
The NIRCam prime mosaics shown in Figure 3 cover a FoV of \(\sim\)29.0 arcmin\({}^{2}\), whilst the MIRI mosaic covers a \(\sim\)14.5 arcmin\({}^{2}\) FoV, with the NIRCam FoV overlapping nearly the entirety of that of MIRI. These mosaics image NGC 6822, located at a distance of 490 kpc, with resolutions from 0.037\({}^{\prime\prime}\) (0.087 pc) to 0.139\({}^{\prime\prime}\) (0.332 pc) in NIRCam, and from 0.268\({}^{\prime\prime}\) (0.638 pc) to 0.673\({}^{\prime\prime}\) (1.601 pc) in MIRI. This provides an improvement over existing _Spitzer_ imaging by up to a factor of 10.
The NIRCam field shows a smooth distribution of stars increasing in density towards the centre of the FoV. Scanning by eye reveals an assortment of clear extreme candidates with bright blue or red colours on a backdrop of many faint stars. The background galaxies are evenly distributed across the image, with some showing detailed spiral structures and others only visible in the longest wavelengths. A dense globular cluster (Hubble VII) is visible in the centre right of the image. The extent of the MIRI footprint is overlaid as a white dotted line. The NIRCam data is devoid of all dust structure whereas the MIRI data of the same region is completely dominated by complex diffuse emissions. The centre of our FoV covers the young massive star forming region Spitzer I, for which a detailed study of the young stellar objects has been conducted by Lenkic et al. (2023).
### Stellar Population Classification
This section presents JWST CMDs covering the galaxy within the full extent of our survey. We select colour combinations that highlight significant population separation and those which represent similar wavelength coverage as _Spitzer_ filters used in the SAGE surveys (Meixner et al., 2006; Gordon et al., 2011). We adapt classification boundaries from prior work (Blum et al., 2006; Hirschauer et al., 2020) and investigate the location various stellar populations occupy in different JWST colour combinations, which we anticipate will be useful in future stellar surveys.
Figure 4 shows one NIRCam-only CMD (F115W\(-\)F200W vs. F200W) in contour and Hess diagram format, one MIRI-only CMD (F770W\(-\)F1500W vs. F770W), and one combined CMD (F444W\(-\)F770W vs. F770W) with the most identifiable populations overall. The first CMD, with a filter combination most similar to the (\(J-K_{s}\) vs. \(K_{s}\)) CMDs of previous NGC 6822 studies (e.g., Cioni & Habing, 2005; Sibbons et al., 2012; Whitelock et al., 2013), was chosen as it utilizes the most sensitive photometry and therefore reveals the faintest populations. The diagram is shaped as a collection of sources that splits vertically into two main fingers
Figure 3: JWST full-colour mosaics displaying the spatial coverage from both instruments. Above shows MIRI coverage with F770W (blue), F1000W (green), F1500W (yellow) and F2100W (red), image credited to ESA/Webb, NASA & CSA, M. Meixner. Below shows the NIRCam coverage with F115W (blue), F200W (cyan), F356W (yellow) and F444W (red) with the extent of the MIRI footprint marked with the white dotted line. East is up and north is to the right.
from the base, with the broadness of the base likely being a product of greater scatter as the sensitivity limit of the photometry is approached. The blue finger follows the Upper Main Sequence (UMS), with more massive stars appearing higher up. The right-hand fork is primarily the Red Giant Branch (RGB), with the Red Clump (RC) appearing as a dense bulge at F200W=22.0 and the AGB bump (AGBb) embedded just above at F200W=21.0. The vertical track seen above the RC is formed by red helium burning (RHeB) stars evolving toward the AGB. The supergiant (SG) track splits from the RGB below the Tip of the RGB (TRGB) and runs diagonally upward, consisting of some of the brightest sources in the galaxy. Above the TRGB the thermally pulsing AGB (TP-AGB) stars separate into oxygen-rich AGB (OAGB) and carbon-rich AGB (CAGB). A fraction of the TP-AGB will likely be mid thermal pulse and sit below the TRGB. Formal separation of these stars is difficult photometrically but they make up only a small percentage (\(<10\%\)) of the overall count (Boyer et al., 2015).
#### 4.2.1 Upper Main Sequence
The upper main sequence population is clearly visible in the top left panel of Figure 4 as the prominent branch located at F115W \(-\) F200W \(<0.5\) and approaching F115W \(-\) F200W \(=0.0\). At F200W \(=25.0\) this branch connects with the base of the RGB, though it is unclear whether this constitutes the Main Sequence Turn-Off (MSTO) of an aged population, as the photometric confusion introduced near the completeness limit merges the populations together. An optical survey of NGC 6822 by Zhang et al. (2021) isolated the UMS with ages \(<\)100 Myr and confirmed that the population perfectly traced the HI gas distribution that lies orthogonal to the bulk of the stellar component of the galaxy. Because they trace recent star formation, these stars offer information about the underlying gas and how it evolves over time.
UMS stars are generally very bright and have dispelled any nearby dust; their spectral energy distributions (SEDs) peak at blue wavelengths. As such, they are detectable in the short NIRCam filters but become difficult to detect at longer wavelengths.
#### 4.2.2 Red Giant Branch
The RGB begins at F200W \(=25\) and F115W \(-\) F200W \(=0.8\) in the top left panel of Figure 4 and extends vertically in a steep diagonal towards the red. This sequence is formed by stars that have just left the MS and have begun H-shell burning. In a single-aged population the MSTO will appear as a tight track connecting the MS to the base of the RGB. The multi-aged population of NGC 6822, a product of continuous star formation, elongates the RGB and is consistent with an old population spanning \(2-10\) Gyr. Previously Zhang et al. (2021) isolated the brighter portion of this branch to show that the old stars contribute to the galaxy's eccentric elliptical stellar component that twists radially and is perpendicular to the younger stellar component.
The TRGB is well defined in the shorter wavelengths in Figure 4 but unresolved in the longer MIRI filters. Our method of roughly calculating its position is discussed in Section 4.3.
The positioning of the RGB on a given CMD is fairly independent of star formation history (SFH), but there are two effects that cause it to spread in colour space: at a fixed metallicity, younger populations appear slightly redder and older populations appear bluer, whereas at a fixed population age with varying metallicity, more metal-poor stars appear bluer than more metal-rich stars. This effect is known as the age-metallicity degeneracy (Carrera et al., 2008), as for a given individual star it is difficult to determine with certainty whether its position on the RGB is caused by its age, its metallicity, or both. The width of the RGB in our CMDs exceeds the expected characteristic scatter caused by uncertainty in the photometry and is therefore likely caused by a broad range of ages and metallicities across the galaxy. Detailed surveys of the metallicity gradient across the galaxy have been conducted by Patrick et al. (2015).
#### 4.2.3 Red Clump
As old or intermediate-aged stars undergo a helium flash, they leave the RGB and collect in the RC. The RC structure is very dense in our CMDs, sitting just left of the RGB at F200W \(\sim 22.2\). In our shortest-wavelength filters we resolve sources three magnitudes below the RC, but decreased depth of photometry cuts off its detection above 4 \(\mu\)m, as seen in Figure 4.
The RC is a useful standard candle and has been used to measure SFH in NGC 6822 in the past (Wyder, 2001) by comparing age-sensitive RGB with the age-insensitive RC. The increased sensitivity of our catalogue will allow for more finely sampled calculations of the SFH within the inner regions of the galaxy. Additionally, differential reddening in NGC 6822 introduces a spread in the photometry of the theoretically tight RC. This effect can be used to build an extinction map of the galaxy (Wyder, 2001). Such an exercise is beyond the scope of this study, however it will be addressed in a forthcoming paper.
#### 4.2.4 Red Helium Burning Sequence
Above the RC extends a vertical sequence known as the red helium-burning (RHeB) stars or the vertical red clump (VRC). The RHeB track is formed as young stars collect at the right side of the "blue loop". In older stars the distance covered during the blue loop is much smaller and they appear at the fainter end of this population. The younger stars appear brighter as the extension of the blue loop increases and sit at the upper end of the RHeB track. As the height of this sequence is directly indicative of the age of the population, it can be used to measure the SFH within the galaxy (Dohm-Palmer et al., 1997).
#### 4.2.5 AGB Bump
The AGBb is a population of rarely-observed stars that emerges as evolving early AGB stars stall momentarily while their H-burning shells are extinguished by the expansion of their convective layers. Observing this feature is difficult, as it requires a large number of sources and may be quickly obscured by even small photometric errors. Yet, it is clearly visible in the top left plot of Figure 4, embedded in the RGB at F200W \(=21\). The age and metallicity of the stars influence its positioning on the CMD and, as such, along with its younger and more populated counterpart, the RGB bump (RGBb), the AGBb can be used to gain some insight into the SFH of the galaxy (Ferraro, 1992). Unlike the AGBb, which always appears above the RC, the RGBb appears below or is overlaid onto the RC (Gallart, 1998); deeper analysis of our data is required to isolate it. Although the AGBb is visible in optical data of NGC 6822 (Tantalo et al., 2022), this often neglected population is a useful marker for constraining evolutionary tracks.
#### 4.2.6 Supergiants
At F200W = 19, the SG branch forks blueward from the RGB and extends steeply to the saturation limit of our data. The sequence is clearly visible in the upper-right plot of Figure 4, although the source density drops at the brightest end. The existence of a SG population in the galaxy has been shown by Whitelock et al. (2013), Hirschauer et al. (2020), and Dimitrova et al. (2021), and they follow the structure of the central bar. This result was expected as SG stars are young and massive, so have not had sufficient time to wander far from the star forming sites in the centre of the galaxy. Incorporating NIRCam and MIRI photometry, SG stars are separated effectively with F444W - F770W vs. F770W as seen in the lower-left panel of Figure 4 where they form a distinct blue "horn" in the CMD.
Figure 4: Colour-magnitude diagrams over three filter combinations with broad stellar classification tracks overlaid and labelled accordingly, with point density displayed as contours where appropriate: F115W - F200W vs. F200W (top), F444W - F770W vs. F770W (lower left), and F770W - F1500W vs. F770W (lower right). The upper right plot shows a second F115W - F200W vs. F200W CMD in Hess format.
#### 4.2.7 Asymptotic Giant Branch
Formal separation of the evolved populations is outwith the scope of this overview but will be included in the following papers in this survey. Here we present initial cuts to determine the general positions of the SG, OAGB, and CAGB populations. To inspect the locations of these evolved populations in colour space, we show in the first panel of Figure 5 a zoom-in of the F115W\(-\)F200W vs. F200W CMD, where they appear to follow diagonal sequences. Above the TRGB at F200W \(=17.5\), a small gap separates the TP-AGB stars from the RGB. The two main groups in this classification are separated by the chemical makeup of their photospheres. Free carbon and oxygen are bound in very stable CO molecules. The overabundance of one of the two components is then left unbound in the photosphere and will form the basic ingredient for dust grain formation once it is lifted into the circumstellar dust shell of the star. AGBs with a \(C/O\) ratio \(>1\) are defined to be C-rich and those with \(C/O<1\) are O-rich. The dust species formed are dictated by this chemical difference, with CAGBs forming carbonaceous dust grains and OAGBs forming silicates, the emission from which can be photometrically separated (Potter et al., 2004).
We separate the CAGB and OAGB sequences with a diagonal line above the TRGB at F115W \(-\) F200W \(\geq 1.5\). Historically the AGBs in NGC 6822 have been split with a vertical line in the near-IR (see e.g., Cioni & Habing, 2005; Sibbons et al., 2012). Combining mid-IR _Spitzer_ (where the molecular emissions separate the AGBs more effectively) and near-IR United Kingdom Infrared Telescope (UKIRT) data for broader baseline photometry, Hirschauer et al. (2020) developed a novel statistical approach to separate them with more complex colour cut boundaries. We cross-match our catalogue with Sibbons et al. (2012) and Hirschauer et al. (2020) to identify where these populations lie on our JWST CMDs. We show this comparison in the first panel of Figure 5, where blue data points indicate OAGBs and red data points indicate CAGBs. The stars correspond to AGB identifications from Sibbons et al. (2012) and triangles correspond to those of Hirschauer et al. (2020). We find that the classification of sources as CAGB stars by both of these previous studies is broadly consistent with the colour space that they occupy in our JWST CMDs (red region). A handful of sources (\(\sim\)10) from Hirschauer et al. (2020) are classified as CAGB, but appear to fall within the region we define as corresponding to OAGB stars (blue region).
Separating the OAGBs and SGs is more involved due to the low point density in our limited CMD. Here we adopt a diagonal line above the TRGB at F115W \(-\) F200W \(=1.3\) and, taking advantage of the increased sensitivity of our photometry, draw it down the edge of the RGB; this region (denoted in green) is where we classify objects as SGs. In the LMC, the \(J-K_{S}\) vs. \(K_{S}\) CMD of Blum et al. (2006) showed a similar SG feature. A significant fraction of the previously identified OAGB stars from Sibbons et al. (2012) and Hirschauer et al. (2020) are identified as such with our JWST data. However, a larger number of these (several tens) appear to fall within the space we define as corresponding to SGs. More in-depth follow-up studies will need to be conducted to formalise this separation. Finally, there are a few sources that were previously classified as OAGB or CAGB that fall along the RGB; these are likely TP-AGBs that are mid-pulse.
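To make these boundaries concrete, a simplified numpy sketch of the colour cuts quoted above is given below; it collapses the diagonal boundaries to vertical ones at the quoted colours and uses the F200W TRGB from the text, so it is an approximation of, not a substitute for, the adopted classification.

```python
import numpy as np

def classify_evolved(f115w, f200w, trgb_f200w=17.5):
    """Approximate CMD regions above the TRGB (cf. Figure 5)."""
    colour = np.asarray(f115w) - np.asarray(f200w)
    above_trgb = np.asarray(f200w) < trgb_f200w
    cagb = above_trgb & (colour >= 1.5)                    # red region
    oagb = above_trgb & (colour >= 1.3) & (colour < 1.5)   # blue region
    sg = above_trgb & (colour < 1.3)                       # green region
    return sg, oagb, cagb
```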
A caveat of this comparison is that the improved resolution of JWST resolves previous bright _Spitzer_ sources into multiple fainter stars. These would appear on the CMD in the first panel of Figure 5 to the right of the RGB and below the space occupied by CAGBs. These sources (roughly half from the catalogues of Sibbons et al., 2012; Hirschauer et al., 2020) have however been omitted from the figure for clarity.
While the colour cuts we have applied here effectively distinguish the evolved stellar populations, we caution that extinction imposes limitations for the dustiest sources. We expect that for the most dust-enshrouded AGBs, extinction can cause the star to dip below the horizontal colour cut we have applied above the TRGB to identify the evolved stars. As such, these sources will be excluded from our CAGB classification, and spectroscopic follow-up studies will be required to distinguish them from YSOs. Additionally, although it is normally assumed that everything to the right of the diagonal cut we have defined at F115W\(-\)F200W\(=1.3\) (red line) is a CAGB (Boyer et al., 2015), in cases where OAGBs produce large quantities of silicate dust, they will also move rightward and occupy a similar colour space (Aringer et al., 2016). Separating the dustiest OAGB and CAGB stars will also require spectroscopic observations.
Using the classification schemes defined above, we overlay the sources onto F115W\(-\)F770W vs. F770W in the second panel of Figure 5. The SGs, OAGBs, and CAGBs are still visible as three distinct populations, although increasing absorption effects caused by dust in the longer wavelengths begin to increase the scatter of the populations. This colour combination separates the CAGBs from the OAGBs, with the bulk of the two populations being separated with nearly one magnitude of colour space between them. This combination holds close comparison with the \(J-[8.0]\) CMD of the LMC from Blum et al. (2006), where the SGs and CAGBs fork above the bulk of OAGBs; with NIRCam- and MIRI-equivalent filters we see the exact same morphology.
Long-wavelength MIRI photometry is key to chemically separating the most dust-enshrouded sources in our catalogue. The bottom panel of Figure 6 shows the distribution of sources in F770W \(-\) F2100W vs. F2100W, where the source density is very low due to the significant dust presence and rarity of sources which are bright at these wavelengths. The CMD splits into three distinct groups stretching across six magnitudes in colour space. This combination has been shown to be analogous to _Spitzer_\([8.0]-[24]\) vs. \([24]\) in work by Jones et al. (2017), which allows us to draw comparison to SAGE work on the LMC where Srinivasan et al. (2009) demonstrated a triple forking sequence. In their work, CAGB (and extreme-AGBs) lie on the most-luminous and blue finger of the fork, whereas the OAGBs are fainter and redder but bifurcate into two fingers above \([8.0]-[24]>1\). Later, Sargent et al. (2011), using the Grid of Red supergiant and Asymptotic giant branch star ModelS (GRAMS) models, showed that the central fork is occupied by the OAGBs exhibiting the highest mass loss as well as some SG stars. Our CMD may be showing the same feature, but, limited by the sensitivity of F2100W, we only see its brightest portions. The small number of sources will require a more detailed investigation, which is beyond the scope of this work.
#### 4.2.8 Young Stellar Objects
In all filter combinations shown in Figure 4, the YSOs inhabit the red area to the right of the RGB below the CAGB sequence. These dusty objects have characterisable SEDs and Lenkic et al. (2023) identifies 129 YSOs in the Spitzer I region. This area in the CMD is also inhabited by any contaminating background galaxies that remain after the cuts described in Section 3.3 have been applied, but their SED shapes are distinct from that of a YSO and are easily removed from the catalogue.
### Luminosity Functions
In Figure 7 we present three representative luminosity functions from our full band merged catalogue. The logarithmically scaled distribution is plotted in black and its linearly scaled version is plotted in filled gray. The optimal bin width is calculated for the number of sources present in each case using Knuth's rule (Knuth, 2006). We crudely estimate the completeness of our sample in each JWST filter by identifying the location of the turnover of the luminosity functions at the faint end of the magnitude distribution for each band.
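As a concrete illustration of this step, the snippet below builds a luminosity function with Knuth-rule bin widths (via `astropy.stats.knuth_bin_width`) and takes the faint-end turnover as a crude completeness estimate. The turnover heuristic and the subsampling shortcut are assumptions of this sketch, not the exact implementation used here.

```python
import numpy as np
from astropy.stats import knuth_bin_width

def luminosity_function(mags, subsample=50000, seed=0):
    """Histogram a magnitude list with Knuth-rule bins and return a crude
    completeness estimate from the faint-end turnover of the counts."""
    mags = np.asarray(mags, dtype=float)
    rng = np.random.default_rng(seed)
    # Knuth's rule is expensive for very large catalogues, so the bin width
    # is derived here from a random subsample (an assumption of this sketch).
    sample = rng.choice(mags, size=min(subsample, mags.size), replace=False)
    dx = knuth_bin_width(sample)
    bins = np.arange(mags.min(), mags.max() + dx, dx)
    counts, edges = np.histogram(mags, bins=bins)
    centres = 0.5 * (edges[:-1] + edges[1:])
    completeness = centres[np.argmax(counts)]  # magnitude where the counts turn over
    return centres, counts, completeness
```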
We measure the TRGB for every filter in our catalogue sensitive enough to detect it. The TRGB is an important feature from an astrophysical perspective as it is a well-constrained standard candle, depending only weakly on age and metallicity (Cioni et al., 2000). It represents the final stage of the RGB for a low mass star before its eventual evolution onto the AGB. This prominent feature is used to separate the RGB stars from the dust-producing TP-AGB stars as the majority (\(\sim\)90%) are situated above the TRGB (Boyer et al., 2015). Due to molecular emissions in the photosphere of the RGB stars, we expect the TRGB to not be flat but to slope upwards in the IR (e.g., Cerny et al., 2020; Durbin et al., 2020). This results in a less steep drop in the luminosity function than would be expected in the optical; consequently, the TRGB positions measured here are only estimates. To measure the position of the TRGB, we randomly sample a subset of magnitudes from the catalogue and smooth the distribution with a kernel density estimate. The first-order derivative of the smoothed distribution is calculated with a Savitzky-Golay filter and the TRGB is located at the point of steepest decline, seen in the differential as a sharp trough. This process is repeated thousands of times with different samples of the catalogue and the results averaged to determine the TRGB. The SED of an RGB star steadily falls with increasing wavelength in the IR and the TRGB is faint in MIRI. We flag F115W TRGB sources to ensure that the TRGB feature is visible above the noise of the completeness turnover in the
Figure 5: Evolved population separation from near-IR to mid-IR. The SGs in green, OAGBs in blue, and CAGBs in red are defined in the first figure showing F115W – F200W vs. F200W. The upper plot also shows classified OAGB (blue) and CAGBs (red) by Sibbons et al. (2012) (stars) and Hirschauer et al. (2020) (triangles). Boundaries adopted to separate AGB species and SGs are drawn in solid lines. Stars classified in above boundaries are drawn in the second CMD plot F115W –F770W vs. F770W.
Figure 6: Colour-magnitude diagram with MIRI F770W \(-\) F2100W vs. F2100W.
mid-IR luminosity functions. We find that the TRGB is measurable up to and including F1000W. Table 4 lists the number of sources detected at each wavelength, the corresponding faint and bright source limits for that filter, and where appropriate, the magnitude of the TRGB.
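A minimal sketch of the TRGB bootstrap described above is given below. The subset size, grid spacing, Savitzky-Golay settings and search range are illustrative assumptions; on a bright-to-faint magnitude grid the jump in source counts at the TRGB appears as the extreme of the first derivative (the same feature described above as a sharp trough when the grid is reversed).

```python
import numpy as np
from scipy.stats import gaussian_kde
from scipy.signal import savgol_filter

def measure_trgb(mags, search=(17.0, 21.0), n_boot=1000, subset=20000,
                 step=0.01, seed=0):
    """Bootstrap TRGB estimate: resample a subset of the catalogue, KDE-smooth
    the magnitude distribution, differentiate it with a Savitzky-Golay filter
    and record the magnitude of the sharpest change in source counts."""
    rng = np.random.default_rng(seed)
    mags = np.asarray(mags, dtype=float)
    grid = np.arange(search[0], search[1], step)          # bright -> faint
    estimates = np.empty(n_boot)
    for i in range(n_boot):
        sample = rng.choice(mags, size=min(subset, mags.size), replace=True)
        density = gaussian_kde(sample)(grid)
        slope = savgol_filter(density, window_length=51, polyorder=3, deriv=1)
        estimates[i] = grid[np.argmax(slope)]             # sharpest change in counts
    return estimates.mean(), estimates.std()
```

The scatter of the bootstrap estimates can then serve as an uncertainty on the adopted TRGB magnitude.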
In F115W we are roughly complete to F115W = \(25.78\), around \(\sim\)2.8 magnitudes below the RC population visible at F115W \(\sim 23.0\) mag. The RC is fairly independent of stellar age and metallicity, and as such they are known to be reasonable standard candles, occupying a tight region in CMD space, deviations from which can be used to derive differential reddening effects within the galaxy. Blue sources on the UMS contaminate the distinctive features in the post-MS luminosity function. Thus we construct a second luminosity function for this filter consisting solely of red objects with F115W - F200W \(>0.5\). In doing so a new structure is revealed; a shallower slope on the faint side of the RC emerges. The red luminosity function also shows the prominent AGBb (Ferraro et al., 1999) at F115W = \(22.0\) and the TRGB at F115W = \(19.06\). The prominent nature of the position of the TRGB will allow us to constrain the distance to NGC 6822 and improve the understanding of the sloped relation visible in the IR. Above this, we begin to see the AGB form a shoulder to F115W = \(17.8\). Saturation affects the completeness of our catalogue at around F115W = \(16.0\). This low saturation limit will cause our sample to be less sensitive to the brightest sources detected in \(JHK_{s}\) data in previous surveys.
In F356W the bright central peak in the luminosity function at F356W = \(22.2\) corresponds to the RC. The distribution begins to rise with the increased number of sources at the lower end of the RGB, however sensitivity limits take effect at this point, and as such we place the faint limit at the peak of this shoulder F356W = \(23.27\). Above the RC, the small peak of the AGBb is at F356W = \(21\), from which the smooth decline of the RGB turns over at the TRGB at F356W = \(17.5\). The distribution of evolved stars is in steady decline until saturation limits the detections from F356W \(\sim 14.5\).
Finally, we present F770W, which is limited to F770W = \(19.25\), below the TRGB at F770W \(\sim 17.8\). The position of the TRGB is uncertain as it is highly affected by the bin sizes due to the low source count; later work will formalise this value. Above the TRGB the AGB population forms a smooth flat shoulder until bright dusty sources peak the distribution just below the saturation limit at F770W = \(12.2\).
Overall, as the filter wavelength increases, both the faint and bright sensitivity limits move to brighter magnitudes and the range between them shrinks. This effect, paired with the low luminosity of the Rayleigh-Jeans tail at these longer wavelengths, causes the source count to drop significantly.
### Parallel Field Photometry
The spatial coverage of the two parallel fields is shown in Figure 1. NIRCam parallel B contains the localised star forming region Hubble IV, and NIRCam parallel A contains no known areas of active star formation but is contaminated by the PSF spikes of a bright off-field foreground star. We conduct photometry for the two NIRCam parallel imaging fields and plot the F115W - F200W vs. F200W CMD in Figure 8. The photometric parameters listed in appendix Table 10 are kept the same for consistency across fields.
Both parallels A and B have source detections approximately 0.5 magnitudes deeper than the main field, likely due to the decreased crowding in the regions. The deeper detection and lower photometric scattering allows us to detect the base of the RGB. This will enable us to constrain the population age range accurately in future work.
NIRCam parallel B contains young UMS stars in a tight track on the left of the CMD. The RGB turns off and extends upwards but the TRGB is not well defined. The RC sits prominently out of the RGB to the left and the intermediate-age RHeB track extends vertically from it. A faint track of SG stars splits from the RGB at F200W\(\sim\)19 and is well separated from a small collection of AGB stars above the TRGB calculated in Table 4.
NIRCam parallel A does not contain many UMS stars beyond a short scattered track above the MSTO. Similarly the RHeB stars are only marginally present and no obvious SG sequence is seen. The relative lack of very young stars is due to there being no active star-forming region in the FoV.
## 5 Conclusion and Summary
We observed the central stellar bar of NGC 6822, utilizing the spatial resolution and sensitivity afforded by JWST NIRCam and MIRI to characterise the IR stellar populations of this isolated metal-poor dwarf galaxy. Our observations were designed to image the most dust enshrouded stars from evolved TP-AGBs to young YSOs embedded within the super star cluster Spitzer I.
* We produce point-source catalogues with sensitivities of approximately \(25.8\) - \(14.0\) for the NIRCam filters, and \(19.25\) - \(10.0\) for the MIRI filters. In both cases, the brightest stars in NGC 6822 are saturated.
* We estimate positions of the TRGB in NIRCam and MIRI filters up to and including F1000W. The falling SED of RGB stars in IR wavelengths cause the stars to be too faint to detect in our F1500W and F2100W data.
* We show CMDs of varying colour combinations, using JWST equivalents of UKIRT \(JHK_{s}\) and _Spitzer_ filter combinations from previous surveys of NGC 6822 and the Magellanic Clouds to guide the placement of source classifications when pairing NIRCam and MIRI data.
* We detect populations of carbon- and oxygen-rich AGB stars, and match them to independently classified IR catalogues to determine the boundary between them. They are distinct from one another and separate from a younger SG population that is also detected.
* We observe, for the first time in the IR, the RC and RHeB populations as well as the elusive and short-lived AGBb phase and
\begin{table}
\begin{tabular}{l c c c c} \hline \hline
Filter & Source Count & Faint Limit & Bright Limit & TRGB (error) \\
 & & [VegaMag] & [VegaMag] & [VegaMag] \\ \hline
F115W & 765756 & 25.78 & 15.3 & 19.06(05) \\
F200W & 539361 & 24.70 & 14.9 & 17.85(13) \\
F356W & 175346 & 23.27 & 14.5 & 17.74(30) \\
F444W & 147328 & 23.06 & 14.0 & 17.65(12) \\
F770W & 7212 & 19.25 & 12.2 & 17.81(04) \\
F1000W & 4782 & 19.98 & 11.5 & 17.48(21) \\
F1500W & 1352 & 16.06 & 10.0 & - \\
F2100W & 269 & 14.43 & 10.0 & - \\ \hline \hline
\end{tabular}
\end{table}
Table 4: Source count per filter and sensitivity limits estimated by locating the turnover at the lower end of the luminosity functions, and upper limits placed around the brightest collection of sources. The TRGB is calculated by finding the point of steepest decline in the luminosity function. All magnitudes are given in the Vega system.
demonstrate the position they inhabit on several CMD colour combinations.
Using parallel near-IR imaging of fields outside the central stellar bar of NGC 6822, we show that the young populations are absent or heavily reduced in number, owing to the probable lack of star formation in this area of the galaxy.
_Facilities: JWST_ (NIRCam & MIRI) - James Webb Space Telescope. _Software:_ jhat (Rest 2023), image1overf.py (Willott 2022), astropy (Astropy Collaboration et al. 2013), starbug ii (Nally & Jones 2022), and topcat (Taylor 2005).
## Acknowledgements
This work is based on observations made with the NASA/ESA/CSA James Webb Space Telescope. The data were obtained from the Mikulski Archive for Space Telescopes at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-03127 for JWST. These observations are associated with program #1234. CN acknowledge the support of an STFC studentship. OCJ has received funding from an STFC Webb fellowship. MM and NH acknowledge support through NASA/JWST grant 80NSSC22K0025 and MM and LL acknowledge support from the NSF through grant 2054178. MM and NH acknowledge that a portion of their research was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration (80N0M01B0004). PJK acknowledge support from Science Foundation Ireland/Irish Research Council Pathway programme under Grant Number 21/PATH-S/9360. ASH is supported in part by an STScI Postdoctoral Fellowship.
## Data Availability
The data used in this study may be obtained from the Mikulski Archive for Space Telescopes (MAST; [https://mast.stsci.edu/](https://mast.stsci.edu/)) and are associated with program #1234.
Figure 7: Luminosity functions for the F115W (upper), F356W (middle) and F770W (bottom) filters. Log scaled distributions are drawn in black and linear scaled distributions in filled grey. Bin sizes have been scaled optimally for the number of sources detected in each band. The inferred completeness estimates are denoted with a solid purple line and the calculated TRGB is denoted in a dashed line. In F115W the catalogue is cropped with F115W-F200W>0.5 to remove the UMS and highlight the features within the RGB, this is plotted in the solid red line.
Figure 8: Colour-Magnitude Diagrams F115W – F200W vs. F200W in Hess format for the parallel imaging fields. Parallel B (left) is the NIRCam module B, appearing closest to the centre of NGC 6822, containing Hubble IV the star forming region. Parallel A (right) is the NIRCam module A, the furthest field away from the centre of the galaxy. |
2309.09579 | Time Series Forecasting for Air Pollution in Seoul | Accurate air pollution forecasting plays a crucial role in controlling air
quality and minimizing adverse effects on human life. Among pollutants,
atmospheric particulate matter (PM) is particularly significant, affecting both
visibility and human health. In this study the concentration of air pollutants
and comprehensive air quality index (CAI) data collected from 2015 to 2018 in
Seoul, South Korea was analyzed. Using two different statistical models: error,
trend, season (ETS) and autoregressive moving-average (ARIMA), measured monthly
average PM2.5 concentration were used as input to forecast the monthly averaged
concentration of PM2.5 12 months ahead. To evaluate the performance of the ETS
model, five evaluation criteria were used: mean error (ME), root mean squared
error (RMSE), mean absolute error (MAE), mean percentage error (MPE), and mean
absolute percentage error (MAPE). Data collected from January 2019 to December
2019 were used for cross-validation check of ETS model. The best fitted ARIMA
model was determined by examining the AICc (Akaike Information Criterion
corrected) value. The results indicated that the ETS model outperforms the
ARIMA model. | Sean Jeon, Seungmin Han | 2023-09-18T08:39:28Z | http://arxiv.org/abs/2309.09579v1 | # Time Series Forecasting for Air Pollution in Seoul
###### Abstract
Accurate air pollution forecasting plays a crucial role in controlling air quality and minimizing adverse effects on human life. Among pollutants, atmospheric particulate matter (PM) is particularly significant, affecting both visibility and human health. In this study the concentration of air pollutants and comprehensive air quality index (CAI) data collected from 2015 to 2018 in Seoul, South Korea was analyzed. Using two different statistical models: error, trend, season (ETS) and autoregressive moving-average (ARIMA), measured monthly average PM\({}_{2.5}\) (particles with a diameter less than 2.5\(\upmu\)m) concentration were used as input to forecast the monthly averaged concentration of PM\({}_{2.5}\) 12 months ahead. To evaluate the performance of the ETS model, five evaluation criteria were used: mean error (ME), root mean squared error (RMSE), mean absolute error (MAE), mean percentage error (MPE), and mean absolute percentage error (MAPE). Data collected from January 2019 to December 2019 were used for cross-validation check of ETS model. The best fitted ARIMA model was determined by examining the AICc (Akaike Information Criterion corrected) value. The results indicated that the ETS model outperforms the ARIMA model.
## 1 Introduction
Air pollution profoundly impacts human health, the economy, and environmental sustainability. It can lead to serious illnesses like lung cancer and heart disease. Monitoring air quality provides real-time data to control the causes of air pollution. Forecasting air quality, in addition, can provide the information necessary for designing mitigation actions to prevent damage caused by air pollution (Zaini, 2022). Developing accurate forecasting models can improve air pollution forecasting and warnings, thus facilitating better control of pollution in Seoul. Therefore, precise monitoring and forecasting are very important.
The main purpose of this study is to forecast air pollution in Seoul through the analysis of air pollutants, the comprehensive air-quality index (CAI), and meteorological variables. The CAI serves the purpose of making air quality easily understandable to the public and is calculated based on concentrations of SO\({}_{2}\), CO, O\({}_{3}\), NO\({}_{2}\), PM\({}_{10}\) and PM\({}_{2.5}\). In this study, the statistical characteristics of and correlations between the six pollutants have been examined. Furthermore, meteorological variables which are highly related to the six pollutants have also been explored. Exponential smoothing and autoregressive integrated moving-average (ARIMA) methods have then been applied to forecast the concentration of PM\({}_{2.5}\) in particular. For the analysis of the correlation between air pollutants and CAI, specific regions within Seoul - City-hall,
Gangseo-gu, Seocho-gu, and Songpa-gu - have been selected. Gangseo-gu, among these regions, has been chosen for the development of forecasting models for PM\({}_{2.5}\). The real-time data measured every hour from 2015 to 2018 were used to develop the forecasting models. For the evaluation of the applied forecasting models, the data from 2019 to 2020 have been compared with the in-sample data.
## 2 Relevant literature review
It has been demonstrated in previous studies that a range of variables and forecasting models are utilized for predicting air pollution. In this report, an exploration of the types of variables suitable for air pollution forecasting will be conducted through a literature review. Also, an examination will be undertaken of the various forecasting methods that have been utilized and developed in the existing literature.
### Variables for forecasting
It has been observed from prior studies that a multitude of variables are employed for the prediction of air pollution. Air pollutants and meteorological variables commonly serve as independent factors for forecasting. For instance, PM10 and PM2.5 are significant air pollutants with the potential to significantly impact both human health and society. Elangasinghe (2014) conducted a time series analysis of PM\({}_{10}\) and PM\({}_{2.5}\) to forecast their concentrations in a coastal site. The application of an artificial neural network (ANN) model facilitated accurate forecasts of PM concentrations, which are heavily influenced by meteorological factors. Earlier studies have also been undertaken to enhance and develop existing forecasting models. Wang _et al_. (2015) developed a hybrid forecasting model for daily PM\({}_{10}\) and SO\({}_{2}\) concentrations. Given the close relationship between air pollutant concentrations and the geological and meteorological environment, correlated variables like wind speed can be simultaneously considered. Cogliani (2001) demonstrated the correlation between the air pollution index and meteorological variables such as thermic excursion, the previous day's air pollution index, and the daily average wind speed. Forecasting often involves using statistical metrics like mean pollutant concentrations. Kumar and Goyal (2011) utilized 24-hour average values of RSPM (Respiable Suspended Particulate Matter), SO\({}_{2}\), NO\({}_{2}\) and SPM (Suspended Particulate Matter). Moreover, indices such as the Air Quality Index (AQI), designed to portray air quality, are pivotal variables for air pollution forecasting. The AQI conveys air quality through categorized grades that differ among countries. Ganesh _et al_. (2017) employed regression models to predict AQI in Delhi, India, and Houston, America. The efficacy of air pollution forecasting has been demonstrated through the utilization of pollutant concentrations, statistical values for each variable, and various indices calculated using air pollutants.
### Forecasting Models
The techniques and tools for real-time air quality forecasting can be categorized into three groups: simple empirical approaches, parametric or non-parametric approaches, and physically based approaches (Zhang _et al_., 2012). In this report, parametric or non-parametric approaches, rooted in statistical methods, will be examined through the literature review. The variables that exhibit a high correlation with air pollution can be individually examined (Anthes and Warner, 1978). Cogliani (2001) discovered that daily thermic excursion has a strong correlation with the air pollution index by analyzing measured meteorological variables. Statistical methods have been developed for air pollution forecasting and can be effectively used for predicting air
quality. Gracia (2017) built four statistical models to forecast PM\({}_{10}\) concentration in the metropolitan area of northern Spain: Vector autoregressive moving-average (VARMA), autoregressive integrated moving-average (ARIMA), multilayer perceptron (MLP), neural networks and support vector machines (SVMs) with regression. Among these models, ARIMA, which explains autocorrelations in data, is one of the most widely utilized for air pollution forecasting. Numerous studies have been conducted to improve existing forecasting models. For instance, Chaloulakou, Saisana, and Spyrellis (2003) conducted a comparative assessment of forecasting models to predict summertime ozone levels in Athens. More recently, computational intelligence techniques like machine learning and deep learning technologies have been applied to air quality forecasting to enhance existing models (Zaini _et al._, 2022).
## 3 Data analysis
### Data
Concentrations of six types of air pollutants, along with CAI and meteorological variables, have been utilized for the analysis of air pollution in Seoul, South Korea. The major air pollutants include sulphur dioxide (SO\({}_{2}\)), carbon monoxide (CO), ozone (O\({}_{3}\)), nitrogen dioxide (NO\({}_{2}\)) and particulate matter (PM\({}_{10}\) and PM\({}_{2.5}\)). SO\({}_{2}\), a colorless gas, is emitted from volcanic eruptions and fossil fuel usage in industries and electricity generation (Makgato and Chirwa, 2020). CO, also emitted from fossil fuels, can have fatal effects on human health at high concentration levels (Chen et al., 2021). Ground-level O\({}_{3}\), formed from nitrogen oxide and volatile organic compounds, poses risks to both human health and the environment (Chen et al., 2021). NO\({}_{2}\), emitted from fossil fuels and power plants, is another significant air pollutant (Lu et al., 2021). PM\({}_{10}\) and PM\({}_{2.5}\) are fine atmospheric particles with diameters less than 10 \(\upmu\)m and 2.5 \(\upmu\)m, respectively. The CAI, calculated from the concentrations of the six pollutants mentioned above, provides a simplified representation of daily air quality. Additionally, meteorological variables are examined, which have strong correlation with pollutant concentrations.
The Ministry of Environment of the Republic of Korea provides real-time data on six air pollutants and the comprehensive air-quality index (CAI) from 614 monitoring points across 162 cities and counties. This study focuses on analyzing the concentrations of these six pollutants and CAI from four specific regions within Seoul (Table 1). The data has been collected every hour from each region. Monthly averaged values are used for forecast.
### Statistical characteristics of variables
Strong seasonal trends in the concentrations of the six air pollutants are observed at the four distinct measuring points. The average values of these six air pollutants and the CAI are presented in Table 2. The data reveal that Gangseo-gu exhibits the highest values of both the CAI and the majority of air pollutants among the four regions.
\begin{table}
\begin{tabular}{l l l l} \hline \hline
**Location** & **Address** & **Longitude** & **Latitude** \\
City Hall & 15, Deoksugung-gil, Jung-gu, Seoul, Republic of Korea & 126.9747 & 37.5643 \\
Gangseo-gu & 71, Gangseo-ro 45da-gil, Gangseo-gu, Seoul, Republic of Korea & 126.8351 & 37.5447 \\
Banpo-dong & 16, Sinbapo-ro 15-gil, Seocho-gu, Seoul, Republic of Korea & 126.9945 & 37.5046 \\
Bang-yi-dong & 59, Guchenmyeon-ro 42-gil, Gangdong-gu, Seoul, Republic of Korea & 127.1368 & 37.545 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Selected regions for analyzing and forecasting
The following figures show the monthly concentrations of the air pollutants. The graph in Figure 1 indicates that the SO\({}_{2}\) concentration is elevated during the spring period, especially in February and March. In Gangseo-gu, however, the SO\({}_{2}\) concentration peaks in April rather than February.
As shown in Figure 2, the trend of the CO concentration is similar to that of SO\({}_{2}\) in that it is high when the temperature is low (e.g., in the winter season). CO, however, exhibits a clearer trend than SO\({}_{2}\), generally displaying lower concentrations in the summer season.
\begin{table}
\begin{tabular}{l l l l l l l l} \hline
 & SO\({}_{2}\)(ppm) & CO(ppm) & O\({}_{3}\)(ppm) & NO\({}_{2}\)(ppm) & PM\({}_{10}\)(\(\mu\)g/m\({}^{3}\)) & PM\({}_{2.5}\)(\(\mu\)g/m\({}^{3}\)) & CAI(Index) \\
City-hall & 0.004 & 0.531 & 0.024 & 0.034 & 41.150 & 23.384 & 84.031 \\
Gangseo-gu & 0.005 & 0.465 & 0.025 & 0.030 & 46.264 & 24.424 & 85.995 \\
Seocho-gu & 0.004 & 0.469 & 0.024 & 0.029 & 46.102 & 23.622 & 84.737 \\
Songpa-gu & 0.004 & 0.511 & 0.021 & 0.031 & 46.178 & 24.350 & 85.339 \\ \hline
\end{tabular}
\end{table}
Table 2: **Average of air pollutants and CAI concentrations**
Figure 1: **Seasonality of SO\({}_{2}\)**
Figure 2: **Seasonality of CO**
Figure 3 shows that the concentration of O\({}_{3}\) is high during the summer; the overall trend is opposite to those of SO\({}_{2}\) and CO. In Figure 4, NO\({}_{2}\) shows the weakest seasonal trend among the air pollutants, although its concentration is lower when the temperature is high.
Similar patterns are observed in Figures 5 and 6 for PM\({}_{10}\) and PM\({}_{2.5}\), whose concentrations are high in spring. However, PM\({}_{2.5}\) maintains a relatively high concentration throughout the year, while PM\({}_{10}\) is markedly higher in February.
### Correlation between variables
Among the six air pollutants, it is acknowledged that PM\({}_{2.5}\) not only poses risks to human health but also significantly affects the quality of life due to its association with visibility (Cobourn, 2010; McKeen et al., 2007). In this report, the focus will be on examining the correlation between PM\({}_{2.5}\) and other variables for the purpose of air pollution forecasting. As demonstrated in the graph above, the PM\({}_{2.5}\) concentration is elevated during the cold and windy spring season. Table 3 presents the correlation of PM\({}_{2.5}\) with the other five air pollutants. Among these pollutants, PM\({}_{10}\) exhibits the highest correlation with PM\({}_{2.5}\) in each selected region. The data indicates that CO possesses the next highest correlation, while SO\({}_{2}\) and NO\({}_{2}\) also maintain relatively strong correlations compared to O\({}_{3}\). Among the meteorological variables, temperature demonstrates a notably high correlation with PM\({}_{2.5}\) (Table 4). Wind speed and precipitation also exhibit comparatively high correlations with PM\({}_{2.5}\).
### Trend and seasonality of PM\({}_{2.5}\)
Gangseo-gu, which shows the highest PM\({}_{2.5}\) concentration, has been selected for further
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline
**Location** & **Temperature(\({}^{\circ}\)C)** & **Precipitation(mm)** & **Wind speed(m/s)** & **Local air pressure(hPa)** & **Visibility(10m)** \\
**City hall** & -0.095 & -0.061 & -0.096 & 0.062 & -0.592 \\
**Gangseo-gu** & -0.094 & -0.073 & -0.132 & 0.058 & -0.557 \\
**Seocho-gu** & -0.136 & -0.068 & -0.092 & 0.073 & -0.561 \\
**Songpa-gu** & -0.188 & -0.070 & -0.122 & 0.101 & -0.515 \\
**Average** & -0.128 & -0.068 & -0.110 & 0.073 & -0.556 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Correlation between PM\({}_{2.5}\) and meteorological variables
Figure 6: Seasonality of PM\({}_{2.5}\)
analysis and forecasting. Figure 7 illustrates the trend, seasonal and random components of PM\({}_{2.5}\) in Gangseo-gu. The trend component distinctly depicts a declining tendency from the third quarter of 2015 to the third quarter of 2016, followed by an increasing pattern for the rest of the period. In terms of seasonality, it is evident that the concentration is elevated during the spring and winter seasons, characterized by relatively cold weather and low air temperatures.
## 4 Forecasting models
In this study, exponential smoothing and autoregressive integrated moving-average (ARIMA) models have been applied to forecast the PM\({}_{2.5}\) concentration in Gangseo-gu. In this report, formulas and some application cases of exponential smoothing and ARIMA models will be examined.
### Error, trend, seasonal (ETS) model
The exponential smoothing method was suggested in the late 1950s (Brown, 1959; Holt, 1957; Winters, 1960). It is based on the idea that the average of past observations affects future values, with higher weights on more recent values (Hyndman and Athanasopoulos, 2018). In other words, the weight assigned to observations diminishes exponentially as they become older. The ETS model, which is the statistical model underlying exponential smoothing, was applied to forecast PM\({}_{2.5}\) in this paper. The ETS model accounts for variations using both additive and multiplicative approaches within the trend and seasonal components (Pegels, 1969). The parameters used for the ETS model are listed in Table 5 (Hyndman and Athanasopoulos, 2018).
Three forecasting models were employed by Cekim (2020) for the prediction of PM10 concentration across 18 cities in Turkey: error, trend, and seasonal (ETS); autoregressive integrated moving average (ARIMA); and singular spectrum analysis (SSA). To assess the appropriateness of each forecasting model in every city, the Root Mean Square Error (RMSE) was utilized. In each city, the most effective model was implemented, and the study proposed forecasted values.
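As an illustration of how such an ETS specification can be fitted in practice, the sketch below uses `statsmodels`; the paper's own tooling is not specified, and the file name and column are placeholders. The configuration shown is the additive damped specification, ETS(A,Ad,A), that the results later favour.

```python
import pandas as pd
from statsmodels.tsa.exponential_smoothing.ets import ETSModel

# Placeholder input: monthly averaged PM2.5 for Gangseo-gu, 2015-2018.
y = pd.read_csv("pm25_gangseo_monthly.csv", index_col=0, parse_dates=True)["pm25"]
y = y.asfreq("MS")

# ETS(A,Ad,A): additive error, additive damped trend, additive seasonality.
model = ETSModel(y, error="add", trend="add", damped_trend=True,
                 seasonal="add", seasonal_periods=12)
fit = model.fit(disp=False)

print(fit.summary())
print(fit.forecast(steps=12))   # 12-month-ahead forecast for 2019
```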
### Autoregressive integrated moving-average (ARIMA) model
Autoregressive integrated moving-average (ARIMA) model is one of the most commonly
\begin{table}
\begin{tabular}{l l} \hline \hline
**Components** & **Methods** \\
**Error** & A(additive), M(multiplicative) \\
**Trend** & N(none), A(additive), Ad(additive damped), M(multiplicative), Md(multiplicative damped) \\
**Seasonal** & N(none), A(additive), M(multiplicative) \\ \hline \hline \end{tabular}
\end{table}
Table 5: **Parameters used for error, trend, seasonal (ETS) model**
Figure 7: Trend, seasonal and random components of PM\({}_{2.5}\) concentration
used approaches to time series forecasting (Hyndman and Athanasopoulos, 2018). ARIMA is composed of autoregressive (AR) model and moving average (MA) model with the degree of differences to achieve stationary. The autoregressive model of order p can be denoted as (Hyndman and Athanasopoulos, 2018):
\[y_{t}=c+\varphi_{1}y_{t-1}+\varphi_{2}y_{t-2}+\cdots+\varphi_{p}y_{t-p}+e_{t} \tag{1}\]
The moving average model of order q can be represented as (Hyndman and Athanasopoulos, 2018):
\[y_{t}=c+e_{t}+\theta_{1}e_{t-1}+\theta_{2}e_{t-2}+\cdots+\theta_{q}e_{t-q} \tag{2}\]
As a result, the autoregressive moving average (ARMA) model can be expressed as (Hyndman and Athanasopoulos, 2018):
\[y^{\prime}_{t}=c+\varphi_{1}y^{\prime}_{t-1}+\varphi_{2}y^{\prime}_{t-2}+ \cdots+\varphi_{p}y^{\prime}_{t-p}+e_{t}+\theta_{1}e_{t-1}+\theta_{2}e_{t-2}+ \cdots+\theta_{q}e_{t-q} \tag{3}\]
where \(y^{\prime}_{t}\) is the differenced series. The equation of the ARIMA model including differencing can be expressed as (Hyndman and Athanasopoulos, 2018):
\[\left(1-\varphi_{1}B-\cdots-\varphi_{p}B^{p}\right)(1-B)^{d}\;y_{t}=c+(1+ \theta_{1}B+\cdots+\theta_{q}B^{q})e_{t} \tag{4}\]
Gracia (2017) forecasted PM\({}_{10}\) concentration in northern Spain using four different models: autoregressive integrated moving-average (ARIMA), vector autoregressive moving-average (VARMA), multilayer perceptron (MLP) and neural networks and support vector machines (SVMs) with regression. In that study, the ARIMA model demonstrated effective performance for both short-term (one month) and comparatively long-term (several months) forecasting horizons.
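A hedged sketch of how equation (4) can be estimated in practice is given below, again using `statsmodels` (this is only one possible implementation, not the paper's own code). It applies the lag-12 seasonal differencing used later in the results and compares candidate orders by AICc; the input file is a placeholder.

```python
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Placeholder input: monthly averaged PM2.5 for Gangseo-gu, 2015-2018.
y = pd.read_csv("pm25_gangseo_monthly.csv", index_col=0, parse_dates=True)["pm25"]
y = y.asfreq("MS")
y_diff = y.diff(12).dropna().asfreq("MS")   # seasonal differencing (lag 12)

candidates = [(2, 0, 0), (3, 0, 0), (2, 1, 0), (2, 2, 0), (2, 1, 1), (2, 1, 2)]
aicc = {order: ARIMA(y_diff, order=order).fit().aicc for order in candidates}
best = min(aicc, key=aicc.get)
print(best, aicc[best])

# Forecasts of the differenced series must be integrated back (or a seasonal
# order can be supplied to ARIMA directly) to recover PM2.5 levels.
```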
## 5 Results and Discussion
### Results of Error, trend, seasonal (ETS) model
In this study, five ETS models have been applied to predict the concentration of PM2.5 in Gangseo-gu, Seoul: ANN, AAN, AAN damped, AAA, and AAA damped. Figure 8 shows the forecasting results of each model.
Figure 8: Results from error, trend, seasonal (ETS) models
To assess the performance of the ETS models, five evaluation criteria have been considered: mean error (ME), root mean squared error (RMSE), mean absolute error (MAE), mean percentage error (MPE), and mean absolute percentage error (MAPE). Among the applied models, the AAA damped model demonstrates the best fit for both the training and test sets, based on all of the criteria.
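The criteria above follow their usual definitions; a small helper along the following lines (the exact formulas are assumed, since the paper does not state them) computes all five from actual and predicted values.

```python
import numpy as np

def forecast_errors(actual, predicted):
    """Compute ME, RMSE, MAE, MPE and MAPE (standard definitions assumed)."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    err = actual - predicted
    pct = 100.0 * err / actual
    return {
        "ME": err.mean(),
        "RMSE": np.sqrt(np.mean(err ** 2)),
        "MAE": np.abs(err).mean(),
        "MPE": pct.mean(),
        "MAPE": np.abs(pct).mean(),
    }
```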
For further verification, a cross-validation check has been conducted. The data were divided into an in-sample period covering 2015 to 2018 and an out-of-sample period from 2019. For the five ETS models referred to above, the mean absolute percentage errors (MAPE) were 26.97108, 25.42727, 26.89370, 34.72839, and 24.03151, respectively. The outcome of the cross-validation further confirms that the AAA damped model demonstrates the best performance.
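One plausible reading of this check is a rolling-origin evaluation over the 2019 hold-out, refitting the model at every forecast origin. The sketch below assumes a series that also contains the 2019 observations and reuses the ETS(A,Ad,A) configuration; it is an illustration of the idea rather than the exact procedure used in the paper.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.exponential_smoothing.ets import ETSModel

def rolling_origin_mape(y: pd.Series, test_start="2019-01-01") -> float:
    """One-step-ahead MAPE over the out-of-sample months, refitting an
    ETS(A,Ad,A) model on an expanding window at every origin (assumed setup)."""
    abs_pct_errors = []
    for t in y.loc[test_start:].index:
        history = y.loc[:t].iloc[:-1]      # all observations before month t
        fit = ETSModel(history, error="add", trend="add", damped_trend=True,
                       seasonal="add", seasonal_periods=12).fit(disp=False)
        pred = fit.forecast(1).iloc[0]
        abs_pct_errors.append(abs((y.loc[t] - pred) / y.loc[t]))
    return 100.0 * float(np.mean(abs_pct_errors))
```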
### Results of autoregressive integrated moving-average (ARIMA) model
The autoregressive integrated moving-average (ARIMA) model assumes that the data are stationary in mean and variance. Figures 6 and 7 show that the PM\({}_{2.5}\) concentration exhibits both seasonality and a trend. The ACF and PACF plots in Figure 9 also indicate that the PM\({}_{2.5}\) time series is not stationary, with values well above the significance threshold at lags 1, 2, 5, 6, 7, and 12. To make the time series stationary, seasonal differencing was implemented. The resulting ACF and PACF plots, as illustrated in Figure 10, demonstrate a rapid decline to zero in both the ACF and PACF.
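This stationarity check can be reproduced along the following lines (a sketch with a placeholder input file): compute the ACF and PACF of the raw series and of its lag-12 seasonal difference, and compare them against the significance bounds.

```python
import pandas as pd
from statsmodels.tsa.stattools import acf, pacf

y = pd.read_csv("pm25_gangseo_monthly.csv", index_col=0, parse_dates=True)["pm25"]
y = y.asfreq("MS")
y_diff = y.diff(12).dropna()          # seasonal differencing at lag 12

for name, series in [("raw", y), ("seasonally differenced", y_diff)]:
    print(name,
          "ACF:", acf(series, nlags=12).round(2),
          "PACF:", pacf(series, nlags=12).round(2))
```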
With the differenced data, ARIMA models with several different orders have been applied: (p,d,q) = (2,0,0), (3,0,0), (2,1,0), (2,2,0), (2,1,1), and (2,1,2). By evaluating the AICc values, it becomes evident that the ARIMA(2,1,1) model exhibits the best fit, with AICc values of 589.26, 591.55, 583.66, 593.72, 580.73, 580.73, and 582.55, respectively. The graph depicted in Figure 11 displays the forecasted values for the next 12 months as generated by the ARIMA(2,1,1) model. Figure 12 displays the residuals of the ARIMA(2,1,1) model, demonstrating that they exhibit the characteristics of white noise, indicating an absence of autocorrelation within this model.
\begin{table}
\begin{tabular}{l l r r r r r} \hline \hline
**Models** & **Data set** & **ME** & **RMSE** & **MAE** & **MPE** & **MAPE** \\ \hline
**ANN** & Training set & -6.159 & 161.863 & 133.532 & -4.825 & 19.821 \\
 & Test set & 86.922 & 293.186 & 221.636 & 0.367 & 26.971 \\
**AAN** & Training set & -0.525 & 161.776 & 133.545 & -3.985 & 19.686 \\
 & Test set & 116.452 & 296.549 & 218.268 & 4.926 & 25.427 \\
**AAN damped** & Training set & -3.520 & 161.606 & 133.457 & -4.469 & 19.764 \\
 & Test set & 88.679 & 293.586 & 221.563 & 0.626 & 26.894 \\
**AAA** & Training set & -2.329 & 112.831 & 93.151 & -3.855 & 14.608 \\
 & Test set & 147.288 & 238.507 & 192.212 & 13.232 & 22.076 \\
**AAA damped** & Training set & -11.400 & 112.850 & 92.564 & -5.089 & 14.749 \\
 & Test set & 133.950 & 229.923 & 182.985 & 11.349 & 21.096 \\ \hline \hline \end{tabular}
\end{table}
Table 5: **Evaluation criteria values of error, trend, seasonal (ETS) models**
Figure 11: Results from the ARIMA(2,1,1) model
Figure 10: ACF and PACF plots of differenced PM2.5 concentration data
Figure 9: ACF and PACF plots of PM2.5 concentration data
### Comparison of ETS and ARIMA models
Some of the evaluation criteria, such as MAPE and AICc, have been employed to facilitate the comparison of forecasting models' performance. AICc becomes particularly helpful when dealing with models of the same category. For instance, when assessing various ARIMA candidate models featuring distinct parameters, AICc serves as a beneficial evaluation criterion. However, AICc lacks applicability when contrasting the performance of diverse forecasting models. Furthermore, the comparison of evaluation criteria for ETS and ARIMA remains valid only when employing data of the same orders of differencing.
In this study, therefore, a comparison was made between the ETS(A,Ad,A) model and the ARIMA(0,0,0)(0,1,0) model, with both models employing undifferenced data. Figure 13 and Figure 14 present the outcomes of the ETS(A,Ad,A) and ARIMA(0,0,0)(0,1,0) models, respectively. The evaluation criteria-based analysis indicates that the ETS(A,Ad,A) model outperforms the ARIMA model in forecasting PM2.5 concentration (Table 6).
Figure 12: Residuals of the ARIMA(2,1,1) model
Figure 13: Results from ETS(A,Ad,A) model
## 6 Conclusions
In this report, the monthly average concentration of PM\({}_{2.5}\) in Gangseo-gu, Seoul, South Korea, has been forecasted for a 12-month period (from January 2019 to December 2019) using two distinct models: the error, trend, seasonal (ETS) model and the autoregressive integrated moving-average (ARIMA) model. The results indicate that both models are highly effective in predicting the monthly PM\({}_{2.5}\) concentration. Nonetheless, a comparison of the two models reveals that the ETS model (RMSE = 5.837 for the test set) outperforms the ARIMA model (RMSE = 7.732 for the test set).
In summary, considering its superior performance, the ETS model can be extended to other regions like City-hall, Seocho-gu, and Songpa-gu. For future studies, variables highly correlated with PM\({}_{2.5}\), such as wind speed and temperature, could be considered to enhance forecasting accuracy.
\begin{table}
\begin{tabular}{l l l l l l l} \hline \hline
**Models** & **Dataset** & **ME** & **RMSE** & **MAE** & **MPE** & **MAPE** \\ \hline
ETS(A,Ad,A) & Training set & -0.605 & 4.608 & 3.190 & -6.768 & 15.791 \\
 & Test set & 3.358 & 5.837 & 5.489 & 12.043 & 22.768 \\
ARIMA(0,0,0)(0,1,0) & Training set & 0.111 & 3.651 & 2.996 & -2.698 & 14.028 \\
 & Test set & 5.073 & 7.732 & 6.148 & 14.915 & 21.189 \\ \hline \hline \end{tabular}
\end{table}
Table 6: **Evaluation criteria values of ETS(A,Ad,A) and ARIMA(0,0,0)(0,1,0) model**
Figure 14: **Results from ARIMA(0,0,0)(0,1,0) model** |
2308.16770 | Enhancing PLM Performance on Labour Market Tasks via Instruction-based
Finetuning and Prompt-tuning with Rules | The increased digitization of the labour market has given researchers,
educators, and companies the means to analyze and better understand the labour
market. However, labour market resources, although available in high volumes,
tend to be unstructured, and as such, research towards methodologies for the
identification, linking, and extraction of entities becomes more and more
important. Against the backdrop of this quest for better labour market
representations, resource constraints and the unavailability of large-scale
annotated data cause a reliance on human domain experts. We demonstrate the
effectiveness of prompt-based tuning of pre-trained language models (PLM) in
labour market specific applications. Our results indicate that cost-efficient
methods such as PTR and instruction tuning without exemplars can significantly
increase the performance of PLMs on downstream labour market applications
without introducing additional model layers, manual annotations, and data
augmentation. | Jarno Vrolijk, David Graus | 2023-08-31T14:47:00Z | http://arxiv.org/abs/2308.16770v1 | Enhancing PLM Performance on Labour Market Tasks via Instruction-based Finetuning and Prompt-tuning with Rules
###### Abstract
The increased digitization of the labour market has given researchers, educators, and companies the means to analyze and better understand the labour market. However, labour market resources, although available in high volumes, tend to be unstructured, and as such, research towards methodologies for the identification, linking, and extraction of entities becomes more and more important. Against the backdrop of this quest for better labour market representations, resource constraints and the unavailability of large-scale annotated data cause a reliance on human domain experts. We demonstrate the effectiveness of prompt-based tuning of pre-trained language models (PLM) in labour market specific applications. Our results indicate that cost-efficient methods such as PTR and instruction tuning without exemplars can significantly increase the performance of PLMs on downstream labour market applications without introducing additional model layers, manual annotations, and data augmentation.
taxonomy, transformer, natural language processing, labour market intelligence
Footnote †: [https://esco.ec.europa.eu/en](https://esco.ec.europa.eu/en)
## 1 Introduction
The increasing availability of raw labour market information allows businesses, educational facilities and job seekers to gain a clear and more complete understanding of the labour market [1]. While the increasing volumes of available data provide opportunities, there are several challenges towards fully utilizing the data.
On the one hand, the majority of available data is of an unstructured nature, with a lack of large-scale annotated datasets that could be used in training, and/or fine tuning of models for downstream applications. On the other hand, there is much effort in creating structured representations of labour market data, through taxonomies and ontologies such as ESCO,1 ISCO,2 or O*NET.3
Footnote 1: [https://www.ilo.org/public/english/bureau/stat/isco/isco88/](https://www.ilo.org/public/english/bureau/stat/isco/isco88/)
Footnote 2: [https://www.onetcenter.org/overview.html](https://www.onetcenter.org/overview.html)
Leveraging structured ontologies to enrich and interpret unstructured labour market data has considerable research attention, through skill and occupation recognition, classification, and linking [2, 3, 4, 5, 6]. These different downstream tasks can prove invaluable in enabling better workforce and labour market insights, identification of trends and temporal patterns, and providing structured data or enrichments that can be applied as feature representation for job or career path recommendations [7, 8, 9].
Many of these approaches rely on supervised learning, where a commonly identified limitation in the literature is the limited availability of multilingual, labour market and task-specific datasets. In addition, the dynamic nature of the labour market makes it very difficult to keep structured representations of the labour market (i.e. labour market ontologies and taxonomies) up-to-date and relevant; updating and maintaining these knowledge structures is typically done by human domain experts, making the process time- and resource-intensive, and meaning that whenever such a structure is updated, datasets for supervised learning may become obsolete.
In this paper, we propose a novel method that relies on pretrained language models (PLMs), prompt tuning with rules (PTR) [10], and the structured multilingual ESCO taxonomy, to efficiently and cheaply generate large amounts of labeled data for learning a variety of downstream tasks for extracting structured information from unstructured labour market data, specifically: (i) relation classifiers, that aim to predict the type of relation between skills and occupations, (ii) entity classifiers, that aim to classify labour market entities as skill or occupation, (iii) entity linkers, which aim to link various surface forms of labour market entities to their canonical underlying skill or occupation entity, and (iv) question answering approaches, that aim to determine whether a descriptive text corresponds to the associated skill or occupation.
In this paper, we aim to address the following research questions:
1. Are "out-of-the-box" PLMs capable of generalizing learned behavior to labour market specific applications?
2. Does instruction- and sub-prompt-based finetuning of a PLM on a mixture of task-specific (i.e. general and labour market specific) datasets increase the performance on labour market specific benchmarks?
3. Is the tuned PLM able to transfer the learned behavior across labour market specific tasks?
In this paper, we demonstrate domain-specific prompt-based tuning and its effect on the performance of skill extraction, occupation classification, link prediction, and entity linking tasks. We propose leveraging instruction tuning without exemplars (i.e. no examples at inference time) and sub-prompts for a more cost-efficient solution for downstream labour market applications [11, 12, 13, 10]. We provide manually constructed templates that encode the knowledge embedded in the ESCO occupation and skill taxonomies. We benchmark different configurations of finetuning the PLMs, to demonstrate the effectiveness of e.g., adding instructions or sub-prompts.
## 2 Related Work
Recent successes of PLMs such as GPT [14], BERT [15], RoBERTA [16] and T5 [17] have demonstrated the usefulness and adaptability of the transformer architecture. Although these PLMs can capture rich knowledge from massive corpora, a fine-tuning process with extra task-specific data is still required to transfer their knowledge for downstream tasks. Besides fine-tuning language models for specific tasks, recent studies have explored better optimization and regularization techniques to improve fine-tuning.
Several works try to integrate ontological and/ or taxonomical knowledge into task-specific models, to improve the performance of downstream applications. Take the work by [18], who introduced _KnowBert_, a methodology that explicitly models entity spans in the input text. They further use an entity linker to retrieve relevant embeddings of the entity from a knowledge base to enhance their representations. Another approach would be the work by [19], the so-called _TransE_ model, that focused primarily on representing hierarchical relationships. Similar to our work, _TransE_ models multi-relational data from knowledge bases (i.e. triplestores) to improve performance for link prediction [19].
[20] took a different approach, proposing _ERNIE_, a method that consists of two stacked modules, namely the T-Encoder responsible for capturing lexical and syntactic information, and the K-Encoder responsible for augmenting this lexical and syntactical information with extra token-oriented knowledge from the underlying layer [20]. Lastly, we have _ESCOXLM-R_, which employs further pre-training on the ESCO taxonomy [6]. In addition to the masked language modelling (MLM) pre-training objective, the authors also introduce the so-called ESCO Relation Prediction (ERP) task to internalize knowledge of non-hierarchical relations within ESCO [6].
Another pre-training-based approach that leverages a self-supervised method to pre-train a deeply joint language-knowledge foundation model from text and knowledge graphs at scale is the Deep Bidirectional Language-Knowledge Graph Pretraining (DRAGON) proposed by Yasunaga et al. [21]. Results from the paper indicate that DRAGON outperforms existing LM and LM+KG models on diverse downstream tasks in particular on complex reasoning about language and knowledge.
Despite the success of fine-tuning PLMs, there is a big gap between the MLM objective and fine-tuning objectives for downstream applications. Prompt-based learning has been a widely explored method that uses templates to transform the input into classification problems, and as such, closes the gap between task-specific and MLM objectives [22]. [23] propose _KnowPrompt_, a method for task-oriented prompt template construction where they use special markers to highlight entity mentions in the template. [24] also proposed a template-based NER model using BART. The model enumerates all possible text spans and considers the generation probability of each type within manually crafted templates [22, 24].
Since the manual creation of templates is labour-intensive, methods for the automated generation of prompts and labels are well-researched. In principle, a prompt consists of a template and label words. As such, Schick and Schutze [13] first searches the label word space for the manually created templates. Next, gradient-guided search automatically generates both templates and label words. Compared to human-picked prompts, most auto-generated prompts cannot achieve comparable performance [10].
Prior literature has shown that increasing the number of tasks in finetuning improves the generalization to unseen tasks [11]. Experiments from Chung et al. [11] show that "instruction finetuning" scales well with the number of tasks and the size of the model. Wei et al. [25] further suggests that instruction-tuned models respond better to continuous outputs from prompt tuning. Prompt tuning on FLAN even achieves more than 10% improvement over a non-instruction-tuned equivalent model [25].
## 3 Methodology
### Preliminaries
#### 3.1.1 Esco
ESCO (European Skills, Competences, Qualifications and Occupations) is the European multilingual classification
of skills, competences and occupations. In total, ESCO describes 3,008 occupations and 13,980 knowledge, skill, and competences in 28 different languages.
ESCO has both hierarchical and non-hierarchical relationships: hierarchical relationships, or hypernymies, are relations of the form _x is-a y_[26, 4]. Non-hierarchical relationships are all remaining relations that do not express such an is-a hierarchy. For example: "_Java Programming_ is an essential skill for a _Software Developer_" is a non-hierarchical relationship, whereas "a _Software Developer_ is an _Information and Communications Technology Professional_" is hierarchical.
#### 3.1.2 PLMs: T5 & FLAN-T5
In this paper we rely on the T5 PLM, since the text-to-text framework allows us to directly apply the same model, objective, training procedure, and decoding process to every task we consider [17]. In addition, we turn to an instruction-tuned variant of the T5 model: FLAN-T5. Instruction-based finetuning has shown to improve zeroshot performance on unseen tasks [25, 11]. In this paper we aim to study whether this property also applies to the domain-specific unseen tasks that we propose.
### Prompt-Tuning with Rules
In this paper, we utilize the ESCO taxonomy as background knowledge for the Prompt-Tuning with Rules (PTR) approach proposed by Han et al. [10].
PTR builds on prompt tuning methods that rely on cloze tests, where the PLM is applied to replace or fill in a missing word in a sentence. A so-called verbalizer maps a fixed set of class labels (e.g., positive, negative) to underlying label words (e.g., "great", "terrible"), so that
Figure 1: Visual representation of proposed method: PTR (shown on top) with three sub-prompts (yellow, green, and red) with MLM heads predicting [MASK] tokens, given their respective verbalizers (inspired by Han et al. [10]). An outtake of the ESCO taxonomy represented in the bottom, with hierarchical (red) relations and non-hierarchical (rest), and how the entities and relations populate the template (dotted lines).
by predicting a label word, the PLM effectively classifies a sentence.
PTR extends this prompt tuning approach with prior knowledge encoding, i.e., leveraging logic rules to encode prior knowledge about tasks and classes into prompt tuning, and efficient prompt design, through composing multiple sub-prompts and combining into prompts [10].
Illustrative exampleWe illustrate how we leverage the ESCO taxonomy to construct and populate sub-prompts, as proposed by Han et al. [10].
Consider a (sub-)prompt template for entity type classification, such as: "[CLS] the [MASK] _[ENTITY]_". This can be instantiated for the skill "ensure correct metal temperature" as: "[CLS] the [MASK] _ensure correct metal temperature_", and for the occupation "electron beam welder" as: "[CLS] the [MASK] _electron beam welder_".
Finally, we can combine the above instantiations of the same sub-prompt into a final prompt, that spans entity type and relation classification, as such: "[CLS] the [MASK]\({}_{1}\)\(ensure\)\(correct\)\(metal\)\(temperature\)[MASK]\({}_{2}\) the [MASK]\({}_{3}\)\(electron\)\(beam\)\(welder\)".
PTR relies on so-called "verbalizers" that map class labels to label words. In our example, the class labels {'skill', 'occupation'} for entity classification are mapped to (the same) label words {'skill', 'occupation'} in place of [MASK]\({}_{1}\) and [MASK]\({}_{3}\), and the class labels {'isEssentialSkill', 'isOptionalSkill'} in place of [MASK]\({}_{2}\) are mapped to the corresponding label words {'is an essential skill for', 'is an optional skill for'} in the case of relation classification.
i.e., \(\varphi_{[MASK]_{1}}\) and \(\varphi_{[MASK]_{3}}\) aim to assign an entity class through predicting a label word from X, and \(\varphi_{[MASK]_{2}}\) aims to classify the type of relation between the two through label words Y.
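A minimal sketch of this composition is shown below: the sub-prompts and verbalizers of the illustrative example are encoded as plain templates and dictionaries. The code is illustrative and not the implementation used in the experiments.

```python
# Verbalizers of the illustrative example: one-to-one mappings from class
# labels to label words.
ENTITY_VERBALIZER = {"skill": "skill", "occupation": "occupation"}
RELATION_VERBALIZER = {
    "isEssentialSkill": "is an essential skill for",
    "isOptionalSkill": "is an optional skill for",
}

def compose_ptr_prompt(skill: str, occupation: str) -> str:
    """Combine the two entity-type sub-prompts and the relation sub-prompt
    into the composite cloze prompt used for PTR."""
    s1 = f"the [MASK] {skill}"        # [MASK]_1: entity type of the skill span
    s2 = f"the [MASK] {occupation}"   # [MASK]_3: entity type of the occupation span
    return f"[CLS] {s1} [MASK] {s2}"  # middle [MASK]_2: relation label word

print(compose_ptr_prompt("ensure correct metal temperature", "electron beam welder"))
```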
### Instruction-based Finetuning
Instruction-based finetuning aims to teach a PLM to perform certain tasks, by responding to instructions in natural language [25]. For two of our three datasets (i.e., the QA and EL), we manually constructed templates that result in natural language instructions that describe the task for that dataset to the PLM.
While scaling language model sizes seems to be a reliable predictor for improved model performance, it comes at the price of high compute. Therefore, development of compute-efficient techniques that improve performance at the cost of a relatively small amount of computational resources is important. Instruction-based finetuning improves performance of PLMs on evaluation benchmarks by up to 9.4%, requiring only 0.2% of the pre-training compute [11]. Furthermore, Chung et al. [11] demonstrate that smaller models that are instruction tuned can outperform larger models without it.
Figure 2 demonstrates how we leverage the ESCO taxonomy to construct instruction tuning templates for the QA examples.
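To make the idea concrete, the snippet below prepends a natural-language instruction to a QA example in the spirit of Figure 2. The instruction wording, entity description and answer format are illustrative assumptions, since the manually constructed templates are not reproduced verbatim here.

```python
# Illustrative instruction template; the actual manually constructed
# templates used for the QA dataset may be worded differently.
QA_INSTRUCTION = ("Given the description below, answer with yes or no "
                  "whether it corresponds to the ESCO {entity_type} '{label}'.")

def build_qa_example(entity_type: str, label: str, description: str) -> str:
    instruction = QA_INSTRUCTION.format(entity_type=entity_type, label=label)
    return f"{instruction}\nDescription: {description}\nAnswer:"

print(build_qa_example(
    "occupation", "electron beam welder",
    "Operates machines that join metal workpieces with a focused electron beam."))
```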
## 4 Experimental Setup
The aim of this paper is to leverage prompt-based and instruction-based finetuning, to cost-efficiently optimize PLMs performance on four downstream labour market tasks. As described in the previous section, we propose four different tasks for evaluation, namely: entity classification (EC), relation classification (RC), entity linking (EL), and question answering (QA).
### Datasets
We evaluate PTR and instruction-based finetuning in labour market-specific downstream tasks through benchmark datasets we generate through populating hand-crafted templates, with instances from the ESCO taxonomy.
We generate three datasets of prompts, that address four different tasks; i) entity classification (EC) and ii) relation classification (RC) as illustrated above (combined in a single set of prompts), in addition to iii) entity linking (EL), and iv) question answering (QA).
Construction of a self-supervised dataset comprises three components: i) a subset of ESCO relations, ii) a template to map the triples associated with the relations
Figure 2: Visual representation of method: Instruction tuning for the QA examples. The instruction is prepended to the question, instructing the PLM how to proceed in answering the given question.
to (sub-)prompts, and iii) verbalizers that map class labels to label words.
#### 4.1.1 Entity Classification + Relation Classification
To build the entity classification and relation classification (EC + RC) dataset, we leverage the _isEssentialFor_ and the _isOptionalFor_ relations as found in ESCO. For both entity classification and relation classification, we largely follow the work by Han et al. [10], i.e., we extract all triples that have as subject a skill entity, the _isEssentialFor_ or the _isOptionalFor_ as predicate, and finally as object an occupation entity.
Formally, our triples look as follows:
\[<Skill,r,Occupation>, \tag{1}\]
where \(r\in\{isOptionalFor,isEssentialFor\}\). The entity and relation classification template \(T(x)\) is formalized as:
\[s_{1}=The\ [MASK]\ entity\ [Skill]\] \[s_{2}=The\ [MASK]\ entity\ [Occupation]\] \[s_{1}\ [MASK]\ s_{2}\]
Lastly, we formulate two different verbalizers \(\varphi_{1}\) and \(\varphi_{2}\) such that:
\[\varphi_{1} =C_{1}\rightarrow\nu_{1}, \tag{2}\] \[\varphi_{2} =C_{2}\rightarrow\nu_{2}, \tag{3}\]
where \(C_{1}\) = {'occupation', 'skill'} and the accompanying label words \(\nu_{1}\) = {'occupation', 'skill'}. Similarly, \(C_{2}\) = {'isOptionalFor', 'isEssentialFor'}, and the label words \(\nu_{2}\) = {'_is optional for_', '_is essential for_'}.
Note that in our case, the verbalizers are one-to-one mappings, whereas in the PTR methodology, many-to-one mappings are also supported. For the entity and relation classifications we have not included the possibility of "no relation" and/or "no entity", for the simple reason of self-supervision. While we fully believe these negative examples would be useful for better learning how to recognize entities and the relations connecting them, they would require manual annotation, and as such fall beyond the scope of this research.
#### 4.1.2 Entity Linking
To model entity linking, we rely on the _alternativeLabel_ relation in ESCO, i.e., our task is to map an entity surface form or entity mention (_alternative label_), to the canonical entity name (i.e., _label_).
We can formalize the entity linking task as the following triplestore:
\[<e\,r\,m>, \tag{4}\]
where \(e\in E\), the set of skill and occupation entities, and \(m\in M\), the set of skill and occupation mentions (i.e., alternative labels for the ESCO skill and occupation labels). Lastly, \(r\in C\), meaning that the predicate signals whether or not the mention is an alternative label for the given ESCO entity.
Given an entity \(e\) and a mention \(m\), we are interested in finding out what type of relation exists between \(e\) and \(m\). As such, we formalize entity linking as a masked language problem via \(x_{prompt}=e\ [MASK]\ m\).
In Figure 1, the blue boxes represent two examples of alternative labels for the occupation _electron beam welder_.
We formalize the template for our second set of prompts as:
\[\mathtt{e}\ \mathtt{[MASK]}\ \mathtt{m}\]
We formulate the verbalizer \(\varphi\) such that:
\[\varphi=C\rightarrow\nu, \tag{5}\]
where
\[C=\{\mathtt{alternativeLabel},\mathtt{noAlternativeLabel}\}\]
\[\nu=\{\mathtt{is\ a\ synonym\ for},\mathtt{is\ not\ a\ synonym\ for}\}\]
For each generated example from the ESCO triplestores, we also randomly sample negative examples by randomly shuffling the objects and subject of the positive triplestores and changing the predicate label to _noAlternativeLabel_.
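A minimal sketch of this negative-sampling step; the positive alternative labels shown here are invented for illustration, and the helper name is ours.

```python
import random

def make_negatives(positives, seed=0):
    """For each positive (entity, 'alternativeLabel', mention) triple, pair the
    entity with a mention drawn from a different triple and relabel the predicate."""
    rng = random.Random(seed)
    mentions = [m for _, _, m in positives]
    negatives = []
    for entity, _, mention in positives:
        wrong = rng.choice([m for m in mentions if m != mention])
        negatives.append((entity, "noAlternativeLabel", wrong))
    return negatives

positives = [
    ("electron beam welder", "alternativeLabel", "EB welder"),                      # invented
    ("ensure correct metal temperature", "alternativeLabel", "monitor metal heat"), # invented
]
print(make_negatives(positives))
```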
#### 4.1.3 Question Answering
For the QA task, we use so-called _instructional templates_ as defined by Chung et al. [11] and Wei et al. [25]. Instructional templates prepend an instruction to the prompt.
\begin{table}
\begin{tabular}{l r r r} \hline \hline & EC + RC & QA & EL \\ \hline \# total & 123,752 & 27,792 & 195,350 \\ \# skills & 13,890 & 13,890 & 13,890 \\ \# occupations & 3,008 & 3,008 & 3,008 \\ \# essential & 64,877 & - & - \\ \# optional & 58,875 & - & - \\ \# altlabels & - & - & 96,117 \\ \# pos & 123,752 & 13,896 & 97,675 \\ \# neg & 0 & 13,896 & 97,675 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Statistics of the different datasets. Since the train and evaluation sets differ due to random sampling or the choice of \(K\), we can only report the total counts.
In our case, we prepend the example and question with _"Answer the following with yes/no"_, instructing the PLM how to answer the question that follows.
The question and answering dataset is constructed with the descriptions of the entities in ESCO. As such, we can construct a dataset as:
\[<e,\{\texttt{description}\}> \tag{6}\]
Next, we define the template \(T(x)\) as depicted in the example in Figure 2, where we first prepend the instruction _"Q: Answer the following with yes/no"_ to the body _"Does [description] describe [entity label]"_, and finish with _"A: [MASK]"_.
The verbalizer \(\varphi\) then maps the \(\{\textit{'yes}^{\prime},\textit{'no'}\}\) to the label words \(\{\textit{'yes}^{\prime},\textit{'no'}\}\).
We randomly sample correct examples and, in addition, generate negative examples by randomly sampling a skill or occupation entity, pairing it with a randomly sampled description from the set of available descriptions, and tagging the answer label as _"no"_. This results in a balanced dataset with a fifty-fifty split of positive and negative examples.
### Experiments
In order to answer our research questions, we propose the following experiments.
#### 4.2.1 Experiment 1: Zero-shot Learning
First, to better understand the labour market-specific tasks that we propose, we test off-the-shelf PLMs in a zero-shot setting, using our own generated prompt datasets for inference.
In addition, to test the hypothesis that FLAN-T5's multitask learning enables better learning of additional (domain-specific) tasks, in our first experiment we directly compare off-the-shelf T5 and FLAN-T5 models on each of our three datasets.
#### 4.2.2 Experiment 2: K-shot Learning
Next, having established the performance differences between the off-the-shelve PLMs, we study the impact of few-shot learning to steer the best performing PLM from experiment 1 towards the domain-specific data and tasks, where we perform an ablation study on the number of examples (\(K\)) we use for few-shot learning.
This is motivated by, among others, Han et al. [10], who report comparable or even better results in the few-shot scenario than, e.g., methods that inject special symbols to index the positions of entities and methods that inject both type information and special symbols. The authors sample \(K\) training instances and \(K\) validation instances per class from the original training set and development set, and evaluate the models on the original test set.
We propose using \(K=\{64,128,256\}\) sets.
#### 4.2.3 Experiment 3: Multitask Learning
After having studied the effect of few-shot learning, we perform an ablation study to measure the effect of learning multiple tasks in parallel, i.e., transfer learning from one task to the other.
We do so by fine-tuning FLAN-T5 on all combinations of tasks, from a single task to the full set, i.e., we train FLAN-T5 on EC+RC and subsequently on the EL and QA tasks. We then test the performance of the resulting model on all three datasets to identify whether, e.g., prompt tuning on EL can help performance on the QA dataset.
### Implementation Details
Our model implementation relies on the HuggingFace, PyTorch and OpenPrompt frameworks (albeit with some customizations), proposed by Wolf et al. [27], Paszke et al. [28] and Ding et al. [29] respectively.
For the zero-shot approach of the first experiment, we turn to T5 and FLAN-T5, for which we use the implementation by the original authors [17, 25, 11]. More specifically, we use the 3 billion parameter checkpoints as found on the Hugging Face hub under the names _'t5-3b'_ and _'google/flan-t5-xl'_.
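A minimal sketch of how such a zero-shot query could be run with the Transformers library; the checkpoint names are the ones above, while the entity description in the prompt is shortened and invented for illustration.

```python
# Hedged sketch: load an off-the-shelf checkpoint and answer one of our QA prompts
# without any fine-tuning.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "google/flan-t5-xl"          # or "t5-3b" for the non-instruction-tuned baseline
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

prompt = ("Q: Answer the following with yes/no. "
          "Does 'welds metal workpieces using an electron beam' describe 'electron beam welder'? A:")
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=5)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```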
To answer our second research question, we adjust the number of examples used for training the models by comparing different values for the parameter \(K\) (i.e., the number of samples). We optimize our PTR and instruction-based finetuning models using AdamW, with learning rates of \(3e-5\) and \(2e-5\), respectively. Furthermore, we reset the weight decay on the normalization layers and bias. We fine-tune all models using batch size 32, and train the PTR models for 10 epochs, whereas we train the instruction-based finetuning models for only 5. The best model checkpoint is selected.
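The optimizer setup described above could look roughly as follows; the exact parameter-name matching and the base weight-decay value are our assumptions, not taken from the paper.

```python
import torch

def build_optimizer(model, lr=3e-5, weight_decay=0.01):
    """AdamW with weight decay disabled ("reset") for bias and normalization parameters."""
    no_decay_keys = ("bias", "LayerNorm", "layer_norm")
    decay, no_decay = [], []
    for name, param in model.named_parameters():
        if not param.requires_grad:
            continue
        (no_decay if any(k in name for k in no_decay_keys) else decay).append(param)
    groups = [
        {"params": decay, "weight_decay": weight_decay},
        {"params": no_decay, "weight_decay": 0.0},
    ]
    return torch.optim.AdamW(groups, lr=lr)
```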
### Evaluation metrics
In order to systematically evaluate few-shot performance, we randomly pick \(K\) samples from the total dataset and use the remaining data to sample evaluation sets. This sampling is done 9 times; in each iteration, we sample 512 random examples from the remaining data after the train/test split. We report F1 scores averaged over 9 runs in addition to standard deviations (\({}^{\pm std}\)). We argue that sampling multiple splits gives a more robust measure of the actual performance.
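A sketch of this sampling protocol, with `train_fn` and `score_fn` as placeholder hooks (our names) for the fine-tuning and F1 computation steps.

```python
import random
import statistics

def evaluate_k_shot(dataset, train_fn, score_fn, k=128, n_splits=9, split_size=512, seed=0):
    rng = random.Random(seed)
    data = list(dataset)
    rng.shuffle(data)
    train, rest = data[:k], data[k:]
    model = train_fn(train)                                   # fine-tune once on the K examples
    scores = [score_fn(model, rng.sample(rest, split_size))   # F1 on each of the 9 splits
              for _ in range(n_splits)]
    return statistics.mean(scores), statistics.pstdev(scores)
```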
Since the single EC+RC dataset contains two separate tasks, it is important to avoid contamination between the
train and test sets. Therefore, after the initial division, we check all individual skill and occupation entities from the train set, and remove all relations in the test set that contain any of those entities. For the QA and EL training data the risk of contamination is mitigated through the train/test split (i.e., after the normal split unique entries belong either to the train or test set).
## 5 Results
In this section we present and summarize the results of our experiments described in Section 4.2.
### Experiment 1: Zero-shot learning
See Table 2 for the comparison of T5 and FLAN-T5 in zero-shot learning, i.e., applied off the shelf for inference on our generated prompts.
First, we see that FLAN-T5 substantially outperforms its non-instruction-tuned counterpart T5 on the QA (83.44 vs. 33.75, respectively) and EL tasks (57.38 vs. 33.89, respectively), but slightly underperforms on the EC+RC tasks, at 44.54 for FLAN-T5 and 48.07 for the T5 model.
A potential explanation for this might be the fact that FLAN-T5 is trained on a variety of entity classification tasks that do not involve skill and occupation entities (i.e., the primary focus is on person and organisation entities). As such, the learned patterns may interfere with the PLM's ability to recognize skills and occupations.
### Experiment 2: K-shot learning
In Table 3 we show the performance differences at different levels of \(K\) in the few-shot learning scenario.
First, we note that FLAN-T5 + PTR substantially outperforms both T5 and FLAN-T5 from Table 2, with F1 scores between 50.42 and 51.60 across different values of \(K\), compared to 48.07 and 44.54, respectively, for zero-shot T5 and FLAN-T5.
Next, we see that different values of \(K\) are optimal for different tasks; with maximum scores at \(K=128\) for EC+RC and QA at 51.60 and 94.23 respectively, and a maximum score of 98.06 for \(K=256\) for EL.
The scaling of the model potentially gives us insights into how sample-efficient the model is in learning the behavior. Larger models are in general more sample-efficient and as such require fewer examples to learn a particular behavior [30].
### Experiment 3: Multitask learning
Finally, we show the impact of learning single or multiple tasks at once. The results of our ablation experiments are shown in Table 4, where we tune models on all combinations of the different train sets of prompts and evaluate each of them on each of the three test sets of prompts.
Here, we note that first, in some cases adding prompts for additional tasks increases performance for the original tasks, consider, e.g., the case for (testing on) EL, where adding QA prompts yields an F1-score of 97.61 (row 4, Table 4), and adding EC+RC prompts even gets performance up to 98.48 (row 5, Table 4), whereas the model tuned with EL prompts only, scores 95.17 F1 (row 3, Table 4).
However, this does not hold for QA or EC+RC, where tuning with only QA and EC+RC prompts, respectively, yields the highest score; nor does it hold for training on all additional prompts: these runs (bottom row in Table 4) do not outperform the best-performing models tuned on one or two sets of prompts.
Overall, this indicates that multitask learning can contribute in some cases to increased performance.
#### 5.3.1 Unseen task performance
Supporting these observations is the pattern around performance on unseen tasks, i.e., models tuned on (a) task(s) that do not include the test task used for evaluation. Consider, e.g., EL; models that have not seen any EL prompts in their tuning stage, perform substantially worse with 58.96 for EC+RC and QA, 57.40 for EC+RC, and 60.31 for QA, versus between 95.17 and 98.48 for models that have seen EL prompts.
Similar patterns are seen with the other tasks: for EC+RC, models that have not seen any EC+RC prompts perform between 45.04 and 47.68, versus between 50.55 and 51.60 for models that have. For QA, we see that models without QA prompts in tuning score between 68.22 and 87.98, while models with QA prompts range from 93.24 to 94.23.
However, increasing the number of tasks in tuning does increase performance for unseen tasks in two out of three cases: when testing on the EC+RC prompts, a model that combines QA and EL prompts in tuning scores 47.68, and outperforms QA-only (45.93) and EL-only (45.04) models. Similarly, for QA, combining EC+RC and EL prompts yields an F1-score of 87.98, whereas EC+RC-only yields 78.36, and EL-only a mere 68.22 F1.
\begin{table}
\begin{tabular}{l l l l} \hline \hline Model & EC+RC & QA & EL \\ \hline T5 & 48.07\({}^{\pm.19}\) & 33.75\({}^{\pm.2}\) & 33.89\({}^{\pm.37}\) \\ FLAN-T5 & 44.54\({}^{\pm.66}\) & 83.44\({}^{\pm.44}\) & 57.38\({}^{\pm.60}\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: F1 scores of experiment 1, where we compare 0-shot performance between T5 and FLAN-T5.
Finally, models that are tuned on all tasks do not outperform models tuned on two tasks in two out of three sets (only for QA does the full model perform better than models trained on two tasks).
## 6 Discussion
Our paper explored three different questions. First, _are "out-of-the-box" PLMs capable of generalizing learned behavior to labour market specific applications?_ In order to answer this question, we created three self-supervised benchmarks from the ESCO taxonomy.
We performed a zero-shot comparison between T5 and the instruction-tuned FLAN-T5, which has seen 1,836 additional tasks in prompts. Results showed that FLAN-T5 substantially outperforms T5 on two labour market-specific tasks, with a 49.7% increase in F1 score for QA and 23.5% for EL; however, on the EC+RC task, T5 outperforms FLAN-T5 by 3.53%. These findings confirm that, overall, the instruction-tuned FLAN PLM benefits from having seen multiple tasks. The result for the EC+RC task can be explained by "misleading" patterns learned from the more general finetuning on named entity recognition (i.e., recognition of "Persons", "Organizations", etc.). However, further investigations and ablation studies on general task tuning and its exact influence on performance are needed for a more definite answer.
On the second question, whether _instruction and/or sub-prompt finetuning a PLM on a mixture of task-specific datasets could increase the performance on labour market specific benchmarks?_, we performed experiment 2, where we varied the number of instruction samples \(K\) for training our best-performing PLM: FLAN-T5. Results demonstrated that PTR-based finetuning with 128 examples yielded the best performance. Overall, this gave a 7.06% performance increase over the zero-shot performance of FLAN-T5. Additionally, further scaling the number of examples to 256 yielded only a 6.24% increase, suggesting no further gains from scaling the number of examples further. Our results seem to indicate that using PTR with labour market specific examples yields improvements above and beyond the 1,836 tasks FLAN-T5 was tuned on.
Lastly, we investigated _the effects of transfer learning across labour market specific tasks_.
Here, our results suggest, first, that learning more tasks does yield increased performance on new, unseen tasks. At the same time, the best-performing models often were those that were trained on the evaluation task exclusively (for EC+RC and QA). Overall, unsurprisingly, directly learning the task at hand yields the best performing models, but the fact that multiple tasks improve performance for unseen tasks does suggest that the domain-specific knowledge that the PLMs receive in the tuning stage helps in solving the unseen task at hand. Prompt tuning on the QA and EL examples (i.e., instruction-based finetuning) led to a 3.14% improvement on the EC + RC task. Similarly, prompt tuning on the EC + RC and QA examples yielded a 1.58% increase in performance on the EL task, with an overall 4.54% increase over the zero-shot scenario. A possible explanation is that the ability to recognize whether an entity is an occupation or a skill helps to discriminate whether two entities are synonymous. However, training on all tasks did not seem to increase the overall performance on any of the tasks. We believe this is potentially caused by overlaps in learned behavior between these different labour market specific tasks and the 1,836 tasks FLAN-T5 is already tuned on.
### Implications
Finetuning PLMs is often an effective transfer mechanism in NLP. However, an entirely new model is often required
\begin{table}
\begin{tabular}{l c c c} \hline \hline Train \(\downarrow\) / Test \(\rightarrow\) & EC+RC & QA & EL \\ \hline EC+RC & \(\mathbf{51.60^{\pm.47}}\) & \(78.36^{\pm.86}\) & \(57.40^{\pm.16}\) \\ QA & \(45.93^{\pm.26}\) & \(\mathbf{94.23^{\pm.24}}\) & \(60.31^{\pm.12}\) \\ EL & \(45.04^{\pm.26}\) & \(68.22^{\pm.70}\) & \(95.17^{\pm.39}\) \\ \hline EC+RC, QA & \(51.34^{\pm.23}\) & \(93.24^{\pm.21}\) & \(58.96^{\pm.52}\) \\ EC+RC, EL & \(51.21^{\pm.45}\) & \(87.98^{\pm.31}\) & \(\mathbf{98.48^{\pm.29}}\) \\ QA, EL & \(47.68^{\pm.23}\) & \(93.69^{\pm.24}\) & \(97.61^{\pm.14}\) \\ \hline all & \(50.55^{\pm.70}\) & \(94.10^{\pm.27}\) & \(98.19^{\pm.32}\) \\ \hline \hline \end{tabular}
\end{table}
Table 4: F1 scores of our previously best performing model: FLAN-T5 with 128-shot learning, on the different combinations of tasks we propose.
\begin{table}
\begin{tabular}{l c c c c c c c c c} \hline \hline & \multicolumn{3}{c}{EC+RC} & \multicolumn{3}{c}{QA} & \multicolumn{3}{c}{EL} \\ Model \(\downarrow\)\(K\)\(\rightarrow\) & 64 & 128 & 256 & 64 & 128 & 256 & 64 & 128 & 256 \\ \hline FLAN-T5 + PTR & 50.42 & 51.60 & 50.87 & - & - & - & - & - & - \\ FLAN-T5 + Instruction tuning & - & - & - & 92.09 & 94.23 & 93.71 & 89.26 & 95.17 & 98.06 \\ \hline \hline \end{tabular}
\end{table}
Table 3: F1 scores for experiment 2, comparing the impact of the number of training examples (\(K\)) across the three benchmark datasets (top row).
for every task. Our results indicate that cost-efficient methods such as PTR and instruction-based finetuning can significantly increase the performance of PLMs on downstream labour market applications without introducing any additional model layers, manual annotations, or data augmentation.
Furthermore, our results suggest that while training on general tasks can increase the overall performance on labour market specific applications, providing the general models with labour market specific examples increases performance above and beyond the general finetuning.
### Limitations
There are several limitations to the current study that should be considered. First, we only used one-to-one verbalizers between our classes and label words, meaning that every class label is mapped to a single label word. This would be a fruitful area for future research: 'occupation', for example, can also be rewritten as 'job' or 'work', and adding such alternatives to the label words may yield improved performance over the current one-to-one verbalizers.
Second, for the purpose of this initial exploration we focused primarily on binary classification tasks. As such, we did not incorporate the possibility for a non-existing relation in the PTR finetuning.
Third, while the underlying methods support multiple languages, we chose to conduct our experiments in English, in part because the descriptions used in the QA dataset are not complete for all 28 languages in which ESCO is available. A future study could assess the performance of PTR and instruction-based finetuning without examples in other languages.
Lastly, this study primarily focused on the actual _isEssentialFor_ and _isOptionalFor_ relations as they are present in the ESCO taxonomy. As such, we did not implement the _reversed_ and/or negative relations, even though this was suggested to further increase performance.
## 7 Conclusion
In this study, we demonstrated that FLAN-T5 substantially outperforms T5 on the QA and EL tasks, with F1-score improvements of 49.7% and 23.5%, respectively. However, on the remaining EC+RC task, T5 outperformed FLAN-T5 by 3.53%. Overall, it seems that PLMs benefit from instruction-based finetuning even on labour market-specific benchmarks. However, if the target task is very different from the tasks seen during instruction tuning, this can potentially hurt performance, as demonstrated with the EC+RC task.
Furthermore, our results seem to indicate that using PTR with labour market specific examples yields improvements above and beyond the 1,836 tasks FLAN-T5 was tuned on. Unsurprisingly, directly learning the task at hand leads to the best performing models. However, the results also show that prompt tuning on other labour market specific tasks can improve performance on unseen tasks. For example, prompt tuning on EC+RC and QA improved the performance on the EL task by 1.58%, and prompt tuning on QA and EL improved the performance on the EC+RC task by 3.14%.
There are several limitations to the current study: i) we solely used one-to-one verbalizers, ii) we focused primarily on binary classification tasks, iii) we only focused on English, and iv) we only used relations actually present in the ESCO taxonomy, meaning that we did not implement the reversed relations. Future studies could address these limitations by increasing the number of label words, adding negative and reversed relations, and using ESCO to construct parallel datasets for all available languages.
|
2309.16459 | Augmenting LLMs with Knowledge: A survey on hallucination prevention | Large pre-trained language models have demonstrated their proficiency in
storing factual knowledge within their parameters and achieving remarkable
results when fine-tuned for downstream natural language processing tasks.
Nonetheless, their capacity to access and manipulate knowledge with precision
remains constrained, resulting in performance disparities on
knowledge-intensive tasks when compared to task-specific architectures.
Additionally, the challenges of providing provenance for model decisions and
maintaining up-to-date world knowledge persist as open research frontiers. To
address these limitations, the integration of pre-trained models with
differentiable access mechanisms to explicit non-parametric memory emerges as a
promising solution. This survey delves into the realm of language models (LMs)
augmented with the ability to tap into external knowledge sources, including
external knowledge bases and search engines. While adhering to the standard
objective of predicting missing tokens, these augmented LMs leverage diverse,
possibly non-parametric external modules to augment their contextual processing
capabilities, departing from the conventional language modeling paradigm.
Through an exploration of current advancements in augmenting large language
models with knowledge, this work concludes that this emerging research
direction holds the potential to address prevalent issues in traditional LMs,
such as hallucinations, un-grounded responses, and scalability challenges. | Konstantinos Andriopoulos, Johan Pouwelse | 2023-09-28T14:09:58Z | http://arxiv.org/abs/2309.16459v1 | # Augmenting LLMs with Knowledge
###### Abstract
Large pre-trained language models have demonstrated their proficiency in storing factual knowledge within their parameters and achieving remarkable results when fine-tuned for downstream natural language processing tasks. Nonetheless, their capacity to access and manipulate knowledge with precision remains constrained, resulting in performance disparities on knowledge-intensive tasks when compared to task-specific architectures. Additionally, the challenges of providing provenance for model decisions and maintaining up-to-date world knowledge persist as open research frontiers. To address these limitations, the integration of pre-trained models with differentiable access mechanisms to explicit non-parametric memory emerges as a promising solution. This survey delves into the realm of language models (LMs) augmented with the ability to tap into external knowledge sources, including external knowledge bases and search engines. While adhering to the standard objective of predicting missing tokens, these augmented LMs leverage diverse, possibly non-parametric external modules to augment their contextual processing capabilities, departing from the conventional language modeling paradigm. Through an exploration of current advancements in augmenting large language models with knowledge, this work concludes that this emerging research direction holds the potential to address prevalent issues in traditional LMs, such as hallucinations, un-grounded responses, and scalability challenges.
## I Introduction
Large Language Models (LLMs) have brought about remarkable advancements in Natural Language Processing (NLP) and are now integral to various widely-used products, including Copilot [1], Google's search engine, and more recently, ChatGPT, a chatbot built upon GPT-3 [2]. These models, characterized by their memorization capabilities as well as their compositional prowess, have demonstrated unprecedented performance in tasks ranging from language understanding to text generation, paving the way for more sophisticated human-computer interactions.
However, LLMs are not without their limitations. They often produce seemingly plausible yet incorrect predictions, a phenomenon known as hallucinations [3], leading to avoidable errors in various contexts. Furthermore, many of the ground-breaking capabilities of LLMs appear to scale with the model's size in terms of trainable parameters. While recent efforts have produced smaller LLMs with retained capabilities [4], the practicality of training and maintaining large models remains a challenge, with continual learning for such models posing an ongoing research question [5].
These limitations are rooted in a fundamental issue with LLMs: they are primarily trained for statistical language modeling, relying on a single parametric model and a relatively limited context, typically the preceding "n" tokens. Despite advancements in hardware and software, most models still employ relatively small context sizes compared to the expansive context required for accurate language modeling in all scenarios. Consequently, achieving the necessary scale to store the knowledge beyond the immediate context has become a necessity.
In response, a growing research trend has emerged, moving away from the traditional statistical language modeling paradigm. One approach addresses the limited context size of LLMs by enhancing its relevance through the incorporation of information extracted from external documents [6][7]. By equipping language models with modules that retrieve relevant documents from databases based on the context, it becomes possible to replicate certain capabilities of larger LLMs while using fewer parameters [8][9].
Moreover, in this evolving landscape, pioneering models [10][11] that leverage structured knowledge stand out. These models leverage knowledge graphs along with a corpus of supporting documents, which can be jointly processed by Graph Convolutional Networks (GCNs). By harnessing graph-based representations, these structured-knowledge augmented models excel in generating precise responses to open-domain questions. This innovative use of structured knowledge marks a significant advancement in enhancing language models, demonstrating the diverse strategies researchers are adopting to address the limitations of contemporary LLMs.
It is worth noting that such approaches transform the
resulting models into non-parametric ones, as they can now effectively query external data sources.
Another strategy involves enabling LLMs to leverage external tools [12], such as search engines [13][14][12], allowing them to augment the current context with crucial missing information not contained within the model's weights. Although most of these efforts aim to address individual shortcomings of LLMs, it is evident that a more comprehensive integration of knowledge tools has the potential to significantly enhance the capabilities of these models.
In light of these recent developments in NLP, there is a pressing need for a comprehensive taxonomy of augmented language models and clear definitions of the technical terminology used, which sometimes carry varying interpretations and intentions.
## II Background
As we delve into the intricacies of augmenting Large Language Models (LLMs) with external knowledge, it is imperative to establish a foundational understanding of the key concepts that underpin this transformative field. Knowledge augmentation strategies, such as harnessing knowledge graphs, employing beam search techniques, leveraging triple-store databases, and integrating sequence-to-sequence models, form the bedrock upon which advanced language models now stand. In this section, we embark on a comprehensive exploration of these pivotal concepts, unraveling their significance, methodologies, and interconnectedness. By elucidating these fundamental building blocks, we pave the way for a profound understanding of how contemporary LLMs harness external knowledge to achieve unprecedented linguistic feats.
### _Generative Language Models_
Generative language models are trained to produce new text, given an input sequence of tokens. They are able to perform this by learning the statistical relationships between words and phrases in a large corpus of text. When given a prompt, a generative model will try to produce text that is consistent with the statistical patterns it has learned.
Some of the most popular generative models in natural language processing include autoregressive models [15], variational autoencoders (VAEs) [16], and generative adversarial networks (GANs) [17]. In this literature survey, we will mostly explore Transformers and autoregressive models, along with another type of generative language model: sequence-to-sequence models.
### _Autoregressive Models_
An autoregressive model [15] is a type of neural network used for generating sequences of data, where each element in the sequence is predicted one at a time based on the previously generated elements. In other words, the model generates data by conditioning its predictions on the data it has generated so far. Autoregressive models are typically used for tasks like text generation, time series forecasting, and speech synthesis.
One of the most well-known autoregressive models in NLP is the GPT (Generative Pre-trained Transformer) series, such as GPT-2 [18] and GPT-3 [2]. These models generate text by predicting the next word in a sentence based on the preceding words. They use self-attention [19] mechanisms to capture dependencies between words at different positions in the sequence, making them capable of generating coherent and contextually relevant text.
### _Sequence-to-sequence Models_
A sequence-to-sequence (seq2seq) model [20] predicts the probability of a token being the next token in a given sequence of words.
It consists of an encoder and a decoder. The encoder reads the input sequence, one step at a time and produces a fixed-dimensional vector representation of the entire sequence. This vector is called a _context vector_ and it is a representation of all the meaningful information of the input sequence. The context vector is then passed to the decoder, which generates an output sequence.
Sequence-to-sequence models are typically trained using a maximum likelihood objective, which means that they are trained to produce the output sequence that is **most likely** to follow the input sequence. In summary, seq2seq models are designed for tasks involving the transformation of one sequence into another, often with different lengths and structures. They are typically applied to tasks such as: machine translation, text summarization, and question-answering, where the relationship between the input and output sequences is not purely linear or where the lengths of input and output sequences can vary significantly.
From this point onwards, we will refer to sequence-to-sequence models simply as seq2seq models.
### _Transformers_
The Transformer architecture [19] marked a groundbreaking advancement in the field of NLP. Since its inception, Transformers have become the backbone of various state-of-the-art language models, underpinning many of the recent developments in the realm of augmented language models.
At its core, the Transformer architecture revolutionized sequence-to-sequence modeling through the introduction of the attention mechanism. Unlike earlier recurrent neural networks (RNNs) [21][22] and convolutional neural networks (CNNs) [23], Transformers rely on self-attention mechanisms to capture dependencies between elements in a sequence, making them highly parallelizable and efficient for processing long-range dependencies.
The architecture consists of two main components: the encoder and the decoder. The encoder processes the input sequence, while the decoder generates the output sequence. Each component comprises multiple layers, with each layer containing a multi-head self-attention mechanism and feed-forward neural networks. These self-attention mechanisms enable Transformers to capture contextual information efficiently, making them ideal for tasks that involve understanding and generating sequences of data.
In the context of language modeling, Transformers can be adapted to function as decoder-only models. In decoder-only Transformers, the encoder component, which is used for encoding input sequences, is removed. These models retain the core Transformer architecture but focus exclusively on generating sequences of tokens, making them particularly suitable for autoregressive language modeling tasks.
Decoder-only Transformers operate in an autoregressive manner. They generate sequences one token at a time, with each token's prediction conditioned on the previously generated tokens. This autoregressive approach allows them to produce coherent and contextually relevant text. Decoder-only Transformers have been instrumental in various text generation tasks, including machine translation, text summarization, and text completion.
Since the introduction of the Transformer architecture, numerous variants and extensions have emerged, each tailored to address specific challenges in NLP. These variants include models such as BERT (Bidirectional Encoder Representations from Transformers) [24], GPT (Generative Pre-trained Transformer) [18][2], and T5 (Text-to-Text Transfer Transformer) [25], among others. Many of these models have laid the foundation for augmenting language models with external knowledge, a topic of great interest in recent NLP research.
### _Beam Search_
Beam Search is a heuristic search algorithm that explores a graph, G, by expanding only the K (beam width) most promising nodes at each step. Beam Search simulates the behavior of Breadth-First Search. More specifically, it uses BFS to create a search tree. At each level of the tree, it checks all the successors of the current level and keeps only the top K ones, while pruning the others. The process repeats until K leaves are found. Beam search will return the leaf that maximizes some given score function.
In the context of NLP, when using a generative model, Beam Search is utilized to find the sequence \(y=(y_{1},...,y_{n})\) that is most likely to come after an input sequence \(x\). In mathematic notation, the probability to maximize is:
\[\begin{split}p(y|x)&=p(y_{n}|x,y_{1...n-1})\cdot p(y_{1...n-1}|x)\\ &=p(y_{n}|x,y_{1...n-1})\cdot p(y_{n-1}|x,y_{1...n-2})\cdot\ldots\cdot p(y_{1}|x)\end{split} \tag{1}\]
Instead of choosing only the output token with the highest probability each time, beam search chooses the top K tokens with the highest probability and explores the generated sequences recursively until we reach an \(<EOS>\) (end-of-sequence) token. Then, it returns sequence \(y\) (out of the K sequences) that maximizes \(p(y|x)\).
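A minimal sketch of this procedure, assuming the language model is exposed through a `next_logprobs(sequence)` function (a name we introduce here) that returns log-probabilities for candidate next tokens.

```python
def beam_search(next_logprobs, bos, eos, beam_width=3, max_len=20):
    """Keep the K most promising partial sequences at each step and return the
    completed sequence with the highest cumulative log-probability."""
    beams = [([bos], 0.0)]                 # (sequence, cumulative log-probability)
    finished = []
    for _ in range(max_len):
        candidates = []
        for seq, score in beams:
            for token, logp in next_logprobs(seq).items():
                candidates.append((seq + [token], score + logp))
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = []
        for seq, score in candidates[:beam_width]:
            (finished if seq[-1] == eos else beams).append((seq, score))
        if not beams:                      # every surviving candidate has ended
            break
    finished.extend(beams)                 # sequences cut off at max_len
    return max(finished, key=lambda c: c[1])[0]
```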
In the following sections, we will explore some concepts that are pivotal to the understanding of state-of-the-art augmentation of LLMs.
### _Text Corpus_
A text corpus, \(D\) is a set of documents: \(d_{1},...,d_{|D|}\) where each document is a sequence of words: \(d_{i}=(w_{1},...,w_{|d_{i}|})\). Specifically, in the context of this paper, a document is essentially a sentence, and an article is a collection of documents.
As we will see later on in this survey, text corpora are considered an **unstructured** knowledge base and are usually organized in vector databases.
### _Vector Database_
In a vector database, a document can correspond to one vector or many vectors, depending on the specific implementation of the database. A single vector captures the overall meaning of the document. This is often done by averaging the vectors of the words in the document. In other cases, a document may be represented by a vector for each word in the document. This is often done when it is important to be able to track the individual words in the document.
When a language model retrieves information from a vector database, it essentially has access to knowledge that is not stored in its parameters (weights). Therefore, a vector database is a form of **non-parametric memory** for LLMs.
### _Dense Vector Index_
Indexing in a vector database is the process of organizing the vectors in the database in a way that makes it efficient to search for and retrieve similar vectors (vectors with a high inner product). This is accomplished by creating a data structure that maps each vector to a set of other vectors that are similar to it.
Maximum Inner Product Search (MIPS) is a specific type of vector search that aims to find the vector in the database with the highest inner product with a given query vector. MIPS is used in a variety of applications, such as recommendation systems, machine learning, and image retrieval.
FAISS [26] is a popular open-source library for efficient similarity search and clustering of dense vectors. FAISS contains a variety of algorithms for MIPS, as well as other types of vector search. FAISS is used by many companies and organizations, including Google, Facebook, and Microsoft.
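A small example of exact inner-product search with FAISS over randomly generated document embeddings; the dimensionality and data are purely illustrative.

```python
import faiss
import numpy as np

d = 768                                                   # embedding dimension (e.g., BERT-base)
doc_vectors = np.random.rand(10_000, d).astype("float32") # stand-in for encoded documents

index = faiss.IndexFlatIP(d)                              # exact inner-product (MIPS) index
index.add(doc_vectors)

query = np.random.rand(1, d).astype("float32")            # stand-in for the encoded query
scores, ids = index.search(query, 5)                      # top-5 documents by inner product
print(ids[0], scores[0])
```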
### _Triplestore Knowledge Bases_
A Triplestore knowledge base is a database that consists of subject-predicate-object triples. An example of such a triple is: (Subject: Albert Einstein, Predicate: was born in, Object: Ulm, Germany). Triples are a great form of representing factual knowledge because they capture the nature of the relationship between a subject and an object and can be easily processed by LLMs. One can visualize this knowledge base as a graph whose vertices are the various subjects and objects (entities) and the predicates are the edges between these entities.
Each edge has a type (e.g., "was born in") that describes the kind of relation between the connected entities. Triplestore knowledge bases with more than one type of relation are called **heterogeneous**.
Triplestores are an excellent example of what we call **structured** knowledge bases. They can be merged with unstructured knowledge bases through a set of **entity links**: \((v,d_{p})\), connecting entity \(v\) with a word at position \(p\), in document \(d\).
### _Graph Convolutional Networks_
Graph convolutional networks (GCNs) are a type of neural network that can be used to learn representations of nodes in a structured knowledge base, such as a graph. GCNs are particularly well-suited for node classification tasks, where the goal is to predict the label of each node in the graph (e.g., whether the node contains an answer to a given question or not).
GCNs work by iteratively aggregating information from the neighbors of each node. At each layer, the GCN collects the embeddings of all of a node's neighbors, averages them, and then applies a linear transformation and a nonlinear activation function. The output of this layer is then used as the input to the next layer.
The more layers the GCN has, the more multi-hop reasoning the model will be able to perform, because it will gather information from more far away neighbors. This makes GCNs well-suited for tasks where the labels of nodes depend on the labels of their neighbors, such as social network analysis and fraud detection.
Here is a high-level overview of how a GCN works for node classification:
1. Initialize the embeddings of all nodes in the graph.
2. For each node in the graph:
   1. Collect the embeddings of all of the node's neighbors.
   2. Average the embeddings of the node's neighbors.
   3. Apply a linear transformation and a nonlinear activation function to the average embedding.
   4. The output of this function is the new embedding for the node.
3. Repeat step 2 for a fixed number of layers.
4. The final embedding of each node is used as the input to a classifier to predict the node's label.
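A minimal NumPy sketch of the propagation rule described above, on a toy graph; self-loops and the normalization tricks used in practice are omitted.

```python
import numpy as np

def gcn_layer(adj, H, W):
    """One layer: average the neighbours' embeddings, apply a linear map, then ReLU."""
    deg = adj.sum(axis=1, keepdims=True)
    neighbour_mean = (adj @ H) / np.maximum(deg, 1)
    return np.maximum(neighbour_mean @ W, 0.0)

rng = np.random.default_rng(0)
adj = np.array([[0, 1, 1, 0],          # adjacency matrix of a toy 4-node graph
                [1, 0, 0, 1],
                [1, 0, 0, 1],
                [0, 1, 1, 0]], dtype=float)
H = rng.normal(size=(4, 8))            # initial 8-dimensional node embeddings
W1, W2 = rng.normal(size=(8, 8)), rng.normal(size=(8, 8))
H = gcn_layer(adj, gcn_layer(adj, H, W1), W2)   # two layers -> two-hop aggregation
```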
### _Relational Graph Convolutional Networks_
One problem that arises when the knowledge-base graph heterogeneous is that, in that case, we want to take into consideration the type of relation that a node has with its neighbors before we average their embeddings.
A relational GCN [27] is similar to a regular GCN, but it uses a separate weight matrix for each type of relation. Therefore, when using a relational GCN, we aggregate the embeddings from all neighbors connected through a specific relation and pass the averaged embedding through a separate transformation layer for each relation.
## III Knowledge Base Augmented Generation
Language models have the ability to store knowledge in their parameters. Alternatively, knowledge in the form of natural language can be offloaded completely from the LM by retrieving from an external knowledge base. Memory augmentation strategies help the language model to avoid producing non-factual information as well as reducing the number of parameters required to achieve comparable performance to significantly larger LMs. Based on their structure, knowledge bases can be either unstructured (text-based) or structured (graph-based). In this literature survey, we are going to explore work from both worlds.
### _Retrieval-Augmented Generation (RAG)_
RAG [6] uses both parametric and non-parametric memory to generate more accurate and informative responses to an input query.
Specifically, the RAG architecture entails:
* _a generator_: a BART-large [28] sequence-to-sequence language model, pre-trained on a massive dataset of text and code (parametric memory).
* _a knowledge base_: a dense vector index of the Wikipedia database (non-parametric memory). All documents in the knowledge base are also encoded as vectors using a \(BERT_{BASE}\)[24] document encoder, \(BERT_{d}\).
* _a retriever_: a component that is responsible for retrieving the documents of the knowledge base that are most relevant to the input query. It follows the DPR (dense passage retrieval) architecture [29] and it consists of a document encoder, \(BERT_{d}\) and a query encoder, \(BERT_{q}\). The retriever
* calculates the embedding of the input query, using the \(BERT_{q}\) encoder.
* conducts _Maximum Inner Product Search_ (MIPS) in the indexed knowledge base to find the **K** most similar documents to the input query
According to the authors of RAG, training and fine-tuning the parameters of the \(BERT_{d}\) encoder is extremely computationally expensive, and not very effective accuracy-wise. Specifically, if they were to train the parameters of \(BERT_{d}\), then for each training iteration, the embeddings of each document in the \(BERT_{BASE}\) knowledge base would have to be updated as well, so that they are in-sync with the new \(BERT_{d}\) encoder.
Fig. 1: Overview of knowledge augmentation of language models from the paper by Izacard et al. [7]. The input query (light yellow), along with a number of retrieved relevant documents (light blue), passes through the generative seq2seq model to produce an output response.
Therefore, they use a completely pre-trained \(BERT_{d}\) encoder, and during the fine-tuning stage, they only fine-tune the parameters of the query encoder \(BERT_{q}\).
One interesting aspect of RAG is how it implements the _fusion_ of knowledge from all retrieved documents to produce a final response. In both proposed versions of RAG, RAG-token and RAG-sequence, fusion is performed right after the decoder.
Specifically, RAG-token:
* for each retrieved document \(z\), calculates the probability for each token \(y_{i}\) in the vocabulary to be the next token in the sequence: \[p_{\theta}(y_{i}|x,z,y_{1:i-1})\] (2)
* sums the probabilities over all retrieved documents (marginalization): \[p_{\theta}^{\prime}(y_{i}|x,y_{1:i-1})=\sum_{z}p_{\eta}(z|x)\cdot p_{\theta}(y_{i}|x,z,y_{1:i-1})\] (3)
* runs Beam Search to find the K most likely next tokens
* chooses the token, \(y_{i}\) with the highest transition probability
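A small numeric sketch of the marginalization in Eq. (3), with made-up retrieval and next-token probabilities.

```python
import numpy as np

p_doc = np.array([0.5, 0.3, 0.2])        # p_eta(z | x) for 3 retrieved documents
p_tok_given_doc = np.array([             # p_theta(y_i | x, z, y_{1:i-1}), toy vocabulary of 4 tokens
    [0.70, 0.10, 0.10, 0.10],
    [0.20, 0.60, 0.10, 0.10],
    [0.25, 0.25, 0.25, 0.25],
])
p_tok = p_doc @ p_tok_given_doc          # marginalise over the retrieved documents
print(p_tok, p_tok.sum())                # a proper distribution over the vocabulary (sums to 1)
```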
The RAG-sequence model is easier to grasp. It takes into account only one retrieved document per sequence that it generates. Specifically, for each retrieved document, it conducts Beam Search to generate K sequences. Then, it simply returns the sequence with the highest probability.
### _Realm [30]_
REALM was the first method that managed to pre-train jointly the retriever and the generator. The authors of REALM propose three stages of training for the given architecture:
* _initialization_
* _pre-training_
* _fine-tuning_
One significant challenge that REALM faced was the fact that, at the beginning of training, the query and document encoders, \(Embed_{input}\) and \(Embed_{doc}\) respectively contain completely random parameters. Hence, the retrieved documents, z, will likely be unrelated to the input query, x. As a result, the Generator learns to ignore the retrieved documents. Once this occurs, during training, the Retriever no longer receives a meaningful gradient and cannot improve, creating a vicious cycle that does not result in an accurate end model.
To avoid this cold-start problem, the authors warm-start (initialization) the Retriever (\(Embed_{input}\) + \(Embed_{doc}\)) using a training objective known as the Inverse Cloze Task (ICT) [31] where, given a sentence, the model is trained to retrieve the document where that sentence came from.
In the case of the Generator, the authors warm-start it with BERT pre-training [24] and they use the uncased BERT-base model (12 layers, 768 hidden units, 12 attention heads).
After the initialization stage, the REALM proposes an unsupervised pre-training method. During the pre-training iteration, REALM:
1. randomly selects sentences from the text corpus and masks specific tokens from each sentence
2. receives a masked query, q, as input. An example of that query would be: _"The [MASK] at the top of the pyramid"_
3. outputs its token prediction (correct answer is _"pyramidion"_)
4. back-propagates through the parameters \(\theta\) of the retriever \(p_{\theta}(z|x)\) and \(\phi\) of the generator \(p_{\phi}(z|x)\) (joint pre-training of the models).
During pre-training, both the \(Embed_{doc}\) and the \(Embed_{input}\) components of the Retriever are updated. Because the parameters of \(Embed_{doc}\) are updated during pre-training, in order for the document embeddings in the Wikipedia knowledge base to stay in-sync with the updated Retriever, after each back-propagation step, REALM needs to:
1. re-compute the document embeddings
2. re-calculate the document index (in order to perform MIPS)
This is a computationally expensive task, especially for really large databases, such as Wikipedia. Therefore, REALM was designed such that the embedding updates happen every 100 back-propagation steps, as an asynchronous process.
The supervised fine-tuning method that the authors used in order to evaluate REALM on Open-domain Question Answering (Open-QA) goes as follows:
1. they collect question-answer tuples, such as: (_"What's the angle of an equilateral triangle"_, _"60 degrees"_).
2. REALM receives the question as input.
3. it outputs its prediction.
4. similar to the pre-training phase, REALM back-propagates through the parameters \(\theta\) of the retriever \(p_{\theta}(z|x)\) and \(\phi\) of the generator \(p_{\phi}(z|x)\), but this time \(Embed_{doc}\) stays untouched. Therefore, fine-tuning is much less computationally expensive.
### _Fusion in Decoder (FiD)_
FiD [7] employs a similar but quite simpler idea to RAG. Their main difference, however, lies in the way they perform the fusion of the retrieved knowledge.
Similar to RAG, in FiD, we have two main models:
* _the retriever_ which has access to a \(BERT_{BASE}\) where documents are represented as dense vectors and retrieves the most relevant documents by running _Maximum Inner Product Search_ (MIPS) using the FAISS library [26]
* _the generator_ which is a sequence-to-sequence model that receives the input query concatenated with a retrieved passage and is trained to produce an answer. For their experiments, they used a pre-trained T5 seq2seq model.
In FiD, fusion of the knowledge in the retrieved documents is performed right before the decoder. Specifically, similar to RAG, they concatenate the input query with each retrieved passage and separately feed each concatenation to the encoder (in parallel). However, after that, all the produced encoded vectors are concatenated together (fusion) and passed as a single input sequence to the decoder, which performs attention across all retrieved documents (cross-attention).
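A tensor-level sketch of this fusion step; the shapes are illustrative, and the actual FiD implementation wraps a pre-trained T5 encoder-decoder rather than random tensors.

```python
import torch

batch, n_passages, seq_len, d_model = 2, 4, 64, 512
# One encoder output per (question + passage) concatenation, produced independently.
per_passage = [torch.randn(batch, seq_len, d_model) for _ in range(n_passages)]
# Fusion-in-Decoder: concatenate along the sequence axis so the decoder
# cross-attends over all passages jointly.
fused = torch.cat(per_passage, dim=1)
print(fused.shape)                       # (batch, n_passages * seq_len, d_model)
```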
### _Atlas_
Atlas [9] is essentially the next generation of RAG and FiD, but it specializes in few-shot learning tasks. Atlas builds upon REALM [30] and proposes jointly pre-training both the retriever and the generator model, unlike RAG which uses pre-trained models and jointly trains them only during the fine-tuning stage.
When performing a task, from question answering to generating Wikipedia articles, Atlas starts by retrieving the top-k relevant documents from a large corpus of text. Then, these documents are fed to the language model, along with the query, which in turn generates the output. Both the retriever and the language model are based on pre-trained transformer networks.
Atlas, similar to FiD, follows the retriever-generator architecture:
* _the retriever_ is based on the _Contriever_ model [32], which entails a \(BERT_{q}\) and a \(BERT_{d}\) encoder and returns the K most relevant documents based on their similarity with the query.
* _the generator_ uses a T5 seq2seq model [25] and applies the FiD technique that processes each document separately in the encoder and concatenates the embeddings before they enter the decoder.
Atlas, in contrast with RAG, trains both \(BERT_{q}\) and \(BERT_{d}\) (not only \(BERT_{q}\)). Hence, the \(BERT_{d}\) embeddings for each document in the knowledge base need to be regularly updated so that they stay in sync with the updated \(BERT_{d}\) encoder. This is a computationally expensive task.
### _Retro_
The creators of RETRO [8] managed to implement an augmented language model at an unprecedented scale. This work's breakthrough is that it managed to pre-train and augment a relatively small Transformer model (25% fewer parameters than GPT-3 [2]) with a database that is 2 trillion tokens large (\(10^{3}\times\) larger than similar retrieval-augmented LLMs).
As we saw in previous work, such as RAG, REALM and Atlas, one main difficulty of augmenting LLMs with external knowledge-bases is that training the Retriever can be computationally expensive, because while the document encoder becomes better, the embeddings for each passage in the database need to be recomputed.
In this paper, they completely bypassed that challenge by using a frozen BERT retriever [24] which contains a pre-trained document encoder. Hence, in RETRO they calculate the document embeddings once, in the beginning, and do not update them again. As a result, the main bottleneck that accessing the external database entails is to retrieve the K most-relevant documents to the input query, which they implemented using the SCANN library [33]. This is a task of sub-linear complexity, which means that we can query their 2 trillion token database in 10ms.
One main difference of RETRO with previous work is that in RETRO they don't retrieve single documents (sentences), but chunks (a retrieved sentence along with the following sentence). This enables the generator model to acquire more context around the retrieved information and produce more accurate answers.
Here is an overview of how RETRO produces an answer to an input query, q:
1. it splits the input query into chunks of 4 tokens
2. For each chunk, cq, of q, RETRO:
   1. calculates the embedding of the chunk
   2. finds the 2 nearest neighbors (most relevant documents) in its knowledge base
   3. encodes cq through the encoder
   4. encodes the 2 nearest neighbors through the encoder
   5. interleaves the encodings of the nearest neighbors with the query-chunk embeddings to perform cross-attention. Neighbors of the first chunk only affect the last token of the first chunk and the first tokens of the second chunk.
Through this technique, RETRO manages to perform attention in complexity that is linear to the number of retrieved passages.
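A simplified sketch of the chunking step, with the neighbour lookup mocked out; the real system queries the SCANN index over the full retrieval database.

```python
def split_into_chunks(tokens, chunk_size=4):
    return [tokens[i:i + chunk_size] for i in range(0, len(tokens), chunk_size)]

def retrieve_neighbours(chunk, k=2):
    # Placeholder for the SCANN nearest-neighbour lookup over the token database.
    joined = " ".join(chunk)
    return [f"neighbour_{i} of '{joined}'" for i in range(k)]

tokens = "the pyramidion sits at the very top of a pyramid".split()
for chunk in split_into_chunks(tokens):
    print(chunk, "->", retrieve_neighbours(chunk))
```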
### _GRAFT-Net_
GRAFT-Net [10] is a novel model designed for enhancing Question Answering (QA) in scenarios where there is a structured, graph-like knowledge base (triplestore) along with a substantial text corpus. GRAFT-Net leverages advancements in graph representation learning to extract answers by creating question-specific sub-graphs containing both text and knowledge-base entities and relations.
Results in a range of benchmarks demonstrate that GRAFT-Net exhibits competitive performance compared to state-of-the-art methods when tested on either structured knowledge bases or text corpora in isolation.
Fig. 2: Overview of the Fusion-in-Decoder (FiD) [7] technique. The input question gets concatenated with each relevant passage and all concatenations get encoded in parallel. The embeddings that are produced are concatenated together (fusion) and are passed as input to the decoder.
Graft-Net consists of the following stages:
1. the question sub-graph (\(G_{q}\)) retrieval stage: This is a characteristic of early fusion, the process of combining information from the triplestore knowledge-base and text early in the model, i.e., before a graph neural network is used.
2. the answer selection stage, where GRAFT-Net use a Graph Convolutional Network (GCN) variant [34][35][27] to do binary classification (answer, not-answer) on the nodes of \(G_{q}\).
The question sub-graph \(G_{q}\) essentially is a copy of the entire knowledge-base graph, in which the nodes and edges that are irrelevant to a given question, \(q\), are pruned. In addition, the question sub-graph contains text documents as well, but only the ones that are likely to contain the answer to question \(q\).
The retrieval of the question sub-graph, \(G_{q}\) happens in two parallel pipelines:
1. Knowledge Base Retrieval
2. Text Retrieval
During the knowledge base retrieval, a sub-graph of the triplestore knowledge base is retrieved. Specifically, GRAFT-Net:
1. retrieves a set of seed entities, \(Sq\), that are relevant to the question \(q\)
2. runs the Personalized PageRank (PPR) method [36] around these seeds to identify other entities which might be an answer to the question. During PPR, we assign weights to edges around the seed entities. Each edge weight is essentially the cosine similarity between:
   * the question vector, v(q): the average of all word vectors in the question;
   * the relation vector, v(r): the average of all word vectors in the relation corresponding to that edge.
3. retains the top E entities \(v_{1}\),..., \(v_{E}\) by PPR score, along with any edges between them, and adds them to the question sub-graph, \(G_{q}\)
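Steps 2 and 3 above can be sketched as follows. This is only an illustration, not GRAFT-Net's code: the word vectors, triples and seed entities are toy placeholders, and networkx's personalized PageRank stands in for the PPR implementation.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
# toy word vectors; GRAFT-Net uses pre-trained embeddings instead
word_vec = {w: rng.normal(size=16)
            for w in "who directed inception director genre".split()}

def avg_vec(text):
    vs = [word_vec[w] for w in text.lower().split() if w in word_vec]
    return np.mean(vs, axis=0) if vs else np.zeros(16)

def cosine(u, v):
    n = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v / n) if n else 0.0

question = "who directed Inception"
triples = [("Inception", "director", "C. Nolan"),
           ("Inception", "genre", "Sci-Fi"),
           ("C. Nolan", "director", "Dunkirk")]

G = nx.Graph()
for subj, rel, obj in triples:
    # edge weight = cosine similarity between question vector v(q) and relation vector v(r)
    w = max(cosine(avg_vec(question), avg_vec(rel)), 0.0) + 1e-3  # small floor keeps edges usable
    G.add_edge(subj, obj, weight=w)

seeds = {"Inception": 1.0}                                   # entities linked in the question
ppr = nx.pagerank(G, personalization=seeds, weight="weight")
top_entities = sorted(ppr, key=ppr.get, reverse=True)[:2]    # v_1, ..., v_E kept for G_q
print(top_entities)
```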
During the text retrieval phase, GRAFT-Net retrieves documents (sentences) relevant to the question, \(q\), from the Wikipedia database. The text retrieval phase entails the steps that are described below. GRAFT-Net:
1. retrieves the top 5 most relevant Wikipedia articles (collection of documents), by using a weighted bag-of-words model [37].
2. populates a Lucene index [38] (facilitates data search in a large corpus of text) with sentences from these articles, and retrieves the top ranking ones: \(d_{1}\),..., \(d_{D}\).
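The text-retrieval side can be approximated with the short sketch below; a TF-IDF vectorizer is used here as a stand-in for both the weighted bag-of-words ranking and the Lucene index, so it is only a rough analogue of the actual components.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

question = "who directed Inception"
sentences = [  # sentences drawn from the top-ranked Wikipedia articles
    "Inception is a 2010 film directed by Christopher Nolan.",
    "The film stars Leonardo DiCaprio as a professional thief.",
    "Christopher Nolan also directed Dunkirk.",
]

vectorizer = TfidfVectorizer()
S = vectorizer.fit_transform(sentences)        # "index" the candidate sentences
q = vectorizer.transform([question])           # encode the question
scores = cosine_similarity(q, S).ravel()
top = scores.argsort()[::-1][:2]               # d_1, ..., d_D for the question sub-graph
print([sentences[i] for i in top])
```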
The final question graph \(G_{q}\) consists of:
* \(V_{q}\): all retrieved entities and documents
* \(E_{q}\): all relations between the retrieved entities and all entity links between entities and documents
Because the vertices of the graph can be either entities or documents, the graph is heterogeneous.
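Putting the two pipelines together, the final sub-graph can be assembled roughly as in the sketch below, with placeholder entities, relations and documents; entity-link edges are created whenever a retrieved sentence mentions a retained entity.

```python
import networkx as nx

entities = ["Inception", "C. Nolan"]                  # from the knowledge base retrieval
relations = [("Inception", "director", "C. Nolan")]   # edges retained between them
documents = {"d1": "Inception is a film directed by C. Nolan."}  # from the text retrieval

Gq = nx.Graph()
for e in entities:
    Gq.add_node(e, kind="entity")
for doc_id in documents:
    Gq.add_node(doc_id, kind="document")

for subj, rel, obj in relations:                      # knowledge-base edges
    Gq.add_edge(subj, obj, relation=rel)

for doc_id, text in documents.items():                # entity-link edges
    for e in entities:
        if e in text:
            Gq.add_edge(e, doc_id, relation="entity_link")

# V_q mixes entity and document nodes, hence the graph is heterogeneous
print(Gq.nodes(data=True))
print(Gq.edges(data=True))
```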
### _PullNet [11]_
PullNet builds upon the advancements made by GRAFT-Net and uses the text corpus to supplement information extracted from the triplestore knowledge base in order to answer multi-hop questions. The subjects and objects in the triples contain links to relevant documents in the text corpus and PullNet uses these links to produce more factually-based answers.
Like GRAFT-Net, PullNet has an initial phase where it retrieves a question sub-graph \(G_{q}\). However, PullNet **learns** how to construct the sub-graph, rather than using an ad-hoc subgraph-building strategy. More specifically, PullNet relies on a small set of retrieval operations, each of which expands a graph node by retrieving new information from the knowledge base or the corpus. It learns when and where to apply these "pull" operations with another graph CNN classifier. The "pull" classifier is weakly supervised, using question-answer pairs.
The end result is a learned iterative process for sub-graph construction, which begins with a small sub-graph containing only the question text and the entities which it contains, and gradually expands the sub-graph to contain information from the knowledge base and corpus that are likely to be useful. The process is especially effective for multi-hop questions.
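The learned, iterative expansion can be caricatured with the sketch below; `should_pull`, `pull_from_kb` and `pull_from_corpus` are hypothetical placeholders for the weakly supervised "pull" classifier and the two retrieval operations.

```python
import networkx as nx

def should_pull(node, hop):          # placeholder for the weakly supervised
    return hop < 2                   # graph-CNN "pull" classifier

def pull_from_kb(node):              # placeholder knowledge-base retrieval operation
    kb = {"Inception": [("Inception", "director", "C. Nolan")]}
    return kb.get(node, [])

def pull_from_corpus(node):          # placeholder corpus retrieval operation
    corpus = {"C. Nolan": ["C. Nolan directed Dunkirk."]}
    return corpus.get(node, [])

def build_subgraph(question_entities, max_hops=2):
    Gq = nx.Graph()
    frontier = list(question_entities)       # start from the question entities
    for hop in range(max_hops):
        next_frontier = []
        for node in frontier:
            Gq.add_node(node, kind="entity")
            if not should_pull(node, hop):
                continue
            for s, r, o in pull_from_kb(node):        # expand with KB facts
                Gq.add_edge(s, o, relation=r)
                next_frontier.append(o)
            for doc in pull_from_corpus(node):        # link supporting text
                Gq.add_edge(node, doc, relation="mention")
        frontier = next_frontier
    return Gq

print(build_subgraph(["Inception"]).edges(data=True))
```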
## IV Search-Engine Augmented Generation
Augmenting large language models with search engines represents the next step in the evolution of AI-driven natural language processing. Search engines empower models with a gateway to an expansive universe of knowledge that far surpasses what external knowledge bases can access. By harnessing the prowess of search engines, these models gain the ability to tap into the vast and ever-expanding repository of information on the World Wide Web. This dynamic access not only provides a wealth of information but also ensures that text generation remains current and up-to-date with the latest developments, a feat that external knowledge bases often struggle to achieve as they require continuous updates.
However, it is crucial to acknowledge that this newfound access to the open web through search engines carries potential risks. The information landscape of the internet is diverse, encompassing both valuable knowledge and, regrettably, harmful or malicious content. When integrated with augmented large language models, there exists the possibility of inadvertently exposing the model to inappropriate or unsafe content. This introduces concerns regarding the reliability and safety of the generated responses, as the model may unintentionally incorporate harmful information into its outputs.
As we will see in the following sections, the use of search engine-based queries has the benefit that these queries are inherently designed to be understood by humans, enhancing both the interpretability of the model's responses and its potential for continuous improvement through direct annotation or feedback. However, to harness the immense potential of this symbiotic fusion of AI-driven language models and the vast knowledge landscape facilitated by search engines, it is imperative to develop robust safeguards and mechanisms to
mitigate the risks associated with accessing potentially harmful or malicious content. This will ensure that the augmentation of language models with search engines not only broadens their horizons but also maintains the integrity and safety of their outputs, ushering in a new era of responsible and informed natural language understanding and interaction.
### _Internet Augmented Dialogue Generation (IADG)_
The previously described FAISS-based approaches, such as RAG (III-A) and FiD (III-C), can take advantage of many existing methods developed for QA and dialogue tasks, but they have several disadvantages. First, they may be difficult to keep up to date with real-time web documents. Moreover, there may be a limit to the number of documents that can be stored in local FAISS deployments. Finally, such methods do not take advantage of the high-quality ranking that has been fine-tuned in Internet search engines over decades of use. The authors of IADG (Facebook AI Research) therefore consider using Internet search engines directly for knowledge retrieval.
IADG [13] consists of two main components:
* a search query generator: an encoder-decoder Transformer that takes in the dialogue context as input, and generates a search query. This is given to the black-box search engine API, and N documents are returned.
* a FiD-style generator: an encoder-decoder model that encodes each document individually (along with the dialog context), concatenates the embeddings before they enter the decoder, and finally generates the next response.
Each of these components can be trained separately, given **supervised** data for both tasks. The query generator requires: (context, search query) pairs, and the response generator requires: (context, response) pairs.
The search engine is a black box in this system (similar to LaMDA), and could potentially be swapped out for any method. In IADG, they use the Bing Search API [39] for their experiments to generate a list of URLs for each query. Then, they use these URLs as keys to find their page content.
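End to end, the two components can be wired together as in the sketch below. All three functions are hypothetical placeholders: the real system uses trained encoder-decoder Transformers for `generate_search_query` and `fid_generate`, and the Bing Search API for `web_search`.

```python
from typing import List

def generate_search_query(dialogue_context: str) -> str:
    # placeholder for the trained query-generator Transformer
    return "current weather in Lausanne"

def web_search(query: str, n: int = 5) -> List[str]:
    # placeholder for the black-box search engine API; returns the page
    # contents behind the top-n URLs
    return [f"document {i} about: {query}" for i in range(n)]

def fid_generate(dialogue_context: str, documents: List[str]) -> str:
    # placeholder for the FiD-style generator: each document is encoded
    # together with the context, embeddings are concatenated, then decoded
    return "It is sunny in Lausanne today."

def respond(dialogue_context: str) -> str:
    query = generate_search_query(dialogue_context)   # (context -> search query)
    docs = web_search(query, n=5)
    return fid_generate(dialogue_context, docs)       # (context + docs -> reply)

print(respond("User: what's the weather like where you are?"))
```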
### _SeeKeR_
SeeKeR [14] (Search-engine \(\rightarrow\) Knowledge \(\rightarrow\) Response) introduces an innovative approach that employs a single language model to tackle three distinct modular tasks consecutively: searching for information, generating knowledge, and crafting a final response. In this research endeavor, SeeKeR explores a modular framework that builds upon the foundations of IADG [13] while amalgamating the most effective elements from various existing solutions.
The SeeKeR model adheres to the foundational architecture of the standard transformer [19], but it distinguishes itself by employing the same model in a modular fashion, iteratively for multiple tasks. Within each module, the encoder (or decoder) incorporates distinct special tokens to signal the specific module being activated. The output generated by each module is subsequently fed into the next one, along with the original context. SeeKeR comprises a trio of specialized modules, each dedicated to unique functionalities, namely:
* Search module: generates a search query from the encoded input context. Subsequently, this query is channeled into the Bing Web Search API [39], initiating a retrieval process that yields the 5 most relevant documents as outcomes.
* Knowledge module: utilizes the encoded input context and the pool of retrieved documents to generate a knowledge response. This response comprises one or more pertinent phrases or sentences extracted directly from the retrieved documents. Notably, the FiD [7] method is employed to encode both the context and the documents.
* Response module: operates on the encoded input context merged with the knowledge response and crafts a coherent and contextually relevant continuation to the input.
It is essential to highlight that the knowledge module essentially involves a "copy" mechanism, as it does not entail the creation of new tokens; rather, its complexity lies in the precise selection of the relevant knowledge to replicate.
The authors of SeeKeR consider the GPT2 transformer [18] as a base model, and fine-tune it to become a SeeKeR model. Therefore, they did not perform any pre-training of their own in this case. For their experiments, they considered medium, large and XL (345M, 762M and 1.5B parameters) models.
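The modular reuse of one model can be sketched as follows; `lm_generate`, the bracketed control tokens and `web_search` are hypothetical stand-ins for the fine-tuned GPT2 model, its special module tokens and the Bing Web Search API.

```python
from typing import List

def lm_generate(prompt: str) -> str:
    # placeholder for the single fine-tuned GPT2-style model
    return "stub output for: " + prompt[:40]

def web_search(query: str, n: int = 5) -> List[str]:
    return [f"document {i} for {query}" for i in range(n)]

def seeker_respond(context: str) -> str:
    # 1) search module: context -> search query
    query = lm_generate("[search] " + context)
    docs = web_search(query, n=5)

    # 2) knowledge module: context + documents -> copied knowledge sentence(s)
    knowledge = lm_generate("[knowledge] " + context + " || " + " || ".join(docs))

    # 3) response module: context + knowledge -> final reply
    return lm_generate("[response] " + context + " [k] " + knowledge)

print(seeker_respond("Who won the last World Cup?"))
```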
### _LaMDA_
In this paper by Google, the authors of LaMDA [12] manage to augment a language generation model with what they call a Toolset (TS), a black-box external knowledge source. The Toolset consists of:
1. a calculator
2. a translator
3. an information retrieval system (similar to a search engine)
The TS takes a single string as input and outputs a list of one or more strings. Each tool in TS expects a string and returns a list of strings. For example, the information retrieval system can receive _"How old is Rafael Nadal?"_ as input, and output [_"Rafael Nadal / Age / 35"_].
The information retrieval system is also capable of returning snippets of content from the open web, with their corresponding URLs. The TS tries an input string on all of its tools, and produces a final output list of strings by concatenating the output lists from every tool in the following order: calculator, translator, and information retrieval system. A tool will return an empty list of results if it can't parse the input (e.g., the calculator cannot parse _"How old is Rafael Nadal?"_), and therefore does not contribute to the final output list.
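The toolset contract described above (a single string in, a concatenated list of strings out, an empty list when a tool cannot parse the input) can be mimicked with the sketch below; the three tools are toy placeholders rather than LaMDA's actual components.

```python
from typing import List

def calculator(query: str) -> List[str]:
    try:
        # only accepts plain arithmetic expressions, e.g. "135/7"
        return [str(eval(query, {"__builtins__": {}}, {}))]
    except Exception:
        return []                       # cannot parse -> contributes nothing

def translator(query: str) -> List[str]:
    return []                           # toy stub: never fires in this sketch

def retrieval(query: str) -> List[str]:
    kb = {"how old is rafael nadal?": ["Rafael Nadal / Age / 35"]}
    return kb.get(query.lower(), [])

def toolset(query: str) -> List[str]:
    out: List[str] = []
    for tool in (calculator, translator, retrieval):   # fixed order
        out.extend(tool(query))
    return out

print(toolset("How old is Rafael Nadal?"))   # -> ['Rafael Nadal / Age / 35']
print(toolset("135/7"))                      # e.g. ['19.285714285714285']
```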
It is essential to note that the LaMDA paper gives only little information on how the information retrieval system works, apart from the fact that it entails a database and can also provide web snippets along with their URLs.
LaMDA entails two main sub-models that follow the decoder-only Transformer architecture:
1. _LaMDA-Base_: A regular generative model that is pre-trained on a large dataset. LaMDA-Base is the first model
to receive a query from the user. It then generates a response that is checked and refined by LaMDA-Research.
2. _LaMDA-Research_: A generative model that usually receives the output of LaMDA-Base as input and is fine-tuned to choose the recipient of its output (the TS or the user). In general, LaMDA-Research queries the TS in a loop, until it has sufficient information to generate a final response to the user.
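The interplay between the two sub-models can be summarized by the hedged control loop below; `lamda_base`, `lamda_research` and `toolset` are placeholders, with the research model routing each output either to the TS or to the user, as described above.

```python
from typing import List, Tuple

def lamda_base(user_query: str) -> str:
    return "draft: Rafael Nadal is a tennis player."        # placeholder

def lamda_research(context: List[str]) -> Tuple[str, str]:
    # placeholder: returns (recipient, message); recipient is "TS" or "User"
    if not any(line.startswith("TS:") for line in context):
        return "TS", "How old is Rafael Nadal?"             # query the toolset first
    return "User", "Rafael Nadal is a 35-year-old tennis player."

def toolset(query: str) -> List[str]:
    return ["Rafael Nadal / Age / 35"]                      # placeholder

def answer(user_query: str, max_rounds: int = 4) -> str:
    context = [f"User: {user_query}", f"Base: {lamda_base(user_query)}"]
    for _ in range(max_rounds):                             # query the TS in a loop
        recipient, message = lamda_research(context)
        if recipient == "User":
            return message                                  # final response
        context.append("TS: " + " | ".join(toolset(message)))
    return context[-1]

print(answer("How old is Rafael Nadal?"))
```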
## V Limitations and Discussion
Augmented large language models grapple with a set of recurring challenges. These issues encompass occasional inconsistencies, contradictions, factual inaccuracies, potential repetition, and a limited depth of reasoning, among others [40, 41].
Furthermore, concerns emerge regarding the generation of content imbued with toxic language and bias, especially in specific contexts and topics [42, 43]. Another noteworthy concern is the influence of internet-sourced documents on model outputs, potentially leading to the retrieval of undesirable content. Many research experiments lean on externally developed search engines, offering advantages in terms of optimization and reliability. However, building one's retrieval system, as is often the case in question-answering (QA) and language modeling (LM) research, necessitates starting from scratch.
While search engines are adept at crawling and indexing the latest news and documents, this process demands significant engineering effort and is vital for various applications. Conversely, methods in the literature using their retrieval setups often rely on fixed document databases, which become outdated over time. Additionally, search engines are designed for human interaction, using natural language queries with limited context. In contrast, machine-generated queries, as exemplified by models like RAG [6], can potentially encode more context or adopt vector-encoded queries, albeit at the cost of human interpretability. A benefit of search engine-based queries is their human readability, offering both interpretability and the potential for improvement through direct annotation or feedback.
Language models employing augmentation address the challenge of hallucination but do not guarantee factual grounding. Instances of conflicting retrievals can lead to mixed responses. To enhance reliability, the introduction of trust mechanisms, assigning different weights to retrievals, is a potential avenue. Another concern is the generation of generic responses that may overlook the incorporated knowledge.
In this survey, we have highlighted these common challenges and limitations faced by augmented large language models, shedding light on the evolving landscape of language generation and the pressing need for innovative solutions.
## VI Conclusion
In this literature survey, we have explored a multitude of works in which Language Models (LMs) have been enriched with external knowledge, enabling them to generate more contextually grounded and up-to-date responses. Throughout these studies, LMs have demonstrated their capacity to enhance context by incorporating relevant information, thereby fostering the production of informative answers to various questions. This augmentation often involves the integration of non-parametric modules, marking a departure from the conventional language modeling paradigm and categorizing these models as augmented language models.
However, it is essential to acknowledge certain limitations within this paradigm shift. While LMs augmented with external knowledge exhibit reduced hallucination, they do not offer an ironclad guarantee of factual grounding. Instances arise where conflicting retrievals result in mixed answers, underscoring the need for continued refinement in this domain. Moreover, the limited exploration of the interplay between reasoning augmentation and knowledge integration in current research highlights a promising avenue for future endeavors.
As we reflect on the landscape of augmented language models, it becomes evident that this field holds immense promise and excitement. It represents a vital step towards ushering in the next generation of deep learning systems that can engage in complex and meaningful human-machine interactions while minimizing the parameter footprint. The journey towards fully realizing the potential of augmented LMs is ongoing, with opportunities for further innovation and investigation awaiting those who seek to shape the future of this dynamic field.
|
2308.00112 | S-Decomposable Banach Lattices, Optimal Sequence Spaces and
Interpolation | We investigate connections between upper/lower estimates for Banach lattices
and the notion of relative s-decomposability, which has roots in interpolation
theory. To get a characterization of relatively s-decomposable Banach lattices
in terms of the above estimates, we assign to each Banach lattice X two
sequence spaces XU and XL that are largely determined by the set of p, for
which lp is finitely lattice representable in X. As an application, we obtain
an orbital factorization of relative K-functional estimates for Banach couples
(X0, X1) and (Y0, Y1) through some suitable couples of weighted Lp-spaces
provided if Xi, Yi are relatively s-decomposable for i = 0, 1. Also, we
undertake a detailed study of the properties of optimal upper and lower
sequence spaces XU and XL, and, in particular, prove that these spaces are
rearrangement invariant. In the Appendix, a description of the optimal upper
sequence space for a separable Orlicz space as a certain intersection of some
special Musielak-Orlicz sequence spaces is given | Sergey V. Astashkin, Per G. Nilsson | 2023-07-31T19:31:29Z | http://arxiv.org/abs/2308.00112v2 | # \(S\)-decomposable Banach lattices, optimal sequence spaces and interpolation
###### Abstract.
We investigate connections between upper/lower estimates for Banach lattices and the notion of relative \(s\)-decomposability, which has roots in interpolation theory. To get a characterization of relatively \(s\)-decomposable Banach lattices in terms of the above estimates, we assign to each Banach lattice \(X\) two sequence spaces \(X_{U}\) and \(X_{L}\) that are largely determined by the set of \(p\), for which \(l_{p}\) is finitely lattice representable in \(X\). As an application, we obtain an orbital factorization of relative \(K\)-functional estimates for Banach couples \(\vec{X}=(X_{0},X_{1})\) and \(\vec{Y}=(Y_{0},Y_{1})\) through some suitable couples of weighted \(L_{p}\)-spaces provided that \(X_{i},Y_{i}\) are relatively \(s\)-decomposable for \(i=0,1\).
Also, we undertake a detailed study of the properties of optimal upper and lower sequence spaces \(X_{U}\) and \(X_{L}\), and, in particular, prove that these spaces are rearrangement invariant. In the Appendix, a description of the optimal upper sequence space for a separable Orlicz space as a certain intersection of some special Musielak-Orlicz sequence spaces is given.
Key words and phrases: Banach lattice, \(s\)-relative decomposable couples, relative decomposable couples, lower, upper estimates, interpolation, Calderon-Mityagin property.

2010 Mathematics Subject Classification: Primary 46B70; Secondary 46B42, 15A15.

The work of the first author was completed as a part of the implementation of the development program of the Volga Region Scientific and Educational Mathematical Center (agreement no. 075-02-2023-931).
## 1. Introduction
measurable subset \(\Omega^{\prime}\) of \(\Omega\) for some suitable measure defined on \(\Omega^{\prime}\) (see Proposition 1.4 in [10, p. 58]). Observe also that there is a simple sufficient condition for couples of Banach lattices to be relatively decomposable. Namely, if \(Y\) satisfies an upper \(p\)-estimate and \(X\) a lower \(p\)-estimate, then the Banach lattices \(X\) and \(Y\) are relatively decomposable.
The notion of relative decomposability plays a central role in the paper [11], where it was proved that all weighted couples modelled on two given couples \(\overrightarrow{X}=(X_{0},X_{1})\) and \(\overrightarrow{Y}=(Y_{0},Y_{1})\) of Banach lattices of measurable functions over \(\sigma\)-finite measure spaces possess the relative \(\mathcal{C}-\mathcal{M}\) property if and only if both pairs \(X_{0},Y_{0}\) and \(X_{1},Y_{1}\) are relatively decomposable. Therefore, in the case when \(\overrightarrow{X}=\overrightarrow{Y}\) and \(X_{0},X_{1}\) are \(\sigma\)-order continuous and have the Fatou property, the latter property of a couple \(\overrightarrow{X}\) implies that both \(X_{0}\) and \(X_{1}\) are \(L_{p}\)-spaces, which is a strong converse to the well-known result of Sparr [30], asserting that each weighted \(L_{p}\)-couple has the relative \(\mathcal{C}-\mathcal{M}\) property.
Of course, not all pairs of Banach couples possess the relative \(\mathcal{C}-\mathcal{M}\) property and this motivated Cwikel to consider a weakened version of that. Specifically, already in the papers [7] and [8], condition (1.2) is replaced with the inequality
\[K(t,y;\overrightarrow{Y})\leq w(t)K(t,x;\overrightarrow{X}),\ t>0,\ \ \mbox{with}\ \int_{0}^{\infty}w(u)^{s}du/u<\infty, \tag{1.4}\]
for some (fixed) function \(w(u)\geq 0\) and \(s\in[1,\infty]\) (if \(s=\infty\), after the usual modification of the condition imposed on \(w\), we come certainly to the definition of relative \(\mathcal{C}-\mathcal{M}\) property). In turn, this led to the introduction of the following more general concept of relatively \(s\)-decomposable pairs of Banach lattices.
Let \(1\leq s\leq\infty\). A pair of Banach lattices \(X\), \(Y\) is said to be _relatively_\(s\)-_decomposable_ whenever for all sequences \(\{x_{n}\}_{n=1}^{\infty}\subset X\), \(\{y_{n}\}_{n=1}^{\infty}\subset Y\) of pair-wise disjoint elements such that \(\sum_{n=1}^{\infty}x_{n}\in X\) and \(\left\|y_{n}\right\|_{Y}\leq\left\|x_{n}\right\|_{X}\), \(n\in N\), we have \(\sum_{n=1}^{\infty}y_{n}\in Y\) and
\[\left\|\sum_{n=1}^{\infty}\lambda_{n}y_{n}\right\|_{Y}\leq D\left(\sum_{n=1}^{ \infty}\left|\lambda_{n}\right|^{s}\right)^{1/s}\left\|\sum_{n=1}^{\infty}x_{ n}\right\|_{X}, \tag{1.5}\]
for some constant \(D\) and every sequence \(\{\lambda_{n}\}_{n=1}^{\infty}\in l_{s}\) (again with the usual modification in the case \(s=\infty\) that gives (1.3), i.e., the relative decomposability). According to [9] (see also [4, Remark 4.4.33]), if \(\overrightarrow{X}=(X_{0},X_{1})\) and \(\overrightarrow{Y}=(Y_{0},Y_{1})\) are two couples of Banach lattices such that the pairs \(X_{0}\), \(Y_{0}\) and \(X_{1}\), \(Y_{1}\) are relatively \(s\)-decomposable, for every \(x\in X_{0}+X_{1}\) and \(y\in Y_{0}+Y_{1}\) satisfying condition (1.4) we have \(y=Tx\) for some linear operator \(T:X_{i}\to Y_{i}\), \(i=0,1\).
One of the main results of the paper [11] is a characterization of the relative decomposability in the setting of Banach lattices of measurable functions, implying that the above-mentioned trivial sufficient condition expressed in terms of upper and lower estimates is also necessary. In the case of relative \(s\)-decomposable pairs of Banach lattices \(X\), \(Y\) there is also a simple sufficient condition, formulated in
terms of upper estimates for \(Y\) and lower estimates for \(X.\) The main aim of this paper is to prove that, in a more general setting of abstract Banach lattices, this trivial sufficient condition is also necessary (as in the case of relative decomposability, i.e., when \(s=\infty\)).
A pivotal role in the proof of our main result is played by the notions of optimal upper and lower sequence spaces, introduced in this paper. Specifically, we associate to every Banach lattice \(X\) two sequence spaces \(X_{U}\) and \(X_{L},\) which rather precisely reflect lattice properties of \(X,\) in particular, encoding the optimal upper and lower estimate information, respectively. Section 3 is devoted to a detailed study of the properties of these spaces, which, as we believe, can be useful tools also when considering other issues related to Banach lattices. As a result, in Section 4 we present the proof of Theorem 2.8, which gives a solution of the above problem. On the way, we obtain also other results related to optimal sequence spaces. We show that if Banach lattices \(X\) and \(Y\) are relatively \(s\)-decomposable, then the space of multiplicators from \(X_{L}\) into \(Y_{U}\) with respect to coordinate-wise multiplication includes the space \(l_{s}\) (see Proposition 4.1). Another important ingredient in the proof of Theorem 2.8 is the relationship between the construction of optimal sequence spaces \(X_{U}\) and \(X_{L}\) and the finite lattice representability of \(l_{r}\)-spaces in \(X\).
Section 5 contains some applications of Theorem 2.8 to the interpolation theory. In particular, in Theorem 5.2 we prove that Banach lattices \(X\) and \(Y\) being relatively \(s\)-decomposable admit an orbital factorization of relative \(K\)-functional estimates through some suitable couples of weighted \(L_{p}\)-spaces. In the next section we present the full proof of rearrangement invariance of the optimal upper and lower sequence spaces (see Theorem 3.3).
Finally, in the Appendix we identify the optimal upper sequence space for a separable Orlicz space \(L_{M}\) on \([0,1]\). Namely, in Theorem 6.14, we prove that the space \((L_{M})_{U}\) can be described as a certain intersection of some special Musielak-Orlicz sequence spaces.
It is worth noting that, in contrast to [9] and [4], in this paper we use a weaker version of _finite_ relative \(s\)-decomposability, which involves estimates (1.5) only for finite sequences. In a certain sense, this approach seems to be more natural, since our main result reveals the relationship between this property and upper/lower estimates of Banach lattices involved, whose definition contains only finite sequences of elements of these lattices as well. Observe that the above versions of the definition of relative \(s\)-decomposability coincide if \(X\), \(Y\) are Banach lattices of measurable functions such that \(Y\) has the Fatou property (in this case only, the concept of relative \(s\)-decomposability is applied in [9] (see also [4]) to the study of interpolation properties of Banach couples).
Some remarks around the history of this paper. Already in [11, Theorem 1.3, p. 98] the classification of decomposable pairs of Banach lattices on measure spaces was addressed. In fact, the corresponding results for the general case were also announced there (see [11, p. 100]). This problem was presented to the second author by Michael Cwikel back in 2003. Some preliminary results were presented by the second author during Jaak Peetre's "Summer Seminar" in Lund, 2003. Time flies, and this resulted in the present paper 20 years later. The authors thank Michael Cwikel for his insight and contributions to this paper.
## 2. Some preliminaries and statements of the main results.
We will assume that the reader is familiar with the definition of Banach lattices, their basic properties and some basic terminology (see, for instance, [20],[22],[27]). In particular, two elements \(x,y\) from a Banach lattice \(X\) are said to be _disjoint_ if they satisfy \(\left|x\right|\wedge\left|y\right|=0\). For a given Banach space (lattice), we set \(B_{X}:=\{x\in X:\,\left\|x\right\|_{X}\leq 1\}\) and \(S_{X}:=\{x\in X:\,\left\|x\right\|_{X}=1\}\).
Recall that a lattice \(X\) is said to be \(\sigma\)_-order complete_ if every order bounded sequence in \(X\) has a least upper bound (see e.g. [20, Definition 1.a.3]). A Banach lattice has a \(\sigma\)_-order continuous norm_ if every positive, increasing, norm bounded sequence converges in norm (see e.g. [27, Definition 5.12], [20, Definition 1.a.6]).
Let \(l_{p}\left(I\right)\) denote the Banach space of all sequences, indexed by the set \(I\), which are absolutely \(p\)-summable if \(1\leq p<\infty\) (resp. bounded if \(p=\infty\)). In the case \(I=\mathbb{N}\) we simply write \(l_{p}\). As usual, by \(c_{0}=c_{0}(\mathbb{N})\) will be denoted the space of all sequences tending to zero as \(n\to\infty\). For definiteness, all Banach spaces and lattices considered in this paper are assumed to be real.
Let \(F_{1}\) and \(F_{2}\) be two positive functions (quasinorms). We write \(F_{1}\preceq F_{2}\) if we have \(F_{1}\leq CF_{2}\) for some positive constant \(C\) that does not depend on the arguments of \(F_{1}\) and \(F_{2}\). In the case when both \(F_{1}\preceq F_{2}\) and \(F_{2}\preceq F_{1}\) we write \(F_{1}\asymp F_{2}\). For a finite set \(E\subset\mathbb{N}\) we denote by \(\left|E\right|\) the cardinality of \(E\). Finally, if \(F:\,\Omega\to\mathbb{R}\) is a function (resp. \(a=\{a_{i}\}_{i=1}^{\infty}\) is a sequence of real numbers), then \(\operatorname{supp}F:=\{\omega\in\Omega:\,F(\omega)\neq 0\}\) (resp. \(\operatorname{supp}a:=\{i\in\mathbb{N}:\,a_{i}\neq 0\}\)).
### Relative decomposability of Banach lattices
The following definition generalizes the first of two definitions in [9, p. 44] (see also [4, Definition 4.4.26, p. 597]) and has roots in the interpolation theory of operators (see e.g. [9, Theorem 2], [4, Theorem 4.4.29, p. 598] and Section 1). However, as was already mentioned in Section 1, in contrast to [9] (and also to [11], where the case \(s=\infty\) is covered), we will use a weaker version of this notion, which involves only finite sums of elements of given lattices.
**Definition 2.1**.: Let \(1\leq s\leq\infty.\) Banach lattices \(X\) and \(Y\) are said to be _(finitely) relative \(s\)-decomposable_ if there exists a constant \(D>0\) such that for each \(n\in\mathbb{N}\) and for all sequences of pair-wise disjoint non-zero elements \(\{x_{i}\}_{i=1}^{n}\subset X\) and
\(\left\{y_{i}\right\}_{i=1}^{n}\subset Y\) it holds
\[\left\|\sum_{i=1}^{n}y_{i}\right\|_{Y}\leq D\left(\sum_{i=1}^{n}\left(\left\|y_{ i}\right\|_{Y}/\left\|x_{i}\right\|_{X}\right)^{s}\right)^{1/s}\left\|\sum_{i=1}^{n} x_{i}\right\|_{X}\]
(with the usual modification in the case \(s=\infty\)). Let \(D_{s}=D_{s}\left(X,Y\right)\) denote the infimum of constants \(D\) satisfying the above condition and we refer to \(D_{s}\) as the _relative \(s\)-decomposability constant_ of \(X\) and \(Y.\)
In what follows, we will suppress the word "finitely", although this is a change of terminology as compared to the previous use of this term, both in [9] and [4, Definition 2.2.16] (see also a related discussion in Section 1).
Definition 2.1 can be also stated equivalently as follows. Let \(\left\{x_{i}\right\}_{i=1}^{n}\subseteq S_{X},\left\{y_{i}\right\}_{i=1}^{n} \subseteq S_{Y}\) be two sequences of pair-wise disjoint elements. Then for all sequences \(\left\{a_{i}\right\}_{i=1}^{n}\) and \(\left\{b_{i}\right\}_{i=1}^{n}\) of scalars it holds
\[\left\|\sum_{i=1}^{n}a_{i}b_{i}y_{i}\right\|_{Y}\leq D\left(\sum_{i=1}^{n} \left|a_{i}\right|^{s}\right)^{1/s}\left\|\sum_{i=1}^{n}b_{i}x_{i}\right\|_{X}.\]
Note that in the case \(s=\infty\) Definition 2.1 reduces to the following: if \(\left\{x_{i}\right\}_{i=1}^{n}\subseteq X,\left\{y_{i}\right\}_{i=1}^{n} \subseteq Y\) are two sequences of pair-wise disjoint elements with \(\left\|y_{i}\right\|_{Y}\leq\left\|x_{i}\right\|_{X}\), \(i=1,\ldots,n\), then
\[\left\|\sum_{i=1}^{n}y_{i}\right\|_{Y}\leq D\left\|\sum_{i=1}^{n}x_{i}\right\| _{X}\]
Following [11], we will say in this case that Banach lattices \(X\) and \(Y\) are _relatively decomposable_ (see Section 1).
In particular, by Hölder's inequality, we have
**Example 2.2**.: If \(1\leq p,q,s\leq\infty\) then
\[D_{s}\left(l_{q},l_{p}\right)<\infty\iff\frac{1}{p}\leq\frac{1}{q}+\frac{1}{s}\]
and \(D_{s}\left(l_{q},l_{p}\right)=1\) whenever this constant is finite.
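For orientation, here is a sketch of the "if" part via Hölder's inequality (the "only if" part is obtained by testing the definition on unit vectors). Assume first that \(1/p=1/q+1/s\) with \(p,q,s\) finite (infinite exponents are handled by the usual modifications) and let \(\{x_{i}\}_{i=1}^{n}\subseteq S_{l_{q}}\), \(\{y_{i}\}_{i=1}^{n}\subseteq S_{l_{p}}\) be pair-wise disjoint. Since the \(x_{i}\) and the \(y_{i}\) are disjoint and of norm one, Hölder's inequality with the exponents \(s/p\) and \(q/p\) yields \[\left\|\sum_{i=1}^{n}a_{i}b_{i}y_{i}\right\|_{l_{p}}=\Big{(}\sum_{i=1}^{n}|a_{i}b_{i}|^{p}\Big{)}^{1/p}\leq\Big{(}\sum_{i=1}^{n}|a_{i}|^{s}\Big{)}^{1/s}\Big{(}\sum_{i=1}^{n}|b_{i}|^{q}\Big{)}^{1/q}=\Big{(}\sum_{i=1}^{n}|a_{i}|^{s}\Big{)}^{1/s}\left\|\sum_{i=1}^{n}b_{i}x_{i}\right\|_{l_{q}},\] so \(D_{s}\left(l_{q},l_{p}\right)\leq 1\) (and hence \(=1\), as is seen by taking \(n=1\)). If \(1/p<1/q+1/s\), choose \(s_{0}\geq s\) with \(1/p=1/q+1/s_{0}\) and use the norm one embedding \(l_{s}\overset{1}{\hookrightarrow}l_{s_{0}}\).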
### Upper and lower estimates for disjoint elements in Banach lattices and the Grobler-Dodds indices
Let us start with recalling the notions of lower and upper estimates in Banach lattices; see [20, Definition 1.f.4]. A Banach lattice \(X\) is said to _satisfy an upper (resp. a lower) \(p\)-estimate_, where \(p\in[1,\infty]\), if for some constant \(M\) and all finite sequences of pair-wise disjoint elements \(\left\{x_{i}\right\}_{i=1}^{n}\subseteq X\) it holds
\[\left\|\sum_{i=1}^{n}x_{i}\right\|_{X}\leq M\left(\sum_{i=1}^{n}\left\|x_{i} \right\|_{X}^{p}\right)^{1/p}\]
(resp.
\[\left(\sum_{i=1}^{n}\left\|x_{i}\right\|_{X}^{p}\right)^{1/p}\leq M\left\|\sum_{i=1}^{n}x_{i}\right\|_{X}).\]
The infimum of all \(M\), satisfying the above inequality, is denoted by \(M^{\left[p\right]}\left(X\right)\) and \(M_{\left[p\right]}\left(X\right)\), respectively. Note that every Banach lattice admits (trivially) an upper \(1\)-estimate and a lower \(\infty\)-estimate.
The fact that a Banach lattice satisfies an upper or a lower \(p\)-estimate can be equivalently expressed in terms of relative decomposability. More explicitly, since \(M_{\left[p\right]}\left(l_{p}\right)=M^{\left[p\right]}\left(l_{p}\right)=1\), \(1\leq p\leq\infty\), one can immediately check the following (see also [11]):
**Proposition 2.3**.: _Let \(X\) be a Banach lattice and \(1\leq p\leq\infty\). We have:_
\(\left(i\right)\)_\(X\) satisfies an upper \(p\)-estimate if and only if \(l_{p}\) and \(X\) are relatively decomposable;_
\(\left(ii\right)\)_\(X\) satisfies a lower \(p\)-estimate if and only if \(X\) and \(l_{p}\) are relatively decomposable._
_Moreover, we have_
\[M^{\left[p\right]}\left(X\right)=D_{\infty}\left(l_{p},X\right),M_{\left[p \right]}\left(X\right)=D_{\infty}\left(X,l_{p}\right).\]
A direct application of the above definitions (see also Example 2.2) gives the following useful result.
**Proposition 2.4**.: _Assume that \(X\) and \(Y\) are Banach lattices such that \(X\) satisfies a lower \(q\)-estimate and \(Y\) satisfies an upper \(p\)-estimate, where \(1\leq q,p\leq\infty\). If \(1/p\leq 1/q+1/s\), then \(X,Y\) are relatively \(s\)-decomposable and the following estimate holds:_
\[D_{s}\left(X,Y\right)\leq M_{\left[q\right]}\left(X\right)M^{\left[p\right]} \left(Y\right).\]
From the latter proposition and the trivial fact that every Banach lattice satisfies an upper \(1\)-estimate and a lower \(\infty\)-estimate it follows that any pair of Banach lattices is relatively \(1\)-decomposable. Hence, given a pair of Banach lattices \(X\) and \(Y\), the set of all \(s\in\left[1,\infty\right]\) such that they are \(s\)-decomposable is always non-empty and is of the form either \(\left[1,s_{\max}\right]\) or \(\left[1,s_{\max}\right)\), where
\[s_{\max}=s_{\max}(X,Y):=\sup\{s\in\left[1,\infty\right]:\,X,Y\text{ are $s$-decomposable}\}.\]
Taking \(X=l_{p}\), \(Y=l_{1}\) and \(s_{\max}\) so that \(1/s_{\max}+1/p=1\), we obtain an example of the first type because \(X\) and \(Y\) are \(s\)-decomposable if and only if \(s\in\left[1,s_{\max}\right]\) (see Example 2.2). To get an example of the second type, let \(Y\) be a Banach lattice that satisfies, for a given \(p_{\max}\in\left[1,\infty\right]\), an upper \(p\)-estimate for all \(p<p_{\max}\) but not for \(p=p_{\max}.\) Then, taking \(l_{\infty}\) for \(X\), we see that \(X\) and \(Y\) are \(s\)-decomposable if and only if \(s<p_{\max}\) (see Proposition 2.3(i)).
Recall that the Grobler-Dodds indices \(\delta(X)\) and \(\sigma(X)\) of a Banach lattice \(X\) are defined by
\[\delta(X):=\sup\{p\geq 1:\,X\text{ satisfies an upper $p$-estimate}\}\]
and
\[\sigma(X):=\inf\{q\geq 1:\,X\text{ satisfies a lower $q$-estimate}\}.\]
For every infinite-dimensional Banach lattice \(X\) we have \(1\leq\delta(X)\leq\sigma(X)\leq\infty\). Moreover, the following duality relations hold:
\[\frac{1}{\delta(X)}+\frac{1}{\sigma(X^{*})}=1\,\,\,\text{and}\,\,\,\frac{1}{ \sigma(X)}+\frac{1}{\delta(X^{*})}=1.\]
**Definition 2.5**.: Let \(1\leq p\leq\infty\). We say that \(l_{p}\) is _finitely lattice representable_ in a Banach lattice \(X\) whenever for every \(n\in\mathbb{N}\) and each \(\varepsilon>0\) there exist pair-wise disjoint elements \(x_{i}\in X\), \(i=1,2,\ldots,n\), such that for any sequence \(\left\{a_{i}\right\}_{i=1}^{n}\) of scalars we have
\[\left(\sum_{i=1}^{n}\left|a_{i}\right|^{p}\right)^{1/p}\leq\left\|\sum_{i=1}^{ n}a_{i}x_{i}\right\|_{X}\leq(1+\varepsilon)\left(\sum_{i=1}^{n}\left|a_{i} \right|^{p}\right)^{1/p}. \tag{2.1}\]
Similarly, \(l_{p}\) is said to be _crudely finitely lattice representable_ in \(X\) whenever instead of (2.1) it holds
\[C^{-1}\left(\sum_{i=1}^{n}\left|a_{i}\right|^{p}\right)^{1/p}\leq\left\|\sum_{ i=1}^{n}a_{i}x_{i}\right\|_{X}\leq C\left(\sum_{i=1}^{n}\left|a_{i}\right|^{p} \right)^{1/p},\]
where \(C\) is a constant independent of \(n\in\mathbb{N}\) and \(\left\{a_{i}\right\}_{i=1}^{n}\).
For the following result see, for instance, [20, Theorem 1.f.12.ii].
**Proposition 2.6**.: _Let \(X\) be a Banach lattice. Then \(X\) admits a lower \(p\)-estimate for some \(p<\infty\) if and only if \(l_{\infty}\) fails to be finitely lattice representable in \(X\)._
Moreover, in view of [20, Theorem 1.a.5, 1.a.7] and [13, p. 288], it follows
**Proposition 2.7**.: _If a Banach lattice \(X\) is not \(\sigma\)-order complete or its norm is not \(\sigma\)-order continuous, then \(l_{\infty}\) is finitely lattice representable in \(X\)._
### The main result and its consequences
Now we are ready to state the main result of this paper, which gives a characterization of relatively \(s\)-decomposable Banach lattices in terms of their upper and lower estimates. This is an extension of results of the paper [11], where the case \(s=\infty\) was covered in a more restrictive setting of Banach lattices of measurable functions. Note that the case when \(\delta\left(Y\right)\leq\sigma\left(X\right)\) is more interesting, because then the lattices \(X\) and \(Y\) potentially may be not relatively decomposable. As was mentioned in Section 1, a non-trivial part of the next theorem can be also treated as the converse to Proposition 2.4.
**Theorem 2.8**.: _Suppose \(X\) and \(Y\) are infinite dimensional Banach lattices. If \(\delta\left(Y\right)\leq\sigma\left(X\right)\) the following conditions are equivalent:_
\(\left(i\right)\)_\(X\) and \(Y\) are relatively \(s\)-decomposable;_
\((ii)\) _There exist \(p,q\), with \(1/p=1/q+1/s\), such that \(X\) satisfies a lower \(q\)-estimate and \(Y\) an upper \(p\)-estimate;_
\((iii)\) _There exist \(p,q\) with \(1/p=1/q+1/s\) such that \(X,l_{q}\) and \(l_{p},Y\) are relatively decomposable._
_In addition, if_
\[F_{s}\left(X,Y\right):=\inf\left\{M_{[q]}\left(X\right)M^{[p]}\left(Y\right): \frac{1}{s}=\frac{1}{p}-\frac{1}{q},1\leq p\leq q\leq\infty\right\},\]
_it holds_
\[D_{s}\left(X,Y\right)\leq F_{s}\left(X,Y\right)\leq D_{s}\left(X,Y\right)^{2}.\]
_Moreover, \(\sigma\left(X\right)\leq\delta\left(Y\right)\) if and only if \(s_{\max}(X,Y)=\infty\)._
The proof of this theorem is presented in Section 4 below.
Now, by using standard arguments (see e.g. [20, Proposition 1.f.6]), one can prove that Definition 2.1 is equivalent to the following assertion for general sequences of elements in Banach lattices.
**Corollary 2.9**.: _Infinite dimensional Banach lattices \(X\) and \(Y\) are relatively \(s\)-decomposable if and only if there exists a constant \(D_{s}>0\) such that for each \(n\in\mathbb{N}\) and all sequences \(\left\{x_{i}\right\}_{i=1}^{n}\subseteq S_{X}\), \(\left\{y_{i}\right\}_{i=1}^{n}\subseteq S_{Y}\) and \(\left\{a_{i}\right\}_{i=1}^{n}\subseteq\mathbb{R}\) we have_
\[\left\|\vee_{i=1}^{n}\left|a_{i}y_{i}\right|\right\|_{Y}\leq D_{s}\left(\sum_{ i=1}^{n}\left|a_{i}\right|^{s}\right)^{1/s}\left\|\vee_{i=1}^{n}\left|x_{i} \right|\right\|_{X}\]
**Corollary 2.10**.: _Suppose infinite dimensional Banach lattices \(X\) and \(Y\) are relatively \(s\)-decomposable for some \(s\in[1,\infty]\). Then, there exist equivalent norms on \(X\) and \(Y\) such that \(X\) and \(Y\) are relatively \(s\)-decomposable Banach lattices with constant one._
Proof.: By [22, Lemma 2.8.8], every Banach lattice which satisfies a lower \(p\)-estimate/an upper \(q\)-estimate admits an equivalent Banach lattice norm such that the corresponding lower/upper estimate constant is equal to one. Consequently, after a suitable renorming \(X\) and \(Y\), in the notation of Theorem 2.8 we have \(F_{s}\left(X,Y\right)=1\), which implies that \(D_{s}\left(X,Y\right)=1\).
### Rearrangement invariant spaces
For a detailed theory of rearrangement invariant spaces we refer to the monographs [20, 17].
Let \(I=[0,1]\) or \((0,\infty)\) and let \(m\) be the Lebesgue measure on \(I\). Given a measurable function \(x(t)\) on \(I\) we define its distribution function by
\[n_{x}(\tau):=m\{t\in I:\,|x(t)|>\tau\},\ \ \tau>0.\]
Measurable functions \(x(t)\) and \(y(t)\) on \(I\) are called _equimeasurable_ if \(n_{x}(\tau)=n_{y}(\tau)\) for all \(\tau>0\). In particular, each function \(x(t)\) on \(I\) is equimeasurable with the non-increasing left-continuous rearrangement \(x^{*}(t)\) of \(|x(t)|\), which is defined by
\[x^{*}(t):=\inf\{\tau>0:\,n_{x}(\tau)<t\},\ \ t\in I.\]
**Definition 2.11**.: A Banach function space \(X\) on \(I\) is said to be _rearrangement invariant_ (in short, r.i.) (or _symmetric_) if the conditions \(x\in X\) and \(n_{y}(\tau)\leq n_{x}(\tau)\) for all \(\tau>0\) imply that \(y\in X\) and \(\|y\|_{X}\leq\|x\|_{X}\).
Let \(X\) be a r.i. space. If \(I=[0,1]\) (resp. \(I=(0,\infty)\)) we have \(L_{\infty}\hookrightarrow X\hookrightarrow L_{1}\) (resp. \(L_{\infty}\cap L_{1}\hookrightarrow X\hookrightarrow L_{\infty}+L_{1}\)).
The _Kothe dual space_\(X^{\prime}\) consists of all measurable functions \(y\) such that
\[\|y\|_{X^{\prime}}:=\sup\{\int_{I}|x(t)y(t)|\,dt:\,x\in X,\,\|x\|_{X}\leq 1\}<\infty.\]
Then, \(X^{\prime}\) equipped with the norm \(\|\cdot\|_{X^{\prime}}\) is a r.i. space. Moreover, \(X\subset X^{\prime\prime}\), and the isometric equality \(X=X^{\prime\prime}\) holds if and only if the norm in \(X\) has the _Fatou property_, that is, if the conditions \(0\leq x_{n}\uparrow x\) a.e. on \(I\) and \(\sup_{n\in\mathbb{N}}\|x_{n}\|<\infty\) imply \(x\in X\) and \(\|x_{n}\|\uparrow\|x\|\).
The _fundamental function_\(\phi_{X}\) of a r.i. space \(X\) is defined by \(\phi_{X}(t):=\|\chi_{A}\|_{X}\), where \(\chi_{A}\) is the characteristic function of a measurable set \(A\subset I\) with \(m(A)=t\). The function \(\phi_{X}\) is _quasi-concave_ (i.e., \(\phi_{X}(0)=0\), \(\phi_{X}\) increases and \(\phi_{X}(t)/t\) decreases).
Most important examples of r.i. spaces are the \(L_{p}\)-spaces, \(1\leq p\leq\infty\), and their natural generalization, the Orlicz spaces (for their detailed theory we refer to the monographs [16, 26, 21]).
Let \(M\) be an Orlicz function, that is, an increasing convex continuous function on \([0,\infty)\) such that \(M(0)=0\) and \(\lim_{t\to\infty}M(t)=\infty\). In what follows, we will assume also that \(M(1)=1\). Denote by \(L_{M}:=L_{M}(I)\) the _Orlicz space_ endowed with the Luxemburg norm
\[\|f\|_{L_{M}}:=\inf\left\{\lambda>0\colon\,\int_{I}M\Big{(}\frac{|f(t)|}{ \lambda}\Big{)}\,dt\leq 1\right\}.\]
In particular, if \(M(u)=u^{p}\), \(1\leq p<\infty\), we obtain \(L_{p}\).
Note that the definition of an Orlicz function space \(L_{M}[0,1]\) depends (up to equivalence of norms) only on the behaviour of the function \(M(t)\) for large values of argument \(t\). An easy calculation (see also formula (9.23) in [16] on page 79 of the English version) shows that
\[\varphi_{L_{M}}(t)=\frac{1}{M^{-1}(1/t)},\ \ 0<t\leq 1, \tag{2.2}\]
where \(M^{-1}\) is the inverse for \(M\).
If \(M\) is an Orlicz function, then the _Young conjugate_ function \(\tilde{M}\) is defined by
\[\tilde{M}(u):=\sup_{t>0}(ut-M(t)),\ \ u\geq 0.\]
Moreover, \(\tilde{M}\) is also an Orlicz function and the Young conjugate for \(\tilde{M}\) is \(M\).
Every Orlicz space \(L_{M}(I)\) has the Fatou property; \(L_{M}[0,1]\) (resp. \(L_{M}(0,\infty)\)) is separable if and only if the function \(M\) satisfies the \(\Delta_{2}^{\infty}\)-_condition_ (resp. \(\Delta_{2}\)-_condition_), i.e., \(\sup_{u\geq 1}M(2u)/M(u)<\infty\) (resp. \(\sup_{u>0}M(2u)/M(u)<\infty\)). In this case we have \(L_{M}(I)^{*}=L_{M}(I)^{\prime}=L_{\tilde{M}}(I)\).
Let \(1<q<\infty\), \(1\leq r<\infty\). The Lorentz space \(L_{q,r}=L_{q,r}(I)\) consists of all measurable functions \(x\) such that
\[\|x\|_{q,r}:=\Big{(}\frac{r}{q}\int_{I}(t^{1/q}x^{*}(t))^{r}\frac{dt}{t}\Big{)} ^{1/r}<\infty.\]
The functional \(x\mapsto\|x\|_{q,r}\) is not subadditive, but it is equivalent to the norm \(x\mapsto\|x^{**}\|_{q,r}\), where \(x^{**}(t):=\frac{1}{t}\int_{0}^{t}x^{*}(s)\,ds\), \(t>0\). Moreover, \(L_{q,r_{1}}\hookrightarrow L_{q,r_{2}}\), \(1\leq r_{1}\leq r_{2}<\infty\) and \(L_{q,q}=L_{q}\) isometrically.
Rearrangement invariant (r.i.) sequence spaces are defined quite similarly. In particular, the _fundamental function_ of a r.i. sequence space \(X\) is defined by \(\phi_{X}(n):=\|\sum_{k=1}^{n}e_{k}\|_{X}\), \(n=1,2,\dots\). In what follows, \(e_{k}\) are the canonical unit vectors, i.e., \(e_{k}=(e_{k}^{i})_{i=1}^{\infty}\), \(e_{k}^{i}=0\) for \(i\neq k\) and \(e_{k}^{k}=1\), \(k,i=1,2,\dots\).
Recall that an _Orlicz sequence space_\(\ell_{\psi}\), where \(\psi\) is an Orlicz function, consists of all sequences \((a_{k})_{k=1}^{\infty}\) such that
\[\|(a_{k})\|_{\ell_{\psi}}:=\inf\left\{u>0:\sum_{k=1}^{\infty}\psi\Big{(}\frac{ |a_{k}|}{u}\Big{)}\leq 1\right\}<\infty.\]
Clearly, if \(\psi(t)=t^{p}\), \(p\geq 1\), then \(\ell_{\psi}=\ell^{p}\) isometrically.
The fundamental function of an Orlicz sequence space \(\ell_{\psi}\) can be calculated by the formula: \(\phi_{\ell_{\psi}}(n)=\frac{1}{\psi^{-1}(1/n)}\), \(n=1,2,\dots\) Furthermore, an Orlicz sequence space \(\ell_{\psi}\) is separable if and only if \(\psi\) satisfies the \(\Delta_{2}^{0}\)-condition (\(\psi\in\Delta_{2}^{0}\)), that is,
\[\sup_{0<u\leq 1}\psi(2u)/\psi(u)<\infty.\]
In this case we have \(\ell_{\psi}^{*}=\ell_{\psi}^{\prime}=\ell_{\tilde{\psi}}\), with the Young conjugate function \(\tilde{\psi}\) for \(\psi\).
Observe that the definition of an Orlicz sequence space \(\ell_{\psi}\) depends (up to equivalence of norms) only on the behaviour of \(\psi\) near zero.
## 3. Optimal Upper and Lower Sequence Lattices
In this section we introduce and study some specialized notions which will play an important role in the proof of our main Theorem 2.8. They are a special kind of sequence spaces which are generated via some appropriate sequences of norms, defined on \(\mathbb{R}^{n}\), \(n\in\mathbb{N}\).
### Definitions and general properties
**Definition 3.1**.: Let \(X\) be a Banach lattice. For each integer \(n\), let \(\mathfrak{B}_{n}\left(X\right)\) denote the set of all sequences \(\left\{x_{i}\right\}_{i=1}^{n}\subseteq S_{X}\) of elements with pair-wise disjoint support.
**Lemma 3.2**.: _If \(X\) is a Banach lattice of dimension at least \(n,\) then the set \(\mathfrak{B}_{n}\left(X\right)\) is non-empty._
The proof of this lemma will be provided in Section 6.
Let now \(X\) be an infinite dimensional Banach lattice. Based on \(X\) we associate two auxiliary constructions, which yield two sequence spaces \(X_{U}\) and \(X_{L}\) that satisfy the following norm one continuous embeddings:
\[l_{1}\overset{1}{\hookrightarrow}X_{U}\overset{1}{\hookrightarrow}X_{L} \overset{1}{\hookrightarrow}l_{\infty}. \tag{3.1}\]
We will call \(X_{U}\) and \(X_{L}\) the _optimal upper_ and respectively _optimal lower sequence spaces_ generated by \(X.\) Note that the construction, which leads to the space \(X_{L},\) is close to the one developed in the paper [14] and related to the optimal cotype and summing properties of a Banach space.
To construct \(X_{U}\) we define first, for each fixed integer \(n,\) the following norm on \(\mathbb{R}^{n}\) by
\[\left\|\left\{a_{i}\right\}_{i=1}^{n}\right\|_{X_{U}(n)}:=\sup\left\{\left\| \sum_{i=1}^{n}a_{i}x_{i}\right\|_{X}:\left\{x_{i}\right\}_{i=1}^{n}\in \mathfrak{B}_{n}\left(X\right)\right\}.\]
Let \(X_{U}\) be the space of all real-valued sequences \(a=\left\{a_{i}\right\}_{i=1}^{\infty}\), for which
\[\left\|a\right\|_{X_{U}}:=\sup_{n}\left\|\left\{a_{i}\right\}_{i=1}^{n}\right\| _{X_{U}(n)}<\infty.\]
Since
\[\left\|\left\{a_{i}\right\}_{i=1}^{n}\right\|_{X_{U}(n)}\leq\sum_{i=1}^{n} \left|a_{i}\right|,\ \ n\in\mathbb{N},\]
the left-hand side embedding in (3.1) follows.
The first step in the definition of the space \(X_{L}\) is the introduction of the functionals \(\Phi_{n}\), defined for \(a=\left\{a_{i}\right\}_{i=1}^{n}\in\mathbb{R}^{n},\)\(n\in\mathbb{N},\) by
\[\Phi_{n}\left(a\right):=\inf\left\{\left\|\sum_{i=1}^{n}a_{i}x_{i}\right\|_{X }\ :\ \left\{x_{i}\right\}_{i=1}^{n}\in\mathfrak{B}_{n}\left(X\right)\right\}.\]
Next, we set
\[\left\|a\right\|_{X_{L}(n)}:=\inf\left\{\sum_{k\in F}\Phi_{n}\left(a^{k} \right):\,F\subseteq\mathbb{N},\left|F\right|<\infty,a^{k}\in\mathbb{R}^{n},a= \sum_{k\in F}a^{k}\right\}.\]
Note that \(\sup_{1\leq i\leq n}\left|a_{i}\right|\leq\Phi_{n}\left(a\right)\) and hence \(\left\|a\right\|_{l_{\infty}^{n}}\leq\left\|a\right\|_{X_{L}(n)}\), which implies that the mapping \(a\mapsto\left\|a\right\|_{X_{L}(n)}\) defines a norm on \(\mathbb{R}^{n}\). Finally, we define \(X_{L}\) to be the space of all real-valued sequences \(a=\left\{a_{i}\right\}_{i=1}^{\infty}\), for which
\[\left\|a\right\|_{X_{L}}:=\sup_{n}\left\|\left\{a_{i}\right\}_{i=1}^{n}\right\| _{X_{L}(n)}<\infty.\]
Clearly, these definitions imply the second and third embeddings in (3.1).
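As a simple illustration of these definitions (a sketch included here for orientation), let \(X=l_{p}\) with \(1\leq p<\infty\). For every \(\{x_{i}\}_{i=1}^{n}\in\mathfrak{B}_{n}\left(l_{p}\right)\) the disjointness of supports gives \[\left\|\sum_{i=1}^{n}a_{i}x_{i}\right\|_{l_{p}}=\Big{(}\sum_{i=1}^{n}|a_{i}|^{p}\left\|x_{i}\right\|_{l_{p}}^{p}\Big{)}^{1/p}=\Big{(}\sum_{i=1}^{n}|a_{i}|^{p}\Big{)}^{1/p},\] so the supremum and the infimum above coincide and \(\left\|a\right\|_{X_{U}(n)}=\Phi_{n}\left(a\right)=\left\|a\right\|_{l_{p}^{n}}\). Moreover, by the triangle inequality, \(\sum_{k\in F}\Phi_{n}\left(a^{k}\right)\geq\left\|a\right\|_{l_{p}^{n}}\) for every decomposition \(a=\sum_{k\in F}a^{k}\), whence \(\left\|a\right\|_{X_{L}(n)}=\left\|a\right\|_{l_{p}^{n}}\) as well. Hence \(\left(l_{p}\right)_{U}=\left(l_{p}\right)_{L}=l_{p}\) isometrically.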
The proof of the following important properties of the spaces \(X_{U}\) and \(X_{L}\) we provide in Section 6 below.
**Theorem 3.3**.: _Let \(X\) be an infinite dimensional Banach lattice. Then \(X_{L}\) is a r.i. sequence space and \(X_{U}\) is a Banach sequence lattice. Moreover, if \(l_{\infty}\) is not finitely representable in \(X\), \(X_{U}\) is a r.i. sequence space as well._
Denote by \(X_{L}^{0}\) (resp. \(X_{U}^{0}\)) the closed linear span of all finitely supported sequences in \(X_{L}\) (resp. \(X_{U}\)). Recall that \(e_{m}\), \(m=1,2,\dots,\) are the unit basis vectors in spaces of real-valued sequences.
**Corollary 3.4**.: _If \(X\) is an infinite dimensional Banach lattice, then \(X_{L}^{0}\) is a Banach space in which the vectors \(e_{m}\), \(m=1,2,\dots\), form a symmetric normed basis. If \(l_{\infty}\) is not finitely representable in \(X\), the same conclusion applies also to \(X_{U}^{0}\)._
_In particular, if \(\operatorname{supp}a\subset\{1,2,\dots,n\}\) for some \(n\in\mathbb{N}\), we have \(\|a\|_{X_{L}}=\|a\|_{X_{L}(n)}\) and \(\|a\|_{X_{U}}=\|a\|_{X_{U}(n)}\)._
**Example 3.5**.: We claim that \(\left(c_{0}\right)_{U}=\left(c_{0}\right)_{L}=l_{\infty}\) and hence \(\left(c_{0}\right)_{U}^{0}=\left(c_{0}\right)_{L}^{0}=c_{0}\).
Indeed, first by (3.1), we have \(\left(c_{0}\right)_{U}\overset{1}{\hookrightarrow}\left(c_{0}\right)_{L} \overset{1}{\hookrightarrow}l_{\infty}.\) For the converse, fix an integer \(n\) and let \(\left\{x_{i}\right\}_{i=1}^{n}\subseteq c_{0}\) be a positive unit norm sequence with pair-wise disjoint support. Clearly, for all \(a_{i}\in\mathbb{R}\), \(i=1,2,\dots,n\),
\[\left|\sum_{i=1}^{n}a_{i}x_{i}\right|\leq\sup_{1\leq i\leq n}\left|a_{i}\right|,\]
and hence \(l_{\infty}^{n}\overset{1}{\hookrightarrow}\left(c_{0}\right)_{U}\left(n\right)\). In consequence, \(l_{\infty}\overset{1}{\hookrightarrow}\left(c_{0}\right)_{U}.\) Thus, in view of (3.1), everything is done.
The above example shows that the spaces \(X_{U}\) and \(X_{L}\) do not need to have \(\sigma\)-order continuous norm even if \(X\) has so. At the same time, the construction of the optimal upper and lower sequence spaces ensures that they _always_ have the following somewhat weaker property.
Recall that a Banach function lattice \(X\) on a measure space \((T,\mu)\) is called _order semi-continuous_ if the conditions \(x_{n}\in X\), \(n=1,2,\dots,\)\(x\in X\) and \(x_{n}\chi_{B}\to x\chi_{B}\)\(\mu\)-a.e. for each set \(B\subset T\) such that \(\mu(B)<\infty\) imply that \(\|x\|_{X}\leq\liminf_{n\to\infty}\|x_{n}\|_{X}\).
In particular, Banach sequence lattice \(E\) (in this case \(T=\mathbb{N}\) with the counting measure \(\mu\)) is order semi-continuous if \(\|a\|_{E}\leq\liminf_{n\to\infty}\|a^{n}\|_{E}\) whenever a sequence \(\{a^{n}\}_{n=1}^{\infty}\subset E\) converges coordinate-wise to \(a\in E\).
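For instance, \(l_{\infty}\) is order semi-continuous: if \(a^{n}\to a\) coordinate-wise, then \(\left|a_{i}\right|=\lim_{n\to\infty}\left|a_{i}^{n}\right|\leq\liminf_{n\to\infty}\left\|a^{n}\right\|_{l_{\infty}}\) for every \(i\), whence \(\left\|a\right\|_{l_{\infty}}\leq\liminf_{n\to\infty}\left\|a^{n}\right\|_{l_{\infty}}\). At the same time, the norm of \(l_{\infty}=\left(c_{0}\right)_{U}\) (see Example 3.5) is clearly not \(\sigma\)-order continuous.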
**Lemma 3.6**.: \(X_{U}\) _and \(X_{L}\) are order semi-continuous Banach sequence lattices for each Banach lattice \(X\)._
Proof.: We prove this result only for \(X_{U}\), because for \(X_{L}\) this can be done in the same way.
Assume that a sequence \(\{a^{n}\}_{n=1}^{\infty}\subset X_{U}\) converges coordinate-wise to an element \(a\in X_{U}\). Let \(a^{n}=\{a_{i}^{n}\}_{i=1}^{\infty}\), \(a=\{a_{i}\}_{i=1}^{\infty}\). For arbitrary \(\varepsilon>0\) select \(m\in\mathbb{N}\) so that
\[\left\|a\right\|_{X_{U}}\leq\left(1+\varepsilon\right)\left\|\{a_{i}\}_{i=1}^ {m}\right\|_{X_{U}(m)}.\]
Then, for all sufficiently large \(n\in\mathbb{N}\) we have
\[|a_{i}^{n}|\geq(1-\varepsilon)|a_{i}|,\ \ i=1,2,\dots,m,\]
whence
\[\left\|\left\{a_{i}^{n}\right\}_{i=1}^{m}\right\|_{X_{U}(m)}\geq\left(1-\varepsilon \right)\left\|\left\{a_{i}\right\}_{i=1}^{m}\right\|_{X_{U}(m)}.\]
Combining the above inequalities, we get
\[\left\|a\right\|_{X_{U}}\leq\frac{1+\varepsilon}{1-\varepsilon}\left\|\left\{a_ {i}^{n}\right\}_{i=1}^{m}\right\|_{X_{U}(m)}\]
if \(n\in\mathbb{N}\) is sufficiently large. This implies that
\[\left\|a\right\|_{X_{U}}\leq\frac{1+\varepsilon}{1-\varepsilon}\liminf_{n\to \infty}\left\|a^{n}\right\|_{X_{U}}.\]
Since \(\varepsilon>0\) is arbitrary, the desired result for \(X_{U}\) is proved.
For any sequence \(a=\left\{a_{i}\right\}_{i=1}^{\infty}\) we have \(a_{i}=\left\langle a,e_{i}\right\rangle,\) where \(\left\langle\cdot,\cdot\right\rangle\) is the usual inner product. In what follows, the properties of the optimal sequence spaces established in the next proposition will play a crucial role.
**Proposition 3.7**.: _Let \(X\) be a Banach lattice._
\((i)\)_. For any \(m\in\mathbb{N}\) and all pairwise disjoint sequences \(u_{k}\in X_{U}\), \(k=1,2,\ldots,m\), we have_
\[\left\|\sum_{k=1}^{m}u_{k}\right\|_{X_{U}}\leq\left\|\sum_{k=1}^{m}\left\|u_{ k}\right\|_{X_{U}}e_{k}\right\|_{X_{U}};\]
\((ii)\)_. For any \(m\in\mathbb{N}\) and all pairwise disjoint sequences \(u_{k}\in X_{L}\), \(k=1,2,\ldots,m\),_
\[\left\|\sum_{k=1}^{m}\left\|u_{k}\right\|_{X_{L}}e_{k}\right\|_{X_{L}}\leq \left\|\sum_{k=1}^{m}u_{k}\right\|_{X_{L}}.\]
Proof.: First, we prove \((i)\) assuming additionally that the elements \(u_{k}\), \(k=1,2,\ldots,m\), have finite support. Then, denoting \(u:=\sum_{k=1}^{m}u_{k}\) and choosing \(n\in\mathbb{N}\) so that \(\operatorname{supp}u\subseteq\{1,\ldots,n\}\), we get
\[u_{k}=\sum_{i\in\operatorname{supp}u_{k}}\left\langle u_{k},e_{i}\right\rangle e _{i},\ \ k=1,2,\ldots,m.\]
Take \(\left\{x_{i}\right\}_{i=1}^{n}\in\mathfrak{B}_{n}\left(X\right)\) and put
\[z_{k}:=\sum_{i\in\operatorname{supp}u_{k}}\left\langle u_{k},e_{i}\right\rangle x _{i},\ \ k=1,2,\ldots,m.\]
Without any loss of generality, we may assume that \(u_{k}\geq 0\) and \(u_{k}\neq 0\) for all \(k\). Hence, \(z_{k}\neq 0\) for all \(k=1,2,\ldots,m\) and
\[\left\|z_{k}\right\|_{X}\leq\left\|\sum_{i=1}^{n}\left\langle u_{k},e_{i} \right\rangle e_{i}\right\|_{X_{U}(n)}=\left\|u_{k}\right\|_{X_{U}}. \tag{3.2}\]
Moreover, since \(u_{k}\), \(k=1,2,\ldots,m\), are pairwise disjoint sequences and \(x_{i}\), \(i=1,2,\ldots,n\), are pairwise disjoint elements from \(X\), we infer that \(z_{i}\wedge z_{j}=0\) if \(i\neq j\). Thus, \(\left\{z_{k}/\left\|z_{k}\right\|_{X}:\,1\leq k\leq m\right\}\in\mathfrak{B}_{m} \left(X\right)\). Consequently, from (3.2) it follows
\[\left\|\sum_{i=1}^{n}\left\langle u,e_{i}\right\rangle x_{i} \right\|_{X} = \left\|\sum_{k=1}^{m}\sum_{i=1}^{n}\left\langle u_{k},e_{i} \right\rangle x_{i}\right\|_{X}=\left\|\sum_{k=1}^{m}\sum_{i\in\mathrm{supp}\,u _{k}}\left\langle u_{k},e_{i}\right\rangle x_{i}\right\|_{X}\] \[= \left\|\sum_{k=1}^{m}z_{k}\right\|_{X}=\left\|\sum_{k=1}^{m} \left\|z_{k}\right\|_{X}\frac{z_{k}}{\left\|z_{k}\right\|_{X}}\right\|_{X}\] \[\leq \left\|\sum_{k=1}^{m}\left\|z_{k}\right\|_{X}e_{k}\right\|_{X_{U }(m)}\leq\left\|\sum_{k=1}^{m}\left\|u_{k}\right\|_{X_{U}}e_{k}\right\|_{X_{U}}.\]
Hence, as a sequence \(\left\{x_{i}\right\}_{i=1}^{n}\in\mathfrak{B}_{n}\left(X\right)\) is arbitrary, we conclude that
\[\left\|u\right\|_{X_{U}}=\left\|\sum_{i=1}^{n}\left\langle u,e_{i}\right\rangle e _{i}\right\|_{X_{U}(n)}\leq\left\|\sum_{k=1}^{m}\left\|u_{k}\right\|_{X_{U}}e _{k}\right\|_{X_{U}},\]
and for finitely supported sequences assertion \((i)\) is proved.
Let now \(u_{k}\in X_{U}\), \(k=1,2,\ldots,m\), be arbitrary pairwise disjoint non-negative elements. Denote by \(u_{k}^{(n)}\) the truncations of \(u_{k}\) to the set \(\left\{1,\ldots,n\right\}\), that is,
\[u_{k}^{(n)}:=\sum_{1\leq j\leq n,\,j\in\mathrm{supp}\,u_{k}}\left\langle u_{k},e_{j}\right\rangle e_{j},\ \ k=1,\ldots,m. \tag{3.3}\]
Since \(u_{k}^{(n)}\), \(k=1,2,\ldots,m\), are pairwise disjoint sequences with finite support, by the first part of the proof, we have
\[\left\|\sum_{k=1}^{m}u_{k}^{(n)}\right\|_{X_{U}}\leq\left\|\sum_{k=1}^{m} \left\|u_{k}^{(n)}\right\|_{X_{U}}e_{k}\right\|_{X_{U}}\leq\left\|\sum_{k=1}^{ m}\left\|u_{k}\right\|_{X_{U}}e_{k}\right\|_{X_{U}}.\]
Therefore, taking into account that the sequence \(\sum_{k=1}^{m}u_{k}^{(n)}\) tends coordinate-wise to \(\sum_{k=1}^{m}u_{k}\) as \(n\rightarrow\infty\), by Lemma 3.6, we obtain
\[\left\|\sum_{k=1}^{m}u_{k}\right\|_{X_{U}}\leq\liminf_{n\rightarrow\infty} \left\|\sum_{k=1}^{m}u_{k}^{(n)}\right\|_{X_{U}}\leq\left\|\sum_{k=1}^{m}\left\| u_{k}\right\|_{X_{U}}e_{k}\right\|_{X_{U}},\]
which implies \((i)\) in the general case.
Proceeding with the proof of \((ii)\), we again consider first the case when the elements \(u_{k}\), \(k=1,2,\ldots,m\), have finite support. Let \(u\), \(n\in\mathbb{N}\), \(\{x_{j}\}_{j=1}^{n}\) and \(z_{k}\), \(k=1,2,\ldots,m\), be defined in the same way as at the beginning of the proof of \((i)\). Assuming as above that \(u_{k}\geq 0\) and \(u_{k}\neq 0\), \(k=1,2,\ldots,m\), we get \(z_{k}\neq 0\), \(k=1,2,\ldots,m\), and \(n\geq m\). Consequently, \(\left\{z_{k}/\left\|z_{k}\right\|_{X}\right\}_{k=1}^{m}\in\mathfrak{B}_{m}\left(X\right).\) Moreover, by the definition of the norm in \(X_{L}\), we have
\[\left\|u_{k}\right\|_{X_{L}}\leq\Phi_{n}\left(u_{k}\right)\leq\left\|z_{k} \right\|_{X},\ \ k=1,2,\ldots,m.\]
Hence, by Theorem 3.3, it follows
\[\left\|\sum_{k=1}^{m}\left\|u_{k}\right\|_{X_{L}}e_{k}\right\|_{X_{L}(m)} \leq \left\|\sum_{k=1}^{m}\left\|z_{k}\right\|_{X}e_{k}\right\|_{X_{L}(m)}\leq\left\|\sum_{k=1}^{m}\left\|z_{k}\right\|_{X}\frac{z_{k}}{\left\|z_{k}\right\|_{X}}\right\|_{X}\] \[= \left\|\sum_{k=1}^{m}z_{k}\right\|_{X}=\left\|\sum_{k=1}^{m}\sum_{j\in\mathrm{supp}\,u_{k}}\left\langle u_{k},e_{j}\right\rangle x_{j}\right\|_{X}=\left\|\sum_{j=1}^{n}\left\langle u,e_{j}\right\rangle x_{j}\right\|_{X}.\]
Passing to the infimum over all \(\left\{x_{j}\right\}_{j=1}^{n}\in\mathfrak{B}_{n}\left(X\right)\), we obtain
\[\left\|\sum_{k=1}^{m}\left\|u_{k}\right\|_{X_{L}}e_{k}\right\|_{X_{L}(m)}\leq \Phi_{n}\left(u\right). \tag{3.4}\]
Next, let \(u=\sum_{l\in F}v_{l}\), where \(F\subset\mathbb{N}\) is a finite set and \(v_{l}\in\mathbb{R}^{n}\), \(l\in F\). Clearly, we may assume that the supports of \(v_{l}\) are contained in that of \(u\) and hence in the set \(\cup_{k=1}^{m}\mathrm{supp}\,u_{k}\). Then, if
\[v_{l,k}:=\sum_{i\in\mathrm{supp}\,u_{k}}\left\langle v_{l},e_{i}\right\rangle e _{i},\ \ k=1,2,\ldots,m,\]
we have \(v_{l}=\sum_{k=1}^{m}v_{l,k}\), \(l\in F\), and \(u_{k}=\sum_{l\in F}v_{l,k}\), \(k=1,2,\ldots,m\). Furthermore, since \(v_{l,k}\), \(k=1,2,\ldots,m\), are pairwise disjoint and have finite support, applying (3.4) for \(v_{l}\), we infer
\[\left\|\sum_{k=1}^{m}\left\|v_{l,k}\right\|_{X_{L}}e_{k}\right\|_{X_{L}(m)} \leq\Phi_{n}\left(v_{l}\right),\ \ l\in F.\]
Hence, by the triangle inequality,
\[\left\|\sum_{k=1}^{m}\left\|u_{k}\right\|_{X_{L}}e_{k}\right\|_{X _{L}(m)} \leq \left\|\sum_{k=1}^{m}\sum_{l\in F}\left\|v_{l,k}\right\|_{X_{L}} e_{k}\right\|_{X_{L}(m)}\] \[\leq \sum_{l\in F}\left\|\sum_{k=1}^{m}\left\|v_{l,k}\right\|_{X_{L}} e_{k}\right\|_{X_{L}(m)}\] \[\leq \sum_{l\in F}\Phi_{n}\left(v_{l}\right).\]
Since the above representation of \(u\) is arbitrary, from Theorem 3.3 it follows
\[\left\|\sum_{k=1}^{m}\left\|u_{k}\right\|_{X_{L}}e_{k}\right\|_{X_{L}}=\left\| \sum_{k=1}^{m}\left\|u_{k}\right\|_{X_{L}}e_{k}\right\|_{X_{L}(m)}\leq\left\|u \right\|_{X_{L}(n)}=\left\|u\right\|_{X_{L}}.\]
Thus, for sequences with finite support \((ii)\) is proved.
To extend the assertion \((ii)\) to the general case, assume that \(u_{k}\in X_{L}\), \(k=1,\ldots,m\), are pairwise disjoint and non-negative. Let \(n\in\mathbb{N}\) be arbitrary and let \(u_{k}^{(n)}\), \(k=1,\ldots,m\), be the truncations defined by formula (3.3). Since \(u_{k}^{(n)}\), \(k=1,\ldots,m\), are finitely supported, as was already proved, it holds
\[\Big{\|}\sum_{k=1}^{m}u_{k}\Big{\|}_{X_{L}}\geq\Big{\|}\sum_{k=1}^{m}u_{k}^{(n)} \Big{\|}_{X_{L}}\geq\Big{\|}\sum_{k=1}^{m}\|u_{k}^{(n)}\|_{X_{L}}e_{k}\Big{\|}_ {X_{L}}.\]
Observe that from Lemma 3.6 it follows \(\lim_{n\to\infty}\|u_{k}^{(n)}\|_{X_{L}}=\|u_{k}\|_{X_{L}}\) for each \(k=1,\ldots,m\). In consequence, we have
\[\lim_{n\to\infty}\Big{\|}\sum_{k=1}^{m}\|u_{k}^{(n)}\|_{X_{L}}e_{k}\Big{\|}_{X _{L}}=\Big{\|}\sum_{k=1}^{m}\|u_{k}\|_{X_{L}}e_{k}\Big{\|}_{X_{L}}.\]
Combining this together with the preceding estimate, we infer that
\[\Big{\|}\sum_{k=1}^{m}u_{k}\Big{\|}_{X_{L}}\geq\Big{\|}\sum_{k=1}^{m}\|u_{k}\| _{X_{L}}e_{k}\Big{\|}_{X_{L}},\]
and so the proof is completed.
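Observe that for \(X=l_{p}\), \(1\leq p<\infty\) (recall that \(\left(l_{p}\right)_{U}=\left(l_{p}\right)_{L}=l_{p}\)), both inequalities of Proposition 3.7 turn into equalities: for pairwise disjoint \(u_{k}\in l_{p}\) we have \(\left\|\sum_{k=1}^{m}u_{k}\right\|_{l_{p}}=\left(\sum_{k=1}^{m}\left\|u_{k}\right\|_{l_{p}}^{p}\right)^{1/p}=\left\|\sum_{k=1}^{m}\left\|u_{k}\right\|_{l_{p}}e_{k}\right\|_{l_{p}}\).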
By Theorem 3.3, both spaces \(X_{U}\) and \(X_{L}\) are Banach lattices, and hence the \(X_{U}\)- and \(X_{L}\)-constructions can also be applied to them. However, this process already terminates at the second step, because of the following result.
**Proposition 3.8**.: _(a) For every Banach sequence lattice \(E\) we have \(E\stackrel{{ 1}}{{\hookrightarrow}}E_{L}\). If additionally \(E\) is order semi-continuous, then \(E_{U}\stackrel{{ 1}}{{\hookrightarrow}}E\)._
_(b) For every Banach lattice \(X\) we have \((X_{L})_{L}=X_{L}\) and \((X_{U})_{U}=X_{U}\) isometrically._
Proof.: (a) We show first that \(E\stackrel{{ 1}}{{\hookrightarrow}}E_{L}\). Let \(n\in\mathbb{N}\) be arbitrary. Since \(\{e_{i}\}_{i=1}^{n}\in\mathfrak{B}_{n}\left(E\right)\), for every \(a=(a_{i})_{i=1}^{\infty}\in E\), we can write
\[\|a\|_{E}\geq\Big{\|}\sum_{i=1}^{n}a_{i}e_{i}\Big{\|}_{E}\geq\Phi_{n}((a_{i} )_{i=1}^{n})\geq\|(a_{i})_{i=1}^{n}\|_{E_{L}(n)}.\]
Consequently, \(\|a\|_{E}\geq\|a\|_{E_{L}}\), i.e., \(E\stackrel{{ 1}}{{\hookrightarrow}}E_{L}\).
Assume now that \(E\) is order semi-continuous. Then, for every \(a=(a_{i})_{i=1}^{\infty}\in E_{U}\) and \(n\in\mathbb{N}\) we have
\[\|(a_{i})_{i=1}^{n}\|_{E_{U}(n)}\geq\Big{\|}\sum_{i=1}^{n}a_{i}e_{i}\Big{\|}_ {E},\]
whence
\[\|a\|_{E_{U}}\geq\Big{\|}\sum_{i=1}^{n}a_{i}e_{i}\Big{\|}_{E},\ \ n\in\mathbb{N}.\]
Therefore, since \(E\) is order semi-continuous, we get \(\|a\|_{E_{U}}\geq\|a\|_{E}\) for each \(a\in E_{U}\). Thus, the proof of (a) is completed.
(b) If \(X\) is an arbitrary Banach lattice, then by Lemma 3.6, \(X_{L}\) is an order semi-continuous Banach sequence lattice. Hence, from the already proved part (a) it follows that \(X_{L}\stackrel{{ 1}}{{\hookrightarrow}}(X_{L})_{L}\). It remains to check that \((X_{L})_{L}\stackrel{{ 1}}{{\hookrightarrow}}X_{L}\).
Suppose \(n\in\mathbb{N}\) and \(b=(b_{i})_{i=1}^{n}\in\mathbb{R}^{n}\) is arbitrary. For every \(\varepsilon>0\) there is a sequence \(\left\{u_{i}\right\}_{i=1}^{n}\in\mathfrak{B}_{n}\left(X_{L}\right)\) such that
\[\Phi_{n}(b)\geq(1-\varepsilon)\Big{\|}\sum_{i=1}^{n}b_{i}u_{i}\Big{\|}_{X_{L}}.\]
Hence, by Proposition 3.7(ii), we obtain
\[\Phi_{n}(b)\geq(1-\varepsilon)\Big{\|}\sum_{i=1}^{n}b_{i}\|u_{i}\|_{X_{L}}e_{i }\Big{\|}_{X_{L}}=(1-\varepsilon)\Big{\|}\sum_{i=1}^{n}b_{i}e_{i}\Big{\|}_{X_{L }}.\]
Now, let \(a=(a_{i})_{i=1}^{\infty}\in(X_{L})_{L}\) and \(n\in\mathbb{N}\). Let \(\sum_{i=1}^{n}a_{i}e_{i}=\sum_{k\in F}b^{k}\), where \(F\subset\mathbb{N}\) is a finite set and \(b^{k}=(b_{i}^{k})_{i=1}^{n}\), \(k\in F\). Then, from the preceding estimate and the triangle inequality it follows that
\[\sum_{k\in F}\Phi_{n}((b_{i}^{k})_{i=1}^{n})\geq(1-\varepsilon)\sum_{k\in F}\Big\|\sum_{i=1}^{n}b_{i}^{k}e_{i}\Big\|_{X_{L}}\geq(1-\varepsilon)\Big\|\sum_{i=1}^{n}a_{i}e_{i}\Big\|_{X_{L}}.\]
Hence, taking the infimum over all the above representations of \((a_{i})_{i=1}^{n}\), we obtain
\[\|a\|_{(X_{L})_{L}}\geq\|(a_{i})_{i=1}^{n}\|_{(X_{L})_{L}(n)}\geq(1-\varepsilon )\|(a_{i})_{i=1}^{n}\|_{X_{L}(n)}.\]
Since \(\varepsilon>0\) is arbitrary and \(X_{L}\) is order semi-continuous, one can easily get now that \(\|a\|_{(X_{L})_{L}}\geq\|a\|_{X_{L}}\), and so the proof of the equality \((X_{L})_{L}=X_{L}\) is completed.
The proof of the fact that \((X_{U})_{U}=X_{U}\) is very similar and simpler. Again, in view of Lemma 3.6 and the part (a) of this proposition, it suffices to show that \(X_{U}\overset{1}{\hookrightarrow}(X_{U})_{U}\). Indeed, if \(a=(a_{i})_{i=1}^{\infty}\in X_{U}\) and \(n\in\mathbb{N}\), then for each sequence \(\left\{u_{i}\right\}_{i=1}^{n}\in\mathfrak{B}_{n}\left(X_{U}\right)\), by Proposition 3.7(i), we have
\[\Big{\|}\sum_{i=1}^{n}a_{i}u_{i}\Big{\|}_{X_{U}}\leq\Big{\|}\sum_{i=1}^{n}a_{i }\|u_{i}\|_{X_{U}}e_{i}\Big{\|}_{X_{U}}=\Big{\|}\sum_{i=1}^{n}a_{i}e_{i}\Big{\|} _{X_{U}}\leq\|a\|_{X_{U}}.\]
Therefore,
\[\|(a_{i})_{i=1}^{n}\|_{(X_{U})_{U}(n)}\leq\|a\|_{X_{U}}\ \ \text{for all}\ n\in \mathbb{N},\]
whence \(\|a\|_{(X_{U})_{U}}\leq\|a\|_{X_{U}}\). Thus, the proof of the proposition is completed.
### Optimal sequence spaces and upper/lower estimates of Banach lattices
As we will see in this section, the properties of the spaces \(X_{U}\) and \(X_{L}\) are largely determined by the optimal upper and lower estimates of the given Banach lattice \(X\). The connections revealed in the next proposition will play an important role in the proof of our main Theorem 2.8. Recall that \(\delta\left(X\right)\) and \(\sigma\left(X\right)\) are the Grobler-Dodds indices of a Banach lattice \(X\) (see Section 2.2).
**Proposition 3.9**.: _Let \(X\) be a Banach lattice. Then,_
_(i) \(X_{U}\overset{1}{\hookrightarrow}l_{\delta(X)}\) and \(l_{p}\hookrightarrow X_{U}\) for every \(p<\delta(X)\);_
_(ii) \(l_{\sigma(X)}\overset{1}{\hookrightarrow}X_{L}\) and \(X_{L}\hookrightarrow l_{q}\) for every \(q>\sigma(X)\);_
_(iii) \(X_{U}=l_{p}\) if and only if \(p=\delta(X)\) and \(X\) admits an upper \(\delta(X)\)-estimate;_
_(iv) \(X_{L}=l_{q}\) if and only if \(q=\sigma(X)\) and \(X\) admits a lower \(\sigma(X)\)-estimate._
Proof.: (i) Let \(p<\delta(X)\) and \(a=(a_{k})_{k=1}^{\infty}\in l_{p}\). Since \(X\) admits an upper \(p\)-estimate, then for every \(n\in\mathbb{N}\) and \(\left\{x_{k}\right\}_{k=1}^{n}\in\mathfrak{B}_{n}\left(X\right)\) we have
\[\left\|\sum_{k=1}^{n}a_{k}x_{k}\right\|_{X}\leq C_{p}\left(\sum_{k=1}^{n}|a_{k }|^{p}\right)^{1/p},\]
where \(C_{p}\) depends only on \(p\). Consequently, by the definition of \(X_{U}\), we get
\[\|a\|_{X_{U}}=\sup_{n=1,2,\ldots}\|(a_{k})_{k=1}^{n}\|_{X_{U}(n)}\leq C_{p}\|a \|_{l_{p}}.\]
Next, suppose \(a=(a_{k})_{k=1}^{\infty}\in X_{U}\). By Schep's result (see [29]), \(l_{\delta(X)}\) is finitely lattice representable in \(X\), which implies that for every \(\varepsilon>0\) and \(n\in\mathbb{N}\) there exists a sequence \(\left\{y_{k}\right\}_{k=1}^{n}\in\mathfrak{B}_{n}\left(X\right)\) such that for any \(a_{k}\in\mathbb{R}\), \(k=1,2,\ldots,n\), we have
\[\left\|\sum_{k=1}^{n}a_{k}y_{k}\right\|_{X}\geq(1-\varepsilon)\|(a_{k})_{k=1} ^{n}\|_{l_{\delta(X)}}.\]
Hence,
\[\|(a_{k})_{k=1}^{n}\|_{l_{\delta(X)}}\leq\frac{1}{1-\varepsilon}\left\|\sum_{ k=1}^{n}a_{k}y_{k}\right\|_{X}\leq\frac{1}{1-\varepsilon}\|(a_{k})_{k=1}^{n}\| _{X_{U}(n)}\leq\frac{1}{1-\varepsilon}\|a\|_{X_{U}}.\]
Since \(n\in\mathbb{N}\) and \(\varepsilon>0\) are arbitrary, we have \(\|a\|_{l_{\delta(X)}}\leq\|a\|_{X_{U}}\), which completes the proof of (i).
(ii) Let \(q>\sigma(X)\), \(n\in\mathbb{N}\) and \(a=(a_{k})_{k=1}^{n}\in\mathbb{R}^{n}\). Since \(X\) admits a lower \(q\)-estimate, then for every \(\left\{x_{k}\right\}_{k=1}^{n}\in\mathfrak{B}_{n}\left(X\right)\) we have
\[\|a\|_{l_{q}^{n}}=\left(\sum_{k=1}^{n}|a_{k}|^{q}\right)^{1/q}\leq C_{q}\left\| \sum_{k=1}^{n}a_{k}x_{k}\right\|_{X},\]
where \(C_{q}\) depends only on \(q\). Passing to the infimum over all sequences \(\left\{x_{k}\right\}_{k=1}^{n}\in\mathfrak{B}_{n}\left(X\right)\), we come to the inequality
\[\|a\|_{l_{q}^{n}}\leq C_{q}\Phi_{n}(a).\]
Next, if \(a=\sum_{l\in F}b^{l}\), where \(F\subset\mathbb{N}\) is finite and \(b^{l}\in\mathbb{R}^{n}\), we have
\[\|a\|_{l_{q}^{n}}\leq\sum_{l\in F}\|b^{l}\|_{l_{q}^{n}}\leq C_{q}\sum_{l\in F }\Phi_{n}(b^{l}).\]
Passing to the infimum over all above representations of \(a\), we obtain
\[\|a\|_{l_{q}^{n}}\leq C_{q}\|a\|_{X_{L}(n)}.\]
Since this holds for every \(n\in\mathbb{N}\) and \(a=(a_{k})_{k=1}^{n}\in\mathbb{R}^{n}\), by using Lemma 3.6, we conclude that \(X_{L}\hookrightarrow l_{q}\).
Further, again appealing to [29], we have that \(l_{\sigma(X)}\) is finitely lattice representable in \(X\). Therefore, for every \(\varepsilon>0\) and \(n\in\mathbb{N}\) there exists a sequence
\(\left\{y_{k}\right\}_{k=1}^{n}\in\mathfrak{B}_{n}\left(X\right)\) such that for any \(a=(a_{k})_{k=1}^{n}\in\mathbb{R}^{n}\) it holds
\[\left\|\sum_{k=1}^{n}a_{k}y_{k}\right\|_{X}\leq(1+\varepsilon)\|a\|_{l_{ \sigma(X)}}.\]
In consequence, by the definition of the \(X_{L}(n)\)-norm,
\[\|a\|_{X_{L}(n)}\leq\Phi_{n}(a)\leq(1+\varepsilon)\|a\|_{l_{\sigma(X)}},\]
and hence again for each \(a\in l_{\sigma(X)}\) and any \(\varepsilon>0\) it follows that
\[\|a\|_{X_{L}}\leq(1+\varepsilon)\|a\|_{l_{\sigma(X)}}.\]
Application of Lemma 3.6 again completes the proof.
(iii) If \(X\) admits an upper \(\delta(X)\)-estimate, the same argument as in the proof of (i) implies that \(l_{\delta(X)}\hookrightarrow X_{U}\). Combining this with the first embedding in (i), we get that \(X_{U}=l_{\delta(X)}\).
Conversely, let \(X_{U}=l_{p}\) for some \(p\geq 1\). Then, from (i) it follows immediately that \(p\) should be equal to \(\delta(X)\). It remains to show that \(X\) admits an upper \(\delta(X)\)-estimate.
Suppose that \(x_{k}\in X\), \(k=1,2,\ldots,n\), are arbitrary pair-wise disjoint elements. Then, by the definition of the \(X_{U}(n)\)-norm and the fact that \(X_{U}=l_{\delta(X)}\), we have
\[\left\|\sum_{k=1}^{n}x_{k}\right\|_{X}\leq\left\|\sum_{k=1}^{n}\|x_{k}\|_{X}e_ {k}\right\|_{X_{U}(n)}\leq C\left(\sum_{k=1}^{n}\|x_{k}\|_{X}^{\delta(X)} \right)^{1/\delta(X)},\]
where \(C\) does not depend on \(n\) and \(x_{k}\). This means that \(X\) admits an upper \(\delta(X)\)-estimate.
(iv) If \(X\) admits a lower \(\sigma(X)\)-estimate, then from (ii) it follows immediately then \(X_{L}=l_{\sigma(X)}\).
Conversely, if \(X_{L}=l_{q}\), then, by (ii), \(q=\sigma(X)\). Consequently, for every pair-wise disjoint \(x_{k}\in X\), \(k=1,2,\ldots,n\), by the definition of the \(X_{L}(n)\)-norm, it follows
\[\left(\sum_{k=1}^{n}\|x_{k}\|_{X}^{\sigma(X)}\right)^{1/\sigma(X)}\leq C \left\|\sum_{k=1}^{n}\|x_{k}\|_{X}e_{k}\right\|_{X_{L}(n)}\leq C\left\|\sum_{ k=1}^{n}x_{k}\right\|_{X}.\]
Therefore, \(X\) admits a lower \(\sigma(X)\)-estimate, and the proof is complete.
In some cases, an application of the last proposition allows to find immediately the optimal sequence spaces.
**Example 3.10**.: Let \(1<p<\infty\), \(1\leq q<\infty\) and let \(L_{p,q}=L_{p,q}(I)\) be the Lorentz space, where \(I=[0,1]\) or \((0,\infty)\) (see Section 2.4). It is well known that \(\delta(L_{p,q})=\min(p,q)\), \(\sigma(L_{p,q})=\max(p,q)\), and moreover, that \(L_{p,q}\) admits an upper \(\delta(L_{p,q})\)-estimate and a lower \(\sigma(L_{p,q})\)-estimate (see e.g. [12, Theorem 3]). Consequently, by Proposition 3.9, \((L_{p,q})_{U}=l_{\min(p,q)}\) and \((L_{p,q})_{L}=l_{\max(p,q)}\).
**Corollary 3.11**.: _Let \(X\) be a Banach lattice. Then, \(\delta(X_{U})=\delta(X)\) and \(\sigma(X_{L})=\sigma(X)\)._
Proof.: First, we claim that \(X_{U}\) admits an upper \(p\)-estimate for every \(p<\delta(X)\). Indeed, if \(p<\delta(X)\) and \(u_{i}\in X_{U}\), \(i=1,\ldots,n\), are disjoint, by Propositions 3.7 and 3.9(i), we have
\[\Big{\|}\sum_{i=1}^{n}u_{i}\Big{\|}_{X_{U}}\leq\Big{\|}\sum_{i=1}^{n}\|u_{i}\|_ {X_{U}}e_{i}\Big{\|}_{X_{U}}\leq C_{p}\Big{(}\sum_{i=1}^{n}\|u_{i}\|_{X_{U}}^{p }\Big{)}^{1/p}.\]
Since \(p<\delta(X)\) is arbitrary, this inequality implies that \(\delta(X)\leq\delta(X_{U})\). It remains to prove the opposite inequality.
Suppose that \(X_{U}\) admits an upper \(p\)-estimate with \(p>\delta(X)\). Then, there is a constant \(C>0\) such that for every \(n\in\mathbb{N}\) and any \(a_{k}\in\mathbb{R}\), \(k=1,2,\ldots,n\), we have
\[\Big{\|}\sum_{k=1}^{n}a_{k}e_{k}\Big{\|}_{X_{U}}\leq C\|(a_{k})_{k=1}^{n}\|_ {l_{p}}.\]
On the other hand, by Proposition 3.9(i),
\[\Big{\|}\sum_{k=1}^{n}a_{k}e_{k}\Big{\|}_{X_{U}}\geq\|(a_{k})_{k=1}^{n}\|_{l_ {\delta(X)}}\]
for all \(n\in\mathbb{N}\) and \(a_{k}\in\mathbb{R}\), \(k=1,2,\ldots,n\). Since \(p>\delta(X)\), combining these inequalities, we come to a contradiction. Thus, \(\delta(X_{U})=\delta(X)\), as required.
Similarly, by using Propositions 3.7 and 3.9(ii), one can easily check that the space \(X_{L}\) has a lower \(p\)-estimate for every \(p>\sigma(X)\). Therefore, \(\sigma(X_{L})\leq\sigma(X)\), and the equality \(\sigma(X_{L})=\sigma(X)\) will be proved, once we check that \(X_{L}\) does not admit a lower \(p\)-estimate with any \(p<\sigma(X)\). To the contrary, assume that for some \(p<\sigma(X)\) there is a constant \(C>0\) such that for every \(n\in\mathbb{N}\) and any \(a_{k}\in\mathbb{R}\), \(k=1,2,\ldots,n\),
\[\|(a_{k})_{k=1}^{n}\|_{l_{p}}\leq C\left\|\sum_{k=1}^{n}a_{k}e_{k}\right\|_{X_ {L}}.\]
On the other hand, from Proposition 3.9(ii) it follows that
\[\left\|\sum_{k=1}^{n}a_{k}e_{k}\right\|_{X_{L}}\leq\|(a_{k})_{k=1}^{n}\|_{l_{ \sigma(X)}}\]
for all \(n\in\mathbb{N}\) and \(a_{k}\in\mathbb{R}\), \(k=1,2,\ldots,n\). Since the latter estimates imply a contradiction, everything is done.
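For instance, in the setting of Example 3.10 the equalities of Corollary 3.11 can be checked directly: \(\delta\left(\left(L_{p,q}\right)_{U}\right)=\delta\left(l_{\min(p,q)}\right)=\min(p,q)=\delta\left(L_{p,q}\right)\) and, similarly, \(\sigma\left(\left(L_{p,q}\right)_{L}\right)=\sigma\left(l_{\max(p,q)}\right)=\max(p,q)=\sigma\left(L_{p,q}\right)\).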
## 4. Proof of Theorem 2.8
We start with some auxiliary assertions.
Our first result shows that relative \(s\)-decomposability of Banach lattices \(X\) and \(Y\) implies that each sequence from the space \(l_{s}\) can be treated as a multiplicator, bounded from \(X_{L}\) into \(Y_{U}\).
**Proposition 4.1**.: _Let \(X\) and \(Y\) be relatively \(s\)-decomposable Banach lattices. Then, we have_
\[X_{L}\cdot l_{s}\hookrightarrow Y_{U},\]
_i.e., the conditions \(a=\left\{a_{i}\right\}_{i=1}^{\infty}\in X_{L}\), \(b=\left\{b_{i}\right\}_{i=1}^{\infty}\in l_{s}\) imply \(ab:=\left\{a_{i}b_{i}\right\}_{i=1}^{\infty}\in Y_{U}\) and_
\[\left\|ab\right\|_{Y_{U}}\leq D_{s}\left(X,Y\right)\left\|b\right\|_{l_{s}} \left\|a\right\|_{X_{L}}.\]
Proof.: Let \(n\) be any positive integer and \(D>D_{s}\left(X,Y\right).\) For arbitrary \(a=\left\{a_{i}\right\}_{i=1}^{\infty}\in X_{L},\)\(b=\left\{b_{i}\right\}_{i=1}^{\infty}\in l_{s}\) we put \(a^{(n)}:=\sum_{i=1}^{n}a_{i}e_{i},b^{(n)}:=\sum_{i=1}^{n}b_{i}e_{i}.\) Since \(X\) and \(Y\) are relatively \(s\)-decomposable, for any \(\left\{x_{i}\right\}\in\mathfrak{B}_{n}\left(X\right)\) and \(\left\{y_{i}\right\}_{i=1}^{n}\in\mathfrak{B}_{n}\left(Y\right)\) it holds
\[\left\|\sum_{i=1}^{n}a_{i}b_{i}y_{i}\right\|_{Y}\leq D\left(\sum_{i=1}^{n} \left|b_{i}\right|^{s}\right)^{1/s}\left\|\sum_{i=1}^{n}a_{i}x_{i}\right\|_{X}.\]
By taking the supremum over all sequences \(\left\{y_{i}\right\}_{i=1}^{n}\) and the infimum over all sequences \(\left\{x_{i}\right\}_{i=1}^{n}\) we infer
\[\left\|a^{(n)}b^{(n)}\right\|_{Y_{U}(n)}\leq D\left\|b\right\|_{l_{s}}\Phi_{n }\left(a^{(n)}\right)\]
Next, write \(a^{(n)}=\sum_{k\in F}a^{k}\) for some finite set \(F\subset\mathbb{N}\) and \(a^{k}=\left\{a_{i}^{k}\right\}_{i=1}^{n}\), \(k\in F\). Then \(a^{(n)}b^{(n)}=\sum_{k\in F}b^{(n)}a^{k}\) and the preceding estimate implies
\[\left\|a^{(n)}b^{(n)}\right\|_{Y_{U}(n)}\leq\sum_{k\in F}\left\|b^{(n)}a^{k} \right\|_{Y_{U}(n)}\leq D\left\|b\right\|_{l_{s}}\left(\sum_{k\in F}\Phi_{n} \left(a^{k}\right)\right).\]
After taking the infimum over all such decompositions of \(a^{(n)}\) we obtain
\[\left\|a^{(n)}b^{(n)}\right\|_{Y_{U}(n)}\leq D\left\|b\right\|_{l_{s}}\left\| a^{(n)}\right\|_{X_{L}(n)},\ \ n\in\mathbb{N},\]
which implies the claimed result (see also Lemma 3.6).
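In the model case \(X=L_{q}(I)\), \(Y=L_{p}(I)\) (with \(I\) as in Example 3.10), where \(1\leq p\leq q<\infty\) and \(1/p=1/q+1/s\), these lattices are relatively \(s\)-decomposable by Proposition 2.4, \(X_{L}=l_{q}\) and \(Y_{U}=l_{p}\) by Proposition 3.9, and Proposition 4.1 reduces to the classical Hölder inequality \(\left\|ab\right\|_{l_{p}}\leq\left\|a\right\|_{l_{q}}\left\|b\right\|_{l_{s}}\).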
**Proposition 4.2**.: _Suppose that \(X\) and \(Y\) are relatively \(s\)-decomposable Banach lattices for some \(1\leq s\leq\infty\), \(l_{p}\) is finitely lattice representable in \(Y_{U}\), where \(p\leq s\)._
_Then, \(X\) satisfies a lower \(q\)-estimate for every \(q\) such that \(1/p\geq 1/q+1/s\), and \(M_{[q]}\left(X\right)\leq D_{s}\left(X,Y\right)\)._
Proof.: Let \(n\) be any positive integer and \(\varepsilon>0\) be arbitrary. By the assumption, we can find pair-wise disjoint elements \(u_{i}\in Y_{U},\)\(i=1,\ldots,n,\) satisfying
\[\left\|b\right\|_{l_{p}^{n}}\leq\left\|\sum_{i=1}^{n}b_{i}u_{i}\right\|_{Y_{U} }\leq\left(1+\varepsilon\right)\left\|b\right\|_{l_{p}^{n}} \tag{4.1}\]
for all sequences \(b=\left\{b_{i}\right\}_{i=1}^{n}\) of scalars.
Let \(\left\{x_{i}\right\}_{i=1}^{n}\in\mathfrak{B}_{n}\left(X\right)\) and \(D>D_{s}\left(X,Y\right).\) Then for any sequence \(\left\{a_{i}\right\}_{i=1}^{n}\) of scalars, by using (4.1), Proposition 3.7(i) and relative \(s\)-decomposability of
\(X\) and \(Y\), we have
\[\left(\sum_{i=1}^{n}\left|a_{i}b_{i}\right|^{p}\right)^{1/p} \leq \left\|\sum_{i=1}^{n}a_{i}b_{i}u_{i}\right\|_{Y_{U}}\leq\left\|\sum _{i=1}^{n}a_{i}b_{i}\left\|u_{i}\right\|_{Y_{U}}e_{i}\right\|_{Y_{U}}\] \[= \sup\left\{\left\|\sum_{i=1}^{n}a_{i}b_{i}\left\|u_{i}\right\|_{Y _{U}}y_{i}\right\|_{Y}:\left\{y_{i}\right\}_{i=1}^{n}\in\mathfrak{B}_{n}\left( Y\right)\right\}\] \[\leq D\left(\sum_{i=1}^{n}\left|b_{i}\right|^{s}\right)^{1/s}\sup_{1 \leq i\leq n}\left\|u_{i}\right\|_{Y_{U}}\left\|\sum_{i=1}^{n}a_{i}x_{i} \right\|_{X}\]
Consequently, since from (4.1) it follows \(\left\|u_{i}\right\|_{Y_{U}}\leq 1+\varepsilon\), we get
\[\left(\sum_{i=1}^{n}\left|a_{i}b_{i}\right|^{p}\right)^{1/p}\leq\left(1+ \varepsilon\right)D\left(\sum_{i=1}^{n}\left|b_{i}\right|^{s}\right)^{1/s} \left\|\sum_{i=1}^{n}a_{i}x_{i}\right\|_{X}.\]
Hence, by the reverse Holder inequality,
\[\left(\sum_{i=1}^{n}\left|a_{i}\right|^{q}\right)^{1/q}\leq\left(1+\varepsilon \right)D\left\|\sum_{i=1}^{n}a_{i}x_{i}\right\|_{X}\]
whenever \(1/q\leq 1/p-1/s\). Thus, \(X\) satisfies a lower \(q\)-estimate and \(M_{\left[q\right]}(X)\leq D_{s}\left(X,Y\right)\).
**Proposition 4.3**.: _Suppose that \(X\) and \(Y\) are relatively \(s\)-decomposable Banach lattices for some \(1\leq s\leq\infty\), \(l_{q}\) is finitely lattice representable in \(X_{L}\) and \(1/q+1/s\leq 1\)._
_Then, \(Y\) satisfies an upper \(p\)-estimate for every \(p\) such that \(1/p\geq 1/q+1/s\), and \(M^{\left[p\right]}\left(Y\right)\leq D_{s}\left(X,Y\right)\)._
Proof.: Let \(n\) be a positive integer and \(\varepsilon>0\) be arbitrary. By the assumption, we can select pair-wise disjoint elements \(u_{i}\in X_{L}\), \(i=1,\ldots,n\), such that
\[\left\|b\right\|_{l_{q}^{n}}\leq\left\|\sum_{i=1}^{n}b_{i}u_{i}\right\|_{X_{L }}\leq\left(1+\varepsilon\right)\left\|b\right\|_{l_{q}^{n}} \tag{4.2}\]
for all sequences \(b=\left\{b_{i}\right\}_{i=1}^{n}\) of scalars.
Suppose \(\left\{y_{i}\right\}_{i=1}^{n}\in\mathfrak{B}_{n}\left(Y\right)\) and \(D>D_{s}\left(X,Y\right).\) For each \(b=\left\{b_{i}\right\}_{i=1}^{n}\) we write \(b=\sum_{k\in F}b^{k}\), where \(F\subset\mathbb{N}\) is a finite set and \(b^{k}=\left\{b_{i}^{k}\right\}_{i=1}^{n}\) are arbitrary. Then, for every sequences \(\left\{x_{i}^{k}\right\}_{i=1}^{n}\in\mathfrak{B}_{n}\left(X\right)\), \(k\in F\), and any sequence \(a=\left\{a_{i}\right\}_{i=1}^{n}\) of scalars, by the triangle inequality and relative \(s\)-decomposability of \(X\) and \(Y\), we have
\[\left\|\sum_{i=1}^{n}a_{i}b_{i}y_{i}\right\|_{Y}\leq\sum_{k\in F}\left\|\sum_{i =1}^{n}a_{i}b_{i}^{k}y_{i}\right\|_{Y}\leq D\left(\sum_{i=1}^{n}\left|a_{i} \right|^{s}\right)^{1/s}\sum_{k\in F}\left\|\sum_{i=1}^{n}b_{i}^{k}x_{i}^{k} \right\|_{X}.\]
Therefore, taking the infimum over all sequences \(\left\{x_{i}^{k}\right\}_{i=1}^{n}\in\mathfrak{B}_{n}\left(X\right)\) for each \(k\in F\) implies that
\[\left\|\sum_{i=1}^{n}a_{i}b_{i}y_{i}\right\|_{Y}\leq D\left\|a\right\|_{l_{s}^{ n}}\sum_{k\in F}\Phi_{n}\left(b^{k}\right),\]
and hence
\[\left\|\sum_{i=1}^{n}a_{i}b_{i}y_{i}\right\|_{Y}\leq D\left\|a\right\|_{l_{s}^{ n}}\left\|b\right\|_{X_{L}}.\]
Thus, applying Proposition 3.7(ii) and inequalities (4.2), we obtain
\[\left\|\sum_{i=1}^{n}a_{i}b_{i}y_{i}\right\|_{Y} \leq D\left\|a\right\|_{l_{s}^{n}}\left\|\sum_{i=1}^{n}b_{i}e_{i} \right\|_{X_{L}}\leq D\left\|a\right\|_{l_{s}^{n}}\left\|\sum_{i=1}^{n}b_{i} \frac{u_{i}}{\left\|u_{i}\right\|_{X_{L}}}\right\|_{X_{L}}\] \[\leq D\left\|a\right\|_{l_{s}^{n}}\left\|\sum_{i=1}^{n}b_{i}u_{i} \right\|_{X_{L}}\leq D\left(1+\varepsilon\right)\left\|a\right\|_{l_{s}^{n}} \left\|b\right\|_{l_{q}^{n}}.\]
By the reverse Holder inequality, this implies that
\[\left\|\sum_{i=1}^{n}c_{i}y_{i}\right\|_{Y}\leq D\left(1+\varepsilon\right) \left(\sum_{i=1}^{n}\left|c_{i}\right|^{p}\right)^{1/p},\]
whenever \(\frac{1}{p}\geq\frac{1}{q}+\frac{1}{s}\). As a result, \(Y\) satisfies an upper \(p\)-estimate and \(M^{\left|p\right|}(Y)\leq D_{s}(X,Y)\).
**Proposition 4.4**.: _Suppose Banach lattices \(X\) and \(Y\) satisfy the following conditions:_
\((a)\)_\(X\), \(Y\) are relatively s-decomposable for some \(1\leq s\leq\infty\);_
\((b)\)_\(l_{p}\) is finitely lattice representable in \(Y_{U}\) ;_
\((c)\)_\(l_{q}\) is finitely lattice representable in \(X_{L}\)._
_Then, it holds_
\[\frac{1}{p}\leq\frac{1}{q}+\frac{1}{s}\]
Proof.: Let \(n\) be a positive integer and \(\varepsilon>0\) be arbitrary. By assumption \((b)\), there exist pair-wise disjoint elements \(y_{i}\in Y_{U}\), \(i=1,\ldots,n\), such that for all scalar sequences \(b=\left\{b_{i}\right\}_{i=1}^{n}\)
\[\left\|b\right\|_{l_{p}^{n}}\leq\left\|\sum_{i=1}^{n}b_{i}y_{i}\right\|_{Y_{U }(n)}\leq\left(1+\varepsilon\right)\left\|b\right\|_{l_{p}^{n}}.\]
In the same manner, using \((c)\), we can select pair-wise disjoint \(x_{i}\in X_{L}\), \(i=1,\ldots,n\), such that for all scalar sequences \(a=\left\{a_{i}\right\}_{i=1}^{n}\)
\[\left\|a\right\|_{l_{q}^{n}}\leq\left\|\sum_{i=1}^{n}a_{i}x_{i}\right\|_{X_{L }(n)}\leq\left(1+\varepsilon\right)\left\|a\right\|_{l_{q}^{n}}.\]
Applying these inequalities and Propositions 3.7(i), 4.1 and 3.7(ii), we obtain
\[\left\|ab\right\|_{l_{p}^{n}} \leq \left\|\sum_{i=1}^{n}a_{i}b_{i}y_{i}\right\|_{Y_{U}(n)}\leq\left\| \sum_{i=1}^{n}a_{i}b_{i}\left\|y_{i}\right\|_{Y_{U}}e_{i}\right\|_{Y_{U}(n)}\] \[\leq \left(1+\varepsilon\right)D_{s}\left(X,Y\right)\left\|b\right\|_ {l_{s}^{n}}\left\|\sum_{i=1}^{n}a_{i}e_{i}\right\|_{X_{L}(n)}\] \[\leq \left(1+\varepsilon\right)D_{s}\left(X,Y\right)\left\|b\right\|_ {l_{s}^{n}}\left\|\sum_{i=1}^{n}a_{i}x_{i}\right\|_{X_{L}(n)}\] \[\leq \left(1+\varepsilon\right)^{2}D_{s}\left(X,Y\right)\left\|b \right\|_{l_{s}^{n}}\left\|a\right\|_{l_{q}^{n}}.\]
Since \(n\in\mathbb{N}\), \(b=\left\{b_{i}\right\}_{i=1}^{n}\) and \(a=\left\{a_{i}\right\}_{i=1}^{n}\) are arbitrary, the claim follows (see also Example 2.2).
Recall that
\[s_{\max}=s_{\max}(X,Y):=\sup\{s\in\left[1,\infty\right]:\,X,Y\ \text{are relatively $s$-decomposable}\}.\]
**Proposition 4.5**.: _Let \(X\) and \(Y\) be Banach lattices such that \(\delta(Y)\leq\sigma(X)\). Then, we have_
\[\frac{1}{\delta\left(Y\right)}=\frac{1}{\sigma\left(X\right)}+\frac{1}{s_{\max }}. \tag{4.3}\]
Proof.: Assume first that \(s_{\max}>1\). Then, if \(1\leq s<s_{\max}\), \(X\) and \(Y\) are relatively \(s\)-decomposable. Moreover, by Schep's result [29], \(l_{\delta\left(Y_{U}\right)}\) and \(l_{\sigma\left(X_{L}\right)}\) are finitely lattice representable in \(Y_{U}\) and \(X_{L}\), respectively. Hence, all the conditions of Proposition 4.4 are fulfilled and we conclude
\[\frac{1}{\delta\left(Y_{U}\right)}\leq\frac{1}{\sigma\left(X_{L}\right)}+ \frac{1}{s}.\]
On the other hand, by Corollary 3.11, \(\delta\left(Y_{U}\right)=\delta\left(Y\right)\) and \(\sigma\left(X_{L}\right)=\sigma\left(X\right)\). Consequently, we have
\[\frac{1}{\delta\left(Y\right)}\leq\frac{1}{\sigma\left(X\right)}+\frac{1}{s}.\]
Since this holds for all \(s<s_{\max}\), it follows
\[\frac{1}{\delta\left(Y\right)}\leq\frac{1}{\sigma\left(X\right)}+\frac{1}{s_{ \max}}. \tag{4.4}\]
Observe that the same arguments work also in the case when \(s_{\max}=1\), because any two Banach lattices \(X\) and \(Y\) are relatively \(1\)-decomposable. Therefore, we again get inequality (4.4).
For the opposite inequality, assume first that \(s_{\max}=\infty\). Then, (4.4) implies that \(\sigma\left(X\right)\leq\delta\left(Y\right)\). Combining this inequality with the assumption, we conclude that \(\sigma\left(X\right)=\delta\left(Y\right)\), and hence (4.4) becomes (4.3).
Let now \(s_{\max}<\infty\). Assume that (4.3) fails, i.e.,
\[\frac{1}{\delta\left(Y\right)}<\frac{1}{\sigma\left(X\right)}+\frac{1}{s_{ \max}}.\]
If \(\delta\left(Y\right)>1\) and \(\sigma\left(X\right)<\infty\), we can find \(1\leq p<\delta\left(Y\right)\), \(q>\sigma\left(X\right)\) and \(s>s_{\max}\) such that \(1/p=1/q+1/s\). Since \(X\) satisfies a lower \(q\)-estimate and \(Y\) an upper \(p\)-estimate, from Proposition 2.4 it follows that \(X\) and \(Y\) are relatively \(s\)-decomposable, which is impossible, since \(s>s_{\max}\). Thus, in this case (4.3) is proved.
If \(\delta\left(Y\right)=1\) or \(\sigma\left(X\right)=\infty\), the proof follows by the same lines in view of the fact that each Banach lattice admits an upper \(1\)-estimate and a lower \(\infty\)-estimate.
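For instance, if \(X=L_{q}(I)\) and \(Y=L_{p}(I)\) with \(1\leq p\leq q<\infty\), then \(\sigma\left(X\right)=q\), \(\delta\left(Y\right)=p\), and (4.3) gives \(1/s_{\max}=1/p-1/q\); in particular, \(s_{\max}=\infty\) precisely when \(p=q\). The relative \(s\)-decomposability of these lattices for every \(s\) with \(1/p\geq 1/q+1/s\) follows from Proposition 2.4, since \(L_{q}\) satisfies a lower \(q\)-estimate and \(L_{p}\) an upper \(p\)-estimate.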
Proof of Theorem 2.8.: We start with the case when \(\delta(Y)\leq\sigma(X)\).
\(\left(i\right)\Longrightarrow\left(ii\right)\). Assume first that \(s_{\max}>1\), \(\delta(Y)>1\) and \(\sigma(X)<\infty\).
Let \(1\leq s<s_{\max}\). Then, by Proposition 4.5, we have
\[\frac{1}{\delta\left(Y\right)}<\frac{1}{\sigma\left(X\right)}+\frac{1}{s}.\]
Consequently, for some \(1\leq p_{1}<\delta\left(Y\right)\) and \(q_{1}>\sigma\left(X\right)\) we obtain
\[\frac{1}{p_{1}}=\frac{1}{\sigma\left(X\right)}+\frac{1}{s}\;\;\text{and}\;\; \frac{1}{\delta\left(Y\right)}=\frac{1}{q_{1}}+\frac{1}{s}.\]
Since \(\delta\left(Y_{U}\right)=\delta\left(Y\right)\) and \(\sigma\left(X_{L}\right)=\sigma\left(X\right)\) (see Corollary 3.11), by [29], \(l_{\delta\left(Y\right)}\) (resp. \(l_{\sigma\left(X\right)}\)) is finitely lattice representable in \(Y_{U}\) (resp. in \(X_{L}\)). Therefore, according to Propositions 4.2 and 4.3, \(X\) satisfies a lower \(q_{1}\)-estimate, \(Y\) satisfies an upper \(p_{1}\)-estimate and \(M_{\left[q_{1}\right]}\left(X\right)\leq D_{s}\left(X,Y\right)\), \(M^{\left[p_{1}\right]}\left(Y\right)\leq D_{s}\left(X,Y\right)\). Next, if \(1/p=1/q+1/s\), where \(p<\delta\left(Y\right)\) and \(q>\sigma\left(X\right)\), we have \(p<p_{1}\) and \(q>q_{1}\). Hence, \(X\) satisfies a lower \(q\)-estimate, \(Y\) satisfies an upper \(p\)-estimate and
\[M_{\left[q\right]}\left(X\right)M^{\left[p\right]}\left(Y\right)\leq M_{\left[ q_{1}\right]}\left(X\right)M^{\left[p_{1}\right]}\left(Y\right)\leq D_{s} \left(X,Y\right)^{2}.\]
Suppose now that \(X\) and \(Y\) are relatively \(s_{\max}\)-decomposable. Since \(l_{\delta\left(Y\right)}\) is finitely lattice representable in \(Y_{U}\), by Propositions 4.2 and 4.5, \(X\) satisfies a lower \(\sigma\left(X\right)\)-estimate and \(M_{\left[\sigma\left(X\right)\right]}\left(X\right)\leq D_{s}\left(X,Y\right)\). In the same manner, applying this time Proposition 4.3, we infer that \(Y\) satisfies an upper \(\delta\left(Y\right)\)-estimate and \(M^{\left[\delta\left(Y\right)\right]}\left(Y\right)\leq D_{s}\left(X,Y\right)\). Combining this together with equality (4.3), we come to the desired result.
If \(s_{\max}=1\), or \(\delta\left(Y\right)=1\), or \(\sigma\left(X\right)=\infty\), we can use the same arguments, taking into account that any two Banach lattices \(X\) and \(Y\) are relatively \(1\)-decomposable and that every Banach lattice satisfies an upper \(1\)-estimate and a lower \(\infty\)-estimate.
\(\left(ii\right)\Longrightarrow\left(i\right)\). This implication together with the inequality
\[D_{s}\left(X,Y\right)\leq M_{\left[q\right]}\left(X\right)M^{\left[p\right]} \left(Y\right)\]
is an immediate consequence of Proposition 2.4.
To complete the proof of the equivalence of \(\left(i\right)\), \(\left(ii\right)\) and \(\left(iii\right)\) it remains now to refer to Proposition 2.3.
Finally, let us prove the equivalence of the conditions \(\sigma(X)\leq\delta(Y)\) and \(s_{max}=\infty\).
If \(\sigma\left(X\right)<\delta\left(Y\right)\), then \(X\) satisfies a lower \(p\)-estimate and \(Y\) an upper \(p\)-estimate for \(p\in\left(\sigma\left(X\right),\delta\left(Y\right)\right)\). Therefore, by Proposition 2.4, \(X\), \(Y\) are relatively decomposable. Hence, \(s_{max}=\infty\). If \(\sigma\left(X\right)=\delta\left(Y\right)\), the same result follows from Proposition 4.5.
On the contrary, assume that \(s_{max}=\infty\). Then, \(X\) and \(Y\) are relatively \(s\)-decomposable for each \(s<\infty\). Therefore, since \(\delta\left(Y_{U}\right)=\delta\left(Y\right)\) and \(\sigma\left(X_{L}\right)=\sigma\left(X\right)\), by Proposition 4.4, we infer
\[\frac{1}{\delta\left(Y\right)}\leq\frac{1}{\sigma\left(X\right)}+\frac{1}{s}.\]
Letting \(s\rightarrow\infty\), we get the required inequality, and so the proof is completed.
Recall that the main result of the paper [11] (see Theorem 1.3) reads that Banach function lattices \(X\), \(Y\) are relatively decomposable (or \(\infty\)-decomposable) if and only if there exists \(p\geq 1\) such that \(X\) satisfies a lower \(p\)-estimate and \(Y\) an upper \(p\)-estimate. As an immediate consequence of Theorem 2.8 and its proof we obtain the following extension of this result to general Banach lattices.
**Corollary 4.6**.: _Banach lattices \(X\), \(Y\) are relatively decomposable if and only if there exists \(p\geq 1\) such that \(X\) satisfies a lower \(p\)-estimate and \(Y\) an upper \(p\)-estimate._
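For instance, for \(1\leq p,q<\infty\) the spaces \(X=L_{q}(I)\) and \(Y=L_{p}(I)\) are relatively decomposable if and only if \(q\leq p\): a common exponent \(r\) such that \(L_{q}\) satisfies a lower \(r\)-estimate (which forces \(r\geq q\)) and \(L_{p}\) satisfies an upper \(r\)-estimate (which forces \(r\leq p\)) exists precisely when \(q\leq p\).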
_Remark 4.7_.: In contrast to [11], our definition of relative decomposability (see Definition 2.1) deals only with finite sums. Thanks to that, we need not impose any extra condition on the lattices \(X\) and \(Y\). In particular, if \(X\) and \(Y\) are Banach lattices of measurable functions on a \(\sigma\)-finite measure space, we omit the assumption from [11, Theorem 1.3] that \(Y\) has the Fatou property.
From Proposition 3.9 and the proof of Theorem 2.8 we also deduce the following result.
**Corollary 4.8**.: _If Banach lattices \(X\), \(Y\) are relatively \(s_{max}\)-decomposable, then \(X\) admits a lower \(\sigma(X)\)-estimate and \(Y\) admits an upper \(\delta(Y)\)-estimate (equivalently, \(X_{L}=l_{\sigma(X)}\) and \(Y_{U}=l_{\delta(Y)}\))._
## 5. Applications to interpolation theory: Calderon-Mityagin couples of type \(s\).
In this section, we freely use notation and results from interpolation theory as in [4], [3], [5].
Let \(\left(X,\Sigma,\mu\right)\) be a \(\sigma\)-finite measure space. A \(\Sigma\)-measurable function \(\omega\) is called a _weight_ if \(\omega\) is non-negative \(\mu\)-a.e. on \(X\). Let \(1\leq p\leq\infty\) and let \(L_{p}\left(\omega,\mu\right)\) be the Banach space of all (equivalence classes of) \(\Sigma\)-measurable functions \(f\) with \(f\omega\in L_{p}\left(\mu\right).\) Given \(1\leq p_{0},p_{1}\leq\infty\) put \(\overrightarrow{p}=\left(p_{0},p_{1}\right).\) A Banach couple \(\overrightarrow{U}=\left(U_{0},U_{1}\right)\) of Banach lattices is called a \(L_{\overrightarrow{p}}\)_-couple_ if \(U_{i}=L_{p_{i}}\left(\omega_{i},\mu\right)\), \(i=0,1\), for some measure space \(\left(X,\Sigma,\mu\right)\) and some weights \(\omega_{0},\omega_{1}\) with respect to this measure space.
Let \(1\leq s_{0},s_{1}\leq\infty\) and \(\overrightarrow{X},\overrightarrow{Y}\) be two Banach couples of Banach lattices such that \(X_{i},Y_{i}\) are relatively \(s_{i}\)-decomposable for \(i=0,1\). Then, by Theorem 2.8, there exist \(1\leq p_{0},p_{1},q_{0},q_{1}\leq\infty\) with \(1/p_{i}=1/q_{i}+1/s_{i}\), \(i=0,1\), such that for every \(L_{\overrightarrow{q}}\)-couple \(\overrightarrow{U}=(U_{0},U_{1})\) and \(L_{\overrightarrow{p}}\)-couple \(\overrightarrow{V}=(V_{0},V_{1})\) both \(X_{i},U_{i}\) and \(V_{i},Y_{i}\) are relatively decomposable for \(i=0,1\).
Combining the last observation with the results of [11], we see that each of the pairs of couples \(\overrightarrow{X},\overrightarrow{U}\) and \(\overrightarrow{V},\overrightarrow{Y}\) has the relative Calderon-Mityagin property (\(\mathcal{C}-\mathcal{M}\) property). Hence, the \(s\)-decomposability relation of couples of Banach lattices has a certain transitivity property, which manifests itself in the factorization of this relation through the canonical \(s\)-decomposability of suitable \(L_{\overrightarrow{q}}\)- and \(L_{\overrightarrow{p}}\)-couples. More precisely, we get the following result.
**Theorem 5.1**.: _Let \(\overrightarrow{X},\overrightarrow{Y}\) be two couples of Banach lattices over a \(\sigma\)-finite measure space. If the spaces \(X_{i},Y_{i}\) are relatively \(s_{i}\)-decomposable for \(i=0,1\), where \(1\leq s_{i}\leq\infty,\) then there exist pairs \(\overrightarrow{p}\), \(\overrightarrow{q}\) of parameters such that, for every \(L_{\overrightarrow{q}}\)-couple \(\overrightarrow{U}=(U_{0},U_{1})\) and every \(L_{\overrightarrow{p}}\)-couple \(\overrightarrow{V}=(V_{0},V_{1}),\) pairs of the couples \(\overrightarrow{X},\overrightarrow{U}\) and \(\overrightarrow{V},\overrightarrow{Y}\) have the relative \(\mathcal{C}-\mathcal{M}\) property and the spaces \(U_{i},V_{i}\) are relatively \(s_{i}\)-decomposable, \(i=0,1\)._
There are many pairs of Banach couples \(\overrightarrow{X}\) and \(\overrightarrow{Y}\), which fail to have relative \(\mathcal{C}-\mathcal{M}\) property. In [9], Cwikel introduced the following weaker condition that may be satisfied by such a pair of Banach couples.
Let \(\overrightarrow{X}=(X_{0},X_{1})\) and \(\overrightarrow{Y}=(Y_{0},Y_{1})\) be two Banach couples. Given \(1\leq s\leq\infty\), define the relation \(R_{s}\) for \((x,y)\in(X_{0}+X_{1})\times(Y_{0}+Y_{1})\) by
\[xR_{s}y\iff\exists w\in L_{s}\left((0,\infty),dt/t\right)\text{ with }K(t,y; \overrightarrow{Y})\leq w(t)\cdot K(t,x;\overrightarrow{X}),\;t>0.\]
We say that the Banach couples \(\overrightarrow{X},\overrightarrow{Y}\) are of _relative \(\mathcal{C}-\mathcal{M}\) type_\(s\) whenever the relation \(xR_{s}y\) implies that \(y=Tx\) for some linear operator \(T:\overrightarrow{X}\to\overrightarrow{Y}\) (i.e., \(T:\,X_{0}+X_{1}\to Y_{0}+Y_{1}\), and \(T\) is bounded from \(X_{i}\) into \(Y_{i}\), \(i=0,1\)).
Since each \(K\)-functional is a concave nondecreasing function in \(t\), we can assume that the function \(w\) in this definition is continuous or constant on each dyadic interval. From this observation it follows easily that if \(\overrightarrow{X},\overrightarrow{Y}\) are of relative \(\mathcal{C}-\mathcal{M}\) type \(s_{1}\) and \(1\leq s_{2}\leq s_{1}\), then these couples are also of relative \(\mathcal{C}-\mathcal{M}\) type \(s_{2}\). Furthermore, it is known [7, Theorem 1] that arbitrary couples \(\overrightarrow{X},\overrightarrow{Y}\) are of relative \(\mathcal{C}-\mathcal{M}\) type \(1\). Hence, the set of real numbers \(s\) in \([1,\infty]\) such that \(\overrightarrow{X},\overrightarrow{Y}\) are of relative \(\mathcal{C}-\mathcal{M}\) type \(s\) is an interval which includes \(1.\) In [7] and [9] one can find examples of Banach couples, for which this interval is \([1,q]\), \(1\leq q<\infty\), or \([1,\infty)\) (of course, it is \([1,\infty]\) iff \(\overrightarrow{X},\overrightarrow{Y}\) have the relative \(\mathcal{C}-\mathcal{M}\) property).
Further, in [9], Cwikel proved that, if the couples \(\overrightarrow{X},\overrightarrow{Y}\) are mutually closed and \(X_{i},Y_{i}\), \(i=0,1\), are relatively \(s\)-decomposable for some \(1\leq s\leq\infty\), then
these couples are of relative \(\mathcal{C}-\mathcal{M}\) type \(s\) (see also [4, p. 606]). Let us show that, under some conditions, this implies the orbital factorization of relative \(K\)-functional estimates for such couples through suitable \(L_{\overrightarrow{p}}\)- and \(L_{\overrightarrow{q}}\)-couples.
Given Banach couples \(\overrightarrow{X},\overrightarrow{Y}\) the couple \(\overrightarrow{Y}\) is called \(\overrightarrow{X}\)_-abundant_, if for each element \(x\in X_{0}+X_{1}\) there exists \(y\in Y_{0}+Y_{1}\) such that
\[K(t,x;\overrightarrow{X})\asymp K(t,y;\overrightarrow{Y})\]
with constants independent of \(x\in X_{0}+X_{1}\) and \(t>0\) (see e.g. [4, Definition 4.4.8]). For instance, if a couple \(\overrightarrow{X}\) is regular (i.e., \(X_{0}\cap X_{1}\) is dense in \(X_{0}\) and \(X_{1}\)), then the \(L_{\overrightarrow{p}}\)-couples \(\left(l_{p_{0}}\left(\mathbb{Z},(1)_{n\in\mathbb{Z}}\right),l_{p_{1}}\left( \mathbb{Z},(2^{-n})_{n\in\mathbb{Z}}\right)\right)\) and \(\left(L_{p_{0}}\left(\mathbb{R}_{+},dt/t\right),L_{p_{1}}\left(\mathbb{R}_{+}, dt/t\right)\right)\) are \(\overrightarrow{X}\)-abundant for each pair \(\overrightarrow{p}=\left(p_{0},p_{1}\right)\)[4, Theorem 4.5.7]. With this notation, we have the following version of Theorem 5.1.
**Theorem 5.2**.: _Let \(\overrightarrow{X}=\left(X_{0},X_{1}\right)\) and \(\overrightarrow{Y}=\left(Y_{0},Y_{1}\right)\) be two Banach lattice couples over a \(\sigma\)-finite measure space such that \(X_{i},Y_{i}\), \(i=0,1\), are relatively \(s\)-decomposable for some \(1\leq s\leq\infty\). Then, there are pairs \(\overrightarrow{p}=\left(p_{0},p_{1}\right)\) and \(\overrightarrow{q}=\left(q_{0},q_{1}\right)\) of parameters such that for every \(L_{\overrightarrow{q}}\)-couple \(\overrightarrow{U}\), which is \(\overrightarrow{X}\)-abundant, and every \(L_{\overrightarrow{p}}\)-couple \(\overrightarrow{V}\), which is \(\overrightarrow{Y}\)-abundant, we have the following: If \(x\in X_{0}+X_{1},y\in Y_{0}+Y_{1}\) satisfy the relation \(xR_{s}y\), then there exist linear operators \(T_{0}:\overrightarrow{X}\to\overrightarrow{U}\), \(T_{1}:\overrightarrow{U}\to\overrightarrow{V}\), \(T_{2}:\overrightarrow{V}\to\overrightarrow{Y}\) such that \(y=T_{2}T_{1}T_{0}x\)._
Proof.: Applying first Theorem 5.1, we find parameters \(1\leq p_{i},q_{i}\leq\infty\), \(1/p_{i}=1/q_{i}+1/s\), \(i=0,1\), such that, if \(\overrightarrow{p}=\left(p_{0},p_{1}\right)\), \(\overrightarrow{q}=\left(q_{0},q_{1}\right)\), then for every \(L_{\overrightarrow{q}}\)-couple \(\overrightarrow{U}=\left(U_{0},U_{1}\right)\) and every \(L_{\overrightarrow{p}}\)-couple \(\overrightarrow{V}=\left(V_{0},V_{1}\right)\), pairs of the couples \(\overrightarrow{X},\overrightarrow{U}\) and \(\overrightarrow{V},\overrightarrow{Y}\) have relative \(\mathcal{C}-\mathcal{M}\) property and the spaces \(U_{i},V_{i}\), \(i=0,1\), are relatively \(s\)-decomposable. Next, assuming that \(x\in X_{0}+X_{1},y\in Y_{0}+Y_{1}\) satisfy \(xR_{s}y\), by using the abundance assumption, we can select \(u\in U_{0}+U_{1}\) and \(v\in V_{0}+V_{1}\) such that
\[K(t,x;\overrightarrow{X}) \asymp K(t,u;\overrightarrow{U})\] \[K(t,y;\overrightarrow{Y}) \asymp K(t,v;\overrightarrow{V})\]
with constants independent of \(x\in X_{0}+X_{1}\), \(y\in Y_{0}+Y_{1}\) and \(t>0\). Since the couples \(\overrightarrow{X}\), \(\overrightarrow{U}\) and \(\overrightarrow{V}\), \(\overrightarrow{Y}\) have the relative \(\mathcal{C}-\mathcal{M}\) property, we can find linear operators \(T_{0}:\overrightarrow{X}\to\overrightarrow{U}\) and \(T_{2}:\overrightarrow{V}\to\overrightarrow{Y}\) satisfying \(u=T_{0}x\) and \(y=T_{2}v\). Moreover, as was mentioned above (see [9]), the couples \(\overrightarrow{U}\) and \(\overrightarrow{V}\) are of relative \(\mathcal{C}-\mathcal{M}\) type \(s\). Hence, the relation \(xR_{s}y\) implies the existence of a linear operator \(T_{1}:\overrightarrow{U}\to\overrightarrow{V}\) such that \(v=T_{1}u\).
Assume now that \(\overrightarrow{X}=\left(X_{0},X_{1}\right)\) and \(\overrightarrow{Y}=\left(Y_{0},Y_{1}\right)\) are two Banach lattice couples such that \(X_{i},Y_{i}\) are relatively \(\infty\)-decomposable for \(i=0,1\). Then, the results of [11] imply that the couples \(\overrightarrow{X}\) and \(\overrightarrow{Y}\) have relative \(\mathcal{C}-\mathcal{M}\) property. Arguing in the same way as in the proof of Theorem 5.2, one can easily deduce the following factorization result.
**Theorem 5.3**.: _Let \(\overrightarrow{X}=\left(X_{0},X_{1}\right)\) and \(\overrightarrow{Y}=\left(Y_{0},Y_{1}\right)\) be two Banach lattice couples over a \(\sigma\)-finite measure space such that \(X_{0},Y_{0}\) and \(X_{1},Y_{1}\) are relatively decomposable. Then, there is a pair \(\overrightarrow{p}=\left(p_{0},p_{1}\right)\) of parameters such that for all \(L_{\overrightarrow{p}}\)-couples \(\overrightarrow{U}\) and \(\overrightarrow{V}\) such that \(\overrightarrow{U}\) is \(\overrightarrow{X}\)-abundant and \(\overrightarrow{V}\) is \(\overrightarrow{Y}\)-abundant we have the following: If \(x\in X_{0}+X_{1},y\in Y_{0}+Y_{1}\) satisfy_
\[K(t,y;\overrightarrow{Y})\leq K(t,x;\overrightarrow{X}),\ \ t>0, \tag{5.1}\]
_then there exist linear operators \(T_{0}:\overrightarrow{X}\rightarrow\overrightarrow{U}\), \(T_{1}:\overrightarrow{U}\rightarrow\overrightarrow{V}\), \(T_{2}:\overrightarrow{V}\rightarrow\overrightarrow{Y}\) with \(y=T_{2}T_{1}T_{0}x\)._
## 6. The proof of Theorem 3.3.
This proof will be broken down into a number of lemmas and propositions. The main step is Proposition 6.1, which shows under which conditions a scale of norms on \(\mathbb{R}^{n}\), \(n\in\mathbb{N},\) generates a rearrangement invariant Banach sequence lattice. The rest of this section is devoted to verifying that these conditions hold for both the \(X_{L}\)- and the \(X_{U}\)-construction.
Recall that a functional \(\Psi\) (in particular, a norm \(\left\|\cdot\right\|\)) defined on \(\mathbb{R}^{n}\) is called _lattice monotone_ (a _lattice norm_ in the case of a norm) if for any elements \(a=\left\{a_{i}\right\}_{i=1}^{n},b=\left\{b_{i}\right\}_{i=1}^{n}\in\mathbb{R}^{n}\) such that \(\left|a_{i}\right|\leq\left|b_{i}\right|,1\leq i\leq n,\) we have \(\Psi\left(a\right)\leq\Psi\left(b\right).\) Such a functional is said to be _symmetric_ if for any permutation \(\sigma\) of the set \(\left\{1,\ldots,n\right\}\) we have \(\Psi\left(\sigma a\right)=\Psi\left(a\right)\), where \(\sigma a=\left\{a_{\sigma(i)}\right\}_{i=1}^{n}.\) We also introduce the operators
\[I_{n}:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n-1},\quad\left\{a_{i}\right\}_{i=1}^{n}\mapsto\left\{a_{i}\right\}_{i=1}^{n-1},\] \[Tr_{n}:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n},\quad\left\{a_{i}\right\}_{i=1}^{n}\mapsto\left\{c_{i}\right\}_{i=1}^{n},\quad c_{i}:=\begin{cases}a_{i},&i\neq n,\\ 0,&i=n.\end{cases}\]
As above, for any sequence \(a=\left\{a_{i}\right\}_{i=1}^{\infty}\) of real numbers and each integer \(k\), by \(a^{(k)}\) we will denote the truncated sequence \(a^{(k)}\) defined by \(a^{(k)}=\left\{a_{i}^{(k)}\right\}_{i=1}^{\infty}\), with \(a_{i}^{(k)}=a_{i}\) if \(1\leq i\leq k\) and \(a_{i}^{(k)}=0\) if \(i>k.\)
**Proposition 6.1**.: _Let \(\left\|\cdot\right\|_{n}\) be symmetric lattice norms on \(\mathbb{R}^{n}\), \(n\in\mathbb{N}\). Assume that the restrictions \(I_{n}\) are contractive with respect to these norms. Denote by \(Y\) the space of all sequences \(a=\left\{a_{i}\right\}_{i=1}^{\infty}\), for which the norm_
\[\left\|a\right\|_{Y}:=\sup_{n\geq 1}\left\|\left\{a_{i}\right\}_{i=1}^{n}\right\| _{n}\]
_is finite. If the space \(Y\) is embedded into \(c_{0},\) then \(Y\) is a r.i. Banach sequence lattice._
Proof.: First, one can easily check that the conditions \(a=\left\{a_{i}\right\}_{i=1}^{\infty}\in Y\) and \(\left|a_{i}\right|\leq\left|b_{i}\right|\), \(i=1,2,\ldots,\) imply that \(b=\left\{b_{i}\right\}_{i=1}^{\infty}\in Y\) and \(\left\|b\right\|_{Y}\leq\left\|a\right\|_{Y}\). Consequently, \(a\mapsto\left\|a\right\|_{Y}\) is a lattice norm on \(Y\).
To prove the rearrangement invariance of \(Y,\) assume that \(a=\left\{a_{i}\right\}_{i=1}^{\infty}\in Y\) and a sequence \(b=\left\{b_{i}\right\}_{i=1}^{\infty}\) is equi-measurable with \(a.\) This means that the sets \(\left\{i:\left|a_{i}\right|>t\right\}\) and \(\left\{i:\left|b_{i}\right|>t\right\}\) have the same cardinality for every \(t>0\). Since
\(Y\) is embedded into \(c_{0}\), these sets are finite and hence the sets \(A_{t}:=\left\{i:\left|a_{i}\right|=t\right\}\) and \(B_{t}:=\left\{i:\left|b_{i}\right|=t\right\}\) also have the same cardinality for each \(t>0.\) Put \(t_{k}:=\left|b_{k}\right|\), \(B_{k}:=B_{t_{k}}\), \(A_{k}:=A_{t_{k}},\) \(k\in\mathbb{N}.\)
Let \(n\in\mathbb{N}\) be arbitrary. Take \(u_{n}\in\mathbb{N}\) such that \(\cup_{k=1}^{n}A_{k}\subseteq\left\{1,2,\ldots,u_{n}\right\}.\) Then, we have \(\left\{\left|b_{k}\right|\right\}_{k=1}^{n}\subseteq\left\{\left|a_{k}\right|\right\}_{k=1}^{u_{n}}\). Indeed, if \(1\leq k\leq n,\) then by construction \(\left|b_{k}\right|=t_{k}\) and \(k\in B_{k}\); since \(A_{k}\) and \(B_{k}\) have the same cardinality, there exists \(j\in A_{k}\) with \(\left|a_{j}\right|=\left|b_{k}\right|.\) Since \(A_{k}\subseteq\left\{1,2,\ldots,u_{n}\right\}\), the conclusion follows.
Next, there is a permutation \(\sigma\) of the set \(\left\{1,2,\ldots,u_{n}\right\}\) with \(\left(\sigma\left|a\right|\right)_{k}=\left|b_{k}\right|\), \(1\leq k\leq n.\) By the assumptions of the lemma, this implies the estimate
\[\left\|\left\{b_{k}\right\}_{k=1}^{n}\right\|_{n}=\left\|\left\{\left(\sigma\left|a\right|\right)_{k}\right\}_{k=1}^{n}\right\|_{n}\leq\left\|\sigma\left|a\right|\right\|_{u_{n}}=\left\|\left\{a_{k}\right\}_{k=1}^{u_{n}}\right\|_{u_{n}}\leq\left\|a\right\|_{Y}.\]
Hence,
\[\left\|\left\{b_{k}\right\}_{k=1}^{n}\right\|_{n}\leq\left\|a\right\|_{Y},\ \ n\in\mathbb{N},\]
and so \(b\in Y\) and \(\left\|b\right\|_{Y}\leq\left\|a\right\|_{Y}.\) Similarly, \(\left\|a\right\|_{Y}\leq\left\|b\right\|_{Y},\) and thus \(\left\|a\right\|_{Y}=\left\|b\right\|_{Y}.\)
By construction, \(Y\) is a normed linear space of sequences. To prove completeness of \(Y\), take \(\left\{a^{n}\right\}_{n=1}^{\infty}\subseteq Y,\)\(a^{n}=\left\{a_{i}^{n}\right\}_{i=1}^{\infty}\), with \(\sum_{n=1}^{\infty}\left\|a^{n}\right\|_{Y}=C<\infty.\) Since \(Y\) is embedded in \(c_{0}\), there exists \(a\in c_{0}\) with \(a=\sum_{n=1}^{\infty}a^{n}.\) Also, for each integer \(k\) we have
\[\sum_{n=1}^{\infty}\|\{a_{i}^{n}\}_{i=1}^{k}\|_{k}\leq C.\]
Hence, by completeness of the space \(\mathbb{R}^{k}\) with respect to the norm \(\left\|\cdot\right\|_{k}\) and uniqueness of a representation of vectors by using the canonical unit basis, we get \(a^{(k)}=\sum_{n=1}^{\infty}(a^{n})^{(k)}\) and \(\left\|a^{(k)}\right\|_{k}\leq C\) for all \(k\in\mathbb{N}\). Consequently, \(a\in Y\) and \(\left\|a\right\|_{Y}\leq C.\) The proposition is proved.
Next, we proceed with the postponed proof of Lemma 3.2 on the nonemptiness of the sets \(\mathfrak{B}_{n}\left(X\right)\), \(n\in\mathbb{N}\). We will use the notation \(X_{+}\) for the positive cone \(\left\{x\in X:\,x\geq 0\right\}\) of a Banach lattice \(X.\)
Proof of Lemma 3.2.: If \(l_{\infty}\) is finitely lattice representable in \(X\), then the desired result follows immediately from Definition 2.5. Therefore, we can assume that \(l_{\infty}\) fails to be finitely lattice representable in \(X\), and hence, by Proposition 2.7, \(X\) is both \(\sigma\)-complete and \(\sigma\)-order continuous. This implies that for each \(x\in X_{+}\) we can define the contractive projection \(P_{x}:X\to X\) by \(P_{x}\left(y\right)=\vee_{n\geq 1}\left(nx\wedge y\right),y\in X_{+},\) and then extend it by linearity to the whole of \(X\) (see e.g. [20]).
Suppose that \(\left\{x_{i}\right\}_{i=1}^{m}\), where \(m\in\mathbb{N}\), is a maximal sequence of normalized positive pair-wise disjoint elements in a Banach lattice \(X\). We claim that \(X\) has dimension not bigger than \(m.\) We will divide the proof of this fact into several parts.
\((i)\) Each element \(x\in\left\{x_{i}\right\}_{i=1}^{m}\) is an atom.
Assume that \(x=y+z\) for some \(y,z\) with \(\left|y\right|\wedge\left|z\right|=0.\) Since \(x>0\), we have \(x=\left|y\right|+\left|z\right|\), and thus \(0\leq\left|y\right|,\left|z\right|\leq x.\) Hence, by maximality, \(\left|y\right|=x\) or \(\left|z\right|=x\), i.e., \(x\) is an atom.
\((ii)\) For every \(x\in\left\{x_{i}\right\}_{i=1}^{m}\) the projection \(P_{x}\) has one dimensional range.
Recall that (see [20, p. 10])
\[\mathrm{Im}P_{x}=\left\{z\in X:\,x\wedge y=0\text{ for some }y\in X_{+}\implies \left|z\right|\wedge y=0\right\}. \tag{6.1}\]
Putting \(z=P_{x}\left(y\right)\), where \(y\in X_{+}\), we have \(z\geq 0\). Without loss of generality, assume that \(z>0.\) From (6.1) it follows that \(z\wedge x_{i}=0\) whenever \(x_{i}\neq x.\) If \(z\wedge x=0\) we get a contradiction, because the set \(\left\{x_{i}\right\}_{i=1}^{m}\) was selected to be maximal. Hence, \(0<z\wedge x\leq x\) and, since \(x\) is an atom, we conclude that \(z\wedge x=\lambda x\) for some \(\lambda>0.\) Observe that the set \(\left(\left\{x_{i}\right\}_{i=1}^{m}\smallsetminus\left\{x\right\}\right)\cup\left\{z/\left\|z\right\|_{X}\right\}\) is also a maximal set of normalized positive pair-wise disjoint elements in \(X\). Consequently, from \(\left(i\right)\) it follows that \(z\) is an atom. Since \(\lambda x=x\wedge z\leq z,\) this implies that \(\lambda x=\mu z\) for some scalar \(\mu>0.\) Hence, \(P_{x}\) has one dimensional range, generated by the vector \(x.\)
\(\left(iii\right)\)\(X\) is the linear span of the sequence \(\left\{x_{i}\right\}_{i=1}^{m}.\)
Put \(x=\vee_{i=1}^{m}x_{i}\) and take \(y\in X_{+}.\) Then, if \(z:=P_{x}\left(y\right)\), we have \(x\wedge\left(y-z\right)=0.\) From the inequalities \(0\leq x_{i}\leq x\) and \(0\leq z\leq y\) it follows that \(x_{i}\wedge\left(y-z\right)=0\) and hence, by maximality, we have \(y=z=P_{x}\left(y\right).\) Since \(x\wedge y=\vee_{i=1}^{m}\left(x_{i}\wedge y\right)\), we have for each integer \(n\)
\[nx\wedge y=\vee_{i=1}^{m}\left(nx_{i}\wedge y\right)=\sum_{i=1}^{m}nx_{i} \wedge y\leq\sum_{i=1}^{m}P_{x_{i}}\left(y\right),\]
which implies that
\[y=P_{x}\left(y\right)\leq\sum_{i=1}^{m}P_{x_{i}}\left(y\right).\]
By the decomposition property, we may write \(y=\sum_{i=1}^{m}y_{i},\) where \(0\leq y_{i}\leq P_{x_{i}}\left(y\right)\), and hence \(y_{i}\in\mathrm{Im}P_{x_{i}}.\) Therefore, by \(\left(ii\right)\), \(y_{i}=\lambda_{i}x_{i}\) for some scalars \(\lambda_{i}\) and thus \(y=\sum_{i=1}^{m}\lambda_{i}x_{i}.\) As a result, the claim is proven and so the lemma follows.
**Lemma 6.2**.: _Let \(X\) be an infinite dimensional Banach lattice such that \(l_{\infty}\) is not finitely lattice representable in \(X\). Then, for every sequence \(\left\{x_{i}\right\}_{i=1}^{n}\in\mathfrak{B}_{n}\left(X\right)\) and \(\varepsilon>0\) there exists a sequence \(\left\{u_{i}\right\}_{i=1}^{n+1}\in\mathfrak{B}_{n+1}\left(X\right)\) such that either_
\(\left(i\right):\)_\(u_{i}=x_{i}\), \(i=1,\ldots,n\),_
_or_
\(\left(ii\right):\)_there exists \(k\) with \(1\leq k\leq n\) and a bijection \(\psi:\left\{1,..,n-1\right\}\rightarrow\left\{1,...,n\right\}\smallsetminus \left\{k\right\}\) such that \(u_{i}=x_{\psi\left(i\right)},1\leq i\leq n-1\) and \(\alpha u_{n}+\beta u_{n+1}=x_{k}\) for some positive scalars \(\alpha\) and \(\beta.\) Moreover, we have_
\[\left\|u_{n}-x_{k}\right\|_{X}\leq\varepsilon.\]
Proof.: Given \(\left\{x_{i}\right\}_{i=1}^{n}\in\mathfrak{B}_{n}\left(X\right)\) we put \(x=\vee_{i=1}^{n}x_{i}=\sum_{i=1}^{n}x_{i}.\) Take \(y\in X_{+}\) and set \(z=P_{x}\left(y\right).\) Then \(x\wedge\left(y-z\right)=0.\) If there exists \(y\) such that \(y\neq P_{x}\left(y\right)\), we define \(u_{n+1}:=\lambda\left(y-P_{x}\left(y\right)\right)\), where \(\lambda\) is selected so that \(\left\|u_{n+1}\right\|=1.\) Then, setting \(u_{i}=x_{i}\), \(i=1,\ldots,n\), we see that the case \(\left(i\right)\) holds.
Therefore, we may assume that \(P_{x}\left(y\right)=y\) for each \(y\in X_{+}.\) Hence, as in the proof of Lemma 3.2, it follows that \(X\) is the direct sum of the bands \(P_{x_{i}}\left(X\right)\), \(i=1,2,\ldots,n\), and hence at least one of them, say, \(P_{x_{k}}\left(X\right)\), is infinite dimensional.
Since \(x_{k}\) can not be an atom, we may write \(x_{k}=u+v\), where \(u,v\in X_{+}\) and \(u\wedge v=0.\) Further, at least one of the subspaces \(P_{u}\left(X\right)\) or \(P_{v}\left(X\right)\) is again infinite dimensional. Arguing in the same way, we conclude that, for any positive integer \(m\), \(x_{k}\) is a sum of \(m\) pair-wise disjoint elements \(w_{j}\), \(j=1,2,\ldots,m.\) Without loss of generality, assume that \(\|w_{1}\|_{X}\geq\|w_{2}\|_{X}\geq\cdots\geq\|w_{m}\|_{X}\). By the assumption (see also Proposition 2.6), \(X\) satisfies a lower \(p\)-estimate for some \(p<\infty.\) Consequently, we can estimate
\[m^{1/p}\left\|w_{m}\right\|_{X}\leq\left(\sum_{j=1}^{m}\left\|w_{j}\right\|_{X}^{p}\right)^{1/p}\leq M_{[p]}\left(X\right)\left\|x_{k}\right\|_{X}=M_{[p]}\left(X\right),\]
whence \(\lim_{m\to\infty}\left\|w_{m}\right\|_{X}=0.\)
Put
\[u_{n}=\left(x_{k}-w_{m}^{(m)}\right)/\left\|x_{k}-w_{m}^{(m)}\right\|_{X},\ \ u_{n+1}=w_{m}^{(m)}/\left\|w_{m}^{(m)}\right\|_{X}.\]
Since \(u_{n}\wedge u_{n+1}=0,\) we have \(\left\{u_{i}\right\}_{i=1}^{n+1}\in\mathfrak{B}_{n+1}\left(X\right)\). Moreover, by construction, \(x_{k}=\alpha u_{n}+\beta u_{n+1},\) with \(\alpha=\left\|x_{k}-w_{m}^{(m)}\right\|_{X},\)\(\beta=\left\|w_{m}^{(m)}\right\|_{X}.\) Finally, since
\[\left\|x_{k}-u_{n}\right\|_{X}=\left\|\frac{\left(\alpha-1\right)x_{k}+w_{m}^{(m)}}{\alpha}\right\|_{X}\leq\frac{\left|\left\|x_{k}-w_{m}^{(m)}\right\|_{X}-\left\|x_{k}\right\|_{X}\right|}{\alpha}\left\|x_{k}\right\|_{X}+\frac{\left\|w_{m}^{(m)}\right\|_{X}}{\alpha}\leq\frac{2}{\alpha}\left\|w_{m}^{(m)}\right\|_{X},\]
we may select \(m\) so that \(\left\|x_{k}-u_{n}\right\|_{X}<\varepsilon.\) Thus, all the conditions in \((ii)\) are fulfilled.
_Remark 6.3_.: In the above proof we required that \(l_{\infty}\) fails to be finitely lattice representable in \(X\). But, in fact, we need only the weaker property that if \(\left\{x_{n}\right\}_{n=1}^{\infty}\) is an infinite sequence of pair-wise disjoint elements with decreasing norms in \(X\), then \(\left\|x_{n}\right\|_{X}\downarrow 0.\) According to the terminology of the book [1], such a Banach lattice \(X\) is said to have the Lebesgue property (see [1, Theorem 3.22]).
**Lemma 6.4**.: _Each of the functionals \(\left\|\cdot\right\|_{X_{U}(n)},\Phi_{n}\left(\cdot\right)\) and \(\left\|\cdot\right\|_{X_{L}(n)}\) defined on \(\mathbb{R}^{n}\) is lattice monotone and symmetric._
Proof.: Let \(a=\left\{a_{i}\right\}_{i=1}^{n}\) and \(b=\left\{b_{i}\right\}_{i=1}^{n}\) be two sequences of scalars with \(\left|b_{i}\right|\leq\left|a_{i}\right|\), \(1\leq i\leq n.\) Then, for every sequence \(\left\{x_{i}\right\}_{i=1}^{n}\in\mathfrak{B}_{n}\left(X\right)\) we have
\[\left|\sum_{i=1}^{n}b_{i}x_{i}\right|=\sum_{i=1}^{n}\left|b_{i}\right|\left|x_{ i}\right|\leq\sum_{i=1}^{n}\left|a_{i}\right|\left|x_{i}\right|=\left| \sum_{i=1}^{n}a_{i}x_{i}\right|. \tag{6.2}\]
Also, let \(\sigma\) be a permutation of the set \(\left\{1,\ldots,n\right\}\) and the sequence \(c\) be defined by \(c=\sigma a:=\left\{a_{\sigma(i)}\right\}_{i=1}^{n}.\) Further, we prove the desired claims for each functional separately.
\((i):\left\|\cdot\right\|_{X_{U}(n)}.\) From (6.2) it follows that
\[\left\|\sum_{i=1}^{n}b_{i}x_{i}\right\|_{X}\leq\left\|\sum_{i=1}^{n}a_{i}x_{i} \right\|_{X},\]
which implies \(\left\|b\right\|_{X_{U}(n)}\leq\left\|a\right\|_{X_{U}(n)}\). Consequently, \(\left\|\cdot\right\|_{X_{U}(n)}\) is a lattice norm.
Next, since for every \(\left\{x_{i}\right\}_{i=1}^{n}\in\mathfrak{B}_{n}\left(X\right)\) and any permutation \(\pi\) of \(\left\{1,\ldots,n\right\}\) we have \(\left\{x_{\pi(i)}\right\}_{i=1}^{n}\in\mathfrak{B}_{n}\left(X\right)\), denoting by \(\sigma^{-1}\) the inverse permutation, we obtain
\[\left\|\sum_{i=1}^{n}c_{i}x_{i}\right\|_{X}=\left\|\sum_{i=1}^{n}a_{\sigma(i) }x_{i}\right\|_{X}=\left\|\sum_{i=1}^{n}a_{i}x_{\sigma^{-1}(i)}\right\|_{X} \leq\left\|a\right\|_{X_{U}(n)}.\]
Hence, \(\left\|c\right\|_{X_{U}(n)}\leq\left\|a\right\|_{X_{U}(n)}\), and by symmetry we obtain that the norm \(\left\|\cdot\right\|_{X_{U}(n)}\) is symmetric.
\((ii):\)\(\Phi_{n}\left(\cdot\right)\). In the same way, as above, we have
\[\Phi_{n}\left(b\right)\leq\left\|\sum_{i=1}^{n}b_{i}x_{i}\right\|_{X}\leq \left\|\sum_{i=1}^{n}a_{i}x_{i}\right\|_{X}.\]
Thus, \(\Phi_{n}\left(b\right)\leq\Phi_{n}\left(a\right)\), and so \(\Phi_{n}\left(\cdot\right)\) is a lattice functional. Also, arguing precisely as in the case \((i)\), we obtain \(\Phi_{n}\left(c\right)\leq\Phi_{n}\left(a\right)\), and hence this functional is symmetric.
\((iii):\)\(\left\|\cdot\right\|_{X_{L}(n)}.\) Let \(a=\sum_{k\in F}a^{k}\), where \(F\subseteq\mathbb{N}\) is finite and \(a^{k}=\left(a_{i}^{k}\right)_{i=1}^{n}\), \(k\in F\). For each \(i\in\left\{1,\ldots,n\right\}\) we have
\[\left|b_{i}\right|\leq\left|a_{i}\right|\leq\sum_{k\in F}\left|a_{i}^{k}\right|.\]
One can readily select \(b_{i}^{k}\) such that \(b_{i}=\sum_{k\in F}b_{i}^{k}\), \(1\leq i\leq n\), and \(\left|b_{i}^{k}\right|\leq\left|a_{i}^{k}\right|\) for all \(k\) and \(i\). Then, setting \(b^{k}=\left\{b_{i}^{k}\right\}_{i=1}^{n}\), \(k\in F\), we have \(b=\sum_{k\in F}b^{k}\) and \(\Phi_{n}\left(b^{k}\right)\leq\Phi_{n}\left(a^{k}\right)\), which implies
\[\left\|b\right\|_{X_{L}(n)}\leq\sum_{k\in F}\Phi_{n}\left(b^{k}\right)\leq\sum _{k\in F}\Phi_{n}\left(a^{k}\right).\]
In consequence, \(\left\|b\right\|_{X_{L}(n)}\leq\left\|a\right\|_{X_{L}(n)}\), that is, the norm \(\left\|\cdot\right\|_{X_{L}(n)}\) is lattice.
Next, note that
\[\sigma a=\sum_{k\in F}\sigma a^{k},\]
and hence, by \((ii)\),
\[\left\|\sigma a\right\|_{X_{L}(n)}\leq\sum_{k\in F}\Phi_{n}\left(\sigma a^{k }\right)=\sum_{k\in F}\Phi_{n}\left(a^{k}\right).\]
Thus, \(\left\|\sigma a\right\|_{X_{L}(n)}\leq\left\|a\right\|_{X_{L}(n)}\) and so \(\left\|\cdot\right\|_{X_{L}(n)}\) is a symmetric norm.
An immediate consequence of this lemma is the following
**Corollary 6.5**.: \(\left\|\cdot\right\|_{X_{L}}\) _and \(\left\|\cdot\right\|_{X_{U}}\) are lattice norms._
Our next two propositions state that the operators \(I_{n+1}:\mathbb{R}^{n+1}\rightarrow\mathbb{R}^{n}\) are contractions with respect to each of the three functionals considered in the preceding lemma.
**Proposition 6.6**.: _Let \(n\in\mathbb{N}\). For each infinite dimensional Banach lattice \(X\) and all \(a\in\mathbb{R}^{n+1}\) we have_
\[\Phi_{n}\left(I_{n+1}a\right) \leq \Phi_{n+1}\left(a\right)\]
_and_
\[\left\|I_{n+1}a\right\|_{X_{L}\left(n\right)} \leq \left\|a\right\|_{X_{L}\left(n+1\right)}.\]
Proof.: Put \(a=\left\{a_{i}\right\}_{i=1}^{n+1}\) and \(b=I_{n+1}a=\left\{a_{i}\right\}_{i=1}^{n}\).
By Lemma 6.4, \(\Phi_{n}\) is lattice monotone. Consequently, it suffices to prove the result in the special case of \(a_{n+1}=0.\) We set
\[\mathfrak{B}_{n}^{*}\left(X\right):=\left\{\left\{x_{i}\right\}_{i=1}^{n}:\, \text{there is }x_{n+1}\in X\text{ such that }\left\{x_{i}\right\}_{i=1}^{n+1}\in \mathfrak{B}_{n+1}\left(X\right)\right\}.\]
Obviously, \(\mathfrak{B}_{n}^{*}\left(X\right)\subseteq\mathfrak{B}_{n}\left(X\right)\), which implies, since \(a_{n+1}=0,\) the following:
\[\Phi_{n}\left(b\right) = \inf\left\{\left\|\sum_{i=1}^{n}a_{i}x_{i}\right\|_{X}:\left\{x_ {i}\right\}_{i=1}^{n}\in\mathfrak{B}_{n}\left(X\right)\right\}\] \[\leq \inf\left\{\left\|\sum_{i=1}^{n}a_{i}x_{i}\right\|_{X}:\left\{x_ {i}\right\}_{i=1}^{n}\in\mathfrak{B}_{n}^{*}\left(X\right)\right\}\] \[= \inf\left\{\left\|\sum_{i=1}^{n}a_{i}x_{i}+a_{n+1}x_{n+1}\right\| _{X}:\left\{x_{i}\right\}_{i=1}^{n+1}\in\mathfrak{B}_{n+1}\left(X\right)\right\}\] \[= \Phi_{n+1}\left(a\right),\]
and the first inequality is proved.
To prove similar inequality for the norm \(\left\|\cdot\right\|_{X_{L}\left(n\right)}\), write \(a=\sum_{k\in F}a^{k}\) for some finite subset \(F\subseteq\mathbb{N}\) and \(a^{k}\in\mathbb{R}^{n+1}.\) Then, we have
\[b=I_{n+1}Tr_{n+1}a=\sum_{k\in F}I_{n+1}Tr_{n+1}a^{k},\]
and, by the first part,
\[\left\|b\right\|_{X_{L}\left(n\right)}\leq\sum_{k\in F}\Phi_{n}\left(I_{n+1} Tr_{n+1}a^{k}\right)\leq\sum_{k\in F}\Phi_{n+1}\left(Tr_{n+1}a^{k}\right)\leq \sum_{k\in F}\Phi_{n+1}\left(a^{k}\right).\]
Hence,
\[\left\|b\right\|_{X_{L}\left(n\right)}\leq\left\|a\right\|_{X_{L}\left(n+1 \right)},\]
and the proof is completed.
**Proposition 6.7**.: _Let \(X\) be an infinite dimensional Banach lattice such that \(l_{\infty}\) is not finitely lattice representable in \(X\). The following holds for every positive integer \(n\) and \(a\in\mathbb{R}^{n+1}\)_
\[\left\|I_{n+1}a\right\|_{X_{U}\left(n\right)}\leq\left\|a\right\|_{X_{U}\left( n+1\right)}.\]
Proof.: We put again \(a=\left\{a_{i}\right\}_{i=1}^{n+1}\) and \(b=I_{n+1}a=\left\{a_{i}\right\}_{i=1}^{n}\). As in the proof of Proposition 6.6, we may assume that \(a_{n+1}=0\).
Let \(\left\{x_{i}\right\}_{i=1}^{n}\in\mathfrak{B}_{n}\left(X\right)\) and \(\varepsilon>0.\) By Lemma 6.2, we can select a sequence \(\left\{u_{i}\right\}_{i=1}^{n+1}\in\mathfrak{B}_{n+1}\left(X\right)\) that satisfies one of the conditions \((i)\) and \((ii)\) of that lemma.
In the case when \((i)\) is fulfilled, we have
\[\left\|\sum_{i=1}^{n}a_{i}x_{i}\right\|_{X}=\left\|\sum_{i=1}^{n+1}a_{i}u_{i} \right\|_{X}\leq\left\|a\right\|_{X_{U}\left(n+1\right)},\]
and the desired result follows.
Assume now that we have \((ii)\) and let \(k\), \(\psi\) be as in the statement of Lemma 6.2. Define the vector \(c=\left\{c_{i}\right\}_{i=1}^{n+1}\in\mathbb{R}^{n+1}\) by
\[c_{i}=\begin{cases}a_{\psi(i)},&1\leq i\leq n-1,\\ a_{k},&i=n,\\ 0,&i=n+1.\end{cases}\]
Then, we have
\[\left\|\sum_{i=1}^{n}a_{i}x_{i}\right\|_{X}\leq\left\|\sum_{i=1}^{n+1}c_{i}u_{ i}\right\|_{X}+\left\|\sum_{i=1}^{n+1}c_{i}u_{i}-\sum_{i=1}^{n}a_{i}x_{i} \right\|_{X}.\]
Since
\[\sum_{i=1}^{n+1}c_{i}u_{i}-\sum_{i=1}^{n}a_{i}x_{i} = \sum_{i=1}^{n-1}a_{\psi(i)}x_{\psi(i)}+a_{k}u_{n}+0\cdot u_{n+1}- \sum_{i=1}^{n}a_{i}x_{i}\] \[= \left(\sum_{i=1,i\neq k}^{n}a_{i}x_{i}\right)+a_{k}u_{n}-a_{k}x_{ k}-\sum_{i=1,i\neq k}^{n}a_{i}x_{i}\] \[= a_{k}\left(u_{n}-x_{k}\right),\]
\(\left|a_{k}\right|\leq\left\|a\right\|_{X_{U}\left(n+1\right)}\), \(1\leq k\leq n\), and \(\left\|\cdot\right\|_{X_{U}\left(n+1\right)}\) is a symmetric norm, we conclude
\[\left\|\sum_{i=1}^{n}a_{i}x_{i}\right\|_{X}\leq\left\|\sum_{i=1}^{n+1}c_{i}u_{i}\right\|_{X}+\left|a_{k}\right|\left\|u_{n}-x_{k}\right\|_{X}\leq\left\|c\right\|_{X_{U}\left(n+1\right)}+\varepsilon\left\|a\right\|_{X_{U}\left(n+1\right)}=\left(1+\varepsilon\right)\left\|a\right\|_{X_{U}\left(n+1\right)}.\]
Thus, since \(\varepsilon>0\) is arbitrary, \(\left\|b\right\|_{X_{U}\left(n\right)}\leq\left\|a\right\|_{X_{U}\left(n+1\right)}\), as required.
**Proposition 6.8**.: _Let \(X\) be an infinite dimensional Banach lattice such that \(X_{L}\) is not contained in \(c_{0}.\) Then, \(l_{\infty}\) is finitely lattice representable in \(X\)._
Proof.: By the assumption, there exists a sequence \(a=\left\{a_{i}\right\}_{i=1}^{\infty}\in X_{L}\) with \(\limsup_{i\to\infty}\left|a_{i}\right|>\delta\) for some \(\delta>0\). By scaling we may assume that \(\delta=1.\) Define the sequence \(b=\left\{b_{i}\right\}_{i=1}^{\infty}\) by
\[b_{i}=\begin{cases}1,&\left|a_{i}\right|>1,\\ 0,&\left|a_{i}\right|\leq 1.\end{cases}\]
Since \(\left|b_{i}\right|\leq\left|a_{i}\right|\) for all \(i=1,2,\dots,\) by Corollary 6.5, \(b\in X_{L}\) and \(\left\|b\right\|_{X_{L}}\leq\left\|a\right\|_{X_{L}}\).
Further, for each positive integer \(m\) select \(k_{m}\) such that the set
\[U_{m}:=\left\{i:\,1\leq i\leq k_{m},b_{i}=1\right\}\]
has cardinality \(m.\) Note that for any sequence \(\left\{y_{i}\right\}_{i=1}^{k_{m}}\in\mathfrak{B}_{k_{m}}\left(X\right)\) it holds
\[\left\|\sum_{i\in U_{m}}y_{i}\right\|_{X}=\left\|\sum_{i=1}^{k_{m}}b_{i}y_{i} \right\|_{X}\leq\Phi_{k_{m}}\left(b\right).\]
Hence, if \(b=\sum_{j\in F}b^{j}\) for some finite set \(F\) and \(b^{j}\in\mathbb{R}^{k_{m}},\) by the triangle inequality, we have
\[\left\|\sum_{i\in U_{m}}y_{i}\right\|=\left\|\sum_{i=1}^{k_{m}}\sum_{j\in F} \left\langle b^{j},e_{i}\right\rangle y_{i}\right\|_{X}\leq\sum_{j\in F} \left\|\sum_{i=1}^{k_{m}}\left\langle b^{j},e_{i}\right\rangle y_{i}\right\|_ {X}\leq\sum_{j\in F}\Phi_{k_{m}}\left(b^{j}\right).\]
Consequently, for any sequence \(\left\{t_{i}\right\}_{i\in U_{m}}\) of scalars we obtain
\[\sup_{i\in U_{m}}\left|t_{i}\right|\leq\left\|\sum_{i\in U_{m}}t_{i}y_{i} \right\|_{X}\leq\sup_{i\in U_{m}}\left|t_{i}\right|\left\|\sum_{i\in U_{m}}y_{ i}\right\|_{X}\leq\sup_{i\in U_{m}}\left|t_{i}\right|\left\|b\right\|_{X_{L}(k_{m})} \leq\left\|a\right\|_{X_{L}}\sup_{i\in U_{m}}\left|t_{i}\right|.\]
Since the set \(U_{m}\) has cardinality \(m,\) which is arbitrary, and \(\left\|a\right\|_{X_{L}}\) is a constant that does not depend on \(m,\) the latter inequality means that \(l_{\infty}\) is crudely finitely lattice representable in \(X\) (see Section 2.2). Since the latter is equivalent to the finite lattice representability of \(l_{\infty}\) in \(X\)[13, p. 288], the proof is completed.
**Proposition 6.9**.: _Assume that \(l_{\infty}\) is finitely lattice representable in a Banach lattice \(X\). Then \(X_{L}\) coincides with \(l_{\infty}\) isometrically._
Proof.: Fix a positive integer \(n\) and \(\varepsilon>0.\) By the assumption, there exists a sequence \(\left\{x_{i}\right\}_{i=1}^{n}\) of pair-wise disjoint elements in \(X\) such that
\[\sup_{1\leq i\leq n}\left|a_{i}\right|\leq\left\|\sum_{i=1}^{n}a_{i}x_{i} \right\|_{X}\leq\left(1+\varepsilon\right)\sup_{1\leq i\leq n}\left|a_{i}\right|\]
for any scalar sequence \(a=\left\{a_{i}\right\}_{i=1}^{n}.\) Putting \(z_{i}:=x_{i}/\left\|x_{i}\right\|_{X}\), we get \(\left\{z_{i}\right\}_{i=1}^{n}\in\mathfrak{B}_{n}\left(X\right).\) Hence, in view of embeddings (3.1), we have
\[\left\|a\right\|_{l_{\infty}^{n}} \leq \left\|a\right\|_{X_{L}(n)}\leq\Phi_{n}\left(a\right)\leq\left\| \sum_{i=1}^{n}a_{i}z_{i}\right\|_{X}=\left\|\sum_{i=1}^{n}\frac{a_{i}}{\left\| x_{i}\right\|_{X}}x_{i}\right\|_{X}\] \[\leq \left(1+\varepsilon\right)\sup_{1\leq i\leq n}\frac{\left|a_{i} \right|}{\left\|x_{i}\right\|_{X}}\leq\left(1+\varepsilon\right)\left\|a \right\|_{l_{\infty}^{n}}\]
Since \(\varepsilon>0\) is arbitrary, we conclude that \(\left\|a\right\|_{l_{\infty}^{n}}=\left\|a\right\|_{X_{L}(n)}\) for all \(n=1,2,\ldots,\) which implies that \(X_{L}=l_{\infty}\) isometrically.
We now prove the dual result.
**Proposition 6.10**.: _If \(l_{1}\) is finitely lattice representable in a Banach lattice \(X\), then \(X_{U}\) coincides isometrically with \(l_{1}.\)_
Proof.: For every positive integer \(n\) and \(\varepsilon>0\) we can select a sequence \(\left\{x_{i}\right\}_{i=1}^{n}\) of pair-wise disjoint elements such that
\[\sum_{i=1}^{n}\left|a_{i}\right|\leq\left\|\sum_{i=1}^{n}a_{i}x_{i}\right\|_{X} \leq(1+\varepsilon)\sum_{i=1}^{n}\left|a_{i}\right|\]
for any scalar sequence \(a=\left\{a_{i}\right\}_{i=1}^{n}.\) As in the preceding proof, we put \(z_{i}=x_{i}/\left\|x_{i}\right\|_{X}\). Then \(\left\{z_{i}\right\}_{i=1}^{n}\in\mathfrak{B}_{n}\left(X\right)\), and since \(1\leq\left\|x_{i}\right\|_{X}\leq 1+\varepsilon,\)\(1\leq i\leq n,\) we have
\[\left\|a\right\|_{l_{1}^{n}} \geq \left\|a\right\|_{X_{U}(n)}\geq\left\|\sum_{i=1}^{n}a_{i}z_{i} \right\|_{X}=\left\|\sum_{i=1}^{n}\frac{a_{i}}{\left\|x_{i}\right\|_{X}}x_{i} \right\|_{X}\] \[\geq \sum_{i=1}^{n}\frac{\left|a_{i}\right|}{\left\|x_{i}\right\|_{X} }\geq(1+\varepsilon)^{-1}\sum_{i=1}^{n}\left|a_{i}\right|=(1+\varepsilon)^{-1} \left\|a\right\|_{l_{1}^{n}}\]
Since \(\varepsilon>0\) is arbitrary, this implies \(\left\|a\right\|_{l_{1}^{n}}=\left\|a\right\|_{X_{U}(n)}\) for all \(n=1,2,\ldots,\) and thus the proof is complete.
As a result, all the pieces needed for the proof of Theorem 3.3 are in place.
Proof of Theorem 3.3.: We first prove the claim for \(X_{L}.\) By Lemma 6.4 and Proposition 6.6, \(\left\|\cdot\right\|_{X_{L}(n)}\) is a lattice, symmetric norm for each positive integer \(n\) and the operators \(I_{n}\) are contractions with respect to these norms. Hence, if \(X_{L}\) is embedded in \(c_{0},\) then Proposition 6.1 may be applied and we conclude that \(X_{L}\) is a r.i. Banach sequence lattice. In the case when \(X_{L}\) is not embedded in \(c_{0},\) from Propositions 6.8 and 6.9 it follows that \(X_{L}\) coincides isometrically with \(l_{\infty},\) and hence it is a r.i. Banach sequence lattice as well.
Proceeding with the case of \(X_{U},\) observe that, by the assumption, \(l_{\infty}\) fails to be finitely lattice representable in \(X\), and so, using Proposition 6.7, we have that the maps \(I_{n}\) are contractive with respect to these norms. Moreover, by Lemma 6.4, \(\left\|\cdot\right\|_{X_{U}(n)}\) is a lattice, symmetric norm for each positive integer \(n.\) Finally, from Proposition 2.6 it follows that \(X\) satisfies a lower \(p\)-estimate for some \(p<\infty.\) Hence, for every \(n\in\mathbb{N}\) and any sequences \(\left\{x_{i}\right\}_{i=1}^{n}\in\mathfrak{B}_{n}\left(X\right)\) and \(\left\{a_{i}\right\}_{i=1}^{n}\) of scalars it follows
\[\left(\sum_{i=1}^{n}\left|a_{i}\right|^{p}\right)^{1/p}\leq M_{\left[p\right]} \left(X\right)\left\|\sum_{i=1}^{n}a_{i}x_{i}\right\|_{X}\leq M_{\left[p \right]}\left(X\right)\left\|\sum_{i=1}^{n}a_{i}e_{i}\right\|_{X_{U}},\]
i.e., \(X_{U}\) is embedded into \(l_{p}\) and hence also into \(c_{0}.\) Thus, applying Proposition 6.1, we conclude that \(X_{U}\) is a r.i. Banach sequence lattice.
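For a concrete illustration of the functionals involved, consider \(X=L_{p}[0,1]\), \(1\leq p<\infty\). If \(\left\{x_{i}\right\}_{i=1}^{n}\in\mathfrak{B}_{n}\left(X\right)\), then, by disjointness of supports,
\[\Big{\|}\sum_{i=1}^{n}a_{i}x_{i}\Big{\|}_{L_{p}}=\Big{(}\sum_{i=1}^{n}\left|a_{i}\right|^{p}\left\|x_{i}\right\|_{L_{p}}^{p}\Big{)}^{1/p}=\Big{(}\sum_{i=1}^{n}\left|a_{i}\right|^{p}\Big{)}^{1/p},\]
so the norm \(\left\|\sum_{i=1}^{n}a_{i}x_{i}\right\|_{X}\) does not depend on the choice of the family. Consequently, \(\left\|a\right\|_{X_{U}(n)}=\Phi_{n}\left(a\right)=\left\|a\right\|_{l_{p}^{n}}\), and since \(\sum_{k\in F}\Phi_{n}\left(a^{k}\right)=\sum_{k\in F}\|a^{k}\|_{l_{p}^{n}}\geq\|a\|_{l_{p}^{n}}\) for every finite decomposition \(a=\sum_{k\in F}a^{k}\), also \(\left\|a\right\|_{X_{L}(n)}=\left\|a\right\|_{l_{p}^{n}}\). Thus, in this case \(X_{U}=X_{L}=l_{p}\) isometrically (cf. Example 3.10).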
## Appendix: A description of the optimal upper sequence lattices for Orlicz spaces.
Recall that, according to Example 3.10, optimal upper and lower sequence lattices for the \(L_{p,q}\)-spaces are just some \(l_{r}\)-spaces. As well known (see e.g. [18,
19]), compared with the Lorentz spaces, the structure of disjoint sequences in Orlicz spaces is much more complicated. In particular, in general, an Orlicz space \(L_{M}\) need not admit an upper \(\delta(L_{M})\)-estimate or a lower \(\sigma(L_{M})\)-estimate (as above, \(\delta(X)\) and \(\sigma(X)\) are the Grobler-Dodds indices of a Banach lattice \(X\)). Therefore, we come to the problem of identifying the optimal sequence lattices for this class of r.i. spaces. In this section, we present a description of the optimal upper lattices for separable Orlicz spaces as intersections of some special Musielak-Orlicz sequence spaces.
We start with an assertion that reduces the consideration of issues related to pairwise disjoint functions to that of a simpler case of multiples of characteristic functions of pairwise disjoint sets.
**Proposition 6.11**.: _Let \(M\) be an Orlicz function such that \(M\in\Delta_{2}^{\infty}\) with the constant \(K\). For every \(n\in\mathbb{N}\) and arbitrary pairwise disjoint functions \(y_{k}\), \(k=1,\ldots,n\), there exist two sequences \(\{B_{k}\}_{k=1}^{n}\) and \(\{B^{\prime}_{k}\}_{k=1}^{2n}\) of pairwise disjoint subsets of \([0,1]\), \(r_{k}\in\mathbb{R}\), \(k=1,\ldots,n\), and \(r^{\prime}_{k}\in\mathbb{R}\), \(k=1,\ldots,2n\), such that for the functions \(h_{k}:=r_{k}\chi_{B_{k}}\), \(k=1,\ldots,n\), and \(f_{k}:=r^{\prime}_{k}\chi_{B^{\prime}_{k}}\), \(k=1,\ldots,2n\), we have_
\[\frac{1}{4}\|y_{k}\|_{L_{M}}\leq\|h_{k}\|_{L_{M}}\leq\|y_{k}\|_{L_{M}},\ \ \frac{1}{2}\|y_{k}\|_{L_{M}}\leq\|f_{k}\|_{L_{M}}\leq\frac{3}{2}\|y_{k}\|_{L_{M}},\ \ k=1,\ldots,n, \tag{6.3}\]
_and_
\[\Big{\|}\sum_{k=1}^{n}h_{k}\Big{\|}_{L_{M}}\leq\Big{\|}\sum_{k=1}^{n}y_{k} \Big{\|}_{L_{M}}\leq(K+1)\Big{\|}\sum_{k=1}^{2n}f_{k}\Big{\|}_{L_{M}}. \tag{6.4}\]
Proof.: Clearly, without loss of generality, we can assume that given functions \(y_{k}\in L_{M}\), \(k=1,\ldots,n\), are positive. Moreover, since \(M\in\Delta_{2}^{\infty}\), the space \(L_{M}\) is separable, and, consequently, it can be assumed also that \(y_{k}\) are bounded functions.
For each \(1\leq k\leq n\) we set
\[c_{k}:=\frac{\|y_{k}\|_{L_{M}}}{2\varphi_{L_{M}}(m(\operatorname{supp}y_{k})) },u_{k}(t):=\begin{cases}y_{k}(t),&\text{if }y_{k}(t)\geq c_{k},\\ 0,&\text{if }y_{k}(t)<c_{k}\end{cases}\ \ \text{and}\ \ g_{k}(t):=c_{k}\chi_{ \operatorname{supp}y_{k}\setminus\operatorname{supp}u_{k}}(t).\]
Then, it follows
\[\sum_{k=1}^{n}u_{k}\leq\sum_{k=1}^{n}y_{k}\leq\sum_{k=1}^{n}u_{k}+\sum_{k=1}^{ n}g_{k}. \tag{6.5}\]
Observe also that
\[\|g_{k}\|_{L_{M}}=c_{k}\varphi_{L_{M}}(m(\operatorname{supp}y_{k}\setminus \operatorname{supp}u_{k}))\leq\frac{1}{2}\|y_{k}\|_{L_{M}},\ \ k=1,\ldots,n \tag{6.6}\]
(\(\varphi_{L_{M}}\) is the fundamental function of \(L_{M}\); see formula (2.2)), and
\[\frac{1}{2}\|y_{k}\|_{L_{M}}=\|y_{k}\|_{L_{M}}-\|c_{k}\chi_{ \operatorname{supp}y_{k}}\|_{L_{M}}\leq\|u_{k}\|_{L_{M}}\leq\|y_{k}\|_{L_{M}}, \ \ k=1,\ldots,n. \tag{6.7}\]
Next, we estimate the norm \(\|\sum_{k=1}^{n}u_{k}\|_{L_{M}}\). To this end, we show that there is \(r_{k}\in[c_{k},\sup_{t}u_{k}(t)]\) such that
\[M(r_{k})=M\left(\frac{r_{k}}{\|u_{k}\|_{L_{M}}}\right)\int_{0}^{1}M(u_{k}(t))\,dt. \tag{6.8}\]
Indeed, let us consider the function
\[H_{k}(t):=\frac{M(u_{k}(t))}{M\left(\frac{u_{k}(t)}{\|u_{k}\|_{L_{M}}}\right)}, \ t\in\operatorname{supp}u_{k}.\]
From the equality \(\int_{0}^{1}M(\frac{u_{k}(t)}{\|u_{k}\|_{L_{M}}})dt=1\) it follows that
\[\inf_{t\in\operatorname{supp}u_{k}}H_{k}(t)\leq\int_{0}^{1}M(u_{k}(t))\,dt \leq\sup_{t\in\operatorname{supp}u_{k}}H_{k}(t).\]
Thus, since \(\inf_{t\in\operatorname{supp}u_{k}}u_{k}(t)\geq c_{k}\), by continuity of \(M\), equality (6.8) holds for some \(r_{k}\) such that \(r_{k}\in[c_{k},\sup_{t}u_{k}(t)]\).
Further, assuming as we can that the functions \(M\) and \(\varphi_{L_{M}}\) are strictly increasing, define the real numbers \(d_{k}\in[0,1]\), \(k=1,2,\ldots,n\), as follows:
\[d_{k}:=\begin{cases}\varphi_{L_{M}}^{-1}\left(\frac{\|u_{k}\|_{L_{M}}}{r_{k}} \right),&\text{if }\|u_{k}\|_{L_{M}}\leq r_{k}\varphi_{L_{M}}(m(\operatorname{supp}y_{k})), \\ m(\operatorname{supp}y_{k}),&\text{if }\|u_{k}\|_{L_{M}}>r_{k}\varphi_{L_{M}}(m( \operatorname{supp}y_{k})).\end{cases}\]
Clearly, the definition of \(d_{k}\) implies that
\[r_{k}\varphi_{L_{M}}(d_{k})\leq\|u_{k}\|_{L_{M}}. \tag{6.9}\]
Conversely, \(r_{k}\varphi_{L_{M}}(d_{k})=\|u_{k}\|_{L_{M}}\) if \(\|u_{k}\|_{L_{M}}\leq r_{k}\varphi_{L_{M}}(m(\operatorname{supp}y_{k}))\). Otherwise, since \(r_{k}\geq c_{k}\), in view of (6.7) and the definition of \(c_{k}\), we obtain
\[r_{k}\varphi_{L_{M}}(d_{k})\geq c_{k}\varphi_{L_{M}}(m(\operatorname{supp}y_{ k}))\geq\frac{1}{2}\|u_{k}\|_{L_{M}}.\]
Thus, summing up, we conclude that
\[r_{k}\varphi_{L_{M}}(d_{k})\geq\frac{1}{2}\|u_{k}\|_{L_{M}}. \tag{6.10}\]
Now, observe that from inequality (6.10) and formula (2.2) for the function \(\varphi_{L_{M}}\) it follows that
\[d_{k}\geq\varphi_{L_{M}}^{-1}(\|u_{k}\|_{L_{M}}/(2r_{k}))\]
and
\[\varphi_{L_{M}}^{-1}(u)=\frac{1}{M(1/u)},\ \ 0<u\leq 1,\]
respectively. Hence, taking into account that \(M\in\Delta_{2}^{\infty}\) with constant \(K\) and applying (6.8), we obtain
\[d_{k}M(r_{k}) \geq \varphi_{L_{M}}^{-1}(\|u_{k}\|_{L_{M}}/(2r_{k}))M(r_{k})=\frac{M(r_ {k})}{M\left(\frac{2r_{k}}{\|u_{k}\|_{L_{M}}}\right)}\] \[\geq \frac{1}{K}\frac{M(r_{k})}{M\left(\frac{r_{k}}{\|u_{k}\|_{L_{M}}} \right)}=\frac{1}{K}\int\limits_{0}^{1}M(u_{k}(t))\,dt. \tag{6.11}\]
In the converse direction, from the equality \(1/d_{k}=M\left(1/\varphi_{L_{M}}(d_{k})\right)\) (see (2.2)), combined with (6.9) and (6.8), it follows
\[d_{k}M(r_{k})=\frac{M(r_{k})}{M(\frac{1}{\varphi_{L_{M}}(d_{k})})}\leq\frac{M( r_{k})}{M\left(\frac{r_{k}}{\|u_{k}\|_{L_{M}}}\right)}=\int\limits_{0}^{1}M(u_{k} (t))\,dt. \tag{6.12}\]
Furthermore, by the definition of \(d_{k}\), we have \(d_{k}\leq m(\operatorname{supp}y_{k})\). Therefore, we can define the following functions \(h_{k}(t):=r_{k}\chi_{B_{k}}(t)\), where \(B_{k}\subset\operatorname{supp}y_{k}\) and \(m(B_{k})=d_{k}.\) Since \(\|h_{k}\|_{L_{M}}=r_{k}\varphi_{L_{M}}(d_{k})\), according to (6.9) and (6.10), it holds
\[\frac{1}{2}\|u_{k}\|_{L_{M}}\leq\|h_{k}\|_{L_{M}}\leq\|u_{k}\|_{L_{M}},\ \ k=1,2,\ldots,n.\]
Hence, from (6.7) it follows
\[\frac{1}{4}\|y_{k}\|_{L_{M}}\leq\|h_{k}\|_{L_{M}}\leq\|y_{k}\|_{L_{M}},\ \ k=1,2,\ldots,n. \tag{6.13}\]
Moreover, since the functions \(h_{k}\) (respectively, \(u_{k}\)) are pairwise disjoint, in view of estimate (6.12), we conclude that
\[\int_{0}^{1}M\left(\sum_{k=1}^{n}h_{k}(t)\right)\,dt = \sum_{k=1}^{n}d_{k}M(r_{k})\leq\sum_{k=1}^{n}\int_{0}^{1}M(u_{k}(t ))\,dt\] \[\leq \int_{0}^{1}M\left(\sum_{k=1}^{n}y_{k}(t)\right)\,dt.\]
Conversely, by (6.11), we have
\[\sum_{k=1}^{n}\int_{0}^{1}M(u_{k}(t))\,dt\leq K\int_{0}^{1}M\left(\sum_{k=1}^ {n}h_{k}(t)\right)\,dt.\]
Therefore, since \(M\) is convex and \(K\geq 1\), it follows that
\[\Big{\|}\sum_{k=1}^{n}h_{k}\Big{\|}_{L_{M}}\leq\Big{\|}\sum_{k=1}^{n}u_{k} \Big{\|}_{L_{M}}\leq K\Big{\|}\sum_{k=1}^{n}h_{k}\Big{\|}_{L_{M}}.\]
Noting that the collection \(\{g_{k},h_{k}\}_{k=1}^{n}\) consists of \(2n\) pairwise disjoint functions, we relabel them as \(f_{k}\), \(k=1,2,\ldots,2n\). Then, by (6.5) and the last inequality,
we obtain
\[\Big{\|}\sum_{k=1}^{n}h_{k}\Big{\|}_{L_{M}}\leq\Big{\|}\sum_{k=1}^{n }y_{k}\Big{\|}_{L_{M}} \leq \Big{\|}\sum_{k=1}^{n}u_{k}\Big{\|}_{L_{M}}+\Big{\|}\sum_{k=1}^{n}g _{k}\Big{\|}_{L_{M}}\] \[\leq K\Big{\|}\sum_{k=1}^{n}h_{k}\Big{\|}_{L_{M}}+\Big{\|}\sum_{k=1}^{n }g_{k}\Big{\|}_{L_{M}}\] \[\leq (K+1)\Big{\|}\sum_{k=1}^{2n}f_{k}\Big{\|}_{L_{M}},\]
and hence (6.4) is proved. Since inequalities (6.3) follow from (6.13) and (6.6), the proof is completed.
From Proposition 6.11 and its proof we obtain
**Corollary 6.12**.: _Let \(M\) be an Orlicz function such that \(M\in\Delta_{2}^{\infty}\). Then,_
\[\|a\|_{(L_{M})_{U}}\asymp\sup\left\{\Big{\|}\sum_{k=1}^{n}a_{k}\frac{\chi_{F_{ k}}}{\varphi_{L_{M}}(m(F_{k}))}\Big{\|}_{L_{M}}:\,n\in\mathbb{N},F_{k}\subset[0,1] \text{ pairwise disjoint}\right\},\]
_with constants independent of \(a=(a_{k})_{k=1}^{\infty}\)._
Recalling that the space \((L_{M})_{U}\) is rearrangement invariant (see Theorem 3.3), denote by \(\phi_{U}\) its fundamental function, i.e., \(\phi_{U}(n):=\|\sum_{k=1}^{n}e_{k}\|_{(L_{M})_{U}}\), \(n\in\mathbb{N}\). Also, let \(\Phi_{g}\) be the dilation function of a function \(g:\,(0,\infty)\to(0,\infty)\) for large values of arguments defined by
\[\Phi_{g}(u):=\sup_{v\geq\max(1,1/u)}\frac{g(vu)}{g(v)},\,\,\,u>0.\]
**Corollary 6.13**.: _Let \(M\) be an Orlicz function such that \(M\in\Delta_{2}^{\infty}\). Then,_
\[\phi_{U}(n)\asymp\Phi_{M^{-1}}(n),\,\,\,n\in\mathbb{N},\]
_where \(M^{-1}\) is the inverse function for \(M\)._
Proof.: By Corollary 6.12, we have
\[\phi_{U}(n)\asymp\sup\left\{\Big{\|}\sum_{k=1}^{n}\frac{\chi_{F_{k}}}{\varphi_{ L_{M}}(m(F_{k}))}\Big{\|}_{L_{M}}:\,F_{k}\subset[0,1]\text{ are pairwise disjoint}\right\}. \tag{6.14}\]
Let \(n\in\mathbb{N}\) and let \(F_{k}\subset[0,1]\), \(k=1,\ldots,n\), be pairwise disjoint. Then, by formula (2.2),
\[\Big{\|}\sum_{k=1}^{n}\frac{\chi_{F_{k}}}{\varphi_{L_{M}}(m(F_{k}))}\Big{\|}_{ L_{M}}=\inf\left\{\lambda>0:\,\sum_{k=1}^{n}M\left(\frac{M^{-1}(1/m(F_{k}))}{ \lambda}\right)m(F_{k})\leq 1\right\}.\]
Next, we write
\[\sum_{k=1}^{n}M\left(\frac{M^{-1}(1/m(F_{k}))}{\Phi_{M^{-1}}(n)} \right)m(F_{k}) = \sum_{k:\,m(F_{k})\geq 1/n}M\left(\frac{M^{-1}(1/m(F_{k}))}{\Phi_{M^{-1} }(n)}\right)m(F_{k})\] \[+ \sum_{k:\,m(F_{k})<1/n}M\left(\frac{M^{-1}(1/m(F_{k}))}{\Phi_{M^{- 1}}(n)}\right)m(F_{k})\] \[= (I)+(II).\]
Observe that
\[(I)\leq\sum_{k:\,m(F_{k})\geq 1/n}M\left(\frac{M^{-1}(n)}{\Phi_{M^{-1}}(n)} \right)m(F_{k})\leq M(1)=1\]
and
\[(II)\leq\sum_{k:\,m(F_{k})<1/n}M\left(\frac{M^{-1}(1/(m(F_{k})n))M^{-1}(1/m(F_ {k}))}{M^{-1}(1/m(F_{k}))}\right)m(F_{k})=\sum_{k:\,m(F_{k})<1/n}\frac{1}{n} \leq 1.\]
Summing up, we obtain
\[\Big{\|}\sum_{k=1}^{n}\frac{\chi_{F_{k}}}{\varphi_{L_{M}}(m(F_{k}))}\Big{\|}_ {L_{M}}\leq 2\Phi_{M^{-1}}(n),\]
for every \(n\in\mathbb{N}\) and all pairwise disjoint \(F_{k}\subset[0,1]\), \(k=1,\ldots,n\). Consequently, in view of (6.14), it follows
\[\phi_{U}(n)\preceq\Phi_{M^{-1}}(n),\,\,\,n\in\mathbb{N}.\]
Conversely, without loss of generality, assume that
\[\Phi_{M^{-1}}(n)=\frac{M^{-1}(nv_{n})}{M^{-1}(v_{n})}\]
for some \(v_{n}\geq 1\). Let \(F_{k}\subset[0,1]\), \(k=1,\ldots,n\), be arbitrary pairwise disjoint subsets of \([0,1]\) such that \(m(F_{k})=(nv_{n})^{-1}\). Then,
\[\phi_{U}(n)\succeq\Big{\|}\sum_{k=1}^{n}\frac{\chi_{F_{k}}}{ \varphi_{L_{M}}(m(F_{k}))}\Big{\|}_{L_{M}} = \inf\left\{\lambda>0:\,M\left(\frac{M^{-1}(nv_{n})}{\lambda} \right)\frac{1}{v_{n}}\leq 1\right\}\] \[= \frac{M^{-1}(nv_{n})}{M^{-1}(v_{n})}=\Phi_{M^{-1}}(n).\]
Recall that a family of Banach spaces \(\{X_{\alpha}\}_{\alpha\in\mathcal{A}}\) forms a _strongly compatible scale_ if there exists a Banach space \(\tilde{X}\) such that \(X_{\alpha}\stackrel{{ 1}}{{\hookrightarrow}}\tilde{X}\), \(\alpha\in\mathcal{A}\).
Let \(\{X_{\alpha}\}_{\alpha\in\mathcal{A}}\) be a strongly compatible scale. We set
\[\Delta(X_{\alpha})_{\alpha\in\mathcal{A}}:=\{x\in\cap_{\alpha\in\mathcal{A}}X _{\alpha}:\,\,\|x\|_{\Delta(X_{\alpha})}:=\sup_{\alpha\in\mathcal{A}}\|x\|_{X_ {\alpha}}<\infty\}.\]
Then, \((\Delta(X_{\alpha})_{\alpha\in\mathcal{A}},\|\cdot\|_{\Delta(X_{\alpha})})\) is a Banach space with the following properties:
(i) \(\Delta(X_{\alpha})_{\alpha\in\mathcal{A}}\stackrel{{ 1}}{{\hookrightarrow}}X_{\alpha}\), \(\forall\alpha\in\mathcal{A}\);
(ii) If \(F\) is a Banach space such that \(F\overset{1}{\hookrightarrow}X_{\alpha}\), \(\forall\alpha\in\mathcal{A}\), then \(F\overset{1}{\hookrightarrow}\Delta(X_{\alpha})_{\alpha\in\mathcal{A}}\).
Let \(M\) be an Orlicz function, \(M_{v}(u):=M(uv)/M(v)\), \(u\geq 0\), \(v>0\). We consider the family of the Musielak-Orlicz sequence spaces \(\{l_{M_{\bar{\beta}}}\}_{\bar{\beta}\in\mathcal{B}}\), where \(\mathcal{B}\) is the set of all sequences \(\bar{\beta}=(\beta_{k})_{k=1}^{\infty}\) such that \(\sum_{k=1}^{\infty}1/M(\beta_{k})\leq 1\). Recall that the norm of the Musielak-Orlicz sequence space \(l_{M_{\bar{\beta}}}\) is defined by
\[\|(a_{k})\|_{l_{M_{\bar{\beta}}}}:=\inf\Big{\{}\lambda>0:\,\sum_{k=1}^{\infty}M_{\beta_{k}}\Big{(}\frac{a_{k}}{\lambda}\Big{)}\leq 1\Big{\}}\]
(see e.g. [23, 31]). One can easily check that \(l_{M_{\bar{\beta}}}\overset{1}{\hookrightarrow}l_{\infty}\), for each \(\bar{\beta}\in\mathcal{B}\), and hence this family is a strongly compatible scale.
**Theorem 6.14**.: _For every Orlicz function \(M\) such that \(M\in\Delta_{2}^{\infty}\) we have \((L_{M})_{U}=\Delta(l_{M_{\bar{\beta}}})_{\bar{\beta}\in\mathcal{B}}\) (with equivalence of norms). Moreover, the following embeddings hold:_
\[l_{\Phi_{M}}\hookrightarrow(L_{M})_{U}\hookrightarrow l_{p_{M}}, \tag{6.15}\]
_where \(\Phi_{M}\) is the dilation function of \(M\) for large values of arguments and_
\[p_{M}:=\sup\{p\geq 1:\,M(uv)\leq Cu^{p}M(v)\,\,\text{ for some }\,C>0\,\,\text{ and all }\,0<u\leq 1,uv\geq 1\}.\]
Proof.: Without loss of generality, we will assume that \(a_{k}\geq 0\), \(k=1,2,\dots\).
Let \(F_{k}\subset[0,1]\), \(k=1,2,\dots\), be arbitrary pairwise disjoint sets of positive measure. Then, setting \(\beta_{k}:=M^{-1}(1/m(F_{k}))\), \(k=1,2,\dots\), we see that
\[\sum_{k=1}^{\infty}\frac{1}{M(\beta_{k})}=\sum_{k=1}^{\infty}m(F_{k})\leq 1,\]
and hence \(\bar{\beta}=(\beta_{k})_{k=1}^{\infty}\in\mathcal{B}\). Conversely, for each \(\bar{\beta}=(\beta_{k})_{k=1}^{\infty}\in\mathcal{B}\) we can find pairwise disjoint sets \(F_{k}\subset[0,1]\), \(m(F_{k})>0\), such that \(\beta_{k}=M^{-1}(1/m(F_{k}))\), \(k=1,2,\dots\) These observations together with Corollary 6.12 imply, for every \(a=\{a_{k}\}_{k=1}^{\infty}\), the following:
\[\|a\|_{(L_{M})_{U}} \asymp \sup\left\{\Big{\|}\sum_{k=1}^{n}a_{k}\frac{\chi_{F_{k}}}{\varphi _{L_{M}}(m(F_{k}))}\Big{\|}_{L_{M}}:\,n\in\mathbb{N},F_{k}\subset[0,1]\,\,\text{ pairwise disjoint}\right\}\] \[= \sup_{n\in\mathbb{N},F_{k}\text{pairwise disjoint}}\inf\left\{ \lambda>0:\,\sum_{k=1}^{n}M\left(\frac{a_{k}M^{-1}(1/m(F_{k}))}{\lambda} \right)m(F_{k})\leq 1\right\}\] \[= \sup_{\bar{\beta}\in\mathcal{B},n\in\mathbb{N}}\inf\left\{\lambda >0:\,\sum_{k=1}^{n}\frac{M\left(\frac{a_{k}}{\lambda}\beta_{k}\right)}{M( \beta_{k})}\leq 1\right\}\] \[= \sup_{\bar{\beta}\in\mathcal{B},n\in\mathbb{N}}\inf\left\{\lambda >0:\,\sum_{k=1}^{n}M_{\beta_{k}}(a_{k}/\lambda)\leq 1\right\}\] \[= \|a\|_{\Delta(l_{M_{\bar{\beta}}})_{\bar{\beta}\in\mathcal{B}}}.\]
Thus, \((L_{M})_{U}=\Delta(l_{M_{\bar{\beta}}})_{\bar{\beta}\in\mathcal{B}}\), with equivalence of norms.
To prove the left-hand side embedding in (6.15) we assume that \(\|(a_{k})\|_{l_{\Phi_{M}}}\leq 1\). This implies that \(\sum_{k=1}^{n}\Phi_{M}(a_{k})\leq 1\) for every \(n\in\mathbb{N}\). Then, for each sequence \(\bar{\beta}=(\beta_{k})_{k=1}^{\infty}\in\mathcal{B}\) we have
\[\sum_{k=1}^{n}\frac{M\left(a_{k}\beta_{k}\right)}{M(\beta_{k})}=\sum_{k:\;a_{k} \beta_{k}\leq 1}\frac{M\left(a_{k}\beta_{k}\right)}{M(\beta_{k})}+\sum_{k:\;a_{k} \beta_{k}>1}\frac{M\left(a_{k}\beta_{k}\right)}{M(\beta_{k})}\leq\sum_{k=1}^{ n}\frac{1}{M(\beta_{k})}+\sum_{k=1}^{n}\Phi_{M}(a_{k})\leq 2.\]
Thus, \(\|(a_{k})\|_{l_{M_{\bar{\beta}}}}\leq 2\) for every \(\bar{\beta}=(\beta_{k})_{k=1}^{\infty}\in\mathcal{B}\) and therefore, by the first assertion of the theorem, it follows
\[\|(a_{k})\|_{(L_{M})_{U}}\leq C\|(a_{k})\|_{\Delta(l_{M_{\bar{\beta}}})_{\bar {\beta}\in\mathcal{B}}}\leq 2C.\]
Furthermore, by [15], \(p_{M}\) is the supremum of the set of all \(p\geq 1\) such that \(M\) is equivalent to a \(p\)-convex function on the interval \([1,\infty)\), or equivalently \(p_{M}\) is the supremum of the set of all \(p\geq 1\) such that the Orlicz space \(L_{M}[0,1]\) admits an upper \(p\)-estimate. Thus, by using the notation of this paper, we have \(p_{M}=\delta(L_{M}[0,1])\) and hence the right-hand side embedding in (6.15) is a consequence of Proposition 3.9(i). This completes the proof.
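As a simple check of Theorem 6.14 and of embeddings (6.15), let \(M(u)=u^{p}\), \(1\leq p<\infty\), so that \(L_{M}=L_{p}\) and \(M\in\Delta_{2}^{\infty}\). Then
\[\Phi_{M}(u)=\sup_{v\geq\max(1,1/u)}\frac{(uv)^{p}}{v^{p}}=u^{p}\ \ \text{and}\ \ p_{M}=p,\]
so both the left-hand and the right-hand spaces in (6.15) coincide with \(l_{p}\), and hence \((L_{p})_{U}=l_{p}\) with equivalence of norms.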
_Remark 6.15_.: Informally, the Orlicz space \(l_{\Phi_{M}}\) is located rather "close" to the space \((L_{M})_{U}\), because the fundamental functions of these spaces are equivalent. Indeed, let \(n\in\mathbb{N}\) and \(\varepsilon>0\) be arbitrary. Then, by definition, \(\phi_{l_{\Phi_{M}}}(n)=1/u_{n}\), where \(u_{n}\) satisfies the conditions:
\[\frac{M(u_{n}v_{n})}{M(v_{n})}\geq(1-\varepsilon)\frac{1}{n}\;\;\text{for some}\;\;v_{n}\geq 1/u_{n}\;\;\text{and}\;\;\frac{M(u_{n}v)}{M(v )}\leq\frac{1}{n}\;\;\text{for all}\;\;v\geq 1/u_{n}.\]
In particular, from the last estimate it follows that \(M(1/u_{n})\geq n\). Therefore, since \(M^{-1}\) is concave, we get
\[u_{n}\geq\frac{M^{-1}((1-\varepsilon)s_{n}/n)}{M^{-1}(s_{n})}\geq(1-\varepsilon )\frac{M^{-1}(s_{n}/n)}{M^{-1}(s_{n})},\;\;\text{where}\;\;s_{n}=M(v_{n}) \geq M(1/u_{n}),\]
and
\[u_{n}\leq\frac{M^{-1}(s/n)}{M^{-1}(s)},\;\;\text{for all}\;\;s\geq M(1/u_{n}).\]
Thus,
\[\frac{1}{\phi_{l_{\Phi_{M}}}(n)}=\inf_{s\geq M(1/u_{n})}\frac{M^{-1}(s/n)}{M^{ -1}(s)},\]
whence
\[\phi_{l_{\Phi_{M}}}(n)=\sup_{s\geq M(1/u_{n})}\frac{M^{-1}(s)}{M^{-1}(s/n)}= \sup_{t\geq M(1/u_{n})/n}\frac{M^{-1}(tn)}{M^{-1}(t)}.\]
Consequently, since \(M(1/u_{n})/n\geq 1\), by Corollary 6.13, we have
\[\phi_{l_{\Phi_{M}}}(n)\leq\Phi_{M^{-1}}(n)\preceq\phi_{(L_{M})_{U}}(n),\;\;n\in\mathbb{N}.\]
It remains to note that the opposite inequality follows from the left-hand side embedding (6.15).
**Data availability statement.**_All data generating or analysed during this study are included in this published article._
|
2309.13509 | Coco-Nut: Corpus of Japanese Utterance and Voice Characteristics
Description for Prompt-based Control | In text-to-speech, controlling voice characteristics is important in
achieving various-purpose speech synthesis. Considering the success of
text-conditioned generation, such as text-to-image, free-form text instruction
should be useful for intuitive and complicated control of voice
characteristics. A sufficiently large corpus of high-quality and diverse voice
samples with corresponding free-form descriptions can advance such control
research. However, neither an open corpus nor a scalable method is currently
available. To this end, we develop Coco-Nut, a new corpus including diverse
Japanese utterances, along with text transcriptions and free-form voice
characteristics descriptions. Our methodology to construct this corpus consists
of 1) automatic collection of voice-related audio data from the Internet, 2)
quality assurance, and 3) manual annotation using crowdsourcing. Additionally,
we benchmark our corpus on the prompt embedding model trained by contrastive
speech-text learning. | Aya Watanabe, Shinnosuke Takamichi, Yuki Saito, Wataru Nakata, Detai Xin, Hiroshi Saruwatari | 2023-09-24T00:15:31Z | http://arxiv.org/abs/2309.13509v1 | COCO-Nut: Corpus of Japanese utterance and voice characteristics description for prompt-based control
###### Abstract
In text-to-speech, controlling voice characteristics is important in achieving various-purpose speech synthesis. Considering the success of text-conditioned generation, such as text-to-image, free-form text instruction should be useful for intuitive and complicated control of voice characteristics. A sufficiently large corpus of high-quality and diverse voice samples with corresponding free-form descriptions can advance such control research. However, neither an open corpus nor a scalable method is currently available. To this end, we develop Coco-Nut, a new corpus including diverse Japanese utterances, along with text transcriptions and free-form voice characteristics descriptions. Our methodology to construct this corpus consists of 1) automatic collection of voice-related audio data from the Internet, 2) quality assurance, and 3) manual annotation using crowdsourcing. Additionally, we benchmark our corpus on the prompt embedding model trained by contrastive speech-text learning.
Aya Watanabe, Shinnosuke Takamichi, Yuki Saito, Wataru Nakata, Detai Xin, Hiroshi Saruwatari (The University of Tokyo, Japan). Index terms: speech synthesis, speech dataset, voice characteristics, text prompt, crowdsourcing
## 1 Introduction
In human speech production, the speaker's voice carries not only linguistic content but also unique vocal characteristics. Text-to-speech (TTS) tasks that imitate the speech production involve two significant challenges: synthesizing highly intelligible speech from the provided text (referred to as "content prompt" in this paper) and controlling the voice characteristics. This is because the characteristics greatly influence the listener's perception, affecting their understanding of the speaker's personality, emotion, and overall impression. Several methods of voice characteristics control have been proposed, such as a speaker index [1], speaker attributes [2, 3], personality [4], and so on [5, 6, 7, 8]. However, these methods only enable control over a narrow and simplistic range of voice characteristics, limiting their applicability in various contexts.
There has been significant advancement in techniques for synthesizing media using free-form text descriptions (text prompts). This progress is evident in various fields, such as text-to-image [9], text-to-audio [10], text-to-music [11], and text-to-video [12]. The potential of prompt-based media generation is to manipulate complicated media components, benefiting from the ongoing advancements in large language models (LLMs) [13, 10]. Following these trends, we believe that voice characteristics control by a free-form description opens new doors for TTS tasks. Hence, our goal is to develop TTS capable of controlling vocal characteristics through free-form descriptions, leading to the construction of a dedicated corpus. We refer to this free-form description and TTS synthesizer as the "characteristics prompt" and "Prompt TTS," respectively. As depicted in Figure 1, the aim of Prompt TTS is to synthesize speech that aligns with the prompted linguistic content and voice characteristics. The corpus designed for this purpose should encompass a wide array of vocal characteristics, unlike the existing TTS corpora [14, 15], which tend to cover only a limited range of voice attributes. However, neither an open corpus nor a scalable methodology to construct the corpus is currently available.
In this paper, we propose a methodology for constructing a corpus toward Prompt TTS. Our methodology consists of 1) machine-learning-based automatic collection of voice-related audio data from the Internet, 2) quality assurance to enhance the quality of content prompts and speech in the corpus, and 3) manual annotation of characteristics prompts using crowdsourcing. With this methodology, we construct an open corpus, _Coco-Nut1_, which is available at our project page2. This paper also benchmarks our Coco-Nut corpus: the corpus is used for training a contrastive speech-text model that embeds characteristics prompts and speech into the same space. The experimental evaluation reports the results of the corpus construction and the performance of the benchmark system from both objective and subjective aspects.
Footnote 1: Corpus of **co**nnection **N**ihongo **ut**terance and text. ”Nihongo” means the Japanese language in Japanese.
Footnote 2: [https://sites.google.com/site/shinnosuketakamichi/research-topics/](https://sites.google.com/site/shinnosuketakamichi/research-topics/)
## 2 Related Work
### Dataset for text-to-image
Model training for text-to-image requires pairs of an image and text prompt that describes the image content. DALL-E [9], known as a pioneer in text-to-image, is trained using the MS-COCO dataset [16] (image captioning dataset) and web data [17]. MS-COCO is a dataset used for image captioning, which involves manual annotation of texts that describe the image content. In addition to MS-COCO, the use of diverse data from the Internet in training significantly contributes to the synthesis of diverse images [9]. Although HTML images and their accompanying alt-tag texts provide
a massive amount of text-image pairs, data filtering is necessary due to the noisiness of Internet data. The pre-trained CLIP (contrastive language-image pretraining) model [13] is often used for data filtering purposes. Data diversity and contrastive learning are likewise important in other generation tasks, including the voice characteristics control targeted in this paper.
Figure 1: Our Coco-Nut corpus towards prompt TTS. Characteristics prompt and content prompt are, for example, "middle-aged man's voice speaking in a clear and polite tone" and "Welcome to our office!" respectively. Speech synthesizer synthesizes speech of the prompted content with the prompted voice characteristics.
### Dataset for text-to-audio and text-to-music
As in text-to-image, captioning datasets are also available for text-to-audio. Typical examples are AudioCaps [18] and Clotho [19]. Additionally, the text-audio counterpart of CLIP, the CLAP (contrastive language-audio pretraining) model [10], is also used for data filtering [20] before training. MuLan [11] in text-to-music proposes a method of retrieving music videos on the web and builds a machine learning model to identify whether the text attached to a video describes the music. This methodology has the potential to be applied to domains other than music.
Unlike the text-to-audio and text-to-music cases, datasets for Prompt TTS are very limited3. Existing studies have added characteristics prompts to small in-house and private datasets [21, 22]. However, typical TTS corpora [14, 15] contain only limited voice characteristics. Considering the contribution of Internet data described in Section 2.1, it is necessary to establish a methodology of corpus construction from the Internet data. Also, there is no open corpus that everyone can access.
Footnote 3: Audio captioning datasets [18, 19] include human voices as an environmental sound, but the voices do not strongly specify linguistic content.
### Sequence generation from text
In sequence generation tasks such as text-to-video and text-to-audio, it is necessary to determine 1) _overall concepts_ that represent characteristics of the entire sequence and 2) _sequence concepts_ that represent characteristics of changes in the sequence. There are two kinds of approaches for describing these concepts using text.
The first is to describe both concepts in a single text, e.g., "wooden figurine surfing on a surfboard in space" [12] in text-to-video and "hip-hop features rap with an electronic backing" [11] in text-to-music4.
Footnote 4: MusicLM [23] uses a variation of this kind by switching the description at fixed intervals (15 seconds in the paper) to allow for more fine-grained control of changes. This method is suitable for applications that generate sequences from rough descriptions.
Footnote 5: LAION-Audio-630K [25] uses text of overall concept for non-speech environmental sounds and that of sequence concept for speech-related environmental sounds.
Footnote 6: Overall concept is given by an image in the paper.
## 3 Corpus construction
### Corpus composition
The corpus for Prompt TTS should include:
1. **High-quality speech.** Speech data for TTS. Unlike data in speech-to-text corpora [27, 28, 29], it should be of high quality, e.g., contain little noise. Also, it is paired with the content prompt and the characteristics prompt.
2. **Content prompt.** Text transcriptions of speech. This corresponds to the sequence concept described in Section 2.3.
3. **Characteristics prompt.** Free-form descriptions that express characteristics of speech. This corresponds to the overall concept described in Section 2.3.
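To make the composition concrete, one corpus entry can be pictured as a record bundling these three elements. The following Python sketch is purely illustrative: the field names and file layout are placeholders rather than the format of the released corpus, and the example prompts are taken from Figure 1.

```python
from dataclasses import dataclass

@dataclass
class CorpusEntry:
    """One hypothetical Coco-Nut-style record (illustrative field names)."""
    wav_path: str                # high-quality speech segment
    content_prompt: str          # text transcription of the utterance
    characteristics_prompt: str  # free-form description of the voice characteristics
    video_id: str                # source video identifier, kept for traceability

example = CorpusEntry(
    wav_path="corpus/wav/000001.wav",
    content_prompt="Welcome to our office!",
    characteristics_prompt="middle-aged man's voice speaking in a clear and polite tone",
    video_id="xxxxxxxxxxx",
)
```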
Existing approaches [21, 22] for constructing this kind of corpora are to add characteristics prompts to existing TTS corpora consisting of high-quality speech and content prompts, e.g., [15, 14]. However, as described in Section 2, such corpora often lack diversity of voice characteristics. Therefore, we propose a methodology that builds a corpus from very noisy Internet data.
Our methodology consists of the following four steps. Figure 2 illustrates these steps. Although the target language of this paper is Japanese, the process of these steps is language-independent; our methodology can be implemented in languages other than Japanese.
1. **Data collection.** Speech data candidates are searched out and obtained from the Internet.
2. **Video filtering.** Impressive voice data are filtered from the candidates. The "impressive voice" refers to those that have received a large number of responses on the Internet. Such data are expected to be characteristic voices and therefore suitable for the construction of the corpus.
3. **Quality assurance.** Speech and its transcriptions (content prompts) are further filtered to guarantee quality of the corpus.
4. **Manual annotation.** Characteristics prompts are manually annotated to the speech data.
The subsequent subsections describe the details of these steps.
### Data collection
To obtain speech data candidates, we create search phrases and input them into the search engine of video-sharing websites, e.g., YouTube. We select article categories related to speech from Wikipedia7 in the target language and use the titles of Wikipedia articles belonging to those categories as search phrases. In addition, we add phrases that are thought to be related to each search phrase (e.g., "[article title] short clip"). After searching, we obtain the video ID, audio data, video title, and viewers' comments of the found videos.
Footnote 7: For example, [https://en.wikipedia.org/wiki/List_of_YouTubers](https://en.wikipedia.org/wiki/List_of_YouTubers) in English.
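A minimal sketch of this search step is given below, assuming the yt-dlp Python package is used as the interface to the video-sharing site; the phrase templates and the result limit are illustrative, not the settings used for the corpus.

```python
import yt_dlp

def search_videos(article_title: str, max_results: int = 20):
    """Search a video-sharing site for candidate videos related to one Wikipedia article title."""
    # Related phrases appended to the article title (illustrative templates).
    phrases = [article_title, f"{article_title} short clip"]
    opts = {"quiet": True, "skip_download": True, "extract_flat": True}
    candidates = []
    with yt_dlp.YoutubeDL(opts) as ydl:
        for phrase in phrases:
            # "ytsearchN:query" asks yt-dlp for the top-N search results for the query.
            info = ydl.extract_info(f"ytsearch{max_results}:{phrase}", download=False)
            for entry in info.get("entries", []):
                candidates.append({"video_id": entry.get("id"), "title": entry.get("title")})
    return candidates
```

The audio tracks and viewers' comments of the retrieved video IDs would then be downloaded in a second pass.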
### Video filtering
By filtering the video data obtained above, videos containing "impressive voices" are acquired. In this paper, we extract videos in which many people commented about the voices in the videos. Two-stage filtering is conducted, and the voices of the filtered videos are forwarded to the next "quality assurance" step.
1. **Keyword matching-based pre-filtering.** The obtained data contains many videos without audio or with nondescript voices. First, a rule-based video filter is applied. We use a set of keywords related to voice characteristics (e.g., "listen") to determine whether a viewer's comment on a video contains those keywords. If the number of comments containing the keywords in that video is greater than the threshold, the video is adopted.
Figure 2: Procedure of corpus construction.
2. **Machine learning-based filtering.** Machine learning is used to determine whether viewers' comments mention the voice in the video, yielding videos with "impressive voices". We create training data for this machine learning. We randomly extract viewers' comments from videos and perform crowdsourcing-based annotation of what the comments mention. The title and a comment of the video are presented to the crowdworkers8. The crowdworkers answer whether the comment is 1) related to speaking voice, 2) related to singing voice, or 3) others. Before the annotation, we instruct the crowdworkers that "1)" includes comments mentioning the voice characteristics but does not include comments about the linguistic content. Footnote 8: For example, "Video title: My daily voice training method. Comment: Cool Voice!" The answer will be "1) related to speaking voice." Presenting the title makes it easier for the crowdworkers to judge the comment content by letting them imagine the content of the video.
A comment content classifier is trained using the above annotated data. The classifier model is BERT [30] followed by a linear layer. The input is a video title and a comment, joined by the "[SEP]" token that represents "sentence separation" in BERT. The output target is binary: 1) speech-related comment and 2) singing-related comment or others. To improve classification performance, we use the aforementioned keywords as auxiliary information: a subset of the keyword set was selected, and only comments that matched one of the keywords in this subset were used to train and evaluate the classifier.
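A minimal sketch of such a classifier is shown below, assuming the Hugging Face transformers library; the checkpoint name is a placeholder for a pretrained Japanese BERT model, and fine-tuning on the crowdsourced labels (not shown) would precede any actual filtering.

```python
import torch
from torch import nn
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "cl-tohoku/bert-base-japanese"  # placeholder Japanese BERT checkpoint

class CommentClassifier(nn.Module):
    """BERT encoder followed by a linear layer over the [CLS] representation."""
    def __init__(self):
        super().__init__()
        self.bert = AutoModel.from_pretrained(MODEL_NAME)
        self.head = nn.Linear(self.bert.config.hidden_size, 2)  # speech-related vs. other

    def forward(self, **encoded):
        out = self.bert(**encoded)
        cls = out.last_hidden_state[:, 0]  # [CLS] token representation
        return self.head(cls)              # logits over the two classes

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = CommentClassifier().eval()

# The title and the comment are passed as a sentence pair, so the tokenizer
# joins them with the [SEP] token (example taken from footnote 8).
batch = tokenizer(["My daily voice training method"], ["Cool Voice!"],
                  padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**batch)
predicted_class = logits.argmax(dim=-1)  # e.g., index 0 = speech-related comment
```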
### Quality assurance
Because the data are collected from the Internet, some text and speech samples are of low quality and difficult to use. To ensure the quality of the data included in the corpus, the following processes are used to filter them.
#### 3.4.1 Audio quality
To ensure the quality of the sound, the following operations are performed.
1. **Voice activity detection (VAD).** VAD is performed to extract only the segments containing voices from the entire video. We use inaSpeechSegmenter [31] to detect individual speech segments in the video.
2. **Denoising.** To enhance audio quality, we use Demucs9, which is a powerful source separation model based on deep neural networks, to extract the voices from noise-contaminated voices. Footnote 9: [https://github.com/facebookresearch/demucs](https://github.com/facebookresearch/demucs)
3. **Audio quality assessment.** The audio quality of the collected speech varies, e.g., in recording device quality and effective frequency band. Also, the denoising process eliminates background noise well but sometimes drops speech components. To quantify the quality degradation caused by these factors, we use NISQA [32], a multidimensional speech quality predictor. The NISQA score is calculated for each speech segment, and we filter out the segments whose score is lower than a pre-determined threshold10. Footnote 10: We found that speech component drop can be quantified by the NISQA score.
4. **Threshold for duration and audio volume.** We set an acceptable duration range to eliminate voices that are too long or too short. We also set a volume threshold and filter out inaudible (low-volume) speech.
5. **Detection of multi-speaker voice and singing voice.** Data not intended for TTS, specifically singing voices and multi-speaker voices (e.g., cheering), are manually excluded.
6. **Voice characteristics variation.** It is desirable for the corpus to include a variety of voice characteristics. To achieve this, we perform hierarchical clustering based on Ward's method [33] using distances of \(x\)-vectors [34], which reflects not only voice quality but also speech style as suggested by [35]. The \(x\)-vector is extracted for each speech segment by a pretrained \(x\)-vector extractor. Since speech segments with similar voice characteristics are expected to be grouped, we randomly sample one speech segment as the representative of each cluster.
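The clustering in step 6 can be sketched as follows, assuming that the \(x\)-vectors have already been extracted into a matrix with one row per speech segment; the distance used to cut the dendrogram is a placeholder, not the actual setting.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def select_representatives(xvectors: np.ndarray, cut_distance: float, seed: int = 0):
    """Ward-linkage clustering of x-vectors; one random segment represents each cluster."""
    rng = np.random.default_rng(seed)
    Z = linkage(xvectors, method="ward")                  # hierarchical clustering (Ward's method)
    labels = fcluster(Z, t=cut_distance, criterion="distance")
    representatives = []
    for cluster_id in np.unique(labels):
        members = np.flatnonzero(labels == cluster_id)
        representatives.append(int(rng.choice(members)))  # random representative per cluster
    return representatives                                # indices of the selected segments
```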
#### 3.4.2 Content quality
To select appropriate speech contents, the following processing steps are performed.
1. **Speech-to-text and language identification.** To obtain content prompts of speech, we use pre-trained Whisper speech-to-text models [36]. Jointly with speech-to-text, we identify the language of each speech segment with Whisper and filter out speech in non-target languages. Furthermore, manual identification is conducted to enhance the corpus quality11. Footnote 11: We found that language identification by Whisper alone would result in the inclusion of many voices in non-target languages.
2. **NSFW (not safe for work) word detection.** We filter out content prompts that include NSFW words. We adopt keyword matching-based NSFW word detection: a text is filtered out if any of its lemmatized words is found in the NSFW word dictionary. Additional manual detection is conducted to enhance the corpus quality.
3. **Non-verbal voice detection.** Since TTS does not handle non-verbal voices, e.g., screams, we filter out non-verbal voices using a language model and the content prompt texts. Masked language model (MLM) scores [37] based on BERT [30] are calculated for each segment's transcription. Since the masked tokens of a non-verbal content prompt are highly predictable from the adjacent tokens12, the MLM score of a non-verbal voice becomes higher.
Footnote 12: For example, let us consider “aa[MASK]aaaa,” a partially masked content prompt of a scream. The masked token “[MASK]” will be predicted as “aa.”
We manually set a threshold on the MLM score and filter out speech whose MLM score exceeds the threshold.
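The following is a minimal sketch (not the exact implementation) of such a pseudo-log-likelihood MLM score: each token is masked in turn and the average log-probability of the true token is taken, so highly predictable (e.g., repetitive, non-verbal) prompts receive scores close to zero and can be removed with the threshold. The checkpoint name is an assumption.

```python
# A minimal sketch of a BERT-based pseudo-log-likelihood MLM score.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

MODEL_NAME = "cl-tohoku/bert-base-japanese"   # assumed Japanese BERT checkpoint
tok = AutoTokenizer.from_pretrained(MODEL_NAME)
mlm = AutoModelForMaskedLM.from_pretrained(MODEL_NAME).eval()

def mlm_score(text: str) -> float:
    ids = tok(text, return_tensors="pt")["input_ids"][0]
    log_probs = []
    for i in range(1, len(ids) - 1):            # skip [CLS] and [SEP]
        masked = ids.clone()
        masked[i] = tok.mask_token_id
        with torch.no_grad():
            logits = mlm(masked.unsqueeze(0)).logits[0, i]
        log_probs.append(torch.log_softmax(logits, dim=-1)[ids[i]].item())
    return sum(log_probs) / max(len(log_probs), 1)

keep = mlm_score("aaaaaaaa") < -0.01            # True -> keep; False -> filter out
```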
### Manual annotation
Finally, we use crowdsourcing to add characteristics prompts to the collected voices. The employed crowdworkers listen to the presented voice and describe the voice characteristics. They are instructed to include speaker attributes, voice quality, and speaking style in their descriptions13. Only descriptions with more than the threshold number of characters are accepted.
Footnote 13: The actual English-translated instruction is “Describe what kind of speaker (age, gender, etc.), voice quality (brisk, low voice, etc.), and speaking style (angry, fast, etc.) in a free-form description of at least 20 characters. Do not include the linguistic content of the speech, and do not use expressions that indicate personal likes and dislikes (e.g., my favorite voice and disliked way of speaking).”
After collecting characteristics prompts, we manually filter out prompts that include proper nouns or persons' names, e.g., "The voice is similar to [celebrity's real name]." This is done to prevent models trained on this corpus from being prompted to generate the voices of actual individuals. We also perform text normalization to cleanse the descriptions.
## 4 Experiments
### Data collection
The target language was Japanese. The data collection period was from July 2022 to March 2023. The number of comments per video
was limited to the top 100 comments with the highest number of "Likes." After extracting comments in the target language by rule-based language identification, comments with less than 3 characters or more than 50 characters were excluded. Table 1 lists the results of the data collection.
### Filtering
For keyword matching-based pre-filtering, we used eight Japanese keywords; their English glosses include "voice", "resonance", "sound", "listen", "hear", and "song". The threshold for the number of keyword-matching comments per video was \(10\).
For machine learning-based filtering, we used a pre-trained BERT [30] model14. We collected \(32{,}453\) labels for comments, out of which \(11{,}647\) were "speech-related." \(80\)% and \(20\)% of the labels were used for training and evaluation, respectively. We attempted to improve performance by using keyword subsets: we examined all combinations of keyword subsets and finally selected seven subsets and their corresponding classifiers with high precision. Precision was chosen as the selection criterion to ensure the accurate extraction of "speech-related" comments. The average precision of the seven classifiers was \(54.3\)%. In comparison, when using only the BERT-based classifier without the keyword subsets, the precision was \(38.6\)%. This confirms that using the keyword subsets in combination improves precision. After training the classifiers, we classified unlabeled comments. Videos were selected if they had \(10\) or more comments identified as "speech-related" by any of the seven classifiers. Hereinafter, the subset of selected videos, comprising \(1{,}523\) videos, was used for further processing.
Footnote 14: [https://huggingface.co/cl-tohoku/bert-base-japanese](https://huggingface.co/cl-tohoku/bert-base-japanese)
### Quality assurance
Through VAD, we obtained \(54{,}610\) speech segments. Figure 3 shows the distribution of NISQA-predicted mean opinion scores (MOSs) on audio quality. We set the threshold to \(2\), which is the most frequent score. Also, segments with a duration between \(2\) seconds and \(10\) seconds were retained. The audio volume was checked using Pydub15, and segments with a volume of \(-55\)dB or lower were excluded.
Footnote 15: [https://github.com/jiaaro/pydub](https://github.com/jiaaro/pydub)
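A minimal sketch of the duration and volume filters with pydub is shown below; file names are placeholders and the thresholds follow the values stated above.

```python
# A minimal sketch of the duration and volume filters using pydub.
from pydub import AudioSegment

MIN_SEC, MAX_SEC, MIN_DBFS = 2.0, 10.0, -55.0

def passes_filters(path: str) -> bool:
    seg = AudioSegment.from_file(path)
    duration_s = len(seg) / 1000.0          # pydub lengths are in milliseconds
    return MIN_SEC <= duration_s <= MAX_SEC and seg.dBFS > MIN_DBFS

kept = [p for p in ["segment_0001.wav"] if passes_filters(p)]  # placeholder file name
```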
For transcription using Whisper [36], both the tiny and large models were employed, because the former tends to excel in fidelity to the speech while the latter excels in grammatical correctness16.
Footnote 16: The average word error rate (WER) of transcriptions from the Whisper model was \(22.1\)%. Upon final publication, we will provide manually corrected transcriptions to ensure a WER of \(0\).
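The snippet below is a minimal sketch of transcription and language identification with Whisper; the file name is a placeholder, and the combination of tiny- and large-model outputs described above is omitted.

```python
# A minimal sketch of Whisper-based transcription and language identification.
import whisper

model = whisper.load_model("large")                 # or "tiny"
result = model.transcribe("segment_0001.wav")       # placeholder file name
if result["language"] == "ja":                      # keep only the target language
    transcript = result["text"]
```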
The NSFW detection was performed with MeCab17 and a Japanese NSFW dictionary18.
Footnote 17: [https://taku910.github.io/mecab/](https://taku910.github.io/mecab/)
The MLM score threshold for non-verbal voice detection was set to \(-0.01\). Figure 4 shows the histogram of MLM scores calculated from the transcriptions of the Whisper large model. The analysis reveals that the MLM scores of the collected segments are distributed around a peak of \(-3\), within an interval of approximately \(\pm 2\). The percentage of segments whose MLM scores exceed the threshold of \(-0.01\) was approximately \(0.05\)%, which is extremely low.
We used \(x\)-vectors extracted by xvector_jtubespeech19 for voice characteristics variation. We performed hierarchical clustering and formed \(11{,}000\) clusters based on voice characteristics similarity. From each cluster, a single audio segment was randomly selected. After the selection, we further manually checked whether the segments include NSFW words, non-target languages, or multiple speakers. Finally, \(7{,}667\) segments, with a total length of \(30{,}661\) seconds, were selected.
Footnote 19: [https://github.com/sarulab-speech/xvector_jtubespeech](https://github.com/sarulab-speech/xvector_jtubespeech)
### Annotation
We hired workers through the crowdsourcing platform Lancers20. Each worker annotated 10 segments. There were a total of \(1{,}318\) workers, and each worker was paid \(200\) yen as a reward.
Footnote 20: [https://www.lancers.jp](https://www.lancers.jp)
Before the annotation, in preparation for the machine learning experiments described in Section 4.6.1, we designed the training, validation, and test sets. To avoid data leakage caused by similar voice characteristics within the same video or YouTube channel, we ensured that the sets have no overlap in YouTube channels and include a diverse range of segments. As a result, we created training, validation, and test sets with \(6{,}463\), \(593\), and \(611\) segments, respectively.
We designed our corpus to include variations introduced by workers. Specifically, we included one characteristics prompt per segment for the training set, and five prompts per segment for the other sets, following existing studies [19, 38].
### Corpus analysis
We analyze the constructed corpus. In particular, we investigate data diversity, which is one of the main aims of the corpus.
#### 4.5.1 Video categories
We investigated which video categories the speech segments in the corpus belong to. The source video of each segment was classified according to its YouTube video category. Figure 5 shows the results. The corpus contains 14 categories, indicating that it covers a wide variety of content. The top three categories (entertainment, education, gaming) account for approximately 70%, and minor categories such as Science & Technology are also included.
#### 4.5.2 Gender distributions
We manually annotated gender to the characteristics prompts to analyze gender diversity. Figure 6 presents the distribution of gender. While the majority of characteristics prompts are labeled as male or female, prompts labeled non-binary and prompts that do not mention gender (not indicated) also exist. For a more detailed analysis, Figure 7 presents the t-SNE visualization of \(x\)-vectors colored by gender. Similar to a typical TTS corpus, clusters can be observed for male and female voices, whereas the non-binary and not-indicated categories do not form distinct clusters but are scattered throughout.
#### 4.5.3 Voice characteristics of video categories
To examine the relation between \(x\)-vectors and video categories, we present the t-SNE visualization of \(x\)-vectors colored by video category in Figure 8. In the Entertainment and Education categories, specific clusters can be observed, particularly in the bottom-right and top-central regions. This suggests that typical voice characteristics are gathered within each category. On the other hand, for the majority of the scatter plot, no prominent clusters are observed. This indicates that the speech in this corpus encompasses both voices that are typical within categories and voices that are shared across categories.
### Machine learning baseline
Using the constructed corpus, we conduct machine learning experiments to align speech and characteristics prompts. These experiments suggest future directions for Prompt TTS.
#### 4.6.1 Model construction
We constructed a baseline model that aligns speech and characteristics prompts. The model was inspired by CLAP [39], a model that embeds both audio and text into the same space via contrastive learning. While HTS-AT [40] is used as the audio encoder in CLAP, we replaced it with HuBERT [41] to better capture speech features. We used japanese-roberta-base21 and japanese-hubert-base22 as pre-trained models for RoBERTa [42] and HuBERT, respectively. Figure 9 shows an overview of the baseline model architecture. Most hyperparameters followed the official implementation of CLAP23. The batch size was set to \(48\), and the learning rate was \(0.0001\). We used \(8\) NVIDIA A100 GPUs (NVLink, 40 GiB HBM2). The training process took approximately \(1\) hour.
Footnote 21: [https://huggingface.co/rinna/japanese-roberta-base](https://huggingface.co/rinna/japanese-roberta-base)
Footnote 22: [https://huggingface.co/rinna/japanese-hubert-base](https://huggingface.co/rinna/japanese-hubert-base)
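The sketch below illustrates how such a dual-encoder could be assembled with the pre-trained models cited above; the pooling strategy, projection sizes, temperature initialisation, and loss details are assumptions rather than the exact baseline configuration.

```python
# A minimal sketch of a CLAP-style speech/prompt dual encoder with a
# symmetric contrastive (InfoNCE) loss.
import torch
import torch.nn as nn
import torch.nn.functional as F
from transformers import AutoModel

class SpeechPromptCLAP(nn.Module):
    def __init__(self, dim: int = 512):
        super().__init__()
        self.speech_enc = AutoModel.from_pretrained("rinna/japanese-hubert-base")
        self.text_enc = AutoModel.from_pretrained("rinna/japanese-roberta-base")
        self.speech_proj = nn.Sequential(nn.Linear(768, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.text_proj = nn.Sequential(nn.Linear(768, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.logit_scale = nn.Parameter(torch.tensor(2.659))  # learnable temperature (CLIP-style init)

    def forward(self, wav, text_ids, text_mask):
        s = self.speech_enc(wav).last_hidden_state.mean(dim=1)      # mean-pool speech frames
        t = self.text_enc(input_ids=text_ids, attention_mask=text_mask).last_hidden_state[:, 0]
        e_s = F.normalize(self.speech_proj(s), dim=-1)               # speech embedding E^s
        e_p = F.normalize(self.text_proj(t), dim=-1)                 # prompt embedding E^p
        return e_s, e_p

def contrastive_loss(e_s, e_p, logit_scale):
    logits = logit_scale.exp() * e_s @ e_p.t()       # (batch, batch) similarity matrix
    targets = torch.arange(len(e_s), device=e_s.device)
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))
```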
#### 4.6.2 Evaluation tasks
Following the CLAP paper [10], we evaluate the trained model and the obtained embeddings.
**Speech retrieval from characteristics prompt.** We calculate the cosine similarity between the embeddings of the input prompt and the set of embeddings of target speech segments. A higher cosine similarity value indicates a higher-ranked retrieval result. We evaluate whether the proper segment can be retrieved by the prompt.
**Zero-shot speech classification.** We automatically generate characteristics prompts using categorical labels, such as "a voice of [label]." Then the prompt closest to the audio segment in the embedding space is selected, and the label associated with that prompt is taken as the classification label for that speech. We evaluate whether the correct label can be obtained without additional training.
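Both evaluation tasks reduce to nearest-neighbour search in the shared embedding space; a minimal sketch with precomputed, L2-normalised embeddings follows.

```python
# A minimal sketch of prompt-to-speech retrieval and zero-shot classification.
import numpy as np

def retrieve(prompt_emb: np.ndarray, speech_embs: np.ndarray, k: int = 10):
    sims = speech_embs @ prompt_emb              # cosine similarity (embeddings are normalised)
    return np.argsort(-sims)[:k]                 # indices of the top-k speech segments

def zero_shot_classify(speech_emb: np.ndarray, label_prompt_embs: np.ndarray, labels):
    sims = label_prompt_embs @ speech_emb
    return labels[int(np.argmax(sims))]          # label of the closest prompt, e.g. "male"
```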
#### 4.6.3 Objective evaluation
We evaluated our model using mean average precision at top-10 retrieval (mAP@10), following [10]. mAP@10 measures how accurately the speech corresponding to each characteristics prompt is retrieved within the top 10 retrievals. At the best epoch, the text-to-speech mAP@10 on the test set reached \(8.63\)%, while it was \(0.54\)% before training. Compared with a model trained specifically for environmental sounds [10], the obtained value may appear low. However, it is important to note that mAP@10 is 10% when the 10th candidate in every retrieval is the correct pair. Therefore, a value of about 8% can be considered a reasonable indication that the model has learned to a certain extent.
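For reference, a minimal sketch of mAP@10 under the assumption that each prompt has exactly one ground-truth segment is given below; average precision then reduces to the reciprocal rank if the true segment appears in the top 10, and 0 otherwise.

```python
# A minimal sketch of mAP@10 for retrieval with one relevant item per query.
import numpy as np

def map_at_10(ranked_indices: np.ndarray, true_indices: np.ndarray) -> float:
    aps = []
    for ranking, true_idx in zip(ranked_indices, true_indices):
        hits = np.where(ranking[:10] == true_idx)[0]
        aps.append(1.0 / (hits[0] + 1) if len(hits) else 0.0)  # reciprocal rank or 0
    return float(np.mean(aps))
```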
To test whether the model recognizes simple characteristics of speech on unseen data, we conducted gender classification on the JVS [15] parallel100 set, which consists of 49 male speakers and 51 female speakers, each with 100 speech samples. Using the two gender labels, we made two characteristics prompts (in Japanese) meaning "a male voice" and "a female voice", and retrieved the prompt closest to each JVS speech sample in the embedding space. The gender of the retrieved prompt is considered the gender of the speech. For example, if a male speaker's speech retrieved the prompt "a male voice," the classification would be correct. Table 2 shows the confusion matrix of the result. Both genders' data are correctly identified around 70% of the time, indicating that the model has effectively learned to associate the speech of one gender with text indicating the same gender.
| Actual gender | Classified as Male | Classified as Female |
| --- | --- | --- |
| Male | 3442 | 1456 |
| Female | 1048 | 4051 |

Table 2: Zero-shot gender classification (confusion matrix).
Figure 7: \(x\)-vector distributions colored by gender.
Figure 8: \(x\)-vector distributions colored by video category.
Figure 9: Baseline model architecture. MLP means multi-layer perceptron. \(E^{s}\) and \(E^{p}\) denote the \(n\)-dimensional embeddings of speech and characteristics prompt, respectively.
#### 4.6.4 Subjective evaluation
To perceptually evaluate the retrieved speech by the prompt, we conducted subjective evaluations.
We randomly selected \(100\) characteristics prompts from the test set and retrieved speech from the whole test set. From retrieval results, we created four kinds of speech paired with the characteristics prompt: 1st candidate, 2-5th candidates, 6-10th candidates, and random candidates. The first means the 1st candidate retrieved by the prompt, and the last is randomly selected from the test set.
We presented the prompt and speech to crowdworkers and let them evaluate how well the prompt represents the speech characteristics on a nine-point scale, where 9 is the best match and 1 is the worst. For comparison, we added ground-truth speech (paired with the prompt in the test set) to the listening test set. Each worker was presented with a total of \(20\) pairs. We employed \(500\) workers and obtained \(20\) evaluations for each pair.
Figure 10 shows the results. Each figure illustrates mean and standard deviation of the scores of each prompt-speech pair. Those of ground-truth pairs are illustrated for comparison.
**Q. Is free-form text truly appropriate in describing the voice characteristics?** We validate the adequacy of the free-form characteristics representation we present in this paper. As shown in Figure 10, the ground-truth pairs obtained a sufficiently high average score of \(7.37\) despite the difference between the writers of the free-form expressions and the evaluators. It indicates that the free-form expressions can appropriately represent voice characteristics regardless of the writers or evaluators. Note that, compared to conventional categorization (e.g., gender), the scores tend to be more variable.
**Q. Does the baseline model retrieve perceptually good speech from the given prompt?** We compared retrieval ranks (1st, 2–5th, and 6–10th candidates, and random). The average scores for each method were \(3.98\) (1st), \(3.42\) (2–5th), \(2.98\) (6–10th) and \(3.25\) (random). A statistically significant difference was observed between the 1st candidate and random (\(p<0.05\)), indicating that the model can retrieve appropriate speech. However, there is still room for improvement for the trained model to reach the ground-truth score, and the samples in the 6–10th positions fall below the random method, indicating the need for improvement in the retrieval method.
**Q. Is the low score for the 1st candidate due to the low ground-truth score?** As mentioned above, the scores of the ground-truth samples vary across samples. We investigated whether this variability in ground-truth scores affects the retrieval performance of the model. Figure 11 illustrates the ground-truth score and the 1st candidate's score corresponding to each prompt. From this figure, there is no clear correlation between the two, indicating that the variability in ground-truth scores has little impact on the retrieval performance. Therefore, the low scores of the 1st candidate will primarily reflect the performance of the retrieval model itself.
**Q. What happens when scores are extremely low?** The retrieved candidates include some extremely low scores, as observed in the bottom left of Figure 10. To investigate the reason behind this, we examined the correspondence between the gender in the input prompt and the gender mentioned in the ground-truth prompt associated with the retrieved speech. As shown in Figure 12, gender is frequently misaligned in the samples with low scores. For example, there are cases where the input prompt includes the term "female," but our model retrieves a male voice. To address this issue, we need a training method that embeds same-gender samples close together.
**Q. What are the actual examples?** Finally, we provide examples of the input prompt and the corresponding ground-truth prompt of the retrieved speech in Table 3. In the case of a high score (\(8.15\)), we can observe that not only the gender and age ("young woman") but also the style ("sweet") are aligned. As mentioned earlier, when the gender differs, the score drops significantly (\(1.05\)). On the other hand, even when the gender is aligned, differences in age group ("middle-aged" vs. "young") or style ("questioning manner" vs. "excited") lead to a low score (\(2.35\)).
## 5 Conclusion
In this paper, we developed a paired corpus of speech and characteristics prompts and conducted evaluations of both the corpus itself and a baseline model. This corpus will promote research on Prompt TTS, where the speaker is controlled by characteristics prompts. The consideration of Prompt TTS architecture and the expansion of the corpus itself are tasks for future work.
Figure 11: Mean scores of the ground-truth pairs and the 1st retrieved candidates corresponding to the same characteristics prompts.
Figure 10: Mean and standard deviation of the subjective evaluation of each prompt-speech pair. Blue circles indicate the retrieved pairs named in each panel title, and gray ones indicate ground-truth pairs. “+” marks indicate the average mean and standard deviation of the plots of the same color.
| Retrieval text | Score | Rank | Retrieved candidate | Score |
| --- | --- | --- | --- | --- |
| A young woman in her twenties is speaking … | 8.40 | 1st | A young woman is … | 8.15 |
| A young woman in her twenties is speaking … | 8.10 | 1st | A young man is speaking … whispering softly. | 1.05 |
| A middle-aged, cheerful … a clear voice, addressing in a questioning … | 7.75 | 2–5th | A young man is speaking in a high-pitched … | 2.35 |

Table 3: Retrieved pair examples. “Score” is the average of the subjectively evaluated appropriateness. “Rank” indicates the rank of the retrieved candidate. The retrieved-candidate text is one characteristics prompt of the retrieved candidate. Each text has been translated into English. |
2309.14266 | The Hydra Hand: A Mode-Switching Underactuated Gripper with Precision
and Power Grasping Modes | Human hands are able to grasp a wide range of object sizes, shapes, and
weights, achieved via reshaping and altering their apparent grasping stiffness
between compliant power and rigid precision. Achieving similar versatility in
robotic hands remains a challenge, which has often been addressed by adding
extra controllable degrees of freedom, tactile sensors, or specialised extra
grasping hardware, at the cost of control complexity and robustness. We
introduce a novel reconfigurable four-fingered two-actuator underactuated
gripper -- the Hydra Hand -- that switches between compliant power and rigid
precision grasps using a single motor, while generating grasps via a single
hydraulic actuator -- exhibiting adaptive grasping between finger pairs,
enabling the power grasping of two objects simultaneously. The mode switching
mechanism and the hand's kinematics are presented and analysed, and performance
is tested on two grasping benchmarks: one focused on rigid objects, and the
other on items of clothing. The Hydra Hand is shown to excel at grasping large
and irregular objects, and small objects with its respective compliant power
and rigid precision configurations. The hand's versatility is then showcased by
executing the challenging manipulation task of safely grasping and placing a
bunch of grapes, and then plucking a single grape from the bunch. | Digby Chappell, Fernando Bello, Petar Kormushev, Nicolas Rojas | 2023-09-25T16:27:51Z | http://arxiv.org/abs/2309.14266v2 | # The Hydra Hand: A Mode-Switching Underactuated Gripper with Precision and Power Grasping Modes
###### Abstract
Human hands are able to grasp a wide range of object sizes, shapes, and weights, achieved via reshaping and altering their apparent grasping stiffness between compliant power and rigid precision. Achieving similar versatility in robotic hands remains a challenge, which has often been addressed by adding extra controllable degrees of freedom, tactile sensors, or specialised extra grasping hardware, at the cost of control complexity and robustness. We introduce a novel reconfigurable four-fingered two-actuator underactuated gripper--the Hydra Hand--that switches between compliant power and rigid precision grasps using a single motor, while generating grasps via a single hydraulic actuator--exhibiting adaptive grasping between finger pairs, enabling the power grasping of two objects simultaneously. The mode switching mechanism and the hand's kinematics are presented and analysed, and performance is tested on two grasping benchmarks: one focused on rigid objects, and the other on items of clothing. The Hydra Hand is shown to excel at grasping large and irregular objects, and small objects with its respective compliant power and rigid precision configurations. The hand's versatility is then showcased by executing the challenging manipulation task of safely grasping and placing a bunch of grapes, and then plucking a single grape from the bunch.
Grasping; Multifingered Hands; Mechanism Design
## I Introduction
The grasping versatility of human hands is unparalleled by any other natural or mechanical gripper. Human hands are the result of a complex system of motor and sensory pathways that enables humans to switch with ease between precision and power grasps, offering compliance and adaptability to varying shapes, sizes, and weights [1]. To achieve this in a robotic gripper is extremely challenging, with grasped object variability, unstructured grasping environments, multi-contact sensing difficulties, and high occlusion manipulation all contributing towards complexity. Achieving anywhere near human-level versatility with one gripper and a reduced number of actuators is non-trivial, and indeed finding an optimal method of doing so remains an open area in robot manipulation research.
Many works have approached the grasping versatility problem from a control and sensory perspective. Often, tactile sensors [2] and vision systems [3] are used, or computational models of the interaction between the gripper and known objects are developed [4]. These studies display impressive dexterity with known objects, but are limited in robustness and how well they generalise to grasping unknown and varied objects. Furthermore, grasping in unstructured environments is difficult from a sensory perspective, where high occlusion and multiple contact points with objects increases complexity significantly.
A related approach is to increase the number of degrees of freedom (DOFs) in the gripper such that an increased number of grasps can be achieved. Many anthropomorphic hands utilise this strategy, such as the ILDA hand [5] and the Robonaut Hand [6]. Some non-anthropomorphic hands, such as the farmHand [7] and the Yale Model W [8] also follow this trend, with targeted DOFs for desired grasps. However, increasing the number of controllable DOFs increases control complexity, and ensuring constraints such as grasp success and safety are satisfied can be difficult with a larger action space.
Rather than increasing gripper complexity, an alternative
Fig. 1: The Hydra Hand, capable of performing rigid precision and compliant power grasps with a single hydraulic grasp actuator, where mode-switching is achieved via a rotating palm. (a) Two-fingered precision grasping mode. (b) Four-fingered spherical grasping mode. (c) Rigid precision palmar pinch grip. (d) Cylindrical power grip with adaptability between pairs of fingers. (e) Spherical power grip with individual finger compliance. |
2309.15178 | Zero-Shot Reinforcement Learning from Low Quality Data | Zero-shot reinforcement learning (RL) promises to provide agents that can
perform any task in an environment after an offline, reward-free pre-training
phase. Methods leveraging successor measures and successor features have shown
strong performance in this setting, but require access to large heterogenous
datasets for pre-training which cannot be expected for most real problems.
Here, we explore how the performance of zero-shot RL methods degrades when
trained on small homogeneous datasets, and propose fixes inspired by
conservatism, a well-established feature of performant single-task offline RL
algorithms. We evaluate our proposals across various datasets, domains and
tasks, and show that conservative zero-shot RL algorithms outperform their
non-conservative counterparts on low quality datasets, and perform no worse on
high quality datasets. Somewhat surprisingly, our proposals also outperform
baselines that get to see the task during training. Our code is available via
https://enjeeneer.io/projects/zero-shot-rl/. | Scott Jeen, Tom Bewley, Jonathan M. Cullen | 2023-09-26T18:20:20Z | http://arxiv.org/abs/2309.15178v2 | # Conservative World Models
###### Abstract
Zero-shot reinforcement learning (RL) promises to provide agents that can perform _any_ task in an environment after an offline pre-training phase. _Forward-backward_ (FB) representations represent remarkable progress towards this ideal, achieving 85% of the performance of task-specific agents in this setting. However, such performance is contingent on access to large and diverse datasets for pre-training, which cannot be expected for most real problems. Here, we explore how FB performance degrades when trained on small datasets that lack diversity, and mitigate it with _conservatism_, a well-established feature of performant offline RL algorithms. We evaluate our family of methods across various datasets, domains and tasks, reaching 150% of vanilla FB performance in aggregate. Somewhat surprisingly, conservative FB algorithms also outperform the task-specific baseline, despite lacking access to reward labels and being required to maintain policies for all tasks. Conservative FB algorithms perform no worse than FB on full datasets, and so present little downside over their predecessor. Our code is available open-source via [https://enjeeneer.io/projects/conservative-world-models/](https://enjeeneer.io/projects/conservative-world-models/).
## 1 Introduction
Humans construct internal models of the world to solve varied tasks efficiently. Much work has focused on equipping artificial agents with analogous models (Sutton, 1991; Deisenroth and Rasmussen, 2011; Chua et al., 2018; Ha and Schmidhuber, 2018; Hafner et al., 2019; Schrittwieser et al., 2020; Hafner et al., 2023), but in most cases these agents have lacked the adaptability of humans. However, some agents, like those utilising successor features (Barreto et al., 2017; Borsa et al., 2018) and forward-backward (FB) representations (Touati and Ollivier, 2021; Touati et al., 2022), can solve _any_ task in an environment with no online planning or learning, and so appear to exhibit the adaptability we desire. They achieve this by learning a model that predicts the agent's future state visitations when attempting to solve unencountered tasks, building a family of policies for these tasks given the model's predictions, and then selecting the correct policy when a test task is specified. It is models that can be used to achieve this generality that we call _world models_.
Recent attention has been paid to training world models on offline data, a setting known as _zero-shot reinforcement learning (RL)_(Kirk et al., 2023). The appeal of zero-shot RL is in providing agents that can solve any task in an environment without the need for dangerous or expensive online interaction. In Touati et al. (2022), FB models pre-trained on datasets of reward-free transitions are able to return policies for unseen tasks in an environment that are 85% as performant as those returned by offline RL algorithms explicitly trained for each task. FB achieves this with no prior knowledge of the tasks, zero planning, and no online interaction. FB thus represents remarkable progress towards the ideal of zero-shot RL.
However, such performance is only achievable if the pre-training dataset is large and diverse. Real datasets, like those produced by an existing controller or collected by a task-directed agent, are usually small and lack diversity. Even if we design agents to exhaustively explore environments, as is done in Unsupervised RL (Jaderberg et al., 2016), they suffer the impracticalities of the online RL algorithms we are trying to avoid: they act dangerously in safety-critical environments, and data collection is time-consuming.
Is it possible to relax the requirement for large and diverse datasets and do zero-shot RL in more realistic data settings? This is the primary question we address in this paper. We begin by establishing that FB suffers in this regime because it overestimates the value of out-of-distribution state-action pairs. As a resolution, we propose a fix that leverages ideas from _conservatism_ in offline RL (Kumar et al., 2020) to suppress either the values (VC-FB; Figure 1, right) or the future state visitation measures (MC-FB) of out-of-distribution state-action pairs. In experiments across varied domains, tasks and datasets, we show our proposals outperform vanilla FB by up to 150% in aggregate, and surpass a task-specific baseline despite lacking access to reward labels _a priori_. Finally, we establish that both VC-FB and MC-FB perform no worse than FB on full datasets, and so present little downside over their predecessor. We believe the proposals outlined in this work represent a step towards deploying zero-shot RL methods in the real world.
## 2 Background
**Preliminaries.** We consider the standard RL setup of a Markov decision process (MDP) (Sutton and Barto, 2018). We focus on the class of continuous, finite-horizon MDPs, characterised by the tuple \((\mathcal{S},\mathcal{A},\mathcal{R},\mathcal{P},T,\gamma)\), where \(\mathcal{S}\in\mathbb{R}^{n}\) and \(\mathcal{A}\in\mathbb{R}^{m}\) are continuous spaces of environment states and agent actions, \(\mathcal{P}:\mathcal{S}\times\mathcal{A}\mapsto\Delta(\mathcal{S})\) is a stochastic state transition function and \(\mathcal{R}:\mathcal{S}\mapsto\mathbb{R}_{\geq 0}\) is a function mapping states to non-negative rewards (Bellman, 1957). At each timestep \(t\), the agent observes state \(s_{t}\), selects action \(a_{t}\) according to a policy function \(\pi(s_{t})\), transitions to the next state \(s_{t+1}\sim\mathcal{P}(\cdot|s_{t},a_{t})\), and receives a reward \(r_{t+1}=\mathcal{R}(s_{t+1})\). This process repeats until a terminal timestep \(t=T\). The agent's task is to learn a policy that maximises the expected discounted sum of rewards \(\mathbb{E}_{\pi,\mathcal{P}}\sum_{t=0}^{T-1}\gamma^{t}\mathcal{R}(s_{t+1})\), where \(\gamma\in[0,1]\) is a discount factor.
**Problem formulation.** We are interested in pre-training agents to solve any arbitrary task in an environment, where each task is characterised by a reward function \(\mathcal{R}\). Therefore, instead of solving one MDP, we wish to solve a _set_ of MDPs, each sharing the same structure bar the reward functions (Borsa et al., 2018). Touati et al. (2022) call this zero-shot RL, which is equivalent to multi-task offline RL with no downstream planning allowance (Lazaric, 2012; Levine et al., 2020). During the pre-training phase, we assume the agent has access to a static dataset of reward-free transitions \(\mathcal{D}=\{(s_{i},a_{i},s_{i+1})\}_{i\in\{1,\dots,k\}}\) generated by an unknown behaviour policy. Once a task is revealed downstream, the agent must return a good policy for that task with no further planning or learning.
**Forward-backward representations.** FB representations rely on _successor measures_, which generalise Dayan (1993)'s successor representations to continuous MDPs (Blier et al., 2021). A successor measure gives the expected discounted time spent in each subset of future states \(S_{+}\subset\mathcal{S}\) after starting in state \(s_{0}\), taking action \(a_{0}\), and following policy \(\pi\) thereafter:
\[M^{\pi}(s_{0},a_{0},S_{+}):=\sum_{t=0}^{T-1}\gamma^{t}\Pr(s_{t+1}\in S_{+}|(s_ {0},a_{0}),\pi),\;\forall\;S_{+}\subset\mathcal{S}. \tag{1}\]
Figure 1: **FB’s failure mode on sub-optimal datasets and VC-FB’s resolution. (_Left_) Zero-shot RL methods must train on a dataset which was collected by a behaviour policy optimising against task \(z_{\mathrm{collect}}\), yet generalise to new tasks \(z_{\mathrm{eval}}\). Both tasks have associated optimal value functions \(Q^{*}_{z_{\mathrm{collect}}}\) and \(Q^{*}_{z_{\mathrm{eval}}}\) for a given marginal state. (_Middle_) Forward-backward (FB) representations overestimate the value of actions not in the dataset for all tasks. (_Right_) Value-conservative forward-backward (VC-FB) representations suppress the value of actions not in the dataset for all tasks. Black dots represent state-action samples present in the dataset.**
For any reward function \(\mathcal{R}\) and policy \(\pi\), the state-action value (\(Q\)) function is the integral of \(\mathcal{R}\) with respect to \(M^{\pi}\):
\[Q_{\mathcal{R}}^{\pi}(s_{0},a_{0}):=\int_{s_{+}\in\mathcal{S}}\mathcal{R}(s_{+} )M^{\pi}(s_{0},a_{0},\mathrm{d}s_{+}). \tag{2}\]
An FB representation approximates the successor measures of optimal policies for an infinite family of reward functions, and so can be thought of as a _universal successor measure_. It parameterises these policies \(\pi_{z}\) by task vectors \(z\in\mathbb{R}^{d}\). The representation consists of a _forward_ model \(F:\mathcal{S}\times\mathcal{A}\times\mathbb{R}^{d}\mapsto\mathbb{R}^{d}\), which outputs an embedding vector summarising the distribution of future states for a given state-action pair and policy, and a _backward_ model \(B:\mathcal{S}\mapsto\mathbb{R}^{d}\), which outputs an embedding vector summarising the distribution of states visited before a given state. Together, they form a rank-\(d\) approximation to the successor measure for the entire policy family:
\[M^{\pi_{z}}(s_{0},a_{0},\mathrm{d}s_{+})\approx F(s_{0},a_{0},z)^{\top}B(s_{+ })\rho(\mathrm{d}s_{+}),\ \forall\ s_{+}\in\mathcal{S}, \tag{3}\]
where \(\rho\) is the state marginal in the training dataset \(\mathcal{D}\). Since the successor measure satisfies a Bellman equation, \(F\) and \(B\) can be trained to improve the approximation in Equation 3 across a distribution \(\mathcal{Z}\) of task vectors via a temporal difference (TD) method (Samuel, 1959; Sutton, 1988):
\[\mathcal{L}_{\text{FB}}=\mathbb{E}_{(s_{t},a_{t},s_{t+1},s_{+}) \sim\mathcal{D},z\sim\mathcal{Z}}\big{[}\big{(}F(s_{t},a_{t},z)^{\top}B(s_{+} )-\gamma\bar{F}(s_{t+1},\pi_{z}(s_{t+1}),z)^{\top}\bar{B}(s_{+})\big{)}^{2}\\ -2F(s_{t},a_{t},z)^{\top}B(s_{t+1})], \tag{4}\]
where \(s_{+}\) is sampled independently from \((s_{t},a_{t},s_{t+1})\) and \(\bar{F}\) and \(\bar{B}\) are lagging target networks. See Touati et al. (2022) for a full derivation of this loss. By Equation 2, the trained representation can then be used to approximate the \(Q\) function for any \(\pi_{z}:z\sim\mathcal{Z}\) and reward function \(\mathcal{R}\):
\[Q_{\mathcal{R}}^{\pi_{z}}(s_{0},a_{0}) \approx\int_{s_{+}\in\mathcal{S}}\mathcal{R}(s_{+})F(s_{0},a_{0},z)^{\top}B(s_{+})\rho(\mathrm{d}s_{+}) \tag{5}\] \[=F(s_{0},a_{0},z)^{\top}\mathbb{E}_{s_{+}\sim\rho}[\ \mathcal{R}(s_{+})B(s_{+})\ ].\]
Touati et al. (2022) show that if the task vector \(z\) and policy \(\pi_{z}\) are defined as follows:
\[z=\mathbb{E}_{s_{+}\sim\rho}[\ \mathcal{R}(s_{+})B(s_{+})\ ]; \tag{6}\]
\[\pi_{z}(s)=\text{argmax}_{a}F(s,a,z)^{\top}z; \tag{7}\]
and if \(z\) lies within the task distribution \(\mathcal{Z}\), we can expect \(\pi_{z}\) to be a _near-optimal policy_ for \(\mathcal{R}\). The approximation becomes more exact as the embedding dimensionality \(d\) grows. We thereby obtain a mechanism for zero-shot RL. In practice, given a dataset \(\mathcal{D}_{\text{labelled}}\) of reward-labelled states distributed as \(\rho\), we can approximate \(z\approx\mathbb{E}_{(s,r)\sim\mathcal{D}_{\text{labelled}}}[\ rB(s)\ ]\) by simple averaging. For the special case of a goal-reaching task with goal \(s_{g}\), the task vector can be defined directly as \(z=B(s_{g})\). In theory, \(\pi_{z}\) is then given analytically via Equation 7, but continuous action spaces necessitate learning a separate task-conditioned policy model in an actor-critic formulation (Lillicrap et al., 2016).
## 3 Conservative Forward-Backward Representations
We begin by examining the FB loss (Equation 4) more closely. The TD target includes an action produced by the current policy \(a_{t+1}=\pi_{z}(s_{t+1})\). Equation 7 shows that this action is the current best estimate of the optimal action in state \(s\) for task \(z\). When training on a finite dataset, this maximisation does not constrain the policy to actions observed in the dataset, and so the policy can become biased towards out-of-distribution (OOD) actions thought to be of high value-a well-observed phenomenon in offline RL (Kumar et al., 2019, 2020). In such instances, the TD targets may be evaluated at state-action pairs outside the dataset, making them unreliable and causing errors in the measure and value predictions. Figure 2 shows the overestimation of \(Q\) as dataset size and quality is varied. The smaller and less diverse the dataset, the more \(Q\) values tend to be overestimated.
The canonical fix for value overestimation in offline RL is conservative \(Q\)-learning (CQL) (Kumar et al., 2019, 2020). Intuitively, CQL suppresses the values of OOD actions to be below those of in-distribution actions, and so approximately constrains the agent's policy to actions observed in the dataset. To achieve this, a new term is added to the usual \(Q\) loss function
\[\mathcal{L}_{\text{CQL}}=\alpha\cdot\Big{(}\mathbb{E}_{s\sim\mathcal{D}}[\max _{a}Q(s,a)]-\mathbb{E}_{(s,a)\sim\mathcal{D}}[Q(s,a)]\Big{)}+\mathcal{L}_{ \text{Q}}, \tag{8}\]
where \(\alpha\) is a scaling parameter and \(\mathcal{L}_{\text{Q}}\) represents the normal TD loss on \(Q\). This proves to be a useful inductive bias, mitigating value overestimation and producing state-of-the-art results on many offline RL benchmark tasks (Fu et al., 2020).
We can replicate a similar inductive bias in the FB context, substituting \(F(s,a,z)^{\top}z\) for \(Q\) in Equation 8 and adding the normal FB loss (Equation 4)
\[\mathcal{L}_{\text{VC-FB}}=\alpha\cdot\left(\mathbb{E}_{s\sim\mathcal{D},z \sim\mathcal{Z}}[\max_{a}F(s,a,z)^{\top}z]-\mathbb{E}_{(s,a)\sim\mathcal{D},z \sim\mathcal{Z}}[F(s,a,z)^{\top}z]\right)+\mathcal{L}_{\text{FB}}. \tag{9}\]
The key difference between Equations 8 and 9 is that the former suppresses the value of OOD actions for one task, whereas the latter does so _for all tasks in an environment_. We discuss the usefulness of this inductive bias in Section 3.1. We call models learnt with this loss _value-conservative forward-backward representations_ (VC-FB).
Sampling uniformly from \(\mathcal{Z}\) in Equation 9 may reduce values for tasks or goals that are never used in practice. Instead, it may prove better to direct updates towards tasks we are likely to encounter. To do so, we first recall that the backward embedding of a future state is equivalent to the task vector for reaching that state, i.e. \(z_{+}=B(s_{+})\). Substituting this into Equation 9, we obtain a new loss that penalises OOD actions with respect to reaching some goal state \(s_{+}\)
\[\mathcal{L}_{\text{MC-FB}}=\alpha\cdot\left(\mathbb{E}_{(s,s_{+ })\sim\mathcal{D},z\sim\mathcal{Z}}[\max_{a}F(s,a,z)^{\top}B(s_{+})]-\mathbb{ E}_{(s,a,s_{+})\sim\mathcal{D}}[F(s,a,z)^{\top}B(s_{+})]\right)\\ +\mathcal{L}_{\text{FB}}. \tag{10}\]
Whereas Equation 9 suppresses values of OOD actions for all tasks, Equation 10 suppresses the expected visitation count to goal state \(s_{+}\) when taking an OOD action, because \(M(s,a,z,s_{+})=F(s,a,z)^{\top}B(s_{+})\). As such, we call this variant a _measure-conservative forward-backward representation_ (MC-FB). We note that this variant confines conservative penalties to the part of \(z\)-space occupied by the backward embeddings of \(s_{+}\sim\mathcal{D}\), which in practice may be far smaller than the \(z\)-space coverage obtained via \(z\sim\mathcal{Z}\). The effectiveness of each bias is explored in Section 4.
Practical implementations of conservative FB representations require two new model components: a conservative penalty scaling factor \(\alpha\) and a way of computing the max operation over a continuous action space. Empirically, we observe fixed values of \(\alpha\) leading to fragile performance, so we dynamically tune it at each learning step using Lagrangian dual-gradient descent as per Kumar et al. (2020). Appendix B.1.4 discusses this procedure in more detail. The max operation is approximated by computing a log-sum-exponential over a finite set of \(Q\) values derived from a finite set of action samples. There are many ways to choose these samples, but again, we follow the recommendations of Kumar et al. (2020) and mix actions sampled uniformly from a random policy and the current policy. Appendix B.1.3 provides full detail. Code snippets demonstrating the required programmatic changes to a vanilla FB implementation are provided in Appendix G. We emphasise these additions represent only a small increase in the number of lines required to implement FB.
Figure 2: **FB value overestimation with respect to dataset size \(n\) and quality.** Log \(Q\) values and IQM of rollout performance on all Point-mass Maze tasks for datasets (a) Rnd and (b) Random. \(Q\) values predicted during training increase as both the size and “quality” of the dataset decrease. This contradicts the low return of all resultant policies. Informally, we say the Rnd dataset is “high” quality, and the Random dataset is “low” quality–see Appendix A.2 for more details.
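To make the preceding description concrete, the snippet below is a minimal sketch of the value-conservative penalty in Equation 9, with the max approximated by a log-sum-exp over uniformly sampled and current-policy actions. It is not the authors' Appendix G code: the call signatures of the F network and the policy are assumed, and \(\alpha\) is held fixed here rather than tuned by Lagrangian dual-gradient descent.

```python
# A minimal sketch of the VC-FB conservative penalty added to the usual FB loss.
import torch

def vc_fb_penalty(F, states, dataset_actions, zs, policy, alpha=0.5, n_rand=10):
    batch, action_dim = dataset_actions.shape
    rand_a = torch.rand(batch, n_rand, action_dim) * 2 - 1             # uniform action samples
    pol_a = policy(states, zs).unsqueeze(1).expand(-1, n_rand, -1)     # current-policy samples
    cat_a = torch.cat([rand_a, pol_a], dim=1)                          # (batch, 2*n_rand, A)

    s_rep = states.unsqueeze(1).expand(-1, 2 * n_rand, -1)
    z_rep = zs.unsqueeze(1).expand(-1, 2 * n_rand, -1)
    q_ood = (F(s_rep, cat_a, z_rep) * z_rep).sum(-1)                   # F(s, a, z)^T z, sampled a
    q_data = (F(states, dataset_actions, zs) * zs).sum(-1)             # in-dataset actions

    # The log-sum-exp softly approximates max_a over the sampled OOD actions.
    return alpha * (torch.logsumexp(q_ood, dim=1) - q_data).mean()
```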
### A Didactic Example
To understand situations in which a conservative world model may be useful, we introduce a modified version of Point-mass Maze from the ExORL benchmark (Yarats et al., 2022). Episodes begin with a point-mass initialised in the upper left of the maze (\(\otimes\)), and the agent is tasked with selecting \(x\) and \(y\) tilt directions such that the mass is moved towards one of two goal locations (\(\otimes\) and \(\otimes\)). The action space is two-dimensional and bounded in \([-1,1]\). We take the Rnd dataset and remove all "left" actions such that \(a_{x}\in[0,1]\) and \(a_{y}\in[-1,1]\), creating a dataset that has the necessary information for solving the tasks, but is inexhaustive (Figure 3 (a)). We train FB and VC-FB on this dataset and plot the highest-reward trajectories-Figure 3 (b) and (c). FB overestimates the value of OOD actions and cannot complete either task. Conversely, VC-FB synthesises the requisite information from the dataset and completes both tasks.
The above example is engineered for exposition, but we expect conservatism to be helpful in more general contexts. Low-value actions for one task can often be low value for other tasks and, importantly, the more performant the behaviour policy, the less likely such low value actions are to be in the dataset. Consider the four tasks in the Walker environment: \(\{\texttt{walk},\texttt{stand},\texttt{run},\texttt{flip}\}\), where all tasks require the robot to stand from a seated position before exemplifying different behaviours. If the dataset includes actions that are antithetical to standing, as might be the case if the behaviour policy used to collect the dataset is highly exploratory, then both FB and VC-FB can observe their low value across tasks. However, if the dataset does not include such actions, as might be the case if it was collected via a near-optimal controller that never fails to stand, then FB may overestimate the value of not standing across tasks, and VC-FB would correctly devalue them. We extend these observations to more varied environments in the section that follows.
## 4 Experiments
In this section we perform an empirical study to evaluate our proposals. We seek answers to three questions: **(Q1)** Can our proposals from Section 3 improve FB performance on small and/or low-quality datasets? **(Q2)** How does the performance of VC-FB and MC-FB vary with respect to task type and dataset diversity? **(Q3)** Do we sacrifice performance on full datasets for performance on small and/or low-quality datasets?
Figure 3: **Ignoring out-of-distribution actions. The agents are tasked with learning separate policies for reaching \(\otimes\) and \(\otimes\). (a) Rnd dataset with all “left” actions removed; quivers represent the mean action direction in each state bin. (b) Best FB rollout after 1 million learning steps. (c) Best VC-FB performance after 1 million learning steps. FB overestimates the value of OOD actions and cannot complete either task; VC-FB synthesises the requisite information from the dataset and completes both tasks.**
### Setup
We respond to these questions using the ExORL benchmark, which provides datasets collected by unsupervised exploratory algorithms on the DeepMind Control Suite (Yarats et al., 2022; Tassa et al., 2018). We select three of the same domains as Touati and Ollivier (2021): Walker, Quadruped and Point-mass Maze, but substitute Jaco for Cheetah. This provides two locomotion domains and two goal-reaching domains. Within each domain, we evaluate on all tasks provided by the DeepMind Control Suite for a total of 17 tasks across four domains. Full details are provided in Appendix A.1.
We pre-train on three datasets of varying quality. Although there is no unambiguous metric for quantifying dataset quality, we use the reported performance of offline TD3 on Point-mass Maze for each dataset as a proxy. We choose datasets collected via Random Network Distillation (Rnd) (Burda et al., 2018), Diversity is All You Need (Diayn) (Eysenbach et al., 2018), and Random policies, where agents trained on Rnd are the most performant, on Diayn are median performers, and on Random are the least performant. As well as selecting for quality, we also select for size. The ExORL datasets have up to 10 million transitions per domain. We uniformly sub-sample 100,000 transitions from these to create datasets that may be considered more realistically sized for real-world applications. More details on the datasets are provided in Appendix A.2, which includes a visualisation of the state coverage for each dataset on Point-mass Maze (Figure 6).
### Baselines
We use FB as described in Touati et al. (2022) as our sole zero-shot RL baseline. Though other methods exist, we believe the performance gap they report is sufficient that we can rule out any other methods as state-of-the-art. As single-task RL baselines, we use CQL and offline TD3 trained on the same datasets relabelled with task rewards. CQL is representative of what a conservative algorithm can achieve when optimising for one task in a domain rather than all tasks. Offline TD3 exhibits the best aggregate single-task performance on the ExORL benchmark, so it should be indicative of the maximum performance we could expect to extract from a dataset. Full implementation details and hyperparameters are provided in Appendix B.2 and B.3.
We evaluate the performance of VC-FB, MC-FB and our baselines across five random seeds. To mitigate the well-established pitfalls of stochastic RL algorithm evaluation, we employ the best practice recommendations of Agarwal et al. (2021) when reporting observed performance. Concretely, we run each algorithm for 1 million learning steps, evaluating performance at checkpoints of 20,000 steps. At each checkpoint, we perform 10 rollouts and record the interquartile mean performance across each task. We then calculate the interquartile mean of performance across seeds for each checkpoint to create the learning curves reported in Appendix F. Results are reported with 95% confidence intervals obtained via stratified bootstrapping (Efron, 1992). Full implementation details are provided in Appendix B.1.
### Results
**Q1.** We report the aggregate performance of all FB algorithms and CQL in Figure 4. Both MC-FB and VC-FB stochastically dominate FB, achieving **150%** and **137%** of its performance respectively. MC-FB and VC-FB outperform our single-task baseline in expectation, reaching 111% and 120% of CQL performance respectively _despite not having access to task-specific reward labels and needing to fit policies for all tasks_. This is a surprising result, and to the best of our knowledge, the first time a multi-task offline agent has been shown to outperform a single-task analogue. CQL outperforms offline TD3 in aggregate, so we drop offline TD3 from the core analysis, but report its full results in Appendix C alongside all other methods. We note FB achieves 80% of single-task offline TD3, which roughly aligns with the 85% performance on the full datasets reported by Touati et al. (2022).
**Q2.** We decompose the methods' performance with respect to domain and dataset diversity in Figure 5. The largest gap in performance between the conservative FB variants and FB is on Rnd, the highest-quality dataset. VC-FB and MC-FB reach 253% and 184% of FB performance respectively, and outperform CQL on three of the four domains. On Diayn, the conservative variants outperform all methods and reach 135% of CQL's score. On the Random dataset, all methods perform similarly poorly, except for CQL on Jaco, which significantly outperforms all methods. However, in general, these results suggest the Random dataset is not informative enough to extract valuable policies. There appears to be little correlation between the type of domain (Appendix A.1) and the score achieved by any method.
Figure 4: **Aggregate zero-shot performance.**_(Left)_: IQM of task scores across datasets and domains, normalised against the performance of CQL, our baseline. _(Right)_ Performance profiles showing the distribution of scores across all tasks and domains. Both conservative FB variants stochastically dominate vanilla FB. The black dashed line represents the IQM of CQL performance across all datasets, domains, tasks and seeds.
Figure 5: **Performance by dataset and domain.** IQM scores across tasks and seeds with 95% confidence intervals. In general, the conservative FB variants perform better as dataset quality improves.
**Q3.** We report the performance of all FB methods across all domains when trained on the full Rnd dataset in Table 1. Both conservative FB variants maintain (and slightly exceed) the performance of vanilla FB in expectation and exhibit identical aggregate performance. These results suggest that performance on large, diverse datasets does not suffer as a consequence of the design decisions made to improve performance on our small datasets that lack diversity. Therefore, we can safely adopt conservatism into FB without worrying about performance trade-offs.
## 5 Discussion and Limitations
**Performance discrepancy between conservative variants.** VC-FB outperforms MC-FB in aggregate, but not in every constituent domain, which raises the question of when one variant should be selected over the other. Suppose the tasks in our domain of interest are distributed uniformly in \(z\)-space. In that case, we should expect VC-FB to outperform MC-FB as its conservative updates will match the underlying task distribution. Equally, if the tasks are clustered around state embeddings in \(z\)-space, then we should expect MC-FB to outperform VC-FB. In our experiments, we assumed the locomotion tasks would be distributed near-uniformly in \(z\)-space, and the goal-reaching tasks would be clustered around the goal state embeddings, thus implying better VC-FB performance on locomotion tasks and better MC-FB performance on goal-reaching tasks. There is little evidence to support this hypothesis. Identifying the distribution of tasks in \(z\)-space _a priori_, and thus selecting the correct model, is non-trivial and requires careful further investigation. If progress can be made here, then \(\mathcal{Z}\) could be tuned to better match the underlying task distribution of the domain, conservative updates could be made with respect to this improved \(\mathcal{Z}\), and performance of all methods may improve.
**Computational expense of conservative variants.** The max value estimator used by the conservative FB variants performs log-sum-exponentials and concatenations across large tensors, both of which are expensive operations. We find that these operations, which are the primary contributors to the additional run-time, increase the training duration by approximately \(3\times\) over vanilla FB. An FB training run takes approximately 4 hours on an A100 GPU, whereas the conservative FB variants take approximately 12 hours. It seems highly likely that more elegant implementations exist that would improve training efficiency. We leave such an exploration for future work.
**Learning instability.** We report the learning curves for all algorithms across domains, datasets, and tasks in Appendix F. We note many instances of instability which would require practitioners to invoke early stopping. However, both CQL and offline TD3, our task-specific baselines, exhibit similar instability, so we do not consider this behaviour to be an inherent flaw of any method, but rather an indication of the difficulty of learning representations from sub-optimal data. Future work that stabilises FB learning dynamics could boost performance and simplify their deployment by negating the need for early stopping.
We provide detail of negative results in Appendix D to help inform future research.
## 6 Related Work
**Conservatism in offline RL.** The need for conservatism in offline RL was first highlighted by Fujimoto et al. (2019) in which they propose batch-constrained \(Q\)-learning (BCQ), a method for aligning the distributions of agent and behaviour policies, as a remedy. Instead of operating on policies, Kumar et al. (2020)'s CQL operates on values, suppressing the value of OOD actions
\begin{table}
\begin{tabular}{l l l l l}
**Domain** & **Task** & **FB** & **VC-FB** & **MC-FB** \\ \hline Walker & all tasks & 639 (616–661) & 659 (647–670) & 651 (632–671) \\ Quadruped & all tasks & 656 (638–674) & 579 (522–635) & 635 (628–642) \\ Maze & all tasks & 219 (86–353) & 287 (117–457) & 261 (159–363) \\ Jaco & all tasks & 39 (29–50) & 33 (24–42) & 34 (18–51) \\ \hline All & all tasks & 361 & **381** & **381** \\ \hline \end{tabular}
\end{table}
Table 1: **Performance on full Rnd dataset.** Aggregated IQM scores for all tasks with 95% confidence intervals, averaged across three seeds. Both VC-FB and MC-FB maintain the performance of FB.
to be lower than in-distribution actions. More recently, Lyu et al. (2022) provided an improved lower bound on the performance of CQL with mildly conservative \(Q\)-learning (MCQ), which is less conservative on OOD actions close to the dataset, whose values are often higher than vanilla CQL predicts. They prove that MCQ induces policies that are at least as performant as the behaviour policy. We note that these methods could be directly ported into our proposal, which may improve performance further.
The most similar works to ours are Kidambi et al. (2020) and Yu et al. (2020); both are model-based offline RL methods that employ conservatism in the context of a dynamics model. Kidambi et al. (2020) suppress the reward of predicted rollouts using a binary operator that determines whether the rollout is within dataset support, and Yu et al. (2020) suppress rewards in proportion to the uncertainty of rollouts as predicted by their dynamics model. Both are analogous to our OOD value suppression in VC-FB, but are limited to the single-task setting and require a planning algorithm to optimise the policy at test time. To our knowledge, the only work that studies zero-shot RL with sub-optimal data is Kumar et al. (2022)'s work on scaling CQL. In multi-task Atari, they train one base network on a large, diverse dataset of sub-optimal trajectories, then optimise separate actor heads for each Atari task. They show that, provided a sufficiently large dataset, high parameter count CQL networks can outperform the behaviour policy that created the dataset. However, the generality of this approach is limited by the need for a separate head per task, meaning we need to enumerate the tasks we wish to solve _a priori_. The parameterisation by task vectors \(z\) means that FB (and hence, our approach) automatically learns policies for all tasks without such enumeration.
**World models.** The canonical multi-task model is Schaul et al. (2015)'s universal value function approximator (UVFA), which conditions state-action value predictions on tasks and can be used by a planning algorithm to return a task-conditioned policy. The planning requirement is removed by Barreto et al. (2017)'s successor features, enabling policies to be returned from UVFA-style models using only rudimentary matrix operations. These works laid the foundations for universal successor features (Borsa et al., 2018) and FB representations (Touati and Ollivier, 2021; Touati et al., 2022) that can return policies for any task in an environment instantly after a pre-training phase. Sekar et al. (2020)'s Plan2Explore showed that agents trained solely inside a world model can synthesise zero-shot policies that are comparable with those obtained via single-task training. Ghosh et al. (2023)'s intention-conditioned value functions can be thought of as FB representations without actions, and so predict how states evolve with respect to a task, i.e. \((s,z)\rightsquigarrow s_{+}\) rather than \((s,a,z)\rightsquigarrow s_{+}\). This is helpful as it allows representations to be learnt from data without action or reward labels, increasing the scope of datasets on which RL algorithms can be trained. To our knowledge, no prior work has acknowledged the deficiencies of any of these models with sub-optimal datasets, and no work has attempted to augment these models with conservatism.
## 7 Conclusion
In this paper, we explored training agents to perform zero-shot reinforcement learning (RL) with sub-optimal data. We established that the existing state-of-the-art method, FB representations, suffers in this regime because it overestimates the value of out-of-distribution state-action pairs. As a resolution, we proposed a family of _conservative_ FB algorithms that suppress either the values (VC-FB) or measures (MC-FB) of out-of-distribution state-action pairs. In experiments across various domains, tasks and datasets, we showed our proposals outperform vanilla FB by up to 150% in aggregate and surpass our task-specific baseline despite lacking access to reward labels _a priori_. In addition to improving performance when trained on sub-optimal datasets, we showed that performance on large, diverse datasets does not suffer as a consequence of our design decisions. Our proposals are a step towards the use of zero-shot RL methods in the real world.
#### Acknowledgments
We thank Sergey Levine for helpful feedback on the core and finetuning experiments, and Alessandro Abate and Yann Ollivier for reviewing earlier versions of this manuscript. Computational resources were provided by the Cambridge Centre for Data-Driven Discovery (C2D3) and Bristol Advanced Computing Research Centre (ACRC). This work was supported by an EPSRC DTP Studentship (EP/T517847/1) and Emerson Electric. |
2309.06662 | Oceananigans.jl: A model that achieves breakthrough resolution, memory
and energy efficiency in global ocean simulations | Climate models must simulate hundreds of future scenarios for hundreds of
years at coarse resolutions, and a handful of high-resolution decadal
simulations to resolve localized extreme events. Using Oceananigans.jl, written
from scratch in Julia, we report several achievements: First, a global ocean
simulation with breakthrough horizontal resolution -- 488m -- reaching 15
simulated days per day (0.04 simulated years per day; SYPD). Second,
Oceananigans simulates the global ocean at 488m with breakthrough memory
efficiency on just 768 Nvidia A100 GPUs, a fraction of the resources available
on current and upcoming exascale supercomputers. Third, and arguably most
significant for climate modeling, Oceananigans achieves breakthrough energy
efficiency reaching 0.95 SYPD at 1.7 km on 576 A100s and 9.9 SYPD at 10 km on
68 A100s -- the latter representing the highest horizontal resolutions employed
by current IPCC-class ocean models. Routine climate simulations with 10 km
ocean components are within reach. | Simone Silvestri, Gregory Wagner, Christopher Hill, Matin Raayai Ardakani, Johannes Blaschke, Jean-Michel Campin, Valentin Churavy, Navid Constantinou, Alan Edelman, John Marshall, Ali Ramadhan, Andre Souza, Raffaele Ferrari | 2023-09-13T01:35:23Z | http://arxiv.org/abs/2309.06662v1 | Oceananigans.jl: A model that achieves breakthrough resolution, memory and energy efficiency in global ocean simulations
###### Abstract
Climate models must simulate hundreds of future scenarios for hundreds of years at coarse resolutions, and a handful of high resolution decadal simulations to resolve localized extreme events. Using Oceananigans.jl, written from scratch in Julia, we report several achievements: First, a global ocean simulation with breakthrough _horizontal resolution_ -- 488m -- reaching 15 simulated days per day (0.04 simulated years per day; SYPD). Second, Oceananigans simulates the global ocean at 488m with breakthrough _memory efficiency_ on just 768 Nvidia A100 GPUs, a fraction of the resources available on current and upcoming exascale supercomputers. Third, and arguably most significant for climate modeling, Oceananigans achieves breakthrough _energy efficiency_ reaching 0.95 SYPD at 1.7 km on 576 A100s and 9.9 SYPD at 10 km on 68 A100s -- the latter representing the highest horizontal resolutions employed by current IPCC-class ocean models. Routine climate simulations with 10 km ocean components are within reach.
## 1 Justification
Oceananigans.jl -- a new ocean model written from scratch in Julia -- achieves ocean simulations with breakthrough resolution, memory and energy efficiency, realizing 0.041 simulated years per day (SYPD) at 488 m on 768 Nvidia A100s, 0.95 SYPD at 1.7 km on 576 A100s, and 9.9 SYPD at 10 km on 68 A100s.
## 2 Performance Attributes
\begin{tabular}{c|l} \hline \hline
Categories & Scalability, time-to-solution, energy-to-solution. \\
Type of method & Fully explicit with sub-cycling. \\
Results basis & Whole application excluding I/O. \\
Numerical precision & Both 64- and 32-bit cases measured. \\
System scale & Results measured on full-scale systems. \\
Measurement mechanism & Timers, memory used and energy used. \\ \hline \hline
\end{tabular}
## 3 Overview of the Problem
Climate models are essential for predicting where, when, and how climate change threatens Earth's ecosystems and human civilization. But current climate models, which capture only the broadest aspects of global warming, fall far short of providing the needed accuracy and granularity required to design and implement costly adaptation and mitigation strategies [14]. Significant reduction of the uncertainty of climate predictions is potentially worth trillions of dollars [20].
Climate models simulate the three-dimensional fluid dynamics, thermodynamics, chemistry, and biology of the atmosphere, ocean, and land to predict the hydrological cycle, carbon cycle and the net energy imbalance of the Earth system. While typical climate models use coarse resolutions of 25-100 km to simulate the numerous climate scenarios required by the Intergovernmental Panel on Climate Change (IPCC) [25], a handful of state-of-the-art climate simulations have been performed at higher resolutions of O(10 km) at astronomical expense. At either resolution there are many processes, such as clouds and ocean turbulence, that cannot be explicitly simulated and are instead approximated by empirical formulae called _parameterizations_. Biases due to inadequate parameterizations dominate the uncertainty of climate predictions over the next few decades [34, 23].
The prevailing strategy to reduce climate model uncertainty is to refine model resolution as much as possible [34]. For example, at horizontal resolutions of 1 km a substantial fraction of atmospheric convection and ocean turbulence are explicitly modeled by Newton's laws of motion, greatly reducing the impact of parameterizations [34]. High-resolution climate modeling is further required to make predictions for specific regions, providing information for local decision makers on adaptation and mitigation [14].
Yet the "resolution strategy" is fundamentally limited: even at 1 km resolution many climate-relevant physical processes remain unresolved [49]. Worse, processes such as sea ice dynamics, biology, or cloud-aerosol interaction will never be resolved because accurate macroscopic laws do not exist. Absent theoretical breakthroughs, such "irreducible" uncertainties can be addressed only by leveraging Earth system observations through advances in data assimilation and machine learning [41]. Data-driven optimization of climate models requires _ensembles_ of climate predictions, rather than single predictions at the highest affordable resolution. Ensembles of simulations are also required to explore emission scenarios and to estimate the impact of initial condition uncertainty and internal variability.
Consequently, reducing the uncertainty of climate predictions demands not _just_ higher resolution, but _more efficient resource utilization_ to enable hundreds to thousands of relatively high-resolution simulations. As an example, we consider the computational requirements to enable 100-simulation ensembles using all 37,888 AMD MI250 GPUs of the Frontier exascale supercomputer: completing an ensemble of 300-year simulations (200 years of spin-up + 100 years of prediction) within one month of wall clock time requires a climate model that can achieve 10 simulated years per day (SYPD) using 378 GPUs, or 1/100th of Frontier's resources. Disruptive progress on climate modeling requires not just scalable performance for a single, high-resolution simulation, but advances in _efficiency_ to meet this ensemble-based "10 per 100th" benchmark [40].
Our submission uses the ocean component of a new climate model being developed by the Climate Modeling Alliance [44]. The ocean contributes key uncertainty to climate predictions due to its prominent role in the Earth system's heat and carbon cycles. At 10 km resolutions, where ocean model uncertainties are significantly reduced, the ocean often is the most expensive climate model component [19]. This calls for a step change in ocean model performance.
## 4 Current State of the Art
We are aware of only three global ocean simulations that have achieved resolutions finer than 5 kilometers -- all at tremendous computational expense. In 2014, MITgcm [29] was used to perform the one year, tidal-forced ice-ocean simulation "LLC4320" [45], which exhibits 2.2 km horizontal resolution with 90 vertical levels. LLC4320 achieved 0.047 simulated years per day (SYPD) using 70,000 cores of the NASA Pleiades system.
FIO-COM32 [50] ran at \(\sim\)2.5 km (1/32\({}^{\text{nd}}\) degree) horizontal resolution with 90 vertical levels for 3.5 years. [48] ported LICOM3 to GPUs to realize 0.51 SYPD at 1/20\({}^{\text{th}}\) degree horizontal resolution with 60 vertical levels using 384 MI50 AMD GPUs, and further managed to scale to 26200 MI50s with a strong scaling efficiency of 8%.
The largest ocean simulations used in current IPCC-class climate models, which typically require faster-time-to-solution to support longer simulations, have horizontal resolutions of roughly 10 km. [8] describes output from four 60-year ocean simulations following the OMIP-2 protocol with 8 km (1/12\({}^{\text{th}}\) degree), 10 km, and two with 11 km (1/10\({}^{\text{th}}\) degree). [11] report a 110-year simulation at 10 km (1/10\({}^{\text{th}}\) degree) horizontal resolution, the longest high resolution OMIP-2-style run. Some of the highest resolution climate models are the iHESP CESM-based model with 25km-10km atmosphere-ocean resolution [51], achieving 3.4 SYPD, and the 50km-10km HadGEM3-GC3.1 submission to HighResMIP [18, 37], achieving 0.4 SYPD.
At 3.4 SYPD, the iHESP CESM achieves sufficient time-to-solution for hundreds to thousands of years of simulated climate. But such a simulation comes at a high price, requiring 40% of the Sunway TaihuLight supercomputer [51] and 4 million cores consuming 6 MW for hundreds of days of wall clock time. Enabling the large ensembles of high-resolution simulations needed to improve climate prediction requires both performance at scale and efficient _resource utilization_.
Figure 1 plots simulated years per _mega-watt-hour_ (SYPMWh) against resolution for state-of-the-art ocean models. The SYPMWh metric encodes the efficiency requirement needed to make progress on climate uncertainty with next-generation climate models: in particular, we require both higher-resolution models (moving rightwards in figure 1) and more efficient models (moving upwards in figure 1). For completeness, we report SYPMWh also for two GPU-based models: Veros [22], an ocean model, and COSMO [15], an atmospheric model. The present nomination is shown with stars, from which we see significant performance gains compared to the existing state of the art.
Figure 1: Simulated years computed per megawatt-hour of energy (SYPMWh) versus number of grid points for state-of-the-art atmosphere and ocean models. Stars show the performance of our ocean model in a realistic and “aqua planet” (AP) setup.
## 5 Innovations
Our achievement is three-fold: first, using new software written in the Julia programming language called Oceananigans.jl [35], we report a near-global ocean simulation with highest-ever horizontal resolution (488 m) reaching 15 simulated days per day (0.04 SYPD). Second, Oceananigans performs this simulation with breakthrough _memory efficiency_ on just 768 NVidia A100 GPUs, and thus a fraction of the available resources on current and upcoming exascale supercomputers. Third, and arguably most important, Oceananigans achieves breakthrough _energy efficiency_, simulating the global ocean at 0.95 SYPD with 1.7 km resolution on 576 A100s, and at 10 km -- the highest horizontal resolution employed by an IPCC-class ocean model -- achieving 9.9 SYPD on 68 Nvidia A100s. This final milestone proves the feasibility of _routine_ climate simulations with 10 km ocean components, a crucial resolution threshold at which ocean macroturbulence (the most energetic ocean motions with scales between 10-100 km) is fully resolved.
We attribute these achievements first and foremost to a high-risk, high-reward strategy to develop a new ocean model from scratch in Julia with a specific focus on GPU performance and memory efficiency. Additional crucial ingredients include advances in numerical methods for finite volume fluid dynamics on the sphere and a novel optimization for simulating ocean free surface dynamics that achieves unprecedented GPU scalability.
### Starting from scratch with Julia
Oceananigans.jl is an open-source library for ocean-flavored fluid dynamics written from scratch in Julia [7]. Julia is a dynamic high-level programming language that leverages Just-In-Time (JIT) compilation and LLVM [24] to achieve performance competitive with traditional HPC languages like C or Fortran. Julia has gathered interest as potential language for HPC [17, 10, 16, 21, 27] and provides easy integration with MPI [47, 38]. Most of Oceananigans.jl software is hardware-agnostic through the Julia package KernelAbstractions.jl [10], which enables performance portability targeting CPUs and different GPU vendors using the JuliaGPU [6, 5] software stack, similar to the capabilities provided by Kokkos [9], OCCA [30], and HIP [1].
To our knowledge, Oceananigans is the first ocean model written from scratch for GPUs, rather than ported from existing CPU code. Starting from scratch and using the Julia programming language allowed us to rethink the typical patterns used in ocean and atmosphere dynamical cores. In particular, we developed a system of composable atomic operators that leverages Julia's functional programming paradigm and effective inlining capabilities to recursively construct large expression trees for calculus on staggered finite volume grids. Using this composable operator system, we fuse the entire tendency computation for each prognostic variable into a single compute-heavy kernel, each of which depends on only two intermediate diagnostic variables representing hydrostatic pressure and vertical diffusivity (which is treated implicitly using a predictor-corrector method).
Such a high degree of abstraction yields a number of innovations: first, kernel fusion maximizes efficiency on GPUs. Second, almost all intermediate quantities are computed on-the-fly, so that Oceananigans is extremely memory efficient and can perform global ocean simulations at resolutions up to \(1/4^{\text{th}}\) degree on a single Nvidia V100. Finally, because all compute-heavy kernels rely on a single "tendency kernel function" applied at each grid index \(i,j,k\), we can easily optimize performance by rapidly prototyping techniques to overlap computation and communication. The sparsity of kernels per time-step and small number of temporary variables mean that Oceananigans' algorithmic structure is markedly different from current ocean models, which typically allocate 10 to 100 _times_
the minimum necessary memory [4] and distribute computations across many small kernels [51]. We argue these algorithmic differences are a major factor in Oceananigans' energy-efficiency and time-to-solution on GPU systems.
### New numerical methods for finite volume fluid dynamics on the sphere
Our results use Oceananigans.HydrostaticFreeSurfaceModel, which solves the hydrostatic Boussinesq equations in a finite volume framework on staggered C-grids [3]. Oceananigans' hydrostatic model employs an implicit-explicit second-order Adams-Bashforth time stepping scheme. Vertically-implicit diffusion is implemented with a backward Euler time-discretization and tridiagonal solver.
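For orientation, the explicit part of such a scheme corresponds to the standard quasi-second-order Adams-Bashforth update sketched below (a textbook formulation written in Python; the damping parameter `chi` and its default value are assumptions about the usual formulation, not taken from this paper):

```python
def ab2_step(u, tendency_now, tendency_prev, dt, chi=0.1):
    """Quasi-second-order Adams-Bashforth update:
    u^{n+1} = u^n + dt * [(3/2 + chi) * G^n - (1/2 + chi) * G^{n-1}],
    where G^n is the explicit tendency at step n and chi slightly biases the
    extrapolation to damp the scheme's computational mode."""
    return u + dt * ((1.5 + chi) * tendency_now - (0.5 + chi) * tendency_prev)
```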
A major innovation is a new adaptive-order scheme based on weighted essentially non-oscillatory (WENO) reconstructions [42] for advecting momentum and tracers on curvilinear finite-volume grids [43]. This new scheme automatically adapts to changing spatial resolution and permits stable, high-fidelity simulations of ocean turbulence without explicit dissipation or hyper-dissipation. This innovation reduces setup time when changing or increasing resolution while guaranteeing high-fidelity solutions that exhibit the minimum necessary dissipation of sharp, near-grid scale features.
### Optimization of ocean free surface dynamics for unprecedented GPU scalability
In hydrostatic ocean models with a free surface, the vertically-averaged, two-dimensional "barotropic mode" has dynamics orders of magnitude faster than the three-dimensional "baroclinic" component, and must be treated by a special "barotropic solver". Due to communication overhead, barotropic solvers in current ocean models -- whether implicit or explicit -- are a major bottleneck that accounts for between 40% [22] to 60% [48, 36] of the cost of a typical IPCC-class ocean simulations.
Oceananigans' excellent scalability is enabled by an innovative optimization of the parallel barotropic solver. An increase in computation is traded in for decreased communication latency by leveraging the two-dimensionality of the barotropic problem. Our new barotropic solver is based on explicit subcycling of the barotropic mode. Increasing the width of the barotropic halo to equal the number of explicit subcycles (typically between 10-30) greatly decreases the frequency of communication. As a result, communication is required once per time-step rather than every subcycle, reducing the frequency of communication by a factor of 10 to 30. The cost of the barotropic solver is therefore less than 10% of the total cost of a time step. Due to the sparsity of communication enabled by our novel barotropic solver, all communication operations can be overlapped with computational workloads as sketched in figure 2.
Figure 2: Left: time-stepping sequence. Right: different domains over which 2D fast and 3D slow mode updates take place (here assuming 1 barotropic substep per baroclinic step – halo region of size 1 – and second-order methods – outer region of size 1)
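To complement figure 2, the toy 1D example below (Python/NumPy; a sketch of the general wide-halo subcycling idea, not the Oceananigans implementation) illustrates why a halo as wide as the number of subcycles allows a single exchange per outer step: each substep invalidates one halo cell per side, and after the last substep the owned cells still match a reference computation that exchanges every substep.

```python
import numpy as np

def stencil_step(u, alpha=0.1):
    """One explicit, diffusion-like substep on a periodic 1D field."""
    return u + alpha * (np.roll(u, 1) - 2.0 * u + np.roll(u, -1))

def subcycled_chunk(u_global, start, n_owned, n_sub, alpha=0.1):
    """Advance one rank's chunk by `n_sub` substeps using a single wide-halo
    "exchange". With halo width equal to the number of substeps, each substep
    invalidates one halo cell per side, yet the owned cells remain correct."""
    n = len(u_global)
    # The single exchange per outer step: owned cells plus n_sub halo cells
    # on each side (periodic wrap handled with the modulo).
    idx = np.arange(start - n_sub, start + n_owned + n_sub) % n
    u = u_global[idx].copy()
    for k in range(n_sub):
        w = u.copy()
        # Only cells whose neighbours are still valid may be updated, so the
        # updatable window shrinks inward by one cell per side each substep.
        u[k + 1:len(u) - 1 - k] = (
            w[k + 1:len(w) - 1 - k]
            + alpha * (w[k:len(w) - 2 - k]
                       - 2.0 * w[k + 1:len(w) - 1 - k]
                       + w[k + 2:len(w) - k]))
    return u[n_sub:n_sub + n_owned]     # owned cells, no further exchanges

# The owned cells match a serial reference that "exchanges" every substep.
rng = np.random.default_rng(0)
u0 = rng.standard_normal(64)
reference = u0.copy()
for _ in range(8):
    reference = stencil_step(reference)
assert np.allclose(subcycled_chunk(u0, start=16, n_owned=32, n_sub=8),
                   reference[16:48])
```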
## 6 How performance was measured
The Oceananigans model performance is estimated for two near-global ocean simulations with different domains: a realistic (R) domain and an aqua planet (AP) domain. Both domains span the entire longitudinal extent of the sphere and cover a latitude range of 75\({}^{\circ}\)S to 75\({}^{\circ}\)N.
The Realistic domain has realistic bathymetry and is forced by realistic surface momentum, heat, and salinity fluxes derived from the ECCO2 state estimate[31] at three resolutions:
* **OceananigansR12**: with 1/12\({}^{\text{th}}\) degree horizontal resolution (\(\sim\)7 km) and 48 vertical levels
* **OceananigansR24**: with 1/24\({}^{\text{th}}\) degree horizontal resolution (\(\sim\)3.4 km) and 100 vertical levels
* **OceananigansR48**: with 1/48\({}^{\text{th}}\) degree horizontal resolution (\(\sim\)1.7 km) and 100 vertical levels
Figure 3 shows surface vertical vorticity after a one-year integration of **OceananigansR12** and **OceananigansR48** over the global ocean and also for selected regions to show further detail. Both **OceananigansR12** and **OceananigansR48** exhibit macroscale turbulent ocean features that are currently unresolved by most IPCC-class models. The **OceananigansR48** solution exhibits fronts, filaments, and other "submesoscale" vorticity features realized only a handful of times in global simulations.
The idealized **OceananigansAP** suite of simulations [13], which has idealized bathymetry and surface forcing that does not require interpolation to different resolutions, is used for weak scaling experiments. All **OceananigansAP** experiments have 100 vertical levels and two latitudinal ridges that divide the world ocean into two basins. We vary the horizontal resolution of **OceananigansAP** from 1/6\({}^{\text{th}}\) of a degree (\(\sim\)14 km) to 1/196\({}^{\text{th}}\) of a degree (\(\sim\)488 m).
None of our simulations require explicit horizontal diffusion of momentum or tracers owing to the adaptive WENO advection scheme described in section 5.2. All simulations use a Richardson-number-based parameterization for vertical mixing due to unresolved shear and convective turbulence at 1-100 m scales.
To assess the time-to-solution for each experiment in simulated years per day (SYPD), we measure the average wall clock time per time-step. Wall clock time is sampled through NVIDIA's Nsight Systems and recorded with the NVIDIA Tools Extension Library via the NVTX.jl Julia package.
To assess the efficiency of each solution in simulated years per mega-watt-hour (SYPMWh), we combine SYPD with an estimate of the mean power draw over the duration of an experiment. On MIT Satori [2], which has 256 Nvidia V100s, we have access to precise, billing-grade power metering. For all simulations with Nvidia A100s we estimate power consumption \(P\) with
\[P=250D+300N\,\text{Watts}\,, \tag{1}\]
where \(D\) is the number of A100s and \(N\) is the number of nodes.
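To illustrate how these metrics combine, the short Python sketch below reproduces the 1.7 km OceananigansR48 figures from Table 1 (576 A100s on 144 Perlmutter nodes); treating SYPMWh as simulated years divided by energy in megawatt-hours is implied by the metric's name and is the only assumption made here.

```python
def sypd(model_dt_seconds, wallclock_per_step_seconds):
    """Simulated years per wall-clock day from the model time step and the
    measured wall-clock time per time step."""
    steps_per_day = 86400.0 / wallclock_per_step_seconds
    return steps_per_day * model_dt_seconds / (365.0 * 86400.0)

def estimated_power_watts(n_a100, n_nodes):
    """Power estimate of equation (1): 250 W per A100 plus 300 W per node."""
    return 250.0 * n_a100 + 300.0 * n_nodes

def sypmwh(sypd_value, power_watts):
    """Simulated years per megawatt-hour of energy."""
    mwh_per_day = (power_watts / 1.0e6) * 24.0
    return sypd_value / mwh_per_day

# OceananigansR48 on 144 Perlmutter nodes (576 A100s): 45 s time step and
# ~0.13 s of wall-clock time per step (Table 1).
years_per_day = sypd(45.0, 0.13)          # ~0.95 SYPD
power = estimated_power_watts(576, 144)   # 187,200 W, i.e. ~187 kW
print(years_per_day, sypmwh(years_per_day, power))
```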
We further note that power estimates are provided for LICOM3 and COSMO, but not for LLC4320 or Veros. To estimate the power consumption of LLC4320, we assume that each of the 1000 dual-CPU nodes draws 500 W. We estimate the power consumption of iHESP CESM [51] and HadGEM3 [37] as a percentage of the peak power consumption of their respective clusters. We use equation (1) to estimate Veros' power consumption on 1 node with 16 A100s.
Figure 3: Vertical vorticity as simulated by **OceananigansR12** (top left) and **OceananigansR48** (bottom left) after a one-year integration on September 1st. To the right, insets zoom on particularly energetic current systems: the Agulhas and the East Australian Currents. While major ocean currents with widths of 10-100 km are resolved in both simulations, the sharp density fronts and associated currents that develop at the ocean surface in winter at scales between 1-10 km (the ocean weather) are only resolved by **OceananigansR48**. On September 1 — spring in the southern hemisphere, fall in the northern hemisphere — such sharp frontal features populate the southern ocean but are suppressed in the north.
## 7 Performance Results
We report both scaling results via time-to-solution in SYPD and efficiency results via energy-to-solution in SYPMWh.
### Scaling Results
**Realistic ocean simulations (Satori and Engaging clusters).** We report strong scaling tests using the realistic global setup shown in figure 3 on two clusters: _(i)_ the MIT Satori cluster [2], a high-performance Power 9 system composed of 64 Power 9 nodes hosting four Nvidia V100 GPUs with 32 GB of memory each, and _(ii)_ the Engaging MIT cluster, using 8 nodes that host 4 NVLink-connected A100s with 80 GB of memory each. The resulting wall clock time per time step, averaged over 1500 time steps, is presented in Figure 4 for both single precision (FP32) and double precision (FP64) computations. On a single node, **OceananigansR12** attains 0.9 SYPD in double precision and 1.4 SYPD in single precision, with a wall clock time per time step ranging from 330 to 550 ms. When increasing the number of nodes up to 16 (64 GPUs), the communication overhead increases, resulting in 12.4 SYPD in single precision and 7.75 SYPD in double precision. We measure a strong scaling efficiency of 52% in single precision and 55% in double precision over 64 GPUs, because the computational workload (40 ms wall clock time per time-step) eventually becomes too short to completely mask the communication overhead.
For higher-resolution ocean weather-permitting simulations, the scaling is almost ideal across the range we investigate. For **OceananigansR24** (FP64-V100) and **OceananigansR48** (FP32-V100), we measure larger than ideal scaling. This counter-intuitive result is a product of a load balance improvement as the number of GPUs increases. In summary, we attain 1.94 SYPD on 120 V100 GPUs with a kilometer-scale resolution (**OceananigansR24**) and 0.33 SYPD with an ocean weather-resolving simulation (**OceananigansR48**). Finally, we have tested the **OceananigansR48** setup on 144 Perlmutter nodes (576 A100 GPUs), reaching 0.95 SYPD. This is the _first instance_ of a kilometer-scale ocean simulation achieving \(\sim\)1 SYPD. We have also tested the **OceananigansR12** setup on 17 nodes, obtaining 9.9 SYPD (see fig. 5).
**Aqua-planet simulation (Perlmutter cluster).** We report weak scaling tests on the NERSC supercomputer (Perlmutter). Perlmutter is a HPE (Hewlett Packard Enterprise) Cray EX super
Figure 4: Strong scaling tests for the realistic setups **OceananigansR12**\((1/12^{\circ})\), **OceananigansR24**\((1/24^{\circ})\), and **OceananigansR48**\((1/48^{\circ})\). The left plot reports simulated years per wall clock day (SYPD), while the right plot reports wall clock milliseconds per time step. All results are averaged over 1500 time steps.
computer that hosts four A100 GPUs with 40 GB of memory each per node, linked through an NVLink3 interconnect. All weak scaling tests are performed using the **OceananigansAP** setup in double precision. We allocate two different horizontal resolutions (1/12 and 1/6 of a degree), progressively increasing them with the number of GPUs while maintaining 100 vertical levels. As shown in figure 5, we obtain 100% weak scaling efficiency for the whole investigated range (1 to 196 nodes - 4 to 768 A100s).
### Energy efficiency
In table 1 we summarize the energy metrics for our computations as well as for the other investigated models. Figure 1 is derived from the data outlined in this table. The HadGEM3 and iHESP entries are estimated by including the whole coupled climate model (atmosphere and ocean). Unavailable data is marked with \(-\). Our Oceananigans simulations are the highest in each of their columns. This reflects our attention to memory and energy efficiency.
\begin{table}
\begin{tabular}{l r r r r r r c} \hline
Model & Time step & Grid size & CPU/GPU & SYPD & wtime/tstep (s) & Power est. & In fig. 1 \\ \hline
HadGEM3FP64 (Climate) [37] & - & \(\sim 7.22\times 10^{8}\) & 9396 (Cray XC40) & 0.4 & - & 141KW & ✓ \\
iHESPFP64 (Climate) [51] & - & \(\sim 6\times 10^{8}\) & Sunway TaihuLight & 3.7 & - & 6500KW & ✓ \\
LLC4320FP64 (Ocean) & 25 s & \(8.7\times 10^{10}\) & 2000 (Intel) & 0.041 & 1.6 & 500KW & ✓ \\
VerosFP64 (Ocean) [22] & 180 s & \(3.5\times 10^{8}\) & 16 (A100) & 0.8 & 0.62 & 5.2KW & ✓ \\
VerosFP32 (Ocean) [22] & 180 s & \(3.5\times 10^{8}\) & 16 (A100) & 1.3 & 0.38 & 5.2KW & \\
LICOM3FP64 (Ocean) [48] & 60 s & \(1.5\times 10^{9}\) & 384 (MI50) & 0.51 & 0.32 & 92KW & ✓ \\
LICOM3FP64 (Ocean) [48] & 60 s & \(1.5\times 10^{9}\) & 26200 (MI50) & 2.72 & 0.06 & 6300KW & \\
COSMOFP64 (Atmos) [15] & 6 s & \(3.46\times 10^{10}\) & 4888 (P100) & 0.043 & 0.4 & 1000KW & ✓ \\
**OceananigansR12FP32** (Ocean) & 180 s & \(3.7\times 10^{8}\) & 4 (V100) & 1.5 & 0.33 & 1.2KW & ✓ \\
**OceananigansR12FP32** (Ocean) & 180 s & \(3.7\times 10^{8}\) & 64 (V100) & 12.4 & 0.04 & 18KW & \\
**OceananigansR12FP64** (Ocean) & 180 s & \(3.7\times 10^{8}\) & 68 (A100) & 9.9 & 0.05 & 22KW & ✓ \\
**OceananigansR48FP32** (Ocean) & 45 s & \(1.24\times 10^{10}\) & 120 (V100) & 0.33 & 0.37 & 36KW & \\
**OceananigansR48FP64** (Ocean) & 45 s & \(1.24\times 10^{10}\) & 576 (A100) & 1.0 & 0.13 & 187KW & ✓ \\
**OceananigansR48FP64** (Ocean) & 45 s & \(1.24\times 10^{10}\) & 32 (A100) & 0.063 & 1.9 & 10.4KW & \\
**OceananigansAPFP64** (Ocean) & 11 s & \(2.1\times 10^{11}\) & 768 (A100) & 0.063 & 0.81 & 252KW & ✓ \\ \hline
\end{tabular}
\end{table}
Table 1: Performance details of state-of-the-art climate, ocean, and atmosphere models. Larger grid sizes correspond to finer spatial resolution. Computations belonging to this submission are shown in bold.
Figure 5: Weak scaling tests performed in double precision with the **OceananigansAP** setup. Each GPU has a grid equivalent to a global \(1/6^{\circ}\) and 100 vertical layers. The weak scaling is performed up to a horizontal resolution of \(1/168^{\text{th}}\) of a degree (\(\sim\)488 m resolution) where we achieve 15 simulated days per wall clock day (1 year in roughly 25 days). The star marks the performance of **OceananigansR48** (figure 3) on 144 Perlmutter GPU nodes. All results are averaged over 500 time steps.
## 8 Implications
By developing a new model from scratch specifically for GPUs, and wielding a handful of key ocean-model-specific innovations, Oceananigans achieves 9.9 SYPD at 10 km resolution using less than 1% of the resources of current state-of-the-art supercomputers. This achievement means that most climate model runs submitted to the IPCC will be able to use 10 km ocean models -- _precipitating a step change in the accuracy of climate prediction_.
At scales between 10-100 km, macroscale ocean turbulence exerts a key control on ocean carbon and heat uptake. However, attempts to accurately parameterize this key process in coarse-resolution models have frustrated generations of oceanographers. The inadequacies of macroscale parameterizations are associated with major biases and uncertainty in climate predictions [33, 39]. At resolutions of 10 km, the need for macroscale turbulence parameterization is eliminated, and ocean simulations capture key ocean features such as sharp sea surface temperature gradients supporting the formation of marine stratus clouds above narrow eastern boundary currents like the California and Benguela Currents [28], and changes in the meridional overturning circulation due to the effect of Antarctic meltwater on deep convection in austral winter [26].
Additionally, by achieving 0.95 SYPD at 1.7 km resolution, we pave the way for decadal ocean simulations of the ocean "submesoscale" -- the ocean analogue to atmospheric weather -- which exhibits hourly fluctuations, high spatial and seasonal variability, and which exerts a strong control on ocean air-sea fluxes, biological productivity and fish stocks [46]. The granularity and accuracy provided by 1.7 km resolution is further required to plan local mitigation strategies and predict local extreme events.
Third, the unparalleled speed of execution and memory efficiency of Oceananigans allows global computations at never-before-seen sub-kilometer resolutions. The capacity for ultra-high-resolution simulations aligns with current advancements in resolution of ocean sampling platforms from satellites [32, 12] to fleets of floats and drones. While this wealth of data is likely to provide new insights and scientific knowledge about the nature of small scale processes, global high-resolution ocean simulations will be needed to explore their impact on global climate scales.
Finally, our results pave the way for marked increase in energy efficiency of climate simulations. The very reason to develop climate models, as stated by the Coupled Model Intercomparison Project (CMIP), for example, is to provide the necessary information to effectively reduce emissions and mitigate the effects of global warming -- while, counterproductively, the carbon footprint of climate simulations that contribute to CMIP increases rapidly. Oceananigans' achievements represent a milestone towards decreased energy consumption by climate modeling efforts.
## 9 Acknowledgments
This research used resources of the National Energy Research Scientific Computing Center (NERSC), a U.S. Department of Energy Office of Science User Facility located at Lawrence Berkeley National Laboratory, operated under Contract No. DE-AC02-05CH11231 using NERSC award DDR-ERCAP0025591. This work is partly supported by the generosity of Eric and Wendy Schmidt by recommendation of the Schmidt Futures program and by NSF grant AGS-1835576. N.C.C. is supported by the Australian Research Council DECRA Fellowship DE210100749. |
2309.16471 | Hadoop-Oriented SVM-LRU (H-SVM-LRU): An Intelligent Cache Replacement
Algorithm to Improve MapReduce Performance | Modern applications can generate a large amount of data from different
sources with high velocity, a combination that is difficult to store and
process via traditional tools. Hadoop is one framework that is used for the
parallel processing of a large amount of data in a distributed environment,
however, various challenges can lead to poor performance. Two particular issues
that can limit performance are the high access time for I/O operations and the
recomputation of intermediate data. The combination of these two issues can
result in resource wastage. In recent years, there have been attempts to
overcome these problems by using caching mechanisms. Due to cache space
limitations, it is crucial to use this space efficiently and avoid cache
pollution (the cache contains data that is not used in the future). We propose
Hadoop-oriented SVM-LRU (H-SVM-LRU) to improve Hadoop performance. For this
purpose, we use an intelligent cache replacement algorithm, SVM-LRU, that
combines the well-known LRU mechanism with a machine learning algorithm, SVM,
to classify cached data into two groups based on their future usage.
Experimental results show a significant decrease in execution time as a result
of an increased cache hit ratio, leading to a positive impact on Hadoop
performance. | Rana Ghazali, Sahar Adabi, Ali Rezaee, Douglas G. Down, Ali Movaghar | 2023-09-28T14:36:38Z | http://arxiv.org/abs/2309.16471v1 | Hadoop-Oriented SVM-LRU (H-SVM-LRU): An Intelligent Cache Replacement Algorithm to Improve MapReduce Performance
###### Abstract
Modern applications can generate a large amount of data from different sources with high velocity, a combination that is difficult to store and process via traditional tools. Hadoop is one framework that is used for the parallel processing of a large amount of data in a distributed environment, however, various challenges can lead to poor performance. Two particular issues that can limit performance are the high access time for I/O operations and the recomputation of intermediate data. The combination of these two issues can result in resource wastage. In recent years, there have been attempts to overcome these problems by using caching mechanisms. Due to cache space limitations, it is crucial to use this space efficiently and avoid cache pollution (the cache contains data that is not used in the future). We propose Hadoop-oriented SVM-LRU (H-SVM-LRU) to improve Hadoop performance. For this purpose, we use an intelligent cache replacement algorithm, SVM-LRU, that combines the well-known LRU mechanism with a machine learning algorithm, SVM, to classify cached data into two groups based on their future usage. Experimental results show a significant decrease in execution time as a result of an increased cache hit ratio, leading to a positive impact on Hadoop performance.
Caching mechanism, Cache replacement algorithm, SVM-LRU, Hadoop performance
## 1 Introduction
Hadoop [1] is an open-source framework for the storage and parallel processing of large datasets. Two major components of Hadoop are HDFS (Hadoop Distributed File System) and MapReduce. HDFS is a distributed file system with a master/slave architecture. Input data are split into data blocks of identical size, and multiple copies of each block (according to a replication factor) are distributed to different machines to provide fault tolerance. MapReduce is a method for the parallel processing of a large amount of data on a cluster of machines in a distributed environment. MapReduce consists of three phases: Map, Shuffle, and Reduce. First, Map tasks convert input data into \(<\)key, value\(>\) pairs (referred to as intermediate data); these intermediate data are then sorted, shuffled, and provided as input for the Reduce tasks. Finally, Reduce tasks merge values with identical keys to generate the final results.
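As a minimal illustration of the three-phase model (plain Python for exposition, not Hadoop code; the tiny in-memory dataset is invented for the example), consider a word-count job:

```python
from collections import defaultdict
from itertools import chain

def map_phase(line):
    # Map: emit <key, value> pairs (here <word, 1>) from one input record.
    return [(word, 1) for word in line.split()]

def shuffle_phase(pairs):
    # Shuffle: group intermediate values by key.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: merge the values of each key into a final result.
    return {key: sum(values) for key, values in groups.items()}

lines = ["hadoop stores data", "hadoop processes data"]
intermediate = list(chain.from_iterable(map_phase(l) for l in lines))
print(reduce_phase(shuffle_phase(intermediate)))
# {'hadoop': 2, 'stores': 1, 'data': 2, 'processes': 1}
```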
Hadoop is a popular platform for analyzing various data types (structured, semi-structured, and unstructured data), and a number of Big Data tools are built on top of it. Hadoop has several advantages over Relational DataBase Management Systems (RDBMS), such as flexibility, high throughput, low cost, and concurrent processing. Compared with other alternatives for Big Data processing, such as Spark, Hadoop also offers advantages like
security, scalability, and high fault tolerance. However, Hadoop has some challenges that can lead to poor performance:
1. HDFS is based on a hard disk drives (HDD) system. The high access times for I/O operations can have a significant impact on the overall execution time [2].
2. The shuffle phase in MapReduce is a time-consuming operation: as much as 33% of the overall execution time is spent on this phase [3].
3. A large amount of intermediate data is thrown away after processing; this requires recomputation if reuse is required [4].
4. The MapReduce programming model is not well-suited for iterative programs. A large amount of intermediate data may be unchanged from one iteration to the next. The lack of a mechanism to identify duplicate computations means that the data must be re-loaded and re-computed at each iteration leading to wasted I/O, network bandwidth, and CPU resources. The second problem is related to identifying termination conditions via a fixed point corresponding to the application's output not changing for successive iterations. This itself needs an extra MapReduce job for each iteration, degrading performance [5].
In recent years, many researchers have proposed the use of caching mechanisms to address these challenges [2], [3], [4], [5], [6], [7]. With a caching mechanism, data is prefetched into cache memory in order to reduce overall execution time. A caching mechanism consists of two phases: the placement phase and the delivery phase. The placement phase determines how to place data into the cache memory according to some measure of data popularity. A limited cache size creates the need for a replacement policy that determines how to remove content from the cache if new content is to be added (when the cache capacity is reached). A number of different replacement algorithms have been proposed. The second phase is the delivery phase, which retrieves data from cache memory according to user demands. Network congestion may result if user demands are sufficiently high. While a caching strategy can have a positive impact on Hadoop's performance, to maximize this impact the limited cache space must be used efficiently; in particular, minimizing cache pollution is a key goal.
While a decision must be made about whether data should be cached, the more important decision is which data should be replaced when new data is added to a full cache. Therefore, cache replacement is the core of caching. In this paper, we propose a Hadoop-oriented SVM-LRU (H-SVM-LRU) approach that applies the intelligent cache replacement algorithm SVM-LRU to optimize the use of cache space and avoid cache pollution. A machine learning component classifies cached data into two groups (reused in the future or not) to recognize which data should remain in the cache and which data should be replaced. The goal is to use the limited cache space efficiently, resulting in a positive impact on the cache hit ratio. As a result, this method is well suited for iterative programs, both reducing data access time from disk and avoiding recomputation by serving more intermediate data from the cache.
Our contributions in this paper are:
* We provide an overview of intelligent cache replacement methods used in web proxy caches.
* Different cache replacement strategies are investigated in the Hadoop environment, and we discuss their advantages and disadvantages.
* We introduce the intelligent caching mechanism, H-SVM-LRU, in the Hadoop environment, which uses SVM for classifying data into two groups: reused in the future or not.
* We evaluate H-SVM-LRU's performance (hit ratio) and compare it with LRU.
* We carry out experiments to investigate the impact of this algorithm on Hadoop's execution time performance.
The rest of the paper is organized as follows: Section 2 defines the problem and our solution approach. We discuss existing cache replacement strategies for Hadoop and intelligent caching methods for web proxy caches in Section 3. We then describe the H-SVM-LRU framework and present our H-SVM-LRU algorithm for the Hadoop environment in Section 4. Next, we explain the details of the H-SVM-LRU implementation in Section 5. We evaluate the performance of H-SVM-LRU via different experiments in Section 6. Finally, Section 7 contains the conclusions and suggestions for future work.
## 2 Problem definition
Reducing job execution time is an effective factor for improving Hadoop performance. The execution time is composed of two components: I/O operation time and processing time. Since input data are stored in an HDD-based system, HDFS, access time for I/O operations can be high, adversely affecting execution time. One approach to tackle this problem is to use a caching mechanism to store required data in the cache, however, the cache has limited space which must be effectively managed.
Hadoop 2.3.0 was released with support for in-memory caching to mitigate the cost of I/O operations and increase memory utilization. The Hadoop in-memory cache employs centralized cache management that can cache both input and intermediate data. In this case, the NameNode is responsible for coordinating all the DataNode off-heap caches in the cluster. The NameNode periodically receives a cache report from each DataNode, describing all of the data blocks cached on a given DataNode. The NameNode manages DataNode caches by piggybacking cache and uncache commands on the DataNode heartbeat messages. While the OS page cache employs an LRU-like algorithm for its cache replacement, HDFS in-memory caching does not replace previously cached data blocks unless users explicitly request that data blocks be uncached. Therefore, users must manually determine which data should be cached or uncached, which can limit effective usage of the in-memory cache.
Moreover, the LRU policy typically suffers from cache pollution, where unpopular objects can occupy cache space for a long time. For example, in LRU, suppose that a new item is inserted at the top of the cache. If the item is not requested again, it will take some time to move down to the bottom of the cache before it is removed. To address this issue, we customize an intelligent cache replacement strategy, SVM-LRU, for the Hadoop environment (H-SVM-LRU). This strategy combines a supervised machine learning algorithm, SVM, with LRU to classify input data into two groups: reused in the future or not. This method selects the victim data to be uncached based on their class, avoiding cache pollution.
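To make this idea concrete, the sketch below (Python with scikit-learn) shows one way an SVM classifier could steer LRU eviction away from blocks that are predicted to be reused. The class design, the feature set (recency, frequency, size), and the fallback rule are illustrative assumptions for exposition, not the exact H-SVM-LRU algorithm developed in the remainder of the paper.

```python
from collections import OrderedDict
from sklearn.svm import SVC

class SVMLRUCache:
    """Illustrative SVM-assisted LRU replacement policy: on eviction, prefer
    the least recently used block that the classifier predicts will NOT be
    reused; fall back to plain LRU if every candidate looks reusable."""

    def __init__(self, capacity, classifier):
        self.capacity = capacity
        self.classifier = classifier          # trained SVC: features -> {0, 1}
        self.cache = OrderedDict()            # block_id -> feature vector

    def access(self, block_id, features):
        if block_id in self.cache:
            self.cache.move_to_end(block_id)  # cache hit: most recently used
        elif len(self.cache) >= self.capacity:
            self._evict()                     # cache miss with a full cache
        self.cache[block_id] = features       # insert or refresh features

    def _evict(self):
        # Scan from least to most recently used; evict the first block the
        # SVM predicts will not be reused (class 0).
        for block_id, features in self.cache.items():
            if self.classifier.predict([features])[0] == 0:
                del self.cache[block_id]
                return
        self.cache.popitem(last=False)        # all look reusable: plain LRU

# Example: train on (recency, frequency, size) features with reuse labels.
X = [[1, 5, 10], [2, 4, 20], [8, 1, 200], [9, 1, 300]]
y = [1, 1, 0, 0]                              # 1 = reused later, 0 = not
cache = SVMLRUCache(capacity=2, classifier=SVC(kernel="linear").fit(X, y))
cache.access("block-A", [1, 5, 10])
cache.access("block-B", [8, 1, 200])
cache.access("block-C", [2, 4, 20])   # full cache: block-B is evicted (class 0)
print(list(cache.cache))              # ['block-A', 'block-C']
```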
## 3 Related work
In this section, we first provide an overview of cache replacement algorithms that have been proposed for improving Hadoop performance. We then discuss different intelligent caching mechanisms that use machine learning methods.
### 3.1 Cache replacement policies in Hadoop
In this section, we investigate different cache replacement strategies in Hadoop, including their advantages and disadvantages.
_LIFE_ and _LFU-F_ are two replacement strategies used in _PacMan_[8] for an in-memory coordinated caching mechanism and data-intensive parallel jobs. In _PacMan_, parallel jobs run numerous tasks concurrently in a wave with the all-or-nothing property. The _LIFE_ algorithm evicts data blocks of files with the largest wave-width and results in reducing the average job completion time. _LFU-F_ aims to maximize cluster efficiency. For this purpose, it evicts data blocks with less frequent access. Both strategies prioritize incomplete files over completed files for eviction and use a window-based aging mechanism to avoid cache pollution. They first check whether there are data blocks that have not been accessed within a given time window. Among these files, the one with the least number of accesses is chosen.
_Enhanced Data-Aware Cache (EDACHE)_[9] was introduced for caching intermediate results to accelerate MapReduce job execution times. In this strategy, _WSClock_ is used as a cache replacement algorithm in which cached items are maintained in a circular list and a clock hand advances around this ring. This algorithm replaces cached items based on their reference bit and the last time used. It first checks the reference bit. If its value is one it means this item is used. The item's reference bit is then reset, its last time used is updated, and the clock hand is advanced. Otherwise, an item with an age greater than a threshold value is evicted. The bottleneck of this mechanism is related to the fact that large blocks lead to long search times for requested contents.
In _collaborative caching_, a _Modified ARC replacement algorithm_[10] was proposed in order to increase the cache hit ratio and improve efficiency. In this strategy, the cache is divided into the recent cache, recent history, frequent cache, and frequent history such that the cache sections contain data blocks and the history sections include references to evicted items. Initially, on a request for a block, the references in the history caches are checked. If present the corresponding block is placed in the recent or frequent cache, otherwise the cache references and serves the request from either of the history caches, which helps in faster caching as well as locating files for initial checks. If references are found in recent history then the block is placed in the recent cache. If the block is found in the recent cache, then it is moved to the frequent cache, hence a hit in either of the history caches removes the reference and places the corresponding block in one of the caches (recent or frequent). Caching a block also involves caching metadata. When either of the caches is fully utilized then a block is evicted from the recent or frequent cache but its reference is placed into its corresponding history. When either of the history caches is fully utilized the references simply drop out of the cache.
An adaptive cache algorithm [11] was designed to cache table partitions in the HDFS cache for Big SQL. Selective _LRU-K (SLRU-K)_ and _Exponential-Decay (EXD)_ are used as online caching algorithms, together with selective cache insertion to reduce the overhead of inserting items into the HDFS cache. _SLRU-K_ takes into account the variable size of the partitions and uses a weight heuristic to place partitions into the cache selectively; it keeps a list of the K last access times for each partition. In contrast, _EXD_ maintains only the time of the last access to compute a score for each partition that determines the weight of access frequency versus recency. In _Adaptive SLRU-K and EXD_, the adaptor adjusts its behavior to the access patterns of various workloads by automatically tuning the values of their parameters. Maximizing the byte hit ratio and minimizing the byte insertion ratio are the primary aims of the adaptor.
The _block goodness aware cache replacement strategy_[12] was presented in 2017 and uses two metrics for cache management: cache affinity (CA) depends on resources used by the application and block goodness (BG) measures how much a cached data block is worth. For each cached item, this strategy first calculates the BG value based on the data block access count and MapReduce application cache affinity then selects a data block with the lowest BG value for eviction. A data block with the oldest access time will be discarded if there is more than one data block with the same lowest BG value.
The _Cache Affinity Aware Cache Replacement Algorithm_[13] was designed in 2018 and categorizes MapReduce applications based on their cache affinity. This algorithm prioritizes caching input data of applications with high cache affinity. It takes into account the cache affinity of a MapReduce application and data access frequency to calculate the benefit of caching for input data. As a result, it evicts a data block with the lowest caching benefit. If there are some data blocks with identical lowest benefits, it evicts a block based on the LRU policy.
_AutoCache_[14] was developed in 2019 and employs a lightweight gradient-boosted tree (XGBoost) to predict file access patterns on the HDFS cache. In this mechanism, the probability of accessing a file is measured by a probability score, which is used as a metric by the cache replacement policy to avoid cache pollution. When the free space of the cache drops below 10%, eviction is started and continues until cache usage falls below 85% of capacity. This cache replacement algorithm keeps its overhead low by limiting computation to a fixed number of files.
In Table 1, we compare these cache replacement strategies in terms of their criteria for eviction and mitigating cache pollution and summarize their advantages and disadvantages.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline
Replacement strategy & Metrics for eviction & Cache pollution mitigation & Advantages & Disadvantages \\ \hline
LIFE & Largest wave width and incompleted file & Window age strategy & Reduces average completion time & Effective for short jobs \\
LFU-F & Frequency access and incompleted file & Window age strategy & Maximizes cluster efficiency & Effective for short jobs \\
WSClock & Last time used & No & Decreases execution time & Long search times for large data blocks \\
Modified ARC & Recency and frequency of access & No & Increases cache hit ratio & Needs space for storing history \\
Adaptive cache & The score of each partition & No & Adapts to various workloads & Significant overhead \\
Block goodness aware & Block goodness value and access time & No & Effective for multiple concurrent applications & Needs to calculate block goodness \\
Cache Affinity aware & Cache affinity of application and recency & No & Effective use of cache space & Needs to know the cache affinity of applications \\
AutoCache & Probability of accessing the file & Calculating probability score & Reduces average completion time and improves cluster efficiency & File oriented cache \\ \hline \hline
\end{tabular}
\end{table}
Table 1: Hadoop cache replacement comparisons
### Intelligent caching mechanisms
In a related application domain, several intelligent caching strategies have been presented that use different machine learning techniques to enhance the performance of web proxy caches. Ali et al. proposed _SVM-LRU, SVM-GDSF_, and _C4.5-GDS_[15] that combined a support vector machine (SVM) and a decision tree (C4.5) with Least-Recently-Used (LRU), Greedy-Dual-Size (GDS), and Greedy-Dual-Size Frequency (GDSF) replacement strategies. In these techniques, web objects are classified into two groups: revisited later or not. These methods use a web proxy log file as a training dataset and different features of web objects such as recency, frequency, size, access latency, and type of object are considered for classification. Experimental results show that SVM-LRU appears to have the best hit ratio.
Employing a Bayesian network, _BN-GDS_ and _BN-LRU_[16] were introduced to improve the performance of cache replacement strategies like Greedy-Dual-Size (GDS) and Least-Recently-Used (LRU). In these strategies, the probability that a web object belongs to the revisited class (and so should be cached) is calculated based on features such as retrieval time, frequency, size, and type. Experimental results suggest that BN-GDS achieves the best hit ratio while BN-LRU has the best byte hit ratio.
Hybrid ELM-LFU [17], a two-level caching scheme for web proxies, was presented in 2018. In the first level, LFU is used for fast caching replacement (due to its low complexity), and thus is suitable for real-time communication. An Extreme Learning Machine (ELM) is used in the second level, applying a single hidden layer feed-forward network where there is no need to adjust the weights. In this mechanism, the chosen web object for eviction in the first level will be placed in the second-level cache. This method features low training times.
In [18], Herodotos et al. designed a framework for moving data automatically through tiered storage in the distributed file system via a set of pluggable policies. For this purpose, they employ incremental learning to find which data should be downgraded or upgraded, allowing for adaptation to workload changes over time. For downgrading, this method uses different cache replacement strategies like LRU, LFU, Least Recently & Frequently Used (LRFU), LIFE, LFU-F, Exponential Decay (EXD), and XGBoost-based Modeling (XGB). Also, On Single Access (OSA), LRFU, EXD, and XGB are used for the upgrade policy. Experimental results show that XGB is more suitable because it requires minimal storage, has low training overhead, makes useful predictions, and can learn incrementally over time.
PACS-oriented SVM-LRU [19] was proposed for picture archiving and communication systems in 2021. This algorithm calculates the probability of future access to cached items. In this strategy, SVM-LRU has brought some benefits like low training time, low computation, high prediction accuracy, and high hit ratio.
Even though using a caching mechanism has yielded some benefits in the Hadoop environment, some challenges remain. For instance, cache management imposes a heavy load on the NameNode, both in terms of required storage and computational load, potentially degrading performance. Moreover, existing cache replacement policies in Hadoop do not take into account cache pollution and effective use of cache space and they do not apply intelligent caching mechanisms. In this paper, we design a cache replacement mechanism as an approach for overcoming these problems, using SVM to classify data, resulting in improved performance. We choose SVM because its generalization ability can be maximized when training data are scarce, and it can control the misclassification error.
## 4 H-SVM-LRU cache replacement
Support Vector Machine (SVM) [20] is a supervised machine learning technique used for binary classification, dividing data into two classes: positive and negative. In this section, we present a framework for intelligent cache replacement based on machine learning and describe the H-SVM-LRU algorithm, which combines LRU with an SVM that classifies data into two classes: reused in the future or not. The aim of this algorithm is to avoid cache pollution and to use cache space effectively, leading to decreased execution times. We also provide an example to illustrate its operation.
### The proposed H-SVM-LRU framework
In this section, we propose a framework for an intelligent LRU approach for Hadoop that uses an SVM together with the in-memory cache [21], [22], [23]. The framework consists of two functional components: the classification component is responsible for training the data classifier using an SVM, and the Hadoop in-memory cache component uses this trained classifier to manage the cache space. Figure 1 gives the system structure and its components, which we now explain in detail.
Fig. 1: The proposed H-SVM-LRU intelligent cache replacement strategy framework
The classification component consists of the Hadoop job history, training dataset preparation, SVM model training, and the SVM classifier. The job history server allows the user to get log information on finished applications, which can be exploited as a source for extracting training data. Next, data preprocessing is applied to normalize the data and eliminate outliers. After preparing the training data, an SVM classifier is trained. Finally, the classifier is deployed, and SVM-LRU uses this data classification in its cache replacement decisions.
The Hadoop in-memory cache component is composed of NameNode, DataNodes, Application Master, and container. The NameNode is responsible for coordinating all the DataNode caches in the cluster and stores two types of metadata: block metadata includes the location of data blocks on DataNodes and cache metadata maps the locations of cached data. The NameNode periodically receives a cache report from each DataNode describing all the blocks cached on a given DataNode. The cache report is used to update cache metadata. For better utilization of the large distributed in-memory caches in Hadoop clusters, each Hadoop container always sends a request to cache the accessed block to the NameNode, and then the NameNode controls which data blocks are added and evicted to and from in-memory caches. We use centralized cache management; as a result, H-SVM-LRU is located on the NameNode. In our system, a container is launched to run a task (either a Map task or a Reduce task) and always sends a request to find cached data blocks. Application Master manages the user job lifecycle and resource needs of individual applications. Each application has a unique, framework-specific Application Master associated with it. It coordinates an application's execution in the cluster and also manages faults.
Assume that a MapReduce application requires two data blocks A and B, where data block A is located on DataNode X and data block B is cached on DataNode Y. The Application Master communicates with the NameNode (which contains the cache metadata) to query the locations of the input blocks and their availability in the cache. A cache miss occurs when looking for data block A and a cache hit occurs for data block B. In the cache miss state, the NameNode looks up the block metadata to find a DataNode that contains data block A. Although multiple replicas of a given data block can be accessed by the query, we choose the first one to reduce search time. We could instead cache all replicas of the data block on the DataNodes that contain them; cache replication would then be identical to data replication and could increase the cache hit ratio, but it would also occupy excessive cache space, conflicting with the goal of our proposed method.
We then use the H-SVM-LRU algorithm and the PutCache(A, X) method to place this data block in the cache. After caching, DataNode X piggybacks the cache report on a heartbeat message and sends it to the NameNode to update the cache metadata. The NameNode informs the Application Master of the location of cached data by using GetCache(A, X) and GetCache(B, Y). When a cache hit occurs, the GetCache method of the H-SVM-LRU algorithm is called to retrieve the cached data. The end result is that applications not only avoid waiting for data blocks to be cached but also access cached data with higher probability because the cache space is used effectively. It is important to note that applications do not necessarily access all the data they require from the cache, i.e., it is not necessary to wait for data to be cached.
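A toy illustration of this lookup flow is sketched below; this is our own illustration, not code from the paper: the NameNode's block metadata maps a block to the DataNodes holding its replicas and the cache metadata maps a cached block to the DataNode caching it, and a request resolves to GetCache on a hit or PutCache on a miss. The replica names other than X and Y are made up.

```python
# Toy illustration (ours) of the metadata lookup described above.
block_metadata = {"A": ["X", "X2", "X3"], "B": ["Y", "Y2", "Y3"]}   # block -> replica DataNodes
cache_metadata = {"B": "Y"}                                          # only data block B is cached

def resolve(block):
    if block in cache_metadata:                      # cache hit
        return ("GetCache", block, cache_metadata[block])
    datanode = block_metadata[block][0]              # first replica, to reduce search time
    return ("PutCache", block, datanode)

print(resolve("A"))   # -> ('PutCache', 'A', 'X')
print(resolve("B"))   # -> ('GetCache', 'B', 'Y')
```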
### The proposed H-SVM-LRU algorithm
In this section, we describe the proposed H-SVM-LRU algorithm for reducing cache pollution. The ordered dictionary data structure is used to implement the LRU cache because it remembers the order in which keys were first inserted. This data structure removes the first item when the cache does not have sufficient space. When a cache hit occurs and a data block is found in the cache, its key is moved to the end to show that it was recently used. We assume that the victim
item is removed from the top of the cache and the recently used items are moved to the bottom of the cache. The proposed H-SVM-LRU algorithm consists of two procedures: GetCache (DataBlock, DataNode) to retrieve data items from the cache, and PutCache (DataBlock, DataNode) to place data blocks into the cache. This algorithm works as follows.
```
1-Input R\(=\) {DB\({}_{1}\), DB\({}_{2}\),.....,DB\({}_{n}\)}
2-for each data block DB\({}_{\rm x}\) requested by a task T\({}_{\rm i}\)
3- DN\({}_{\rm y}\leftarrow\)lookup for DB\({}_{\rm x}\) in the cache metadata //DB\({}_{\rm x}\) is cached into DN\({}_{\rm y}\)
4- if DB\({}_{\rm x}\) is in the cache then
5- begin
6- cache hit occurs
7- call GetCache(DB\({}_{\rm x}\), DN\({}_{\rm y}\)) //Retrieving DB\({}_{\rm x}\) from DN\({}_{\rm y}\)
8- else
9- cache miss occurs
10- DN\({}_{\rm z}\leftarrow\)lookup for DB\({}_{\rm x}\) in the block metadata //DB\({}_{\rm x}\) is located on DN\({}_{\rm z}\)
11- call PutCache(DB\({}_{\rm x}\), DN\({}_{\rm z}\)) //Place DB\({}_{\rm x}\) into the cache
12- end
13- Procedure GetCache( DB\({}_{\rm x}\), DN\({}_{\rm y}\) )
14- begin
15- class of DB\({}_{\rm x}\)\(=\)Apply-SVM (features)
16- if the class of DB\({}_{\rm x}\)\(=\)1 then //DB\({}_{\rm x}\) classified as reused class
17- move DB\({}_{\rm x}\) to the bottom of the cache
18- else //DB\({}_{\rm x}\) classified as unused class
19- move DB\({}_{\rm x}\) to the top of the cache
20- end
21- Procedure PutCache( DB\({}_{\rm x}\), DN\({}_{\rm z}\) )
22- begin
23- if insufficient space in DN\({}_{\rm z}\) cache for DB\({}_{\rm x}\)
24- Evict DB\({}_{\rm t}\) from top of cache
25- class of DB\({}_{\rm x}\)\(=\)Apply-SVM (features)
26- if the class of DB\({}_{\rm x}\)\(=\)1 then //DB\({}_{\rm x}\) classified as reused class
27- insert DB\({}_{\rm x}\) at the bottom of the cache
28- else //DB\({}_{\rm x}\) classified as unused class
29- begin
30- if there are some DB with class unused then
31- insert DB\({}_{\rm x}\) at the end of the unused data list in the cache
32- else insert DB\({}_{\rm x}\) at the top of the cache
33- end
34- end
```
**Algorithm 1** H-SVM-LRU on Hadoop
The input is the sequence of data blocks requested by tasks. When a data block DB\({}_{\rm x}\) is requested by a task, the algorithm searches the cache metadata for the requested data block to find the DataNode where it is cached, resulting in either a cache hit or a cache miss. If data block DB\({}_{\rm x}\) is cached at
DataNode \(\rm DN_{y}\), a cache hit occurs and GetCache(\(\rm DB_{x}\), \(\rm DN_{y}\)) is called. In this procedure, the SVM predicts whether the data block will be reused in the future or not, and the algorithm then moves the data block within the cache based on its class. If the data block is classified by the SVM as an item that will be reused, data block \(\rm DB_{x}\) is moved to the bottom of the cache. Otherwise, it is moved to the top of the cache so that it will be evicted soon, freeing cache space.
In the cache miss state, the data block does not exist in the cache, so it can be cached for future use. For this purpose, the block metadata is first used to find the location of the requested data block on a DataNode (for instance, \(\rm DN_{z}\)), and a request to cache this data block is sent by calling PutCache(\(\rm DB_{x}\), \(\rm DN_{z}\)). This method first checks whether the cache has sufficient space. If there is insufficient space, it evicts the top item from the cache, and the SVM predicts the class of the new item. If the data block is classified as one that will be reused, it is placed at the bottom of the cache; otherwise, it is placed at the end of the unused data list, located at the top of the cache.
H-SVM-LRU can efficiently remove unwanted items at an early stage to make space for new data blocks. By using this mechanism, cache pollution can be reduced, and the available cache space can be utilized more effectively. If all data blocks in the cache have the same class, the proposed algorithm is identical to LRU and only considers the recently used metric for data eviction. Algorithm 1 presents H-SVM-LRU as a cache replacement strategy for the Hadoop environment.
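The following is a minimal, runnable Python sketch of the GetCache/PutCache logic in Algorithm 1. It is our illustration, not the paper's implementation: the `classifier` argument stands in for the trained SVM, and, for simplicity, an unused block is inserted directly at the top rather than at the end of the unused-block list.

```python
from collections import OrderedDict

class HSVMLRUCache:
    """Sketch of H-SVM-LRU: the front of the OrderedDict is the top of the cache
    (eviction end) and the back is the bottom (most recently useful end)."""

    def __init__(self, capacity, classifier):
        self.capacity = capacity          # maximum number of equally sized data blocks
        self.cache = OrderedDict()        # block id -> block
        self.classifier = classifier      # callable: features -> 1 (reused) or 0 (unused)

    def get(self, block_id, features):
        """GetCache: on a hit, reposition the block according to its predicted class."""
        if block_id not in self.cache:
            return None                                   # cache miss; caller invokes put()
        if self.classifier(features) == 1:
            self.cache.move_to_end(block_id)              # reused class -> bottom
        else:
            self.cache.move_to_end(block_id, last=False)  # unused class -> top
        return self.cache[block_id]

    def put(self, block_id, block, features):
        """PutCache: evict the top item if the cache is full, then insert by class."""
        if len(self.cache) >= self.capacity:
            self.cache.popitem(last=False)                # evict the victim from the top
        self.cache[block_id] = block
        if self.classifier(features) == 0:
            self.cache.move_to_end(block_id, last=False)  # unused class -> top
        # a block classified as reused stays at the bottom where it was inserted
```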
In order to understand the benefits of the proposed intelligent LRU, we provide an example comparing LRU with H-SVM-LRU, illustrated in Figure 2. We assume the cache capacity is capable of storing up to five data blocks of the same size. We consider the following subset of the sequence of data blocks with their associated class: (DB1,0) (DB2,1) (DB3,1) (DB4,1) (DB5,0) (DB6,0) (DB7,0) (DB2,0) (DB8,1) (DB3,1). It can be observed that data blocks DB1, DB5, DB6, and DB7 are not reused in the future, while the other data blocks are reused. The traditional LRU policy does not consider class information: every item is inserted at the bottom of the cache and must age toward the top, where the least recently used item is evicted. By contrast, the proposed intelligent LRU considers the class of data blocks to determine the victim item that should be evicted. This strategy stores the reused data blocks at the bottom of the cache (DB2, DB3, DB4, and DB8), while data blocks classified as not reused are stored at the top of the cache (DB1, DB5, DB6, and DB7). Therefore, the unused data blocks are removed earlier by the H-SVM-LRU mechanism to make space for new data blocks.
It can be observed that in the LRU policy, the data blocks DB2 and DB3 are evicted although they are reused in the near future, resulting in cache misses. In H-SVM-LRU, these data blocks have not been evicted leading to an increase in the cache hit ratio. It can be noted that the proposed intelligent LRU approach efficiently removes unused data blocks early to make space for new data blocks. Therefore, cache pollution is decreased, and the available cache space is exploited efficiently. Moreover, the hit ratio and byte hit ratio can be improved. This aspect is discussed in more detail in Section 6.
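For concreteness, the access trace above can be replayed through the HSVMLRUCache sketch given earlier; the lambda classifier simply returns the class listed in the trace, standing in for the SVM predictions. For this trace the sketch yields hits on DB2 and DB3, matching the behavior discussed above.

```python
# Replaying the example trace above through the HSVMLRUCache sketch given earlier.
trace = [("DB1", 0), ("DB2", 1), ("DB3", 1), ("DB4", 1), ("DB5", 0),
         ("DB6", 0), ("DB7", 0), ("DB2", 0), ("DB8", 1), ("DB3", 1)]

cache = HSVMLRUCache(capacity=5, classifier=lambda cls: cls)
hits = 0
for block_id, cls in trace:
    if cache.get(block_id, cls) is not None:
        hits += 1                         # DB2 and DB3 hit, as in the discussion above
    else:
        cache.put(block_id, object(), cls)
print(hits, hits / len(trace))            # 2 hits out of 10 accesses for this trace
```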
## 5 H-SVM-LRU implementation
In this section, we provide details for the two phases of H-SVM-LRU implementation, training data preparation, and model training.
### Training data preparation phase
This phase consists of four steps: data collection, feature selection, providing target labels, and data preprocessing. We now explain each step in more detail:
* _Data collection_: We consider two independent scenarios: request awareness and non-request awareness. In the first scenario, the sequence of requested data is determined by the tasks, and we consider the following data features: size, recency, frequency, and type (input data of Map tasks, intermediate data, and output of Reduce tasks). Table 2 describes these features. In the second scenario, we use the ALOJA [24] Hadoop dataset, which gathers training data by executing various workloads from the Intel HiBench benchmark suite [25], [26], a comprehensive benchmark suite for Hadoop consisting of a set of Hadoop programs, including both synthetic micro-benchmarks and real-world Hadoop applications. We then extract useful features from the Hadoop job history [27], which consists of log information of MapReduce jobs.
In the request awareness scenario, the demanded data are predefined. In other words, the training data already have labels, so target labels do not need to be generated. This allows us to consider fewer
Figure 2: Example LRU and H-SVM-LRU replacement mechanisms
features than in the second scenario, where generating target labels may require considering more features.
* _Feature selection_: It is important to choose suitable features to ensure model performance while reducing overfitting and computational demand. Features can be selected according to different criteria: (1) missing values, (2) variance, (3) correlation with other features, and (4) model performance.
\begin{table}
\begin{tabular}{c c} \hline Feature name & Description \\ \hline Type & The input of the Map task, intermediate data, and the output of the Reduce task \\ Size & Size of data blocks in MB \\ Recency & Time last used \\ Frequency & Number of uses \\ \hline \end{tabular}
\end{table}
Table 2: Features for the request-awareness scenario
\begin{table}
\begin{tabular}{c c c} \hline
**Feature name** & **Feature type** & **Description** \\ \hline JobName & Job & Name of job: WordCount, Sort, Grep, Sort, etc \\ MapsTotal & Job & The total number of Map tasks \\ MapsCompleted & Job & The number of completed Map Tasks \\ ReducesTotal & Job & The total number of Reduce tasks \\ ReducesCompleted & Job & The number of completed Reduce tasks \\ Job-Status & Job & Valid values of job state are: New, Initiated, Running, Succeeded, Failed, Killed, and Error \\ Cache Affinity & Job & Cache affinity of application: Low, High, Medium \\ Start time & Job & The time the job started (in ms) \\ Finish time & Job & The time the job finished (in ms) \\ Task type & Task & Map or Reduce task \\ Task status & Task & The states of the task are: New, Scheduled, Running, Succeeded, Failed, and Killed \\ AvgMapTime & Task & The average time of a Map task (in ms) \\ AvgReduceTime & Task & The average time of a Reduce task (in ms) \\ Progress & Task & The progress of the task as a percentage \\ \hline \end{tabular}
\end{table}
Table 3: Features for the non-request-awareness scenario to provide target label
In this step, we select the features given in Table 3. For simplicity, we ignore the data size and recency features, because the input data are split into data blocks of the same size and recency is already taken into account by the LRU policy.
* _Providing target labels:_ Since the training dataset does not have target labels, we must generate them. For this purpose, we derive the label of the data requested by each task from the job status together with the status of its Map and Reduce tasks. Table 4 describes the different cases of this scheme, and a small sketch encoding these guidelines is given after this list.
* _Dataset preprocessing_: The last step of preparation of the training dataset is data preprocessing which includes the elimination of irrelevant data, unnecessary fields, and data normalization.
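As a concrete illustration (our own encoding, not code from the paper), the guidelines of Table 4 can be expressed as a lookup from the status triple to the two labels; treating unlisted combinations as "not reused" is our assumption.

```python
# Our encoding of the labeling guidelines in Table 4: each rule maps
# (job status, Map task status, Reduce task status) to labels for the Map input
# and the Reduce input, where 1 = reused and 0 = not reused.
RULES = {
    ("New",       "New",        "New"):        (0, 0),
    ("Initiated", "Scheduling", "Waiting"):    (1, 0),
    ("Initiated", "Running",    "Waiting"):    (1, 0),
    ("Running",   "Succeeded",  "Scheduling"): (0, 1),
    ("Running",   "Succeeded",  "Running"):    (0, 1),
    ("Running",   "Failed",     "Waiting"):    (0, 0),
    ("Running",   "Succeeded",  "Failed"):     (0, 0),
    ("Running",   "Killed",     "Waiting"):    (1, 0),
    ("Running",   "Succeeded",  "Killed"):     (0, 1),
    ("Succeeded", "Succeeded",  "Succeeded"):  (0, 0),
}

def label(job_status, map_status, reduce_status):
    if job_status == "Failed":               # job status has priority over task status
        return (0, 0)
    # Defaulting unlisted combinations to (0, 0) is our assumption.
    return RULES.get((job_status, map_status, reduce_status), (0, 0))

print(label("Running", "Succeeded", "Running"))   # -> (0, 1)
```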
### Model training phase
In this phase, we use the Scikit-Learn library in Python to implement an SVM for classifying data. The training process includes two steps: choosing the best kernel function and evaluating the trained model.
* _Choosing the best kernel function:_ SVM has several available kernel functions that can be used for training, including polynomial, sigmoid, linear, and RBF. We evaluate the
\begin{table}
\begin{tabular}{c c c c c c} \hline Job status & Map task status & Reduce task status & Input Map task label & Input Reduce task label & Rationale \\ \hline New & New & New & Not reused & Not reused & The job is waiting in a queue. \\ Initiated & Scheduling & Waiting & Reused & Not reused & The outputs of the Map tasks have not been generated yet. \\ Initiated & Running & Waiting & Reused & Not reused & The outputs of the Map tasks have not been generated yet. \\ Running & Succeeded & Scheduling & Not reused & Reused & The input of Reduce is the output of the completed Map task. \\ Running & Succeeded & Running & Not reused & Reused & The Map task has been completed. \\ Running & Failed & Waiting & Not reused & Not reused & The Map task failed and cannot generate intermediate data. \\ Running & Succeeded & Failed & Not reused & Not reused & The Map task is completed and the Reduce task failed and cannot continue. \\ Running & Killed & Waiting & Reused & Not reused & The killed task may execute on another node (speculative task). \\ Running & Succeeded & Killed & Not reused & Reused & The killed task may execute on another node (speculative task). \\ Succeeded & Succeeded & Succeeded & Not reused & Not reused & The job is completed and we do not consider relationships between jobs or repetitive and recurring jobs. \\ Failed & Don't care & Don't care & Not reused & Not reused & Job status has higher priority than task status. \\ \hline \end{tabular}
\end{table}
Table 4: Guidelines to provide target labels
performance of kernel functions by using the confusion matrix to choose an appropriate kernel function for the training dataset. A confusion matrix is a table that is often used to describe the performance of a classification model. The metrics that are used in the confusion matrix method for investigating the correctness of classification are:
\(\circ\) _Recall_: The ability of a classification model to identify all relevant instances.
\(\circ\) _Precision_: The ability of a classification model to return only relevant instances.
\(\circ\) _F1 score_: This metric combines recall and precision using the harmonic mean.
The formulas for calculating these metrics are as follows:
\[\text{Recall}=\frac{TP}{TP+FN}\qquad\text{Precision}=\frac{TP}{TP+FP}\qquad\text{F1 score}=2\cdot\frac{\text{Precision}\cdot\text{Recall}}{\text{Precision}+\text{Recall}}\]
We choose the RBF function as a kernel function for our dataset because it demonstrated the best performance. The experimental results are reported in Table 5.
* _Evaluating the trained model_: The dataset is divided randomly into training data (75%) and testing data (25%). In this phase, we use the testing data and cross-validation to evaluate the trained model and its prediction accuracy. The resulting prediction accuracy is 83%; in other words, the probability of misclassification is low. However, if reused data are misclassified, they may be evicted before reuse, which increases cache misses; conversely, if unused data are misclassified, cache pollution may result.
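For illustration, the following scikit-learn sketch mirrors this training and evaluation procedure. The feature matrix and labels are random placeholders (in practice the categorical job and task features would first be encoded numerically), so the printed numbers will not match Table 5.

```python
# Sketch of the kernel comparison, train/test split, and cross-validation with scikit-learn.
import numpy as np
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.svm import SVC
from sklearn.metrics import classification_report

X = np.random.rand(400, 6)                 # placeholder features
y = np.random.randint(0, 2, size=400)      # placeholder labels: 1 = reused, 0 = not reused

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

for kernel in ("linear", "rbf", "sigmoid"):
    clf = SVC(kernel=kernel).fit(X_train, y_train)
    print(kernel)
    print(classification_report(y_test, clf.predict(X_test), zero_division=0))

# Cross-validated accuracy of the chosen RBF model.
print("cv accuracy:", cross_val_score(SVC(kernel="rbf"), X, y, cv=5).mean())
```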
## 6 H-SVM-LRU evaluation
In this section, we explain the experimental environment including software and hardware configurations, and set some Hadoop configuration parameters. Our evaluation is divided into two sections: the H-SVM-LRU performance evaluation and the investigation of the impact of the proposed algorithm on Hadoop performance. We first evaluate the efficiency of the proposed algorithm by using the cache hit ratio as the performance metric. Finally, we perform experiments to present the impact of the H-SVM-LRU cache replacement policy on overall Hadoop performance.
### Experimental setup
For our experiments, we use a cluster consisting of a single NameNode and nine DataNodes located in the same rack.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline Kernel function & Class & Precision & Recall & F1-score & Accuracy \\ \hline Linear & 0 & 0.67 & 1 & 0.8 & 0.71 \\ \cline{2-5} & 1 & 1 & 0.33 & 0.5 & \\ \hline RBF & 0 & 0.8 & 1 & 0.81 & 0.85 \\ \cline{2-5} & 1 & 0.65 & 0.7 & 0.75 & \\ \hline Sigmoid & 0 & 0.57 & 1 & 0.73 & 0.57 \\ \cline{2-5} & 1 & 0 & 0 & 0 & \\ \hline \end{tabular}
\end{table}
Table 5: Evaluation of different kernel functions
* _Hardware configuration_: These nodes are connected via a 10 Gigabit Ethernet switch. Each node is configured with an Intel Core i7-6700 processor, 16 GB memory, and a one TB hard disk.
* _Software configuration_: We use Ubuntu 14.04 as the operating system, JDK 1.8, Hadoop version 2.7 (which employs in-memory caching), and Intel HiBench version 7.1.
* _Hadoop configuration parameters:_ The block size of files in HDFS is chosen to be one of two values, 64 MB or 128 MB, the number of cache replicas is set to one, and data replication is set to 3. The memory sizes for a Map task, a Reduce task, and the node manager are 1 GB, 2 GB, and 8 GB, respectively. The maximum size of the cache is set to 1.5 GB, and we assume that each DataNode in the cluster has the same cache size. Table 6 presents the Hadoop configuration parameters with their values; the remaining Hadoop configuration parameters are set to their default values.
* _MapReduce applications_: As mentioned earlier, we use Intel HiBench as a Hadoop benchmark suite; it contains the following applications: 1) WordCount is a CPU-intensive application that counts the frequency of occurrence of each word in a text file. 2) Sort is a typical I/O-bound application that sorts input data. 3) Grep is a mix of CPU-bound and I/O-bound operations that searches for a substring in a text file. These three applications are supported by Hadoop. 4) Join is a multiple-stage application in which the results of the previous stage are used as input for the next stage. 5) Aggregation (supported by Hive) is used for the aggregation operation in a query.
* _Dataset:_ For carrying out experiments, we use the Gutenberg dataset [28] as input data for the WordCount application to evaluate its execution time based on different input data. As we mentioned earlier in the implementation section, the ALOJA dataset is used as a training dataset for the SVM model. The applications use input files generated by a random text generator.
### Metrics
In these experiments, we consider three key metrics to evaluate our proposed algorithm. The first, the cache hit ratio, is used to evaluate the performance of the proposed H-SVM-LRU cache replacement policy. The other two, job execution time and normalized run time, are used to determine the impact on Hadoop performance. In the following, we explain these three metrics:
* _Hit ratio and byte hit ratio:_ These are two major factors to evaluate the performance of the cache replacement strategy. Hit ratio relates the number of cache hits to the total number of requests and byte hit ratio relates the number of bytes obtained from the cache to the
\begin{table}
\begin{tabular}{c c} \hline Hadoop property name & Hadoop property value \\ \hline dfs.replication & 3 \\ dfs.blocksize & 64M or 128M \\ mapreduce.map.memory.mb & 1024 \\ mapreduce.reduce.memory.mb & 2048 \\ mapreduce.jobhistory.webapp.address & Master:19888 \\ mapreduce.reduce.speculative & False \\ mapreduce.map.speculative & False \\ mapred.map.tasks.speculative.execution & False \\ mapred.reduce.tasks.speculative.execution & False \\ \hline \end{tabular}
\end{table}
Table 6: Hadoop parameters
total number of bytes requested. It is very difficult for a cache replacement strategy to simultaneously optimize these two metrics because improving the hit ratio usually favors small-sized items over large-sized items, leading to a reduced byte hit ratio. In contrast, strategies that tend to increase the byte-hit ratio and prefer large-sized items typically decrease the hit ratio. In the experiments, we only consider the hit ratio because data blocks have the same size.
* _Job execution time_: This plays a vital role in Hadoop performance improvement, and it is related to data access time. The data access time decreases significantly if we can access data from the cache instead of the disk, reducing the job execution. To calculate the average job execution time, we run each application five times.
* _Normalized run time_: For each application in a workload, its run time is normalized with respect to the original Hadoop (H-NoCache). The average normalized run time over the applications in a workload is then calculated to evaluate overall Hadoop performance [11], [12], [27].
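Written out explicitly (our formalization of the definitions above, where \(W\) denotes the set of applications in a workload and \(T_{a}\) the run time of application \(a\)):

\[\text{hit ratio}=\frac{\#\,\text{cache hits}}{\#\,\text{requests}},\qquad\text{byte hit ratio}=\frac{\text{bytes served from the cache}}{\text{bytes requested}},\qquad\overline{T}_{\text{norm}}=\frac{1}{|W|}\sum_{a\in W}\frac{T_{a}}{T_{a}^{\text{no-cache}}}\]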
### H-SVM-LRU performance
To calculate the cache hit ratios, we consider two data block sizes: 64 MB and 128 MB. The input data size is 2 GB with the same sequence of requested data for each mechanism, and the cache size is the same in all DataNodes (1.5 GB). We express the cache capacity as the maximum number of data blocks that can be cached, which varies between 6 and 12 for the 128 MB block size and between 6 and 24 for the 64 MB block size. Figure 3 presents the cache hit ratio for block sizes of 64 MB and 128 MB.
In Figure 3, we observe that increasing the cache size increases the hit ratio for both the LRU and H-SVM-LRU strategies, as more of the requested data can be cached. The hit ratio also increases with the data block size; for instance, when the cache size is 6 and the data block size increases from 64 MB to 128 MB, the cache hit ratio approximately doubles because more data can be cached. Both diagrams demonstrate that the hit ratio of H-SVM-LRU is higher than that of LRU, in particular when the cache size is small. In order to quantify the improvement of H-SVM-LRU over LRU, we calculate the improvement ratio (IR) of the hit ratio for each cache size.
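The exact IR formula is not stated in the text; a natural definition, which we assume here, is the relative hit-ratio gain of H-SVM-LRU over LRU:

\[\text{IR}=\frac{\text{HR}_{\text{H-SVM-LRU}}-\text{HR}_{\text{LRU}}}{\text{HR}_{\text{LRU}}}\times 100\%\]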
Figure 3: Cache hit ratio for different cache sizes
Table 7 presents the relative improvement of H-SVM-LRU over LRU for different cache sizes for both 64 MB and 128 MB block sizes. We observe that H-SVM-LRU has the greatest improvement ratio for small cache size and small data blocks, suggesting that H-SVM-LRU is suitable for small cache size because it better avoids cache pollution.
### Impact of H-SVM-LRU on Hadoop performance
In this section, we carry out two separate experiments to investigate the impact of H-SVM-LRU on Hadoop performance: 1) Job execution time based on different input data sizes. 2) Normalized run time of multiple applications in a workload. For this purpose, we compare Hadoop performance in the following scenarios to extract the impact of the proposed replacement policy on Hadoop performance:
* H-NoCache: The Hadoop original does not utilize HDFS in-memory caching; it is used as a baseline.
* H-LRU: Hadoop uses traditional LRU as a cache replacement policy.
* H-SVM-LRU: H-SVM-LRU is used as a cache replacement policy.
#### 6.4.1 Job execution time based on different input data sizes
In this experiment, we consider the job execution time of the WordCount MapReduce application based on different input data sizes for two different data block sizes (64 MB and 128MB). Figure 4 presents job execution time based on input data size for our three scenarios.
As we observe in Figure 4, there is a significant difference in job execution time between the original Hadoop and Hadoop with caching: as the input data size grows, the probability that requested data are already cached increases, so more data can be accessed from the cache. Comparing H-LRU with H-SVM-LRU, the execution time of H-SVM-LRU is lower because it achieves a larger number of cache hits. In the second experiment, which uses a data block size of 128 MB, the difference in execution time between the original Hadoop and Hadoop with caching increases significantly: with larger data blocks more data can be cached, so the byte hit ratio increases. In this case, the execution time of H-SVM-LRU is again lower than that of H-LRU because its byte hit ratio is higher. We can therefore conclude that H-SVM-LRU has a lower execution time than the other two scenarios, which leads to improved Hadoop performance.
\begin{table}
\begin{tabular}{c c c} \hline \hline Cache size & IR for Data block size & IR for Data block size \\ & (64 MB) & (128 MB) \\ \hline
6 & 63.63\% & 20.83\% \\
8 & 64.70\% & 15.15\% \\
10 & 33.33\% & 10.25\% \\
12 & 33.33\% & 6.81\% \\
14 & 22.58\% & N/A \\
16 & 14.28\% & N/A \\
18 & 7.89\% & N/A \\ \hline \hline \end{tabular}
\end{table}
Table 7: Improvement ratio of H-SVM-LRU over LRU based on hit ratio
#### 6.4.2 Normalized run time of multiple applications in a workload
In this experiment, we consider various workloads, each consisting of four concurrent MapReduce applications. We assume that all applications in one workload require an equal share of cluster resources. In addition, some applications use the same input data, which is shared between them; for instance, Grep, WordCount, and Sort use the same input data generated by a random text generator, and data are shared between Aggregation and Join.
The cache affinity feature [12] determines how much an application benefits from cached data, so applications can be classified into three categories based on this feature: low cache affinity (Sort), medium cache affinity (WordCount, Join), and high cache affinity (Grep, Aggregation). We therefore compose various workloads of I/O-bound and CPU-bound applications while taking their cache affinity into account. Table 8 presents the list of workloads with the applications used in this experiment.
In order to compare Hadoop performance for each workload, we calculate the normalized run time with respect to the original Hadoop (H-NoCache). Figure 5 illustrates the experimental results. Comparing H-LRU with the original Hadoop, we observe that H-LRU improves performance by 11.33%
Figure 4: Job execution time for different input data sizes
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline Workload & App1 & App2 & App3 & App4 & Input data size (GB) \\ \hline W1 & Aggregation & Grep & Join & WordCount & 257.3 \\ \hline W2 & Aggregation & Grep & Sort & WordCount & 262.9 \\ \hline W3 & Aggregation & WordCount & Grep & Grep & 376.2 \\ \hline W4 & Aggregation & Sort & Grep & Grep & 446.7 \\ \hline W5 & Grep & Grep & Sort & WordCount & 254.3 \\ \hline W6 & Aggregation & Grep & Join & Sort & 377.1 \\ \hline \end{tabular}
\end{table}
Table 8: The list of workloads with their applications
and the average improvement of H-SVM-LRU is 16.16% and 4.83% over the original Hadoop and H-LRU, respectively. The use of cached data plays a vital role in reducing run time, and the number of cache hits in H-SVM-LRU is higher than in H-LRU as a result of using cache space efficiently. H-LRU and H-SVM-LRU show their largest improvements in workloads W3 and W5, because workload W3 is composed of high cache affinity applications and workload W5 has the most data shared between applications.
Figure 6 provides the normalized run times of the applications in each workload under the H-SVM-LRU scenario, in order to investigate the impact of H-SVM-LRU on the performance of each application. We observe that some I/O-intensive applications such as Grep and Sort show significant performance improvements: Sort can benefit from reusing data cached by Grep and WordCount because they share the same input, and I/O-bound jobs spend most of their time reading blocks, so they benefit more from cached data. Therefore, the performance of I/O-intensive applications like Sort improves when they are combined with other applications that have different resource usage patterns. Moreover, multiple-stage applications like Join have difficulty reusing input files because the output of the previous stage is used as input for the next stage, a pattern that is not well suited to this caching mechanism.
We conclude that H-SVM-LRU is appropriate for workloads composed of applications with different resource usage patterns and a large amount of shared data; in other words, this strategy is suitable for jobs that reuse a large amount of data.
Figure 5: Normalized run time of different workloads
## 7 Conclusion and future work
In this paper, we proposed H-SVM-LRU, an intelligent cache replacement strategy that improves Hadoop performance. H-SVM-LRU combines an SVM with LRU to use the limited cache capacity efficiently and to avoid cache pollution by unused data. In this policy, cached data are classified into two groups by an SVM classifier, data that will be reused in the future and data that will not, and evicted items are determined based on their class, so that unwanted items are removed from the cache at an early stage to make space for new data blocks. Experimental results show that H-SVM-LRU improves the cache hit ratio by reducing the frequency with which data that will be reused are evicted, and its average improvement is 16.16% and 4.83% over the original Hadoop and H-LRU, respectively. This caching mechanism is appropriate for small cache sizes, using their limited space efficiently, as well as for workloads composed of high cache affinity applications with varied resource usage and a large amount of shared data. The advantage of this policy is an increased number of cache hits, which decreases data access time; this in turn reduces job execution time, resulting in a positive impact on overall Hadoop performance. While the training time is a limitation of this approach, it is somewhat mitigated by being independent of the execution time. The major limitation of this study is that the lack of a labeled training dataset required extra computational overhead, particularly in the non-request-awareness scenario. Our future plans include examining H-SVM-LRU on a large cluster to evaluate its scalability and extending intelligent caching by applying machine learning techniques to prefetch requested data from HDFS.
Figure 6: Normalized run time of applications in each workload |
2309.08897 | Asynchronous Task Plan Refinement for Multi-Robot Task and Motion
Planning | This paper explores general multi-robot task and motion planning, where
multiple robots in close proximity manipulate objects while satisfying
constraints and a given goal. In particular, we formulate the plan refinement
problem--which, given a task plan, finds valid assignments of variables
corresponding to solution trajectories--as a hybrid constraint satisfaction
problem. The proposed algorithm follows several design principles that yield
the following features: (1) efficient solution finding due to sequential
heuristics and implicit time and roadmap representations, and (2) maximized
feasible solution space obtained by introducing minimally necessary
coordination-induced constraints and not relying on prevalent simplifications
that exist in the literature. The evaluation results demonstrate the planning
efficiency of the proposed algorithm, outperforming the synchronous approach in
terms of makespan. | Yoonchang Sung, Rahul Shome, Peter Stone | 2023-09-16T06:35:22Z | http://arxiv.org/abs/2309.08897v1 | # Asynchronous Task Plan Refinement for Multi-Robot Task and Motion Planning
###### Abstract
This paper explores general multi-robot task and motion planning, where multiple robots in close proximity manipulate objects while satisfying constraints and a given goal. In particular, we formulate the plan refinement problem--which, given a task plan, finds valid assignments of variables corresponding to solution trajectories--as a hybrid constraint satisfaction problem. The proposed algorithm follows several design principles that yield the following features: (1) efficient solution finding due to sequential heuristics and implicit time and roadmap representations, and (2) maximized feasible solution space obtained by introducing minimally necessary coordination-induced constraints and not relying on prevalent simplifications that exist in the literature. The evaluation results demonstrate the planning efficiency of the proposed algorithm, outperforming the synchronous approach in terms of makespan.
## I Introduction
Developing multi-robot systems to achieve a desired goal while interacting with objects in the world requires integrated reasoning about task sequencing, task allocation, and motion planning. Task and motion planning (TAMP [1]) jointly addresses the search for a sequence of discrete symbolic actions, the selection of which object to manipulate, and the assignment of continuous values to actions, determining how to execute those actions. However, the TAMP literature has predominantly focused on single-robot problems.
Another closely related topic is multi-robot motion planning [2, 3], which aims to find collision-free paths for multiple robots. In this context, objects are not considered for manipulation but rather are treated as obstacles. Additionally, multi-robot motion planning typically addresses individual motion planning problems, unlike TAMP where a sequence of motion planning problems is considered. The objective of this work is to develop a general-purpose multi-robot TAMP (MR-TAMP) framework that inherits challenges from both of these perspectives.
In existing MR-TAMP research, two prevalent simplifications are the _pre-discretization_ of the search space [4, 5] and _synchronous actions_[6, 7, 8, 9], where robots simultaneously initiate and complete action execution. While these assumptions simplify algorithm design, they can significantly diminish the space of feasible solutions, potentially preventing the solution of certain feasible problems and reducing the diversity of available solution paths.
In this work, our goal is to formulate MR-TAMP problems that maximize the feasible solution space by avoiding both of these simplifications. This approach can be viewed as an extension of the TAMP formulation to MR-TAMP, introducing only the necessary constraints arising from multi-robot coordination. The formulation essentially represents a _hybrid_ constraint satisfaction problem (H-CSP [1, 10]), incorporating both discrete and continuous variables.
To achieve this goal, we address a specific aspect in this work, referred to as the _refinement_ problem. When a task plan is provided, specifying the sequences of object manipulations for all robots, the objective of the refinement problem is to assign values to all continuous variables that meet the constraints, in order to find solution paths that the robots can execute. This direction shows promise, as we can seamlessly harness state-of-the-art multi-agent task planners from the AI planning community [11, 12] when developing the complete framework in the future.
Figure 1 illustrates the type of task we address, wherein multiple mobile manipulators operate in close proximity, involving multi-step manipulations such as picking up and placing multiple objects. The process of solving the proposed refinement problem, which aims to satisfy the given task plan and constraints, reveals specific grasp poses, placements, motions, and action scheduling for the robots.
Fig. 1: Example MR-TAMP task showing the initial state. Robots (\(\{r\}_{r=1}^{3}\)), movable objects (\(\{m\}_{m=1}^{4}\)), and workspace regions (\(\{w\}_{w=1}^{4}\)) are depicted in the figure, while fixed objects, corresponding to walls, shelves, table, and cabinet, are not shown. The goal is to move all movable objects from their initial locations to the cabinet (_i.e._, workspace region 4). The given task involves three robots such that robot \(1\) moves all movable objects to workspace region \(3\), and robots \(2\) and \(3\) move them to workspace region \(4\).
Our main contributions can be summarized as follows: (1) the introduction of a general problem formulation for MR-TAMP that is inherently asynchronous and does not require complex scheduling, (2) the identification of fundamental challenges raised by this problem, and (3) the proposal of a search algorithm that incorporates promising heuristics.
## II The MR-TAMP Refinement Problem
### _Notations and assumptions_
Consider \(R\) robots, indexed as \(\{r\}_{r=1}^{R}\), manipulating objects to achieve a goal in a 3D workspace. The workspace consists of \(M\) movable objects, such as cups and plates, indexed as \(\{m\}_{m=1}^{M}\) and \(F\) fixed objects, such as tables and shelves, indexed as \(\{f\}_{f=1}^{F}\). We denote \(W\) workspace regions as \(\{w\}_{w=1}^{W}\), where movable objects can be placed, such as the surface of the table and the space on the shelf, inspired by the work [13].
While our framework is not necessarily restricted to homogeneous robots (_i.e._, robots with the same shapes, degrees of freedom, and abilities to move and manipulate), in this paper, we consider homogeneous robots for the sake of notational convenience. Each robot \(r\) operates in a \(d\)-dimensional configuration space whose configuration is represented as \(q_{r}\in\mathcal{C}_{r}\subset\mathbb{R}^{d}\). The pose of a movable object \(m\) is denoted as \(p_{m}\in\mathcal{P}_{m}\subset\mathit{SE}(3)\). Then, the composite configuration space for all robots and movable objects becomes \(\mathcal{C}=\prod_{r=1}^{R}\mathcal{C}_{r}\times\prod_{m=1}^{M}\mathcal{P}_{m}\). We denote the free space of the composite configuration space as \(\mathcal{C}^{\mathrm{r}}\), which represents all possible configurations of robots and movable objects that are positioned stably and do not collide with each other and with fixed objects. Correspondingly, the obstacle space is defined as \(\mathcal{C}^{\mathrm{o}}=\mathcal{C}\setminus\mathcal{C}^{\mathrm{r}}\).
We assume quasi-static dynamics in the world, which implies that movable objects remain stable after being manipulated by robots. Additionally, we assume that each movable object can be manipulated by a single robot. Furthermore, we assume deterministic transition effects, full observability, and lossless communication among robots. While our focus in this work is on pick-and-place tasks where geometric constraints are of major concern, our ultimate aim is to position this work as a foundational framework in MR-TAMP that can effectively address a wider range of practical challenges in the future, including those that relax the assumptions mentioned in this paragraph.
### _Mode-based abstract actions_
We employ the notion of a _mode_[14, 15, 16, 17], denoted by \(\sigma\), which specifies a constraint submanifold of \(\mathcal{C}^{\mathrm{r}}\), to define actions. These modes are determined by the contact points between the robot and the movable object (_e.g._, robot \(r\) grasping movable object \(m\)), while the remaining objects remain stationary. We consider two types of modes: a _transit mode_\(\sigma^{\mathrm{s}}\), where a robot moves with an empty hand, and a _transfer mode_\(\sigma^{\mathrm{r}}\), where a robot moves while holding a movable object. The transition between two adjacent modes can be facilitated through a _transition configuration_, which represents the robot's grasping or placing configuration.
We define the _abstract action_ based on these two modes. Let the abstract action be \(a=\big{\{}\sigma^{\mathrm{s}}(r,m,w,w^{\prime}),\sigma^{\mathrm{r}}(r,m,w,w^{\prime})\big{\}}\). \(\sigma^{\mathrm{s}}(r,m,w,w^{\prime})\) indicates that robot \(r\) moves from workspace region \(w\) to another workspace region \(w^{\prime}\) with an empty hand in order to grasp movable object \(m\) located in \(w^{\prime}\). \(\sigma^{\mathrm{r}}(r,m,w,w^{\prime})\) indicates that robot \(r\), while already grasping movable object \(m\) in workspace region \(w\), moves and places it in another workspace region \(w^{\prime}\). These actions are still abstract because continuous parameters, such as robot configurations \(\{q_{r}\}_{r=1}^{R}\) and object poses \(\{p_{m}\}_{m=1}^{M}\), are not yet specified. Abstract actions may encompass both arm and base motions, as illustrated in Figure 1.
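As an illustration (our own encoding, not from the paper), the two abstract action types can be represented as simple data structures whose fields mirror the arguments \(r\), \(m\), \(w\), and \(w^{\prime}\):

```python
# Illustrative encoding (ours) of the two mode-based abstract actions; field names
# are our own choice.
from dataclasses import dataclass

@dataclass(frozen=True)
class Transit:
    """sigma^s(r, m, w, w_to): robot r moves empty-handed from region w to region w_to
    in order to grasp movable object m located in w_to."""
    r: int
    m: int
    w: int
    w_to: int

@dataclass(frozen=True)
class Transfer:
    """sigma^r(r, m, w, w_to): robot r, already grasping object m in region w, moves
    and places it in region w_to."""
    r: int
    m: int
    w: int
    w_to: int

# An abstract pick-and-place of object 2 by robot 1, from region 1 to region 3:
a = (Transit(r=1, m=2, w=2, w_to=1), Transfer(r=1, m=2, w=1, w_to=3))
```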
### _H-CSP for refinement_
We formulate the refinement of abstract actions into fully specified actions that robots can execute as an H-CSP problem. This problem involves assigning values from the domains of variables while ensuring that the assigned values do not violate any constraints. The variable set is defined as \(\mathcal{V}=\big{\{}\{v_{r}^{q}\}_{r=1}^{R},\{v_{r}^{g}\}_{r=1}^{R},\{v_{m}^{p}\}_{m=1}^{M}\big{\}}\), where \(v_{r}^{q}\) is a transition configuration variable for robot \(r\), \(v_{r}^{g}\) is a grasp variable for robot \(r\), and \(v_{m}^{p}\) is a pose variable for movable object \(m\). The domains for these variables are defined as follows: for \(v_{r}^{q}\), \(\mathcal{D}_{r}^{q}=\mathcal{C}_{r}\); for \(v_{r}^{g}\), \(\mathcal{D}_{r}^{g}=\cup_{m=1}^{M}\mathcal{G}_{r,m}\); and for \(v_{m}^{p}\), \(\mathcal{D}_{m}^{p}=\mathcal{P}_{m}\). Here, \(\mathcal{G}_{r,m}\ni g_{r,m}=(r,m,\gamma_{r,m})\) indicates that robot \(r\) grasps movable object \(m\) with a relative transformation \(\gamma_{r,m}\) between the pose of robot \(r\)'s end-effector and the pose of the object \(p_{m}\). The abstract actions are associated with variables as goal variables, where \(\sigma^{\mathrm{s}}(r,m,w,w^{\prime})\) includes \(v_{r}^{q}\) and \(v_{r}^{g}\), while \(\sigma^{\mathrm{r}}(r,m,w,w^{\prime})\) includes \(v_{r}^{q}\) and \(v_{m}^{p}\).
We present the mode-specific constraints, which are parameterized, that the assigned values must satisfy as follows:
* \(\mathsf{Motion}\) constrains that a robot can move between two consecutive transition configurations along a feasible trajectory returned by a motion planner.
* \(\mathsf{CFree}\big{(}\{q_{r}\}_{r=1}^{R},\{p_{m}\}_{m=1}^{M},\{f\}_{f=1}^{F}\big{)}\) ensures that the robots, the movable objects, and the fixed objects do not collide with one another.
* \(\mathsf{Kin}\) enforces the kinematic relationship between a robot's transition configuration \(q_{r}\), its grasp \(g_{r,m}\), and the pose \(p_{m}\) of the grasped movable object \(m\).
* \(\mathsf{Grasp}(r,m,g_{r,m})\) ensures that robot \(r\) grasps movable object \(m\) with grasp \(g_{r,m}\).
* \(\mathsf{Hold}(r,m)\) ensures that movable object \(m\) is securely attached to the hand of robot \(r\). When this constraint is activated, it affects other constraints in the following manner. In the CFree constraint, the pose \(p_{m}\) of movable object \(m\) is no longer considered directly, but can be computed based on grasp \(g_{r,m}\) and robot \(r\)'s configuration \(q_{r}\). Additionally, the collision detection between robot \(r\) and movable object \(m\) is no longer considered in the CFree constraint. Furthermore, it prevents the activation of Grasp constraints for other robots besides \(r\), ensuring that the same movable object cannot be grasped by multiple robots while it is already being held.
* \(\mathsf{Contain}(m,w)\) constrains that movable object \(m\) is stably placed within workspace region \(w\).
Among the constraints, Motion, CFree, and Kin are always applied to both types of abstract actions, \(\sigma^{\text{s}}\) and \(\sigma^{\text{r}}\). Grasp and Contain constraints serve as goals within the abstract actions. The constraints applied for each type of abstract action are presented as follows:
* \(\sigma^{\text{s}}\): Motion, CFree, Kin, and Grasp.
* \(\sigma^{\text{r}}\): Motion, CFree, Kin, Hold, and Contain.
Note that we do not introduce a constraint enforcing the synchronous start and end of abstract actions for all robots, which is what characterizes the synchronous approach. Therefore, our formulation strictly generalizes the synchronous formulation.
### _The proposed problem_
In this work, we address a partial problem where _ground_ abstract actions for all robots are provided, which means that the arguments \(r\), \(m\), \(w\), and \(w^{\prime}\) are grounded in all instances of \(\sigma^{\text{s}}\) and \(\sigma^{\text{r}}\), as well as the ordering among abstract actions. However, we still need to assign values to the variables of the corresponding abstract actions that satisfy the constraints specified in Section II-C. This particular approach is referred to as the _sequence-before-satisfy_ strategy in the TAMP literature [1], and our focus is on addressing the satisfy part, or refinement, assuming that sequencing is given.
Specifically, we are provided with a tuple \(\left\langle\{a_{r}^{A_{r}}\}_{r=1}^{R},\prec\right\rangle\), where \(a_{r}^{A_{r}}\) represents a set of abstract actions for robot \(r\), and \(A_{r}\) is an index set specific to robot \(r\), allowing robots to have different cardinalities of abstract actions. \(\prec\) is a set of ordering constraints that determine the sequencing of the provided abstract actions.
It is important to note that these ordering constraints can apply not only to abstract actions of the same robot but also to abstract actions of different robots. For instance, if movable object \(m\) is initially placed in workspace region \(w\), then the refinement of \(\sigma^{\text{s}}(r,m,w^{\prime},w^{\prime\prime})\) for robot \(r\) cannot be carried out until another robot \(r^{\prime}\) executes \(\sigma^{\text{r}}(r^{\prime},m,w,w^{\prime\prime})\), as the movable object \(m\) is not yet located within workspace region \(w^{\prime\prime}\).
Furthermore, \(\prec\) does not specify the ordering between every pair of abstract actions from \(\{a_{r}^{A_{r}}\}_{r=1}^{R}\). \(\prec\) is _minimally_ given in the sense that it only specifies the sequence of workspace regions where each movable object is placed. Any orderings that require geometric reasoning are not included and must be determined by solving the refinement problem. For instance, suppose workspace region \(w\) has limited space. In that case, robot \(r\) can only feasibly place movable object \(m\) in workspace region \(w\) (_e.g._, \(\sigma^{\text{r}}(r,m,w^{\prime},w)\)) after another robot \(r^{\prime}\) removes another movable object \(m^{\prime}\) from the same workspace region (_e.g._, \(\sigma^{\text{r}}(r^{\prime},m^{\prime},w,w^{\prime})\)), creating empty space in workspace region \(w\).
Let \(s_{0}=\left((q_{r})_{r=1}^{R},(p_{m})_{m=1}^{M}\right)\) represent the initial state, specifying the initial configurations of all robots and the initial poses of all movable objects. The refinement problem is then defined as follows: given a tuple \(\left\langle\{a_{r}^{A_{r}}\}_{r=1}^{R},\prec,s_{0}\right\rangle\), the goal is to find valid assignments of variables defined in Section II-C for all abstract actions \(\{a_{r}^{A_{r}}\}_{r=1}^{R}\), potentially introducing additional ordering constraints while respecting the given ordering constraints \(\prec\) and the mode-specific constraints.
## III Algorithm
Solving the proposed problem while respecting all the constraints simultaneously is highly challenging, as even a single-robot TAMP problem is known to be intractable (_i.e._, PSPACE-hard [19]). Additionally, explicitly constructing a composite roadmap from the individual robot roadmaps \(\{G_{r}\}_{r=1}^{R}\) is computationally expensive, especially considering the exponential increase in the number of samples required by the motion planner (such as PRM in our case) to cover the composite configuration space of all robots (_i.e._, \(\prod_{r=1}^{R}\mathcal{C}_{r}\)). Moreover, the path for an abstract action and its length can only be determined after it has been computed by the motion planner, making it difficult to anticipate in advance when a robot will place a movable object. Consequently, it is challenging to identify when the CFree constraints are affected by the Hold constraints without evaluating all the relevant Motion constraints.
### _Overall framework_
We propose a heuristic-based search algorithm to efficiently solve the refinement problem, incorporating the following four principles.
(1) **Least commitment**: We follow the _least commitment_ principle [20], avoiding the introduction of additional ordering constraints unless absolutely necessary. This approach increases the size of the feasible solution space, leading to a more diverse set of solutions.
(2) **Sequential heuristics**: Instead of solving the problem in one step, we decompose it into a sequence of subproblems. We relax the problem by neglecting some of the mode-specific constraints, creating a relaxed problem that serves as a necessary condition for the subsequent problem in the sequence. The first subproblem is the most relaxed, and as we progress through the sequence, the neglected constraints are reintroduced incrementally. Additionally, the relaxed problem provides heuristics for guiding the search in the next subproblem. This decomposition approach is appealing because it can efficiently find a solution if one exists or effectively detect infeasibility in the early stage
of the sequence. The flow chart illustrating this process is depicted in Figure 2.
(3) **Implicit time representation**: Unlike many existing multi-robot task planning or TAMP approaches that explicitly represent time for temporal planning, our formulation and algorithm do not require explicit time representation. This approach avoids the complexity of introducing a scheduling problem and aligns with the observation made by Boutilier and Brafman [21] that explicit time representation is not always necessary. In our approach, time is implicitly revealed as a byproduct of solving the refinement problem.
(4) **Implicit composite roadmap construction**: As mentioned at the beginning of this section, explicit construction of a composite roadmap from \(\{G_{r}\}_{r=1}^{R}\) is impractical. Instead, we employ the concept of implicit composite roadmap construction, referred to as _subdimensional expansion_ in the literature [22, 23, 24]. This approach involves generating individual roadmaps for each robot independently, ignoring collisions with other robots. These individual roadmaps are then combined in a manner that takes into account robot-robot collisions. The resulting composite roadmap consists only of explored vertices and edges.
In the following subsections, we present each component of the algorithm depicted in Figure 2.
### _Movable object placements_
In this step, we relax most of the mode-specific constraints and retain only the CFree and Contain constraints in all abstract actions. Moreover, in the CFree constraint, we disregard the robot configurations from the argument, resulting in \(\texttt{CFree}\big{(}\{p_{m}\}_{m=1}^{M},\{f\}_{f=1}^{F}\big{)}\). This step can be seen as the teleportation of movable objects from one workspace region to another, excluding any robot involvement. The objective is to find valid assignments for all the relevant pose variables \(v_{m}^{p}\) present in the given abstract actions \(\{a_{r}^{A_{r}}\}_{r=1}^{R}\) to satisfy the Contain constraint and, if necessary, introduce additional ordering constraints to resolve the CFree constraint with other movable objects.
This subproblem can be effectively solved by further decomposing it into multiple workspace region-specific problems since placements in one workspace region are completely independent of those in other regions. Let's consider a specific workspace region \(w\) present in the given abstract actions \(\{a_{r}^{A_{r}}\}_{r=1}^{R}\); we apply the same procedure to other relevant workspaces. For workspace region \(w\), we find a set of sequences that specify the ordering of addition (or placement) and removal operations for each relevant movable object. Note that this sequence set can be derived entirely from \(\big{\langle}\{a_{r}^{A_{r}}\}_{r=1}^{R},\prec\big{\rangle}\), and that each sequence consists of an alternating sequence of addition and removal operations.
From the sequence set in workspace region \(w\), we determine pose variable assignments for the subset of sequences that involve addition operations. We employ a sampling strategy by uniformly drawing a predetermined number of placement samples in workspace region \(w\) for each movable object in the subset.
To avoid unnecessary introduction of additional ordering constraints, we make the following observation: If we can solve the most constrained problem, where the CFree constraint is applied to all movable objects already located in workspace region \(w\) and those that will be added, no additional ordering constraints will be necessary. Only when the CFree constraint is violated for some movable objects, do we introduce additional ordering constraints, ensuring that one movable object is added after another is removed. This observation is based on the idea that in spacious workspace regions, the CFree constraint is mostly satisfied without the need for additional ordering constraints. However, in tiny workspace regions, many ordering constraints may be required.
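As a concrete illustration of this placement step, the following minimal Python sketch samples axis-aligned box placements for a single workspace region and falls back to adding ordering constraints only when the fully constrained CFree check fails. The 2-D box geometry and the helper names (`sample_pose`, `boxes_overlap`, `place_objects`) are simplifying assumptions for illustration, not the implementation used in the paper.

```python
import random

def boxes_overlap(a, b):
    """Axis-aligned overlap test; each box is (x, y, w, h)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def sample_pose(region, size, rng):
    """Uniformly sample a box of the given size inside the rectangular region."""
    rx, ry, rw, rh = region
    w, h = size
    return (rng.uniform(rx, rx + rw - w), rng.uniform(ry, ry + rh - h), w, h)

def place_objects(region, to_add, already_placed, removed_later, n_samples=50, seed=0):
    """Try the most constrained problem first (CFree against everything currently in
    the region plus everything being added); only if that fails, ignore the objects
    scheduled for removal and record ordering constraints (remove m' before adding m)."""
    rng = random.Random(seed)
    placements, orderings = {}, []
    for m, size in to_add.items():
        obstacles = list(already_placed.values()) + list(placements.values())
        pose = next((p for p in (sample_pose(region, size, rng) for _ in range(n_samples))
                     if not any(boxes_overlap(p, o) for o in obstacles)), None)
        if pose is None:
            relaxed = [v for k, v in already_placed.items() if k not in removed_later]
            relaxed += list(placements.values())
            pose = next((p for p in (sample_pose(region, size, rng) for _ in range(n_samples))
                         if not any(boxes_overlap(p, o) for o in relaxed)), None)
            if pose is None:
                return None, None  # infeasible under this sample budget
            # Conservatively place m only after every scheduled removal in this region.
            orderings += [(m_removed, m) for m_removed in removed_later]
        placements[m] = pose
    return placements, orderings
```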
In this step, for each abstract action under consideration, we store information about the movable objects from \(\{m\}_{m=1}^{M}\) and fixed objects from \(\{f\}_{f=1}^{F}\) involved in evaluating collisions with the corresponding movable object. This cached information will be utilized in the subsequent steps.
If no valid assignments can be found even after evaluating all possible combinations of the predetermined number of placement samples, we have two options. First, we can stop the process and declare the problem as infeasible. In this case, the next steps do not need to be attempted, as they rely on finding valid assignments in this subproblem. Alternatively, we can choose to draw more samples until a predetermined time limit is reached.
### _Transition configurations_
After obtaining valid assignments for all relevant pose variables associated with abstract actions \(\{a_{r}^{A_{r}}\}_{r=1}^{R}\), our next step is to find valid assignments for all relevant transition configuration variables \(v_{r}^{q}\) and grasp variables \(v_{r}^{g}\). However, in this process, we continue to disregard certain mode-specific constraints, such as Motion and Hold, as well as the presence of other robots. Instead, we focus on considering the CFree, Kin, and Grasp constraints. This step aims to identify feasible transition configurations and grasps for all given abstract actions \(\{a_{r}^{A_{r}}\}_{r=1}^{R}\) that are compatible with the movable object poses obtained in the previous step.
Fig. 2: The overall framework.
We no longer need to take the workspace region-specific approach as in the previous step. Instead, we address this subproblem for each pair of abstract actions of the same robot, consisting of \(\sigma^{\text{s}}\) and \(\sigma^{\text{r}}\), sequenced by the ordering constraints \(\prec\). Let's consider the sequential abstract actions corresponding to robot \(r\), denoted as \(\sigma^{\text{s}}(r,m,w,w^{\prime})\) and \(\sigma^{\text{r}}(r,m,w^{\prime},w^{\prime\prime})\). These abstract actions indicate that robot \(r\) grasps movable object \(m\) in workspace region \(w^{\prime}\) and moves to workspace region \(w^{\prime\prime}\) to place the object there. The same rule is applied to all other pairs of abstract actions of the same robot sequenced by the ordering constraints \(\prec\).
Instead of considering \(\{q_{r}\}_{r=1}^{R}\) as arguments in the CFree constraint, we only consider robot \(r\)'s configuration \(q_{r}\), ignoring other robots. As for the remaining object-related arguments, we retrieve the collision information cached in the previous step, which indicates which objects must be considered for collision checking. Since collisions among objects have already been confirmed in the previous step, we only assess collisions between robot \(r\) and the relevant objects using the CFree constraint.
Since the Grasp constraint is associated with the mode \(\sigma^{\text{s}}\), we first find a valid assignment for the grasp variable \(v_{r}^{g}\) corresponding to the abstract action \(\sigma^{\text{s}}\) by sampling a predetermined number of grasps. Once a valid grasp is found, we compute \(q_{r}\) with respect to the grasp \(g_{r,m}\) using the Kin constraint. In the case of a mobile manipulator, as used in our experiments, computing \(q_{r}\) involves determining a base pose and subsequently solving an inverse kinematic problem (_i.e._, Kin) to verify reachability to grasp \(g_{r,m}\)[25]. This computed \(q_{r}\) is for the abstract action \(\sigma^{\text{s}}\). Similarly, the same grasp \(g_{r,m}\) is used to find another \(q_{r}^{\prime}\) for the corresponding abstract action \(\sigma^{\text{r}}\). The computed configurations \(q_{r}\) and \(q_{r}^{\prime}\) are then used in their respective CFree constraints to ensure collision-free transition configurations.
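A minimal sketch of this grasp-and-IK search for one \(\sigma^{\text{s}}\)/\(\sigma^{\text{r}}\) pair is shown below; the callables `grasp_sampler`, `ik_solver`, and `collision_free` stand in for the Grasp, Kin, and CFree machinery and are assumptions of this illustration rather than the paper's implementation.

```python
import random

def find_transition_configs(grasp_sampler, ik_solver, collision_free,
                            pick_pose, place_pose, n_grasps=30, seed=0):
    """Sample grasps and keep the first one that yields collision-free IK solutions
    at both the pick pose (workspace w') and the place pose (workspace w'')."""
    rng = random.Random(seed)
    for _ in range(n_grasps):
        g = grasp_sampler(rng)              # candidate grasp on movable object m
        q_pick = ik_solver(pick_pose, g)    # Kin constraint on the pick side
        q_place = ik_solver(place_pose, g)  # Kin constraint on the place side
        if q_pick is None or q_place is None:
            continue                        # grasp is unreachable
        # CFree is evaluated only against the objects cached in the placement step.
        if collision_free(q_pick) and collision_free(q_place):
            return g, q_pick, q_place
    return None                             # trigger backtracking or resampling upstream
```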
If valid transition configurations can be found for all the abstract actions \(\{a_{r}^{A_{r}}\}_{r=1}^{R}\) from the set of possible grasp samples, we can proceed to the next step. However, if valid transition configurations cannot be found, we have three options. First, we can choose to stop the process as explained in the previous step, indicating that a solution cannot be found. Second, we can backtrack to the previous step and explore unevaluated combinations of placement samples to potentially find valid transition configurations. To improve efficiency, we can also inform the previous step about the cause of failure, allowing suitable ordering constraints to be added and prevent the same failures in future attempts. Lastly, we can increase the number of grasp samples and reevaluate this step to improve the chances of finding valid transition configurations.
### _Individual motion planning_
Even after obtaining feasible transition configurations, as mentioned in the fourth principle, solving for paths of all robots simultaneously by explicitly constructing a composite roadmap is a challenging task. To address this complexity, we leverage the discrete RRT (dRRT [23, 24]) algorithm, which is built upon the subdimensional expansion concept. The dRRT algorithm is specifically designed for solving _single-modal_ motion planning problems involving multiple robots. In our algorithm, we extend the capabilities of dRRT in two aspects: (1) individual motion planning is generalized to multi-modal motion planning, considering multiple abstract actions, and (2) our algorithm accommodates robots holding objects, which affects the collision-checking process.
In this step, we focus on considering the Motion and Hold constraints, given feasible transition configurations. During individual motion planning, we still disregard the presence of other robots. Furthermore, we assume that all movable objects, except for the one held by the corresponding robot, have been placed in their respective workspace regions, as determined in the movable object placement step. As a result, the CFree constraint still includes the same arguments as in the previous transition configuration step. However, the Hold constraint allows for collision between the robot and the movable object it holds.
Unlike the previous steps, we decompose this subproblem into multiple individual motion planning problems. Specifically, we can find a sequence of abstract actions for each robot from \(\big{\langle}\{a_{r}^{A_{r}}\}_{r=1}^{R},\prec\big{\rangle}\) and apply PRM to each abstract action in the sequence. In this case, the transition configurations serve as start and goal configurations, and we generate a predetermined number of samples in the respective configuration space \(C_{r}\). Throughout this process, we apply the CFree and Hold constraints as mentioned before. This subproblem can be seen as the verification of reachability from the start transition configuration of the first abstract action to the goal transition configuration of the last abstract action.
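For illustration, a stripped-down PRM for one abstract action could look as follows, with configurations represented as coordinate tuples; the sampling budget, the k-nearest connection rule, and the omission of edge collision checking are simplifying assumptions rather than the exact setup used in our experiments. The `collision_free` callable can internally apply the Hold constraint when the robot carries a movable object.

```python
import heapq
import math
import random

def prm_path(start, goal, sample_fn, collision_free, n_samples=200, k=8, seed=0):
    """Minimal PRM: sample configurations, connect k nearest neighbours, then run
    Dijkstra from the start transition configuration (index 0) to the goal (index 1)."""
    rng = random.Random(seed)
    nodes = [start, goal] + [q for q in (sample_fn(rng) for _ in range(n_samples))
                             if collision_free(q)]
    edges = {i: [] for i in range(len(nodes))}
    for i, q in enumerate(nodes):
        for j in sorted(range(len(nodes)), key=lambda jj: math.dist(q, nodes[jj]))[1:k + 1]:
            w = math.dist(q, nodes[j])
            edges[i].append((j, w))
            edges[j].append((i, w))
    pq, best, prev = [(0.0, 0)], {0: 0.0}, {}
    while pq:
        d, u = heapq.heappop(pq)
        if u == 1:                          # goal reached: reconstruct the path
            path, v = [goal], u
            while v in prev:
                v = prev[v]
                path.append(nodes[v])
            return path[::-1]
        for v, w in edges[u]:
            if d + w < best.get(v, float("inf")):
                best[v], prev[v] = d + w, u
                heapq.heappush(pq, (d + w, v))
    return None                             # goal not reachable with this roadmap
```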
If valid individual paths can be found for all robots, we can proceed to the last step. Otherwise, we have the same three options as in the transition configuration step.
### _Composite motion planning_
We are now ready to consider all the intact mode-specific constraints introduced in Section II-C by merging the individual paths obtained from the previous step. This step involves constructing a tensor-product roadmap from individual roadmaps \(\{G_{r}^{A_{r}}=(V_{r}^{A_{r}},E_{r}^{A_{r}})\}_{r=1}^{R}\), where \(A_{r}\) is the abstract action index set for robot \(r\). We denote the resulting tensor-product roadmap as \(G=(V,E)\). In \(G\), the set of vertices \(V\) is the Cartesian product of the vertices from \(\{G_{r}^{A_{r}}\}_{r=1}^{R}\), represented as \(V=\{(v_{1},...,v_{r},...,v_{R})\,|\,\forall r\ v_{r}\in V_{r}^{A_{r}}\}\). The set of edges \(E\) is defined as \(E=\big\{\big((v_{1},...,v_{r},...,v_{R}),(v_{1}^{\prime},...,v_{r}^{\prime},...,v_{R}^{\prime})\big)\ \big|\ \forall r\ \big((v_{r},v_{r}^{\prime})\in E_{r}^{A_{r}}\lor v_{r}=v_{r}^{\prime}\big)\big\}\). Note that in \(E\), the condition \(v_{r}=v_{r}^{\prime}\) allows some robots to remain stationary. However, since robot-robot collisions and collisions between robots and movable objects held by other robots were not considered in the CFree constraint in the previous steps, some edges in \(E\) may contain collision paths among robots.
Due to limited space, we provide a brief explanation of how dRRT works and how we modify it for our problem. For detailed explanations, please refer to the works [23, 24]. dRRT is based on RRT [26] and serves as the underlying framework for constructing the composite search graph \(G\). dRRT incrementally builds \(G\) by sampling configurations in the composite configuration space \(\prod_{r=1}^{R}C_{r}\) and connecting
them using an oracle function that searches for neighboring vertices. The oracle function finds the nearest neighbor vertex \(v_{r}\) and another neighbor vertex \(v^{\prime}_{r}\) within the individual roadmap \(G^{A_{r}}_{r}\) for a given sampled configuration. During the composite search, the intact CFree and Hold constraints, as explained in Section II-C, are used to ensure collision-free and object-holding paths.
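The following fragment sketches one expansion step of this implicit composite search in the spirit of dRRT; the data layout (each individual roadmap stored as an adjacency dictionary over configuration tuples) and the function names are assumptions made only for this illustration.

```python
import math

def drrt_expand(composite_v, roadmaps, q_rand, collision_free_composite):
    """Oracle-guided expansion on the implicit tensor-product roadmap: each robot either
    stays at its current vertex or moves to the adjacent vertex of its own roadmap that
    is closest to the sampled composite configuration q_rand; the composite edge is kept
    only if the full CFree/Hold check over all robots passes."""
    candidate = []
    for r, v_r in enumerate(composite_v):
        options = roadmaps[r].get(v_r, []) + [v_r]   # adjacent vertices or stay put
        candidate.append(min(options, key=lambda u: math.dist(u, q_rand[r])))
    candidate = tuple(candidate)
    if candidate != composite_v and collision_free_composite(composite_v, candidate):
        return candidate                             # newly explored composite vertex
    return None
```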
During the composite search, when the goal configuration (_i.e._, transition configuration) of one robot's roadmap is reached, the next roadmap for the same robot is considered. The ordering constraints \(\prec\) are taken into account in the composite search, ensuring that no adjacent edges connected to a goal configuration of the corresponding roadmap are used until another robot's roadmap, as determined by \(\prec\), is reached.
If the modified dRRT algorithm finds a valid composite path for all robots, we declare that a solution path satisfying the mode-specific constraints and ordering constraints has been found, given the input \(\left\langle\{a^{A_{r}}_{r}\}_{r=1}^{R},\prec,s_{0}\right\rangle\). dRRT has its own time limit, and if this limit is exceeded, we backtrack to the previous step. Additionally, we set an overall time limit for the entire process, and if this limit is exceeded, the algorithm terminates with no solution.
## IV Experiments
We perform two sets of experiments in PyBullet [27] to evaluate the performance of the proposed algorithm. (1) Ablation study: We analyze the effectiveness of decomposition by comparing planning time with merged hierarchies. (2) Comparison with the synchronous approach: We evaluate the makespan (_i.e._, the execution time of the last robot) of our algorithm against the synchronous method to highlight our method's ability to discover more effective solutions.
All the experiments are conducted using the task shown in Figure 1. We consider mobile manipulators as our robots with three and seven-dimensional configuration spaces for base motion and arm motion, respectively. Each abstract action consists of a sequence of three motion planning problems: base motion reaching a desired base position, arm motion grasping a target object, and arm motion returning to a home position. Base poses and grasp poses are all sampled, as is typically done in the literature [16, 25]. Due to the limited space, we provide the details of the task, such as specifications of input tuple \(\left\langle\{a^{A_{r}}_{r}\}_{r=1}^{R},\prec\right\rangle\), in the video. As the task contains \(15\) abstract actions, there are a total of \(45\) individual motion planning problems to solve the task. We report the results in Table I, where statistics are collected by solving the problem with \(25\) different random seeds.
**Ablation study**: Since the importance of the decomposition between Steps 3 and 4 is emphasized in dRRT [23, 24], we focus on the importance of decomposition among Steps 1, 2, and 3. The first ablation is to merge Steps 1 and 2 (_i.e._, MERGE \(1\&2\)), and the second one is to merge Steps 1, 2, and 3 (_i.e._, MERGE \(1\)-\(3\)).
The results in the first row of Table I indicate that useful heuristics can be found by decomposition, and thus, a solution is found quickly. MERGE \(1\)-\(3\) takes longer than \(10\) minutes in all instances due to the generation of unnecessary motion planning problems in Step 3 that do not lead to a solution. We observe some differences between our algorithm and MERGE \(1\&2\), but they are not significant. This implies that, although MERGE \(1\&2\) had to solve many unnecessary inverse kinematic problems, the heuristic found by Step 2 is powerful in solving the rest of the problem, as Steps 3 and 4 consume the majority of planning time.
**Comparison with the synchronous approach**: In the synchronous approach, all robots either leave and arrive at their corresponding transition configurations at the same time or remain idle during that time period. In tasks where robots manipulate objects in the same workspace regions (_e.g.,_ all robots converge at workspace region \(3\)), if the planner does not find feasible transition configurations for all robots, some robots need to remain idle. Moreover, robots \(2\) and \(3\) can only start moving to workspace region \(3\) after robot \(1\) places an object there.
Makespan results in the second row of Table I support that our asynchronous algorithm is more execution time efficient than the synchronous one, which aligns with the above observations. In any case, the synchronous approach is impractical; if one of the abstract actions requires a robot to move a long distance, all the remaining robots must wait.
## V Related Work
In this section, we briefly review existing MR-TAMP research, in addition to those referred to in the introduction, which rely on pre-discretization or the synchronous approach. Various task types have been investigated, such as assembly [28, 29, 30] and clutter removal [31]. Challenges that have not been addressed in this work are discussed in the context of MR-TAMP, including decentralized communication [5] and spatial and temporal uncertainty [32].
One distinguishing feature of this work is its implicit time representation, whereas the majority of existing works [29, 30, 33, 34] reason about time explicitly, which incurs the relatively complex overhead of task scheduling.
To solve MR-TAMP problems efficiently, various approximations have been introduced, including state space decomposition [35, 36, 33] and shared space graph [37]. Although incorporating approximations may lead to a loss of feasibility guarantees, it is an interesting avenue for future research.
Optimization-based approaches [29, 38] have made progress in MR-TAMP by leveraging logic-geometric programming [39]. The most recent work [29] in this direction focuses on the assembly task but still relies on explicit time representations.

| Algorithms | Our algorithm | MERGE 1&2 | MERGE 1–3 |
| --- | --- | --- | --- |
| Planning time (s) | 324.7 ± 40.2 | 371.2 ± 54.6 | — |

| Algorithms | Our algorithm | Synchronous |
| --- | --- | --- |
| Makespan (simulation steps) | 5118.3 ± 148.4 | 7432.1 ± 211.8 |

TABLE I: Experimental results. The numbers represent mean and 95% confidence interval. — implies that all instances take longer than 10 minutes to solve.
## VI Conclusion
In this work, we formulate a general MR-TAMP problem as H-CSP when a task plan is given, which is inherently asynchronous. We propose a refinement planning algorithm driven by design principles and evaluate its efficiency and advantages over the synchronous approach in simulation.
An immediate direction for future work is to develop a partial-order task planner capable of generating the input tuple of abstract actions and ordering constraints to complete the framework. This framework should facilitate bidirectional communication between the task planner and the proposed refinement planner to support full integration.
|
2309.10000 | Detecting covariate drift in text data using document embeddings and
dimensionality reduction | Detecting covariate drift in text data is essential for maintaining the
reliability and performance of text analysis models. In this research, we
investigate the effectiveness of different document embeddings, dimensionality
reduction techniques, and drift detection methods for identifying covariate
drift in text data. We explore three popular document embeddings: term
frequency-inverse document frequency (TF-IDF) using Latent semantic
analysis(LSA) for dimentionality reduction and Doc2Vec, and BERT embeddings,
with and without using principal component analysis (PCA) for dimensionality
reduction. To quantify the divergence between training and test data
distributions, we employ the Kolmogorov-Smirnov (KS) statistic and the Maximum
Mean Discrepancy (MMD) test as drift detection methods. Experimental results
demonstrate that certain combinations of embeddings, dimensionality reduction
techniques, and drift detection methods outperform others in detecting
covariate drift. Our findings contribute to the advancement of reliable text
analysis models by providing insights into effective approaches for addressing
covariate drift in text data. | Vinayak Sodar, Ankit Sekseria | 2023-09-17T07:34:57Z | http://arxiv.org/abs/2309.10000v1 | # Detecting covariate drift in text data using document embeddings and dimensionality reduction.
###### Abstract
Detecting covariate drift in text data is essential for maintaining the reliability and performance of text analysis models. In this research, we investigate the effectiveness of different document embeddings, dimensionality reduction techniques, and drift detection methods for identifying covariate drift in text data. We explore three popular document embeddings: term frequency-inverse document frequency (TF-IDF) with Latent Semantic Analysis (LSA) for dimensionality reduction, and Doc2Vec and BERT embeddings, with and without principal component analysis (PCA) for dimensionality reduction. To quantify the divergence between training and test data distributions, we employ the Kolmogorov-Smirnov (KS) statistic and the Maximum Mean Discrepancy (MMD) test as drift detection methods. Experimental results demonstrate that certain combinations of embeddings, dimensionality reduction techniques, and drift detection methods outperform others in detecting covariate drift. Our findings contribute to the advancement of reliable text analysis models by providing insights into effective approaches for addressing covariate drift in text data.
Covariate drift Dimentionality reduction Text data
## 1 Introduction
In recent years, the abundance of text data and its crucial role in various applications, such as natural language processing, information retrieval, and sentiment analysis, has garnered significant attention. However, one key challenge that researchers and practitioners face when working with text data is the presence of covariate drift. Covariate drift refers to the phenomenon where the underlying distribution of the data changes over time, leading to a mismatch between the training and test data.
Detecting and addressing covariate drift is of paramount importance as it can have detrimental effects on the performance and reliability of text analysis models. When drift occurs, models trained on historical data may become obsolete or yield inaccurate results when applied to current data. Thus, developing effective methods to identify and mitigate covariate drift is crucial for maintaining the efficacy of text data analysis.
In this research, our objective is to identify which document embeddings, dimensionality reduction techniques, and drift detection methods work best for detecting covariate drift in text data. Specifically, we explore the effectiveness of three widely used document embeddings: term frequency-inverse document frequency (TF-IDF), Doc2Vec, and BERT embeddings. Additionally, we investigate the impact of dimensionality reduction techniques on drift detection, such as principal component analysis (PCA) and Latent Semantic Analysis (LSA).
To evaluate the performance of the different approaches, we employ two popular drift detection methods: the Kolmogorov-Smirnov (KS) statistic and the Maximum Mean Discrepancy (MMD) test. These methods provide statistical measures to quantify the divergence between the training and test data distributions.
By conducting comprehensive experiments and comparative analyses, we aim to identify the most effective combination of embeddings, dimensionality reduction techniques, and drift detection methods for detecting and monitoring covariate drift in text data. The insights gained from this research will contribute to enhancing the robustness and reliability of text analysis models, enabling their effective deployment in dynamic environments where data distributions evolve over time.
The remainder of this paper is organized as follows: Section 2 provides background information and reviews related work on covariate drift detection, document embeddings, and dimensionality reduction techniques. Section 3 presents the methodology, including the datasets used, document embeddings, dimensionality reduction techniques, and drift detection methods. Section 4 details the experimental setup, while Section 5 presents the results and analysis, followed by concluding remarks in Section 6.
## 2 Background and related work
Covariate drift detection in text data poses a significant challenge in maintaining the reliability and performance of text analysis models. When working with text data, it is crucial to ensure that the models are robust and adaptable to changing data distributions. Covariate drift can occur due to various factors, such as changes in user behavior, emerging trends, or shifts in the data collection process.
To address covariate drift, several approaches have been proposed in the literature. Drift detection methods play a crucial role in identifying changes in data distributions over time. The Kolmogorov-Smirnov (KS) statistic has been widely used as a drift detection measure. Basseville and Nikiforov [1] introduced the KS statistic for detecting changes in the distribution of time series data. Kim and Scott [2] adapted the KS statistic for drift detection in classification tasks, demonstrating its effectiveness in identifying changes in data streams. Liu et al. [3] applied the KS statistic to detect covariate drift in text data, specifically in the context of detecting concept drift in text classification.
Another prominent drift detection method is the Maximum Mean Discrepancy (MMD) test. Gretton et al. [4] introduced the MMD test as a measure of discrepancy between two probability distributions. The MMD test has been widely used in various domains, including computer vision and natural language processing, for drift detection purposes. Li et al. [5] employed the MMD test to detect concept drift in sentiment analysis tasks, demonstrating its effectiveness in capturing changes in the sentiment distribution of text data.
Document embeddings have proven to be effective in capturing the semantic representations of text documents. Term frequency-inverse document frequency (TF-IDF) is a classic method that assigns weights to terms based on their frequency and inverse document frequency. Salton and Buckley [6] introduced TF-IDF as a measure of term importance in information retrieval. Doc2Vec, proposed by Le and Mikolov [7], learns distributed representations of documents by training a neural network to predict words within a document. BERT (Bidirectional Encoder Representations from Transformers), introduced by Devlin et al. [8], generates contextualized embeddings by considering the entire sentence or document. These document embeddings have been extensively used in various text analysis tasks, including sentiment analysis, topic modeling, and document classification.
In the context of dimensionality reduction, Principal Component Analysis (PCA) is a widely used technique that transforms high-dimensional data into a lower-dimensional space while preserving the maximum variance. Pearson [9] introduced PCA as a method for dimensionality reduction. Another dimensionality reduction technique commonly employed is Latent Semantic Analysis (LSA), which uses singular value decomposition (SVD) to identify the underlying latent semantic structure in the data. LSA has been widely used in text analysis to capture the latent topics and reduce the dimensionality of text data [10].
In their study, Wang et al. (2020)[11] proposed a method for detecting drift in topic distributions of text data using Dirichlet Process Mixture Models. They demonstrated the effectiveness of their approach in identifying changes in topic proportions and capturing covariate drift in text corpora. Zhang et al. (2019)[12] explored the use of word embeddings and clustering techniques for drift detection in text streams. They introduced a novel method that combines K-means clustering with cosine similarity to detect changes in text data distributions. Their findings showed the applicability of clustering-based approaches in identifying covariate drift in text streams. Chen and Lin (2017)[13] focused on drift detection in sentiment analysis tasks and proposed a method based on sentiment lexicon expansion. They utilized a sentiment lexicon to detect changes in sentiment distributions and successfully identified covariate drift in sentiment analysis models. Liu et al. (2021)[14] investigated drift detection in text data using distributional shifts in word embeddings. They proposed a method that measures the distance between word embeddings across different time periods to identify changes in word semantics and detect covariate drift in text data. In their research, Smith et al. (2018)[15] explored the application of transfer learning techniques for detecting drift in text data. They demonstrated that pre-trained models, such as those trained on large-scale text corpora, can be fine-tuned to identify changes in data distributions and detect covariate drift in text analysis tasks.
By investigating the performance of TF-IDF, Doc2Vec, and BERT embeddings, with and without dimensionality reduction using PCA and LSA, and utilizing the KS statistic and MMD test for drift detection, we aim to provide insights into the best strategies for detecting and addressing covariate drift in text data. The outcomes of this research
will contribute to the development of robust text analysis models that can adapt to evolving data distributions and ensure reliable performance in dynamic environments.
## 3 Methodology
In this section, we describe the methodology used in our study to detect covariate drift in text data. We explore different document embeddings, dimensionality reduction techniques, and drift detectors to identify the most effective approaches.
### Document Embeddings
Document embeddings play a crucial role in capturing the semantic representations of text documents. We experiment with three popular document embedding methods: TF-IDF, Doc2Vec, and BERT.
TF-IDF (Term Frequency-Inverse Document Frequency) is a classic method for generating document embeddings. It assigns weights to terms based on their frequency in a document and their inverse document frequency in the corpus. TF-IDF captures the importance of terms in a document and can effectively represent the document's content.
Doc2Vec is a neural network-based approach that learns distributed representations of documents. It extends the Word2Vec model to capture the semantic meaning of entire documents by training a neural network to predict words within a document. Doc2Vec provides dense vector representations that encode the contextual information of the document.
BERT (Bidirectional Encoder Representations from Transformers) is a powerful language model that generates contextualized embeddings. It considers the entire sentence or document to generate representations that capture the context and meaning of the text. BERT embeddings are pre-trained on a large corpus and can capture fine-grained nuances in the document's semantics.
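As an illustration, the three embeddings can be produced with off-the-shelf libraries roughly as follows (scikit-learn, Gensim, and Hugging Face Transformers); the toy corpus, the `bert-base-uncased` checkpoint, and the hyper-parameters are assumptions of this sketch, not the exact configuration used in our experiments.

```python
import numpy as np
import torch
from sklearn.feature_extraction.text import TfidfVectorizer
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from transformers import AutoModel, AutoTokenizer

docs = ["stocks rallied after the earnings report", "the team won the championship game"]

# TF-IDF: sparse bag-of-words vectors weighted by inverse document frequency.
X_tfidf = TfidfVectorizer().fit_transform(docs)          # shape: (n_docs, vocab_size)

# Doc2Vec: dense paragraph vectors learned by a shallow neural network.
tagged = [TaggedDocument(d.split(), [i]) for i, d in enumerate(docs)]
d2v = Doc2Vec(tagged, vector_size=64, min_count=1, epochs=40)
X_d2v = np.vstack([d2v.infer_vector(d.split()) for d in docs])

# BERT: contextualized embeddings; here the [CLS] token of the last layer.
tok = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")
with torch.no_grad():
    out = bert(**tok(docs, padding=True, truncation=True, return_tensors="pt"))
X_bert = out.last_hidden_state[:, 0, :].numpy()          # shape: (n_docs, 768)
```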
### Dimensionality Reduction Techniques
Dimensionality reduction techniques help reduce the dimensionality of the document embeddings, making them more computationally efficient and potentially improving their performance. We consider two widely used dimensionality reduction techniques: Principal Component Analysis (PCA) and Latent Semantic Analysis (LSA).
PCA is a popular linear dimensionality reduction technique that transforms high-dimensional data into a lower-dimensional space while preserving the maximum variance. It identifies the principal components that capture the most significant variation in the data. By projecting the document embeddings onto the principal components, we obtain lower-dimensional representations that retain the most important information.
LSA utilizes singular value decomposition (SVD) to identify the underlying latent semantic structure in the data. It reduces the dimensionality of the document embeddings by capturing the most important latent topics. LSA represents documents in a low-dimensional semantic space, where the similarity between documents is indicative of their semantic similarity. By applying LSA to the document embeddings, we can capture the latent topics and reduce the dimensionality of the data.
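A corresponding sketch of the two reduction steps is given below, using stand-in matrices with shapes typical of TF-IDF and BERT outputs; the 50-component target and the random placeholder data are illustrative assumptions only.

```python
import numpy as np
from scipy.sparse import random as sparse_random
from sklearn.decomposition import PCA, TruncatedSVD

X_tfidf = sparse_random(1000, 5000, density=0.01, random_state=0).tocsr()  # stand-in TF-IDF matrix
X_dense = np.random.default_rng(0).normal(size=(1000, 768))                # stand-in BERT/Doc2Vec matrix

# LSA = truncated SVD applied to the sparse TF-IDF matrix (captures latent topics).
X_lsa = TruncatedSVD(n_components=50, random_state=0).fit_transform(X_tfidf)

# PCA applied to dense embeddings (keeps the directions of maximum variance).
X_pca = PCA(n_components=50, random_state=0).fit_transform(X_dense)

print(X_lsa.shape, X_pca.shape)   # (1000, 50) (1000, 50)
```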
### Drift Detectors
To detect covariate drift in the text data, we employ two drift detection methods: Maximum Mean Discrepancy (MMD) and Kolmogorov-Smirnov (KS) statistic.
#### 3.3.1 Maximum Mean Discrepancy (MMD)
The Maximum Mean Discrepancy (MMD) is a statistical measure used to quantify the discrepancy between two probability distributions, \(\mathcal{P}\) and \(\mathcal{Q}\). It provides a way to assess the difference between the distributions based on their respective samples.
The MMD is defined as the supremum difference between the expected values of a kernel function \(k\) applied to samples drawn from \(\mathcal{P}\) and \(\mathcal{Q}\). The formula for MMD with the kernel function is given by:
\[MMD(\mathcal{P},\mathcal{Q})=\sup_{f\in\mathcal{F}}\left(\mathbb{E}_{X\sim \mathcal{P}}[f(X)]-\mathbb{E}_{Y\sim\mathcal{Q}}[f(Y)]\right)\]
In this formula, \(\sup\) represents the supremum operator, and \(\mathcal{F}\) denotes a class of functions used for the comparison.
The empirical calculation of Maximum Mean Discrepancy (MMD) involves estimating the discrepancy between two distributions based on their samples. The formula for empirically calculating MMD is as follows:
\[MMD(\mathcal{P},\mathcal{Q})=\frac{1}{n(n-1)}\sum_{i\neq j}k(x_{i},x_{j})+\frac {1}{m(m-1)}\sum_{i\neq j}k(y_{i},y_{j})-\frac{2}{mn}\sum_{i,j}k(x_{i},y_{j})\]
where \(\mathcal{P}\) and \(\mathcal{Q}\) represent two distributions, \(x_{i}\) and \(y_{i}\) denote samples drawn from \(\mathcal{P}\) and \(\mathcal{Q}\) respectively, \(n\) and \(m\) are the respective sample sizes, and \(k\) is a kernel function.
The choice of the kernel function, \(k\), is crucial as it determines the sensitivity of MMD to different aspects of the distributions.
A commonly used kernel function is the Gaussian kernel, which measures the similarity between two samples based on their distance. The Gaussian kernel function is defined as:
\[k(x,y)=\exp\left(-\frac{\|x-y\|^{2}}{2\sigma^{2}}\right)\]
Here, \(x\) and \(y\) represent the samples from the distributions, and \(\sigma\) is a parameter controlling the width of the kernel.
By calculating the MMD between \(\mathcal{P}\) and \(\mathcal{Q}\), we can assess the dissimilarity between the distributions and detect covariate drift if the value of MMD is significant.
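For concreteness, the empirical MMD with a Gaussian kernel and a permutation-test p-value can be computed along the following lines; the kernel bandwidth, the number of permutations, and the add-one smoothing in the p-value are common conventions assumed here rather than the exact settings of our experiments.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    """k(x, y) = exp(-||x - y||^2 / (2 sigma^2)) evaluated for all pairs of rows."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def mmd_unbiased(X, Y, sigma=1.0):
    """Unbiased empirical estimate of the (squared) MMD between samples X ~ P and Y ~ Q."""
    n, m = len(X), len(Y)
    Kxx = gaussian_kernel(X, X, sigma); np.fill_diagonal(Kxx, 0.0)
    Kyy = gaussian_kernel(Y, Y, sigma); np.fill_diagonal(Kyy, 0.0)
    Kxy = gaussian_kernel(X, Y, sigma)
    return Kxx.sum() / (n * (n - 1)) + Kyy.sum() / (m * (m - 1)) - 2 * Kxy.mean()

def mmd_permutation_pvalue(X, Y, n_perm=200, sigma=1.0, seed=0):
    """Permutation test: reshuffle the pooled sample and count how often the permuted
    MMD is at least as large as the observed one."""
    rng = np.random.default_rng(seed)
    observed = mmd_unbiased(X, Y, sigma)
    pooled = np.vstack([X, Y])
    count = 0
    for _ in range(n_perm):
        idx = rng.permutation(len(pooled))
        Xp, Yp = pooled[idx[:len(X)]], pooled[idx[len(X):]]
        count += mmd_unbiased(Xp, Yp, sigma) >= observed
    return (count + 1) / (n_perm + 1)

# Example: reference embeddings vs. mean-shifted test embeddings.
X = np.random.default_rng(1).normal(size=(200, 10))
Y = np.random.default_rng(2).normal(loc=0.5, size=(200, 10))
print(mmd_permutation_pvalue(X, Y))
```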
#### 3.3.2 Kolmogorov-Smirnov (KS) Statistic
The KS statistic measures the maximum difference between the cumulative distribution functions of two probability distributions. It is commonly used for detecting changes in data distributions and can be applied to identify covariate drift in text data. By comparing the KS statistic between the distributions of the reference and current data, we can determine if there is a significant change in the data distribution, indicating covariate drift.
The KS statistic quantifies the maximum difference between the cumulative distribution functions (CDFs) of the two distributions being compared. Given two distributions, \(\mathcal{P}\) and \(\mathcal{Q}\), the KS statistic is computed as:
\[KS=\max_{i}\left(|F_{\mathcal{P}}(x_{i})-F_{\mathcal{Q}}(x_{i})|\right)\]
where \(F_{\mathcal{P}}(x_{i})\) and \(F_{\mathcal{Q}}(x_{i})\) represent the CDFs of \(\mathcal{P}\) and \(\mathcal{Q}\), respectively, and \(x_{i}\) denotes the \(i\)th data point.
In the context of multivariate data distributions, the KS statistic can be used by extending it to multiple dimensions. For each dimension, the KS statistic is calculated independently. Then, the maximum KS statistic across all dimensions is considered as the overall KS statistic for the multivariate data.
When comparing multiple dimensions simultaneously, it is important to account for multiple hypothesis testing to control the family-wise error rate. One commonly used correction method is the Bonferroni correction. The Bonferroni correction adjusts the significance threshold by dividing it by the number of dimensions being considered. This correction helps reduce the likelihood of false positive detections when performing multiple comparisons.
To apply the Bonferroni correction, suppose the desired significance level is \(\alpha\). If \(m\) dimensions are being compared, the adjusted significance level, denoted as \(\alpha_{adj}\), is given by:
\[\alpha_{adj}=\frac{\alpha}{m}\]
In hypothesis testing, the p-value is compared against the significance level to determine the statistical significance of the results. With the Bonferroni correction, the p-value threshold is adjusted as well. If the calculated p-value for a particular comparison is less than or equal to \(\alpha_{adj}\), it is considered statistically significant.
By applying the Bonferroni correction, the p-value threshold is made more stringent, reducing the chance of false positive detections. This correction is particularly useful in scenarios involving multiple comparisons, such as when comparing multiple dimensions in multivariate data, as it helps maintain the overall statistical validity of the analysis.
Here we report the p-value after multiplying it by m and we consider significance at 0.05 to follow standard convention.
By calculating the KS statistic and applying the Bonferroni correction, we can detect significant differences between distributions and identify covariate drift in multivariate data.
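A minimal sketch of the per-dimension KS test with the Bonferroni correction, applied to reference and test embedding matrices, is shown below; the toy data and the choice to report the smallest corrected p-value are illustrative assumptions.

```python
import numpy as np
from scipy.stats import ks_2samp

def ks_drift_pvalue(X_ref, X_test, alpha=0.05):
    """Run a two-sample KS test per embedding dimension, multiply the smallest p-value
    by the number of dimensions (Bonferroni, clipped at 1), and flag drift below alpha."""
    m = X_ref.shape[1]
    pvals = np.array([ks_2samp(X_ref[:, j], X_test[:, j]).pvalue for j in range(m)])
    p_corrected = min(1.0, pvals.min() * m)
    return p_corrected, p_corrected <= alpha

# Example with a mean shift in half of the dimensions.
rng = np.random.default_rng(0)
X_ref = rng.normal(size=(1000, 20))
X_test = rng.normal(size=(1000, 20)) + np.r_[np.zeros(10), 0.3 * np.ones(10)]
print(ks_drift_pvalue(X_ref, X_test))
```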
By combining these document embeddings, dimensionality reduction techniques, and drift detectors, we aim to evaluate the performance of different approaches in detecting covariate drift in text data.
### Dataset
For our research, we utilize the AG-News dataset, specifically the AG-News Subset train dataset. The AG-News dataset is a widely used benchmark for text classification tasks, containing news articles from various categories. The subset we focus on consists of news articles from four categories: World, Sports, Business, and Sci/Tech.
The AG-News Subset is a labeled dataset that provides a balanced distribution of articles across the four categories. Each article in the dataset is represented by its title and description, capturing the essence of the news content. This dataset serves as a suitable choice for evaluating our methodology in detecting covariate drift in text data.
The AG-News Subset offers several advantages for our research. Firstly, it provides a diverse set of news articles covering different domains, enabling us to capture a wide range of textual variations and potential drift scenarios. Secondly, the balanced distribution of articles across categories ensures that our analysis is not biased towards any specific domain, allowing us to assess the performance of our methodology across different categories.
By employing the AG-News Subset, we aim to evaluate the effectiveness of our proposed approaches in detecting covariate drift and capturing distributional shifts in text data. The utilization of this dataset contributes to the robustness and generalizability of our findings.
## 4 Experimental setup
In this section, we describe the experimental setup used to evaluate our methodology for detecting covariate drift in text data. We constructed the training set and performed multiple experiments using the AG-News Subset train dataset.
### Datasets constructed
We utilized the AG-News Subset dataset for our experiments. To ensure that our training set focuses on specific categories, we removed the sports category from the news. This allowed us to investigate the effectiveness of our methodology in detecting drift specifically related to sports news. The training set was constructed by randomly sampling 15,000 articles from the AG-News Subset train dataset, excluding the sports category.
To evaluate the performance of our methodology, we created multiple test sets. Each test set consisted of 5,000 samples drawn randomly with replacement from the dataset, excluding the samples used in the training set. This sampling process was repeated five times to generate five distinct test sets for each experiment.
To evaluate the performance of our methodology in various drift scenarios, we modified the test datasets to include different percentages of sports news. Each test set still contained a total of 5,000 samples, but the proportion of sports news was varied. Specifically, we created test datasets with 0%, 10%, 25%, 50%, 75%, and 100% of the samples being from the sports category. This modification, referred to as the drift level below, allowed us to assess the impact of different levels of sports news inclusion on the detection of covariate drift.
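The construction of one such drifted test set can be sketched as follows; the tiny toy article lists and the helper name `make_drifted_test_set` are placeholders, and the actual experiments sample from the AG-News Subset as described above.

```python
import random

def make_drifted_test_set(non_sports, sports, n_test=5000, drift_level=0.25, seed=0):
    """Build one test set of n_test articles in which a drift_level fraction comes from
    the held-out sports category; sampling is done with replacement."""
    rng = random.Random(seed)
    n_sports = int(round(drift_level * n_test))
    test = rng.choices(sports, k=n_sports) + rng.choices(non_sports, k=n_test - n_sports)
    rng.shuffle(test)
    return test

# Toy usage; in the experiments, five test sets are drawn per drift level.
non_sports = ["markets rise on tech earnings", "new satellite launched into orbit"]
sports = ["local team clinches the title", "star striker scores twice"]
print(len(make_drifted_test_set(non_sports, sports, n_test=10, drift_level=0.25)))
```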
### Experimental Procedure
For each experiment, we applied our methodology to detect covariate drift in the text data. We trained our models on the constructed training set and performed drift detection at each drift level separately.
To assess the statistical significance of the detected drift, we calculated the p-value for each test set. The p-value represents the probability of obtaining a test statistic as extreme as, or more extreme than, the observed value under the null hypothesis of no drift.
By repeating the experiment five times at each drift level, we obtained a distribution of p-values. From this distribution, we computed the mean p-value as a measure of the average statistical significance across the multiple tests. Additionally, we calculated the standard deviation to quantify the variability of the results.
This experimental setup allowed us to evaluate the effectiveness of our methodology in detecting covariate drift in text data while providing robust statistical measures to support the validity of our findings.
For the code used to implement this experimental setup and perform the analysis, please refer to our GitHub repository at: [https://github.com/vinayaksodar/nlp_drift_paper_code.git](https://github.com/vinayaksodar/nlp_drift_paper_code.git).
## 5 Results and analysis
The results of the experiments are presented in Tables 1-4, which show the p-values of the KS and MMD metrics for different models and drift levels. Significant p-values are highlighted in bold.
Among the drift detection metrics, the KS statistic performs surprisingly well by detecting drift at all drift levels in all the experiments. It even provides higher p-values than MMD when there is no supposed drift. The KS statistic is also computationally more efficient as calculating a p-value for it doesn't require a permutation test, unlike MMD. However, MMD is also able to detect drift at all levels. It performs worse when there is no drift and even detects drift at the 0 drift level in the TFIDF-LSA experiment.
Among the models used, the BERT model performed the best across metrics, regardless of whether dimensionality reduction was used. The TF-IDF model performed the worst, while the Doc2Vec model was somewhere in the middle. Interestingly, dimensionality reduction does not seem to impact the results significantly for either the Doc2Vec or the BERT model.
## 6 Conclusion
The results obtained from these experiments provide valuable insights into the performance of different models and drift detection metrics. The KS statistic proves to be a reliable metric, consistently detecting drift across all experiments. MMD, while effective in detecting drift, has limitations when there is no actual drift present.
| Model | KS (mean) | KS (stddev) | MMD (mean) | MMD (stddev) | Drift level |
| --- | --- | --- | --- | --- | --- |
| TFIDF-LSA | 0.05 | 0.05 | **0.00** | 0.00 | 0 |
| TFIDF-LSA | **0.00** | 0.00 | **0.00** | 0.00 | 0.10 |
| TFIDF-LSA | **0.00** | 0.00 | **0.00** | 0.00 | 0.25 |
| TFIDF-LSA | **0.00** | 0.00 | **0.00** | 0.00 | 0.50 |
| TFIDF-LSA | **0.00** | 0.00 | **0.00** | 0.00 | 0.75 |
| TFIDF-LSA | **0.00** | 0.00 | **0.00** | 0.00 | 1 |

Table 1: P-values of KS and MMD metrics for different drift levels using the TF-IDF model with dimensionality reduction via LSA.
| Model | KS (mean) | KS (stddev) | MMD (mean) | MMD (stddev) | Drift level |
| --- | --- | --- | --- | --- | --- |
| doc2vec | 0.08 | 0.08 | 0.07 | 0.06 | 0 |
| doc2vec | **0.01** | 0.02 | **0.00** | 0.00 | 0.1 |
| doc2vec | **0.00** | 0.00 | **0.00** | 0.00 | 0.25 |
| doc2vec | **0.00** | 0.00 | **0.00** | 0.00 | 0.75 |
| doc2vec | **0.00** | 0.00 | **0.00** | 0.00 | 1 |

Table 2: P-values of KS and MMD metrics for different drift levels using the Doc2Vec model.
| Model | KS (mean) | KS (stddev) | MMD (mean) | MMD (stddev) | Drift level |
| --- | --- | --- | --- | --- | --- |
| doc2vec-pca | 0.21 | 0.25 | 0.08 | 0.11 | 0 |
| doc2vec-pca | **0.05** | 0.05 | **0.01** | 0.01 | 0.1 |
| doc2vec-pca | **0.00** | 0.00 | **0.00** | 0.00 | 0.25 |
| doc2vec-pca | **0.00** | 0.00 | **0.00** | 0.00 | 0.5 |
| doc2vec-pca | **0.00** | 0.00 | **0.00** | 0.00 | 0.75 |
| doc2vec-pca | **0.00** | 0.00 | **0.00** | 0.00 | 1 |

Table 3: P-values of KS and MMD metrics for different drift levels using the Doc2Vec model with dimensionality reduction via PCA.
The BERT model stands out as the top performer, indicating its robustness in capturing and adapting to drift in text data. On the other hand, the TF-IDF model demonstrates weaker performance, suggesting the need for more sophisticated approaches to detecting drift. The Doc2Vec model performs reasonably well, positioning it between the TF-IDF and BERT models. Since dimensionality reduction does not seem to impact the results, it may be applied whenever it is computationally advantageous to do so.
These findings contribute to the understanding of drift detection in text data. Further analysis and experimentation can be conducted to explore additional models, dimensionality reduction techniques, and drift detection methods to improve the accuracy and efficiency of drift detection in various text analysis tasks.
|
2309.17009 | Deep Representation Learning for Prediction of Temporal Event Sets in
the Continuous Time Domain | Temporal Point Processes (TPP) play an important role in predicting or
forecasting events. Although these problems have been studied extensively,
predicting multiple simultaneously occurring events can be challenging. For
instance, more often than not, a patient gets admitted to a hospital with
multiple conditions at a time. Similarly people buy more than one stock and
multiple news breaks out at the same time. Moreover, these events do not occur
at discrete time intervals, and forecasting event sets in the continuous time
domain remains an open problem. Naive approaches for extending the existing TPP
models for solving this problem lead to dealing with an exponentially large
number of events or ignoring set dependencies among events. In this work, we
propose a scalable and efficient approach based on TPPs to solve this problem.
Our proposed approach incorporates contextual event embeddings, temporal
information, and domain features to model the temporal event sets. We
demonstrate the effectiveness of our approach through extensive experiments on
multiple datasets, showing that our model outperforms existing methods in terms
of prediction metrics and computational efficiency. To the best of our
knowledge, this is the first work that solves the problem of predicting event
set intensities in the continuous time domain by using TPPs. | Parag Dutta, Kawin Mayilvaghanan, Pratyaksha Sinha, Ambedkar Dukkipati | 2023-09-29T06:46:31Z | http://arxiv.org/abs/2309.17009v1 | # Deep Representation Learning for Prediction of Temporal Event Sets in the Continuous Time Domain
###### Abstract
Temporal Point Processes (TPP) play an important role in predicting or forecasting events. Although these problems have been studied extensively, predicting multiple simultaneously occurring events can be challenging. For instance, more often than not, a patient gets admitted to a hospital with multiple conditions at a time. Similarly, people buy more than one stock, and multiple news stories break at the same time. Moreover, these events do not occur at discrete time intervals, and forecasting event sets in the continuous time domain remains an open problem. Naive approaches for extending the existing TPP models for solving this problem lead to dealing with an exponentially large number of events or ignoring set dependencies among events. In this work, we propose a scalable and efficient approach based on TPPs to solve this problem. Our proposed approach incorporates contextual event embeddings, temporal information, and domain features to model the temporal event sets. We demonstrate the effectiveness of our approach through extensive experiments on multiple datasets, showing that our model outperforms existing methods in terms of prediction metrics and computational efficiency. To the best of our knowledge, this is the first work that solves the problem of predicting event set intensities in the continuous time domain by using TPPs.1
Footnote 1: In proceedings of ACML 2023
Temporal Point Processes, Self-supervised learning, Forecasting, Events
## 1 Introduction
In today's complex and dynamic world, the need for accurate and reliable predictions is greater than ever. By making event predictions, we can identify potential risks and opportunities and take appropriate action to prepare for or capitalize on them. Event prediction problems have been studied in machine learning literature extensively, where the approaches range from sequence modeling to temporal point processes. Almost every approach deals with the problem of predicting a single event based on historical data. On the other hand, many practical problems require forecasting multiple events (a set of events), which inevitably requires us to model the distribution of such event sets over time because a simple prediction of whether an event set will occur is insufficient. In medical diagnosis, for instance, it is critical to know whether a particular condition is present and when it is likely to occur next so that preventive measures are taken accordingly (refer to Figure 1). Another example is trying to predict when and what set of items will a person check-out on an e-commerce
website. Prior knowledge of the same can help reduce the shipment charges if the items are placed at convenient locations beforehand. A solution to the event set prediction problem can provide valuable insights into the underlying patterns in the data and can be useful for identifying trends and making long-term predictions about the future.
Although multi-variate temporal event modeling has been explored before, these methods are rendered ineffective for modeling temporal event sets (Liniger, 2009; Mei and Eisner, 2017; Zuo et al., 2020). One may try to modify existing approaches for predicting temporal event sets. Considering all combinations of events as unique events can be one way to model the problem and still use the existing temporal event modeling approaches. However, the number of events increases exponentially and is impractical. An alternate approach is to decompose the event set into multiple singleton elements, assign the event set timestamp to each event of the event set individually, and then model them as regular temporal events. This approach, although tractable, does not consider the relations and dependencies among the events in the event sets.
In this paper, we propose a new approach based on deep representation learning that can resolve all the above-mentioned problems. Our _contributions_ are as follows:
1. We propose a Contextual Self-Supervised Contrastive Learning objective for training an _Event-Encoder_, which learns representations of events in event sets.
2. We propose TESET, a Temporal Event set modeling framework that uses event set embeddings and combines them sequentially using transformer-based models.
3. We utilize intensity and temporal prediction heads to predict the intensity distribution of the event set along with the time of occurrence.
4. In our approach, we also facilitate using domain-specific features for learning better representations.
Figure 1: A typical temporal event set data sequence \(\mathcal{S}\). Temporal Event set Modeling aims to predict both the event sets and the time of its occurrence given the corresponding history in the continuous time domain. For instance, as shown in the figure, given hospitalization history, we predict when and with what diseases/conditions the patient might be hospitalized in the future.
## 2 Related Works
Classic temporal event modeling works include Gaussian Processes (Ebden, 2015) and Multi-variate Hawkes Processes (Liniger, 2009), as mentioned earlier. To deal with the parametric kernels of the Hawkes process, Mei and Eisner (2017) proposed the Neural Hawkes Process, which can use the expressive power of LSTMs to learn the intensities. Transformer Hawkes Process (Zuo et al., 2020) is another work that tries to use the computational efficiency of Transformers and the self-attention mechanism to solve RNNs' inability to learn long-term dependencies.
While the work of Choi et al. (2015) tries to model the patient's EHR, it does not consider the patient having multiple codes in the same visit. For the same task, (Choi et al., 2017) uses a graph-based model to show improvements over simple RNN-based methods (Choi et al., 2016). The work of Shang et al. (2019) uses a variant of the Masked Language Modelling (MLM) objective as a pre-training task. Similarly, for recommendations, (Kang and McAuley, 2018) uses attention-based models, and (Sun et al., 2019) uses an MLM.
BEHRT (Li et al., 2020) depicts diagnoses as words, visits as sentences, and a patient's entire medical history as a document to use multi-head self-attention, positional encoding, and MLM for EHR. Med-Bert (Rasmy et al., 2021) further adds to this concept by using serialization embeddings (order of codes within each visit using prior knowledge) besides code embeddings and positional (visit) encodings. Bert4Rec (Sun et al., 2019) uses an MLM-like bidirectional pre-training for predicting user-item interactions. Transformer4Rec (de Souza Pereira Moreira et al., 2021) further uses session information to enhance the previous works.
Recent works in set modeling as Sets2Sets (Hu and He, 2019) propose an encoder-decoder framework to predict event sets at discrete time steps, where the event set representation is obtained by aggregating the corresponding event embeddings by average pooling. DSNTSP (Sun et al., 2020) uses a transformer framework to learn item and set representations and captures temporal dependencies separately.
However, it must be noted that all the aforementioned methods either lack the ability to encode sets or they are applicable only for a discrete-time setting. To the best of our knowledge, the work proposed in this paper is the first to models event sets in the continuous time domain and solve the forecasting problem using TTPs.
## 3 Proposed Approach
We propose a two-step representation learning approach in Sections 3.2 and 3.3 for modeling the temporal event sets. The pre-trained representation model thus obtained after the two steps of training can then be fine-tuned for the required downstream tasks.
### Notations and Preliminaries
Let \(\mathcal{S}\) denote an input sequence. Each element \(\mathbf{s}_{k}\in\mathcal{S}\) corresponds to an event set and is ordered chronologically, where \(k\in[|\mathcal{S}|]\) (\(|\mathbf{x}|\) counts the number of elements in the set \(\mathbf{x}\) and \([n]\) represents the set \(\{1,2,...,n\}\)). \(\mathbf{s}_{k}\subset\mathcal{I}\) is a set of events, where \(\mathcal{I}\) is the set of all possible events. Every \(\mathbf{s}_{k}\) has an associated timestamp and (optionally) a set of features, which we denote by \(\mathbf{t}_{k}\) and \(\mathbf{f}_{k}\) respectively. The features can be either static or dynamic; however, the feature set needs to be consistent across all the events. For instance, the age and weight of a patient change across hospital visits, whereas gender can be assumed to remain the same. \(\mathcal{T}\subset\mathcal{I}\) is the target set. The target set is different from the input set of events. For instance, in the case of hospital visits, the target set may consist of only diagnoses, whereas the set of all possible events can additionally contain procedures and treatments.
We use \(\mathcal{M}\) to denote the model being trained for a given task. \(\mathcal{M}\) has a module called an encoder, denoted by \(\mathcal{M}_{E}\), for encoding the input sequences along with a given set of features corresponding to each event in the sequence. We also use \(\mathcal{A}_{E}\) to denote an auxiliary encoder model as described in Section 3.2. For instance, the auxiliary encoder \(\mathcal{A}_{E}\) can be modeled using an affine layer, and \(\mathcal{M}\) can be modeled using a transformer (Vaswani et al., 2017).
### Learning Contextual Embeddings of Events
Learning meaningful contextual event embeddings requires that the vector representations of co-occurring events \(\mathbf{i}\in\mathcal{I}\) be similar. Consequently, in the first step of training, we use a self-supervised noise contrastive pre-training objective, similar to Noise Contrastive Estimation (Gutmann and Hyvarinen, 2010), for learning vector representations corresponding to every event in the set \(\mathcal{I}\). The input to \(\mathcal{A}_{E}\) is an event \(\mathbf{i}\in\mathcal{I}\) and the output of \(\mathcal{A}_{E}\) is \(\mathbf{v}_{emb}\), which is a \(\mathbf{d}_{emb}\)-dimensional embedding vector.
To train this encoder network, we iterate over each \(\mathbf{s}_{k}\) in \(\mathcal{S}\), for all sequences in the dataset. At each iteration, we have the event set \(\mathbf{s}_{k}\), which consists of a set of events. We sample two events \(\mathbf{i}^{a}\) and \(\mathbf{i}^{p}\) uniformly at random from \(\mathbf{s}_{k}\), which become our anchor and positive samples, respectively. We similarly sample our negative sample \(\mathbf{i}^{n}\) uniformly at random from among the events that are not present in the set \(\mathbf{s}_{k}\), i.e.
\[\mathbf{i}^{a}\sim\mathbb{U}(\mathbf{s}_{k});\mathbf{i}^{p}\sim\mathbb{U}( \mathbf{s}_{k}\backslash\{\mathbf{i}^{a}\});\mathbf{i}^{n}\sim\mathbb{U}( \mathcal{I}\backslash\mathbf{s}_{k}) \tag{1}\]
where \(\mathbb{U}(\cdot)\) denotes sampling uniformly at random from a given set of events. We then pass \(\mathbf{i}^{a}\), \(\mathbf{i}^{p}\), and \(\mathbf{i}^{n}\) through \(\mathcal{A}_{E}(\cdot)\) to obtain \(\mathbf{v}_{emb}^{a}\), \(\mathbf{v}_{emb}^{p}\), and \(\mathbf{v}_{emb}^{n}\) respectively (as shown in Equation 3).
Then we calculate and maximize the following auxiliary contextual loss objective:
\[\mathcal{L}_{aux}=\log(\sigma(\mathbf{v}_{emb}^{a}\cdot\mathbf{v}_{emb}^{p}) )+\log(1-\sigma(\mathbf{v}_{emb}^{a}\cdot\mathbf{v}_{emb}^{n})) \tag{2}\]
Figure 2: [Best viewed in color] Block diagram of our proposed approaches: (a) Learning event set representations, and (b) Inference procedure of our Bayesian Transformer based TESET model.
where \(\mathbf{a}\cdot\mathbf{b}\) represents the inner product between the vectors \(\mathbf{a}\) and \(\mathbf{b}\), and \(\sigma(\mathbf{x})=1/(1+e^{-\mathbf{x}})\) represents the sigmoid function. Finally, the error is back-propagated through the auxiliary encoder model \(\mathcal{A}_{E}\) and the corresponding parameters are updated using an appropriate optimizer.
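A minimal sketch of this auxiliary pre-training step is given below, assuming \(\mathcal{A}_{E}\) is a simple embedding lookup over the event vocabulary; the names (`aux_encoder`, `contrastive_step`) and hyperparameters are illustrative and not taken from the paper or its released code.

```python
import random
import torch
import torch.nn as nn

# Hypothetical auxiliary encoder A_E: a simple embedding lookup over all events in I.
num_events, d_emb = 1000, 128
aux_encoder = nn.Embedding(num_events, d_emb)
optimizer = torch.optim.Adam(aux_encoder.parameters(), lr=1e-3)

def contrastive_step(event_set):
    """One auxiliary training step on a single event set s_k (a set of event ids)."""
    events = list(event_set)
    anchor, positive = random.sample(events, 2)                          # i^a, i^p ~ U(s_k)
    negative = random.choice(list(set(range(num_events)) - event_set))   # i^n ~ U(I \ s_k)

    ids = torch.tensor([anchor, positive, negative])
    v_a, v_p, v_n = aux_encoder(ids)                                     # v_emb^a, v_emb^p, v_emb^n

    # L_aux from Eq. (2); maximizing it is equivalent to minimizing its negation.
    loss = -(torch.log(torch.sigmoid(v_a @ v_p)) +
             torch.log(1 - torch.sigmoid(v_a @ v_n)))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example: one step on an event set containing events {3, 17, 42}.
contrastive_step({3, 17, 42})
```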
### Temporal Event set (TESET) Modeling
After the auxiliary encoder model \(\mathcal{A}_{E}\) is trained, it can generate embeddings as follows:
\[\mathbf{v}_{emb}=\mathcal{A}_{E}(\mathbf{i}) \tag{3}\]
In the next step of training, we train the encoder module \(\mathcal{M}_{E}\) in our model \(\mathcal{M}\). For a given sequence \(\mathcal{S}\), we assume the most recently occurred set of events to be \(\mathbf{s}_{k}\). \(\mathbf{s}_{k}\) also has an associated timestamp \(\mathbf{t}_{k}\) (a positive real value) denoting when the event set occurred in the timeline. Additionally, \(\mathbf{s}_{k}\) may optionally contain an associated set of features \(\mathbf{f}_{k}\).
All previous event sets, along with their corresponding timestamps and features, up to (but excluding) \(\mathbf{s}_{k}\) constitute the history:
\[\mathcal{H}_{k}=\{\langle\mathbf{s}_{1},\mathbf{t}_{1},\mathbf{f}_{1}\rangle, \langle\mathbf{s}_{2},\mathbf{t}_{2},\mathbf{f}_{2}\rangle,...,\langle\mathbf{ s}_{k-1},\mathbf{t}_{k-1},\mathbf{f}_{k-1}\rangle\} \tag{4}\]
We denote the target event set \(\mathbf{e}_{k+1}\) as
\[\mathbf{e}_{k+1}=\mathbf{s}_{k+1}\cap\mathcal{T} \tag{5}\]
The objective in this step is to predict the tuple \(\langle\mathbf{e}_{k+1},\mathbf{t}_{k+1}\rangle\) given the tuple \(\langle\mathbf{s}_{k},\mathbf{t}_{k},\mathbf{f}_{k},\mathcal{H}_{k}\rangle\) as the input. In other words, the goal is to model the prediction of the set of next events along with the timestamp when it is supposed to occur given the most recent event, its timestamp, its associated features, and the entire history of events in that sequence of events.
Notice that the elements \(\mathbf{s}_{k}\in\mathcal{S}\) are sets of events. Hence, they do not necessarily consist of singleton elements and may contain two or more events. Consequently, we require \(\mathcal{M}_{E}\) to be composed of a hierarchy of encoders: **(i)** a Set Encoder that combines the sets and gives a single representation for all the events in each set, and **(ii)** a Sequential Encoder that takes these combined representations as input and encodes them temporally.
However, this approach possesses its own set of challenges as follows:
1. Set encoding is a difficult problem since the set representations must satisfy properties such as permutation invariance and equivariance.
2. During implementation, the set encoder either needs to be duplicated, with one copy corresponding to every event set \(s_{1},...,s_{k}\), or techniques such as gradient accumulation are needed while training. The duplication again requires high accelerator memory and careful implementation to properly utilize parallelization.
3. The sequential nature of the sequential encoder prevents efficient parallelization, in addition to it having to wait for the set encoder to produce the set representations.
As a solution to these problems, we propose a transformer-based architecture for training the model's encoder module \(\mathcal{M}_{E}\). We stack all the events in the most recently occurred event set \(\mathbf{s}_{k}\) together with all the events in the event sets \(\mathbf{s}_{1},...,\mathbf{s}_{k-1}\) of the history \(\mathcal{H}_{k}\). Thus, assuming the events in each event set \(\mathbf{s}_{j}\) are \(\mathbf{i}_{j}^{1},\mathbf{i}_{j}^{2},...,\mathbf{i}_{j}^{|\mathbf{s}_{j}|}\), the current event set along with the history event sets in a given sequence \(\mathcal{S}\) becomes
\[\mathcal{S}_{k}=\mathbf{i}_{1}^{1},\mathbf{i}_{1}^{2},...,\mathbf{i}_{1}^{| \mathbf{s}_{1}|},\mathbf{i}_{2}^{1},\mathbf{i}_{2}^{2},...,\mathbf{i}_{2}^{| \mathbf{s}_{2}|},...,\mathbf{i}_{k}^{1},\mathbf{i}_{k}^{2},...,\mathbf{i}_{k}^ {|\mathbf{s}_{k}|} \tag{6}\]
In order to differentiate among the various event sets, we use the following techniques that are specifically applicable to a transformer-based architecture: (i) Special Tokens, and (ii) Custom SpatioTemporal Encodings containing both positional and temporal information.
Special Tokens:We use a special token, which is often referred to as the separator token (denoted by [SEP]) in the literature, after the event listed as \(\mathbf{i}_{j}^{|\mathbf{s}_{j}|}\) for all \(j\leq k\). This enables us to separate the event sets from each other whilst also providing us with a representation corresponding to each event set in the sequence \(\mathcal{S}_{k}\). We use another classifier special token (denoted by [CLS]) at the very end of the sequence \(\mathcal{S}_{k}\). This token helps us summarize the contents of the entire sequence, and its corresponding vector can be used for downstream tasks. We denote the resultant augmented sequence of events and tokens as \(\mathcal{S}_{k}^{*}\). In the rest of this section, we will assume all the elements in the sequence \(\mathcal{S}_{k}^{*}\) to be tokens to keep the discussion and notations uniform with the transformer literature. Hence the augmented sequence becomes
\[\mathcal{S}_{k}^{*}=\mathbf{i}_{1}^{1},...,\mathbf{i}_{1}^{|\mathbf{s}_{1}|},\texttt{[SEP]},\mathbf{i}_{2}^{1},...,\mathbf{i}_{2}^{|\mathbf{s}_{2}|},\texttt{[SEP]},...,\texttt{[SEP]},\mathbf{i}_{k}^{1},...,\mathbf{i}_{k}^{|\mathbf{s}_{k}|},\texttt{[SEP]},\texttt{[CLS]} \tag{7}\]
SpatioTemporal Embeddings:Next, we add custom Spatial and Temporal (SpatioTemporal) Encodings to all the events in the augmented sequence \(\mathcal{S}_{k}^{*}\). The transformer framework assumes the entire sequence \(\mathcal{S}_{k}\) as an atomic unit. It is therefore essential that we specify an encoding vector for each token in \(\mathbf{s}_{k}\) such that it can not only enable the model to differentiate among various event sets in the sequence but also effectively accumulate contextual information in the respective output
embeddings. Our requirement for encoding is different from positional encoding (Vaswani et al., 2017) due to the following reasons: **(i)** events \(\mathbf{i}_{j}^{1},\mathbf{i}_{j}^{2},...,\mathbf{i}_{j}^{|\mathbf{s}_{j}|}\) within a set \(\mathbf{s}_{j}\) for all \(j\leq k\) are unordered, unlike the ordered words in a textual sequence, and **(ii)** two consecutive timesteps are not uniformly separated in the timeline. For instance, the duration between the current and next visit of a patient might vary from as short as a month to as long as multiple years.
Consequently, we use the SpatioTemporal Encodings as described below, which can handle these non-uniform temporal differences whilst also retaining information about the co-occurrence of events in event sets.
\[\mathbf{v}_{enc}^{pos}(j,d) =\begin{cases}\sin\left(j/10000^{\frac{2d}{\mathbf{d}_{emb}}}\right)&;\text{if }j\text{ is even}\\ \cos\left(j/10000^{\frac{2d}{\mathbf{d}_{emb}}}\right)&;\text{otherwise}\end{cases}\] \[\mathbf{v}_{enc}^{temp}(\mathbf{t}_{j},d) =\begin{cases}\sin\left(\mathbf{t}_{j}/10000^{\frac{2d}{\mathbf{d}_{emb}}}\right)&;\text{if }\mathbf{t}_{j}\text{ is even}\\ \cos\left(\mathbf{t}_{j}/10000^{\frac{2d}{\mathbf{d}_{emb}}}\right)&;\text{otherwise}\end{cases}\] \[\mathbf{v}_{enc}(j,\mathbf{t}_{j},d) =\mathbf{v}_{enc}^{pos}(j,d)+\mathbf{v}_{enc}^{temp}(\mathbf{t}_{j},d) \tag{8}\]
where \(\mathbf{t}_{j}\) is the timestamp corresponding to the \(j^{th}\) event set \(\mathbf{s}_{j}\) for \(1\leq j\leq k\).
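The sketch below implements Eq. (8) as written (with the even/odd switch applied to \(j\) and \(\mathbf{t}_{j}\)); the helper name `spatiotemporal_encoding`, tensor shapes, and example values are our own illustrative choices.

```python
import torch

def spatiotemporal_encoding(positions, timestamps, d_emb):
    """Sketch of Eq. (8): sum of a position-based and a timestamp-based sinusoidal
    encoding per token. `positions` holds the event-set index j of each token and
    `timestamps` holds the corresponding t_j."""
    dims = torch.arange(d_emb, dtype=torch.float32)        # d = 0 .. d_emb - 1
    denom = torch.pow(10000.0, 2.0 * dims / d_emb)         # 10000^(2d / d_emb)

    def encode(values):
        angles = values.unsqueeze(-1) / denom              # (num_tokens, d_emb)
        even_mask = (values % 2 == 0).unsqueeze(-1)        # even/odd switch from Eq. (8)
        return torch.where(even_mask, torch.sin(angles), torch.cos(angles))

    return encode(positions.float()) + encode(timestamps.float())

# Example: 4 tokens from event sets j = [1, 1, 2, 2] with timestamps [3.0, 3.0, 10.0, 10.0].
enc = spatiotemporal_encoding(torch.tensor([1, 1, 2, 2]),
                              torch.tensor([3.0, 3.0, 10.0, 10.0]), d_emb=8)
print(enc.shape)  # torch.Size([4, 8])
```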
Note that the initial value of \(\mathbf{v}_{emb}\) is obtained by passing each event from event sets through \(\mathcal{A}_{E}\). Then we add \(\mathbf{v}_{enc}\) to \(\mathbf{v}_{emb}\) before passing it on to the Transformer model. We get a \(\mathbf{d}_{emb}\)-dimensional embedding vector \(\mathbf{v}_{emb}^{\texttt{[CLS]}}\) corresponding to the [CLS] token. We denote this output vector by \(\mathbf{v}_{out}\). Refer to Figure 2 for a block diagram of our approach.
Additionally, we use the following two prediction heads for training the representation model \(\mathcal{M}\): **(i)** Event set Prediction Head, denoted by \(\mathcal{P}_{E}\), and **(ii)** Temporal Prediction Head, denoted by \(\mathcal{P}_{T}\). \(\mathcal{P}_{E}\) takes \(\mathbf{v}_{out}\) as input and predicts \(M\) pairs of Gaussian distributional parameter vectors \(\langle\mu_{\hat{\mathbf{e}}_{k+1}}^{1},\sigma_{\hat{\mathbf{e}}_{k+1}}^{1} \rangle,\)\(\langle\mu_{\hat{\mathbf{e}}_{k+1}}^{2},\sigma_{\hat{\mathbf{e}}_{k+1}}^{2} \rangle,\)\(...,\langle\mu_{\hat{\mathbf{e}}_{k+1}}^{M},\sigma_{\hat{\mathbf{e}}_{k+1}}^{M}\rangle\) along with \(M\) mixing coefficients \(\alpha_{\hat{\mathbf{e}}_{k+1}}^{1},\alpha_{\hat{\mathbf{e}}_{k+1}}^{2},..., \alpha_{\hat{\mathbf{e}}_{k+1}}^{M}\). The \(M\) event set prediction vectors \(\hat{\mathbf{e}}_{k+1}^{1},\hat{\mathbf{e}}_{k+1}^{2},...,\hat{\mathbf{e}}_{k +1}^{M}\) are sampled from the distribution parameters with the same number of dimensions as events in the target set \(\mathcal{T}\) with each dimension modeling a Bernoulli distribution. After mixing the sampled vectors according to the mixing coefficients, we get
\[\hat{\mathbf{e}}_{k+1}=\alpha_{\hat{\mathbf{e}}_{k+1}}^{1}\cdot\hat{\mathbf{e}}_{k+1}^{1}+\alpha_{\hat{\mathbf{e}}_{k+1}}^{2}\cdot\hat{\mathbf{e}}_{k+1}^{2}+...+\alpha_{\hat{\mathbf{e}}_{k+1}}^{M}\cdot\hat{\mathbf{e}}_{k+1}^{M} \tag{9}\]
Similarly, \(\mathcal{P}_{T}\) takes \(v_{out}\) as input and outputs the \(M\) temporal Gaussian distributional parameter pairs \(\langle\mu_{\hat{\mathbf{t}}_{k+1}}^{1},\sigma_{\hat{\mathbf{t}}_{k+1}}^{1}\rangle\), \(\langle\mu_{\hat{\mathbf{t}}_{k+1}}^{2},\sigma_{\hat{\mathbf{t}}_{k+1}}^{2} \rangle,\)\(...,\langle\mu_{\hat{\mathbf{t}}_{k+1}}^{M},\sigma_{\hat{\mathbf{t}}_{k+1}}^{M}\rangle\) along with \(M\) temporal mixing coefficients \(\alpha_{\hat{\mathbf{t}}_{k+1}}^{1},\alpha_{\hat{\mathbf{t}}_{k+1}}^{2},..., \alpha_{\hat{\mathbf{t}}_{k+1}}^{M}\). We sample scalars \(\hat{\mathbf{t}}_{k+1}^{1},\hat{\mathbf{t}}_{k+1}^{2},...,\hat{\mathbf{t}}_{k +1}^{M}\) and similar to Equation 9, we obtain \(\hat{\mathbf{t}}_{k+1}\) by mixing them according to the mixing coefficients. We use reparametrization (similar to Kingma and Welling (2014)) to enable backpropagation in our model.
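As a rough sketch of how \(\mathcal{P}_{E}\) and \(\mathcal{P}_{T}\) could be realized, the module below predicts \(M\) Gaussian components and mixing coefficients from \(\mathbf{v}_{out}\) and samples with the reparametrization trick. The use of plain linear layers, a softmax over the mixing coefficients, a sigmoid for the per-event Bernoulli parameters, and the dimensions shown are our assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

class MixtureHead(nn.Module):
    """Sketch of a prediction head (P_E or P_T): M Gaussian components plus
    mixing coefficients, sampled with the reparametrization trick."""
    def __init__(self, d_emb, out_dim, num_mixtures):
        super().__init__()
        self.M, self.out_dim = num_mixtures, out_dim
        self.mu = nn.Linear(d_emb, num_mixtures * out_dim)
        self.log_sigma = nn.Linear(d_emb, num_mixtures * out_dim)
        self.alpha = nn.Linear(d_emb, num_mixtures)

    def forward(self, v_out):
        mu = self.mu(v_out).view(-1, self.M, self.out_dim)
        sigma = self.log_sigma(v_out).view(-1, self.M, self.out_dim).exp()
        alpha = torch.softmax(self.alpha(v_out), dim=-1)     # mixing coefficients
        eps = torch.randn_like(mu)                            # reparametrization trick
        samples = mu + sigma * eps                            # one sample per component
        return (alpha.unsqueeze(-1) * samples).sum(dim=1)     # mix the M samples (Eq. 9)

# Hypothetical heads: |T| = 50 target events and a scalar temporal output.
event_head = MixtureHead(d_emb=128, out_dim=50, num_mixtures=4)
time_head = MixtureHead(d_emb=128, out_dim=1, num_mixtures=4)
v_out = torch.randn(2, 128)                                   # batch of [CLS] vectors
e_hat = torch.sigmoid(event_head(v_out))                      # per-event Bernoulli parameters
t_hat = time_head(v_out).squeeze(-1)                          # predicted timestamps
```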
Temporal Event set Modeling:In order to learn representations in the second step of pre-training, we propose the Temporal Event set Modeling objective as described below. Upon sampling the tuple \(\langle\hat{\mathbf{e}}_{k+1},\hat{\mathbf{t}}_{k+1}\rangle\) corresponding to an input \(\langle\mathbf{s}_{k},\mathbf{t}_{k},\mathbf{f}_{k},\mathcal{H}_{k}\rangle\), we use the following loss objectives as a part of our TESET Modeling:
**(i)**: An element-wise binary cross-entropy loss on \(\hat{\mathbf{e}}_{k+1}\) against \(\mathbf{e}_{k+1}\):
\[\mathcal{L}_{Event}^{BCE}=-\frac{1}{|\mathcal{T}|}\sum_{d\in[|\mathcal{T}|]}\mathbbm{1}_{\{\mathcal{T}^{(d)}\in\mathbf{e}_{k+1}\}}\log\hat{\mathbf{e}}_{k+1}^{(d)}+\mathbbm{1}_{\{\mathcal{T}^{(d)}\notin\mathbf{e}_{k+1}\}}\log(1-\hat{\mathbf{e}}_{k+1}^{(d)}) \tag{10}\]
where \(\mathbb{1}\) denotes the indicator function and \(\mathbf{v}^{(d)}\) represents the \(d^{th}\) dimension of the vector \(\mathbf{v}\).
**(ii)**: Additionally, we use dice loss for handling the class imbalance problem in \(s_{k+1}\cap\mathcal{T}\).
\[\mathcal{L}_{Event}^{Dice}=1-\frac{1}{|\mathcal{T}|}\sum_{d\in[|\mathcal{T}|]} \frac{2\operatorname{\hat{\mathbf{e}}}_{k+1}^{(d)}\operatorname{\mathbf{e}} _{k+1}^{(d)}+\epsilon}{\sum_{d^{\prime}\in[|\mathcal{T}|]}\operatorname{\hat{ \mathbf{e}}}_{k+1}^{(d^{\prime})}+\operatorname{\mathbf{e}}_{k+1}^{(d^{\prime })}+\epsilon} \tag{11}\]
where \(\epsilon\) is a small Laplace Smoothening constant.
**(iii)**: Finally, for the timestamp prediction head, we use Huber loss to align \(\hat{\mathbf{t}}_{k+1}\) and \(\mathbf{t}_{k+1}\):
\[\mathcal{L}_{Temporal}^{Huber}=\begin{cases}\Delta^{2}/2&;\text{if }\Delta< \delta\\ \delta(\Delta-\delta/2)&;\text{otherwise}\end{cases} \tag{12}\]
where \(\Delta\) is the absolute value of \(\hat{\mathbf{t}}_{k+1}-\mathbf{t}_{k+1}\) and \(\delta\) is a positive constant.
We minimize a linear combination of the above loss objectives as follows:
\[\mathcal{L}=\lambda_{1}\mathcal{L}_{Event}^{BCE}+\lambda_{2}\mathcal{L}_{Event }^{Dice}+\lambda_{3}\mathcal{L}_{Temporal}^{Huber} \tag{13}\]
where \(\lambda_{1},\lambda_{2},\lambda_{3}>0\). We calculate this loss, back-propagate the gradients through our encoder models \(\mathcal{M}_{E}\) and \(\mathcal{A}_{E}\), and update the model parameters accordingly.
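A compact sketch of the combined objective in Eqs. (10)-(13) follows; the Dice term uses the standard soft-Dice form, and all tensor shapes and default values for \(\lambda_{i}\), \(\epsilon\), and \(\delta\) are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def teset_loss(e_hat, e_true, t_hat, t_true,
               lambdas=(1.0, 1.0, 1.0), eps=1.0, delta=1.0):
    """Sketch of Eqs. (10)-(13): BCE + Dice on the event-set head and a Huber loss on
    the temporal head. e_hat holds per-event probabilities, e_true is a multi-hot
    target over T, and t_hat / t_true are predicted / true timestamps."""
    # (i) element-wise binary cross-entropy over the |T| target events
    bce = F.binary_cross_entropy(e_hat, e_true)

    # (ii) Dice loss with Laplace smoothing to counter class imbalance
    intersection = (e_hat * e_true).sum(dim=-1)
    dice = 1.0 - (2.0 * intersection + eps) / (e_hat.sum(dim=-1) + e_true.sum(dim=-1) + eps)
    dice = dice.mean()

    # (iii) Huber loss on the predicted next timestamp
    huber = F.huber_loss(t_hat, t_true, delta=delta)

    l1, l2, l3 = lambdas
    return l1 * bce + l2 * dice + l3 * huber

# Example with a batch of 2 sequences and |T| = 5 target events.
e_hat = torch.rand(2, 5)
e_true = torch.tensor([[1., 0., 1., 0., 0.], [0., 0., 0., 1., 1.]])
t_hat, t_true = torch.tensor([2.3, 0.8]), torch.tensor([2.0, 1.0])
print(teset_loss(e_hat, e_true, t_hat, t_true))
```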
**Multiple Generations**: We implement our Transformer model \(\mathcal{M}\) using the Probabilistic Bayesian neural network framework. Essentially, every time we train the model we sample the weights and biases (for every layer) from a weight distribution, and then update the distribution through backpropagation. This enables us to sample the weights multiple times, thus providing us with an ensemble of \(N\) networks whose predictions we combine to get our final predicted outputs. This stabilizes the training, helps converge the loss objective faster, and the validation metrics improve noticeably during the initial stages of training (see Section 5 for more details).
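A minimal sketch of this Monte-Carlo ensembling at inference time is shown below, assuming a hypothetical `bayesian_model` whose forward pass re-samples its weights internally and returns event-set and timestamp predictions.

```python
import torch

@torch.no_grad()
def ensemble_predict(bayesian_model, inputs, num_samples=10):
    """Because the Bayesian transformer samples its weights on every forward pass,
    repeating the pass N times yields an implicit ensemble of N networks whose
    predictions are averaged."""
    event_preds, time_preds = [], []
    for _ in range(num_samples):
        e_hat, t_hat = bayesian_model(inputs)   # weights re-sampled internally each call
        event_preds.append(e_hat)
        time_preds.append(t_hat)
    return (torch.stack(event_preds).mean(dim=0),
            torch.stack(time_preds).mean(dim=0))
```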
\begin{table}
\begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{Training method} & \multicolumn{2}{c}{Synthea} & \multicolumn{2}{c}{Instacart} \\ \cline{2-5} & Event set & Time pred & Event set & Time pred \\ & pred (DSC) & (MAE) & pred (DSC) & (MAE) \\ \hline \hline \multicolumn{5}{l}{_Baselines:_} \\ Neural Hawkes Process & 0.08 & 2.50 & 0.29 & 0.24 \\ Transformer Hawkes Process & 0.18 & 2.41 & 0.32 & 0.24 \\ Hierarchical Model & 0.12 & 2.51 & 0.30 & 0.23 \\ \hline \hline \multicolumn{5}{l}{_Ours:_} \\ TESET & 0.20 & 2.29 & 0.35 & 0.21 \\ TESET + Contextual Embeddings & **0.30** & **2.17** & **0.42** & **0.18** \\ \hline \hline \end{tabular}
\end{table}
Table 1: **Temporal Event set Modeling Results. We compare our approaches to baselines. For DSC, the larger the better; for MAE, the smaller the better. We can see that even without Contextual embeddings, our methods outperform the baselines. Best results are in bold**
## 4 Experiments2
Footnote 2: Codes for our experiments are available at: [https://github.com/paragduttaiisc/temporal_event_set_modeling](https://github.com/paragduttaiisc/temporal_event_set_modeling)
### Datasets
1. **Synthea:**(Walonoski et al., 2017) encompasses the comprehensive medical records of each patient, generated synthetically. The medical history of each patient is represented as a chronological sequence of their hospital visits, along with the corresponding timestamps denoting the time of each visit. Each hospital visit comprises an event set containing the diagnoses, treatments, and procedures administered during that particular visit, in addition to the patient's characteristics such as age, weight, and gender.
2. **Instacart:**(Instacart, 2017) is a comprehensive collection of customers' order histories. Each individual customer's order history is represented as a sequential arrangement of orders, wherein each order includes a set of items purchased by the respective customer and the corresponding timestamp indicating the time of the order.
3. **MIMIC-III:**(Johnson et al., 2018) includes the historical data of patients who have visited the Intensive Care Unit (ICU) at a hospital. By analyzing the Electronic Health Record (EHR) history of each patient, we extract the sequential information regarding the set of medical conditions diagnosed and the respective admission timestamps. For the purpose of the finetuning task, the medical codes used in the Synthea dataset are correspondingly mapped to the medical codes employed in the MIMIC-III dataset.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline \multirow{2}{*}{FT?} & \multirow{2}{*}{Training method} & \multicolumn{2}{c}{Synthea} & \multicolumn{2}{c}{Instacart} \\ \cline{3-5} & & Event set given time & Time given event & Event set given time & Time given event \\ & & (DSC) & (MAE) & (DSC) & (MAE) \\ \hline \multirow{3}{*}{\begin{tabular}{c} Trained from \\ scratch \\ \end{tabular} } & Neural Hawkes Process & 0.21 & 5.70 & 0.35 & 2.19 \\ & Transformer Hawkes Process & 0.20 & 4.52 & 0.34 & 2.15 \\ & Hierarchical Model & 0.19 & 5.29 & 0.34 & 2.20 \\ & TESET (Ours) & _0.22_ & _4.28_ & _0.38_ & _1.83_ \\ \hline \multirow{3}{*}{
\begin{tabular}{c} Fine- \\ tuned \\ \end{tabular} } & Neural Hawkes Process & 0.13 & 6.01 & 0.30 & 2.29 \\ & Transformer Hawkes Process & 0.19 & 4.60 & 0.33 & 2.24 \\ & Hierarchical Model & 0.18 & 5.87 & 0.35 & 2.31 \\ & TESET (Ours) & _0.25_ & _3.91_ & _0.41_ & _1.19_ \\ \hline \hline \end{tabular}
\end{table}
Table 2: **Fine-tuning results.** FT stands ‘fine-tuned’. Our models consistently outperform the baselines in the setting of being fine-tuned for the downstream tasks. It should be noted that Fine tuning doesn’t work on the baseline models, and they often perform worse. In each stratum, the best-performing models have been italicized. The best-performing models have been shown in bold.
### Baselines
We quantify the advantage of our proposed approach by comparing with the following competitive baselines3:
Footnote 3: Baselines 1 and 2 are originally used for event prediction given a sequence of events, and Baseline 3 is originally used for discrete-timestep set prediction. We extend them to predict sets in the continuous time domain.
1. **Neural Hawkes Process (NHP):** The NHP (Mei and Eisner, 2017) employs a Recurrent Neural Network, specifically a continuous-time LSTM, to parameterize the intensity function \(\lambda\) of the Hawkes process. The intensity function is \(K\)-dimensional, denoted as \(\lambda_{k}(t)=f_{k}(w_{k}^{T}h(t))\), where \(f_{k}(\cdot)\) is the decay function, chosen to be the softplus function, \(K\) is the number of events, and \(h(t)\) is the hidden state of the LSTM.
2. **Transformer Hawkes Process (THP):** The THP (Zuo et al., 2020) utilizes a self-attention mechanism and temporal encoding to model the Hawkes process. This approach effectively captures long-term dependencies while maintaining computational efficiency, distinguishing it from the NHP (Neural Hawkes Process).
3. **Hierarchical Model (HM):** HM uses a hierarchical encoder where in the first step it encodes the sets and provides a representation for each set using a pooling function and in the second step it encodes the set representations temporally. We use a fully connected neural network to encode the sets and a Bi-LSTM model for encoding
Figure 3: [Best viewed in color] Our model’s predicted intensity plot for an elderly female patient who had a history of diabetes. The peak of two high-intensity disease curves (Diabetic Renal Disease and Hypertriglyceridemia) coincides with the date of actual hospitalization. Also, it is predicted that neuropathy might be a problem in the future, which is a well-known condition for people suffering from Type II Diabetes.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{Model} & Event set & Time & Event set given & Time given \\ & (DSC) & (MAE) & time (DSC) & event (MAE) \\ \hline TESET trained from scratch & 0.49 & 0.67 & 0.47 & 0.70 \\ TESET fine-tuned & _0.52_ & _0.14_ & _0.50_ & _0.19_ \\ \hline \hline \end{tabular}
\end{table}
Table 3: **Transfer learning results. We fine-tune the TEM model trained on the synthetic dataset using the MIMIC-III dataset. It is observable that synthetic to real transfer works better for our approach.**
the set representations temporally. This is similar in spirit to the Sets2Sets model (Hu and He, 2019).
### Downstream tasks
We demonstrate the superiority of the representations learned by our TESET model by fine-tuning on the following downstream tasks:
1. **Event set prediction given time:** In this downstream task, the idea is to model \(\mathbf{s}_{k+1}\) given the tuple \((\mathbf{s}_{k},\mathbf{t}_{k},\mathbf{f}_{k},\)\(\mathcal{H}_{k},\mathbf{t}_{k+1})\). In other words, we would like to predict the event set that might occur in the future given the future timestamp in addition to the tuple of the most recent event, its timestamp, its associated features, and the entire history of events-sets in that sequence as mentioned in Section 3. Note that the future timestamp when we want to predict the most probable event set lies in the continuous domain.
2. **Temporal prediction given an event:** Conversely, in this downstream task, the idea is to model \(\mathbf{t}_{k+1}\) given the tuple \((\mathbf{s}_{k},\mathbf{t}_{k},\mathbf{f}_{k},\mathbf{i})\), where \(\mathbf{i}\in\mathcal{T}\). In other words, we would like to predict the most probable time when a particular event might occur in the future given the event from the target set in addition to the tuple of a most recent event, its timestamp, its associated features, and the entire history of events in that sequence of events.
### Ablation Studies
We formulate our ablation experiments in the form of the following interesting research questions:
**RQ-1:** Are the representations learned by TEM useful for related tasks?
**RQ-2:** What is the role of incorporating additional features into our framework?
**RQ-3:** How effective is the Contextual Event Representation Learning step?
**RQ-4:** How likely are our approaches to adapt and generalize with respect to a domain shift?
\begin{table}
\begin{tabular}{l c c} \hline \hline Transformer Encoding & Event set pred. (DSC) & Time pred. (MAE) \\ \hline Positional Enc (Vaswani et al., 2017) & 0.35 & 0.22 \\ SpatioTemporal Enc (Ours) & **0.42** & **0.18** \\ \hline \hline \end{tabular}
\end{table}
Table 4: **SpatioTemporal Encodings**: Need for custom encoding during TEM is evident from the considerable advantage we observe. Models were trained on the Instacart dataset.
Figure 4: [Best viewed in color] Resource usage and training time comparison during TEM training on Synthea dataset. The TESET model is the fastest although it has similar computational requirements.
**RQ-5:** Can the advantage of using SpatioTemporal encodings over conventional positional encodings be quantified?
**RQ-6:** What is the training time saved by considering the set of temporal data points rather than each event individually?
**RQ-7:** Is the Bayesian Transformer with the distributional heads even required?
**RQ-8:** Do the predicted event intensities for a given history sequence correspond to something meaningful?
## 5 Results
Table 1 compares the performance of our proposed models with baselines under similar settings on two different tasks and two datasets:
1. It is evident that our TESET model outperforms existing baselines in both event set and temporal prediction metrics. We achieve 0.12 and 0.10 DSC improvement (absolute metrics) in the Synthea and Instacart datasets respectively for the event set prediction sub-task. We also achieve 0.34 and 0.05 absolute improvement in MAE in the same datasets for the time prediction sub-task.
2. We can quantify **RQ-3** by looking at the difference of metrics with and without using contextual embeddings. It can be noticed that using contextual embedding is clearly advantageous.
Figure 6 additionally shows a magnified t-SNE plot of the Contextual vectors in the representation space to demonstrate the clustering of similar items.
Table 2 compares the fine-tuning results for the event set given time and time given event downstream task:
Figure 5: [Best viewed in color] The plot on the **left** compares the test set dice scores for the TESET variants. Bayesian Transformer extends Simple Transformer with the Probabilistic Bayesian NN framework, while the Transformer with Gaussian Heads predicts a single Gaussian distribution at the final layer. It is clearly observable that Bayesian Transformer with Distributional Heads (Ours) is more stable and performs well right from the start. The plot on the **right** shows the test set dice scores of the TESET model with a combination of various features. It can be noticed that using all the features has a considerable advantage.
1. It can be noticed from the table that our methods outperform the baselines when fine-tuned instead of being trained from scratch.
2. It is again evident that our model (TESET) learns representations during TEM that can be used for downstream tasks, thus answering **RQ-1**. On the other hand, the baseline approaches learn representations that are not generalizable for downstream tasks, and hence they perform better (compared to themselves) when trained from scratch.
Figure 5 answers **RQ-2**. We can attribute the consistently improved performance of the model throughout the TEM training to the use of domain-specific features such as age, weight and gender. When trained with all of the features together, the model achieves the best performance.
Table 3 presents the domain generalization capabilities of the representations learned by our TEM model, answering **RQ-4**. We can see that even though our TEM model was trained on the Synthea dataset, it generalizes quite effectively to the real-world dataset MIMIC-III.
Table 4 answers **RQ-5** by showing that our SpatioTemporal Encodings definitely score higher metrics in both event set prediction and time prediction during TEM when compared to vanilla Positional Embeddings.
We answer **RQ-6** by observing Figure 4. It can be observed that the NHP and THP are \(10\times\) and \(4\times\) slower compared to our TESET model. Even the Hierarchical approach is \(1.3\times\) slower. We additionally present asymptotic computation and time complexity analysis in Table 5.
Figure 6: [Best viewed in color] 2D t-SNE embeddings of the representations learned after first step of our approach in Synthea Dataset. It can be observed that clusters are formed in the embedding space.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & NHP & THP & HM & TESET (Ours) \\ \hline Computational Complexity & \(\mathcal{O}(T\cdot\mu_{E}\cdot d^{2})\) & \(\mathcal{O}(T^{2}\cdot\mu_{E}^{2}\cdot d)\) & \(\mathcal{O}(T\cdot d)\) & \(\mathcal{O}(T^{2}\cdot d)\) \\ Time Complexity & \(\mathcal{O}(T\cdot\mu_{E})\) & \(\mathcal{O}(1)\) & \(\mathcal{O}(1)\) & \(\mathcal{O}(1)\) \\ \hline \hline \end{tabular}
\end{table}
Table 5: **Time and Computational Complexity.** An analysis of the computational and time complexity for each layer of the baseline methods and our method. The notations are as follows: \(T\) indicates the Sequence Length, \(\mu_{E}\) indicates the Average Event-Set Length (average number of items in the event-sets), and \(d\) indicates the Embedding (hidden) dimension.
From Figure 5, we can compare the training plots for the following models (i) simple transformer, (ii) transformer with distributional heads, (iii) Bayesian transformer with distributional heads. The considerable advantage of the model (iii) is clearly visible from the plots, thus answering **RQ-7**.
Finally, from Figure 3, we can see that the predicted intensities of various correlated diseases are shown to be high in the future. The peak of the curves coincides with the next hospitalization date in the dataset. Thus, not only is **RQ-8** answered by meaningful predictions, but the hospitalization could also be prevented with a precautionary checkup before the predicted date of hospitalization.
## 6 Limitations and Future Works
Our method is limited to representing the relationship among items (such as diseases and treatments) as distances amongst embeddings in the representation space. Consequently, our method can only capture pairwise relationships between items but not more complex relationships such as transitive or hierarchical. One natural extension of our work would be to infuse external knowledge from a knowledge graph. From the entities and their relationships, it might be possible to capture more complex relationships among items and additional information such as their attributes and side effects.
Another limitation of our method is that it only predicts the items in the set themselves, and not how the set should be used in a decision-making context. For example, if we are predicting diseases and treatments, our method cannot tell us which treatment is best for a particular patient. Thus, another ambitious future direction would be to extend our work to decision making, for instance by learning decision-making strategies that tell us which treatment to give to a patient at each time step, based on the patient's current state and the history of treatments they have received. This is often called the dynamic treatment regime, and extending our work in this domain would make it more useful in real-world applications.
## 7 Conclusion
In this paper, we propose a method for modeling the temporal event set distribution. We additionally learn self-supervised contextual event embeddings and incorporate temporal and domain-specific features into the framework to generate better representations. We also provide a Transformer based approach along with SpatioTemporal Encodings to model the same. We empirically demonstrate the validity of our methods along with the necessity of the various components of our proposed methods through appropriate experiments.
## Acknowledgement
The authors would like to thank the SERB, Department of Science and Technology, Government of India, for the generous funding towards this work through the IMPRINT Project: IMP/2019/000383. |
2307.16863 | MetaCAM: Ensemble-Based Class Activation Map | The need for clear, trustworthy explanations of deep learning model
predictions is essential for high-criticality fields, such as medicine and
biometric identification. Class Activation Maps (CAMs) are an increasingly
popular category of visual explanation methods for Convolutional Neural
Networks (CNNs). However, the performance of individual CAMs depends largely on
experimental parameters such as the selected image, target class, and model.
Here, we propose MetaCAM, an ensemble-based method for combining multiple
existing CAM methods based on the consensus of the top-k% most highly activated
pixels across component CAMs. We perform experiments to quantifiably determine
the optimal combination of 11 CAMs for a given MetaCAM experiment. A new method
denoted Cumulative Residual Effect (CRE) is proposed to summarize large-scale
ensemble-based experiments. We also present adaptive thresholding and
demonstrate how it can be applied to individual CAMs to improve their
performance, measured using pixel perturbation method Remove and Debias (ROAD).
Lastly, we show that MetaCAM outperforms existing CAMs and refines the most
salient regions of images used for model predictions. In a specific example,
MetaCAM improved ROAD performance to 0.393 compared to 11 individual CAMs with
ranges from -0.101-0.172, demonstrating the importance of combining CAMs
through an ensembling method and adaptive thresholding. | Emily Kaczmarek, Olivier X. Miguel, Alexa C. Bowie, Robin Ducharme, Alysha L. J. Dingwall-Harvey, Steven Hawken, Christine M. Armour, Mark C. Walker, Kevin Dick | 2023-07-31T17:20:48Z | http://arxiv.org/abs/2307.16863v1 | # MetaCAM: Ensemble-Based Class Activation Map
###### Abstract
The need for clear, trustworthy explanations of deep learning model predictions is essential for high-criticality fields, such as medicine and biometric identification. Class Activation Maps (CAMs) are an increasingly popular category of visual explanation methods for Convolutional Neural Networks (CNNs). However, the performance of individual CAMs depends largely on experimental parameters such as the selected image, target class, and model. Here, we propose MetaCAM, an ensemble-based method for combining multiple existing CAM methods based on the consensus of the top-\(k\)% most highly activated pixels across component CAMs. We perform experiments to quantifibly determine the optimal combination of 11 CAMs for a given MetaCAM experiment. A new method denoted Cumulative Residual Effect (CRE) is proposed to summarize large-scale ensemble-based experiments. We also present adaptive thresholding and demonstrate how it can be applied to individual CAMs to improve their performance, measured using pixel perturbation method Remove and Debias (ROAD). Lastly, we show that MetaCAM outperforms existing CAMs and refines the most salient regions of images used for model predictions. In a specific example, MetaCAM improved ROAD performance to 0.393 compared to 11 individual CAMs with ranges from -0.101-0.172, demonstrating the importance of combining CAMs through an ensembling method and adaptive thresholding.
## 1 Introduction
Convolutional neural networks (CNNs) are state-of-the-art deep learning architectures developed for image analysis tasks, including classification, segmentation, and object detection. These methods were originally considered uninterpretable (_i.e._ 'black boxes') given that specific regions or features of an image used to produce a model's prediction were unknown. Having clear, reliable model interpretation improves confidence and trust in deploying artificial intelligence in real-world settings. This is of particular importance in high-criticality fields, such as medicine, autonomous driving, and automatic biometric identification [15]. Furthermore, interpretability can identify decisions using incorrect information, such as biases or undesired markings in images. There is a definitive need for dependable visualizations of salient regions used in predictions.
Numerous studies have investigated improving the explainability of CNNs. An increasingly popular class of explainable algorithms for interpreting CNN model predictions are Class Activation Maps (CAMs) [19]. Originally developed by Zhou _et al._, CAMs create heat-map visualizations indicating the most salient regions of an image used by a CNN model for a given task. These visualizations are typically generated through a linear weighting of the feature maps produced by the final convolutional layer of a network. Many variations have been proposed to improve upon the original CAM formulation [1, 3, 4, 5, 6, 9, 12, 16, 18]. There is, however, little consensus regarding which CAM
method produces the most accurate reflection of important regions within an image.
The comparison across different CAM variants has also been largely inconsistent, both with what CAM methods are compared and what performance metric is considered. Qualitatively, performance may be evaluated in a given study by comparing visualizations between various CAMs. Quantitatively, various performance metrics have been proposed including perturbation analysis, object localization and segmentation, and human trust/class discrimination, making relative CAM ranking infeasible. Furthermore, the performance of CAM methods varies across the parameters of individual experiments, such as the chosen images, their target classes, and the CNN model. The combination of CAM visualizations for improved performance and consistency has recently been investigated [13]; however, the selection of contributing CAMs is arbitrary and understudied.
In this work, we address these issues by proposing a generalized method MetaCAM, a consensus-based CAM method that outputs the top-\(k\)% pixels in agreement across any number and combination of component CAM methods. This consensus-based approach ensures that if a particular CAM performs poorly for a specific task, its contribution will be mediated in the final MetaCAM visualization. Consequently, MetaCAM may be used reliably across diverse applications to generate valid CAM visualizations depicting the most salient regions of an image.
We additionally develop an adaptive thresholding method to determine optimal top-\(k\) values for maximizing MetaCAM performance. We further extend this method to refine and improve existing CAM methods. Through large-scale comparative experiments, we systematically determine what combinations of component CAMs should be considered as part of MetaCAM among 11 unique publicly implemented CAM methods[6].
Our key contributions are as follows:
* We propose MetaCAM, a novel ensemble-based CAM method that combines existing CAMs; we perform extensive experimentation to determine the best aggregation of CAM methods.
* We demonstrate that MetaCAM can be extended to include non-CAM visual explanation methods such as FullGrad [17]. For simplicity, when referring to CAM methods throughout this paper, we additionally subsume FullGrad.
* We develop adaptive thresholding to improve the performance of MetaCAM. We further demonstrate how this can be extended to individual CAM methods to greatly improve performance and refine visualizations.
* We perform a systematic evaluation of MetaCAM combinations of 11 individual CAM methods. We summarize performance across MetaCAM and all individual CAMs for numerous images, target classes, and CNN models using an unbiased quantitative performance metric, Remove and Debias (ROAD).
## 2 Related Work
### CAM Methods
For a given CAM method, the CAM visualization \(L\) is generated from a linearly weighted summation of all \(k\) feature maps \(A\) at the chosen layer \(l\) of a CNN architecture \(f\). Each CAM method produces a map for a given image, \(x\), and class-discriminative methods further specify the desired class output, \(c\). Most CAM methods also perform a ReLU operation after the final summation to retain positive activations (not included in Eq. 1). The CAM formulation is thus:
\[L^{c}_{CAM(A)}=\sum_{k}(\alpha^{c}_{k}A_{k}),\quad\text{where }A=f^{l}(x) \tag{1}\]
The original CAM determined \(\alpha^{c}_{k}\), the importance of individual feature maps, using the weights of the final dense layer leading to class predictions in a CNN [19]. However, this required the final dense layer to be preceded by a convolutional layer and global average pooling, which can reduce performance and is not included in many modern CNNs. As such, new CAM methods have been developed that are CNN model-agnostic and have improved upon existing CAM performance. We have grouped 11 existing CAM variants into distinct categories based on how they compute Eq. 1.
**Basic GradCAMs:** GradCAM was the first method to propose the use of gradients to determine the importance of feature maps [16]. Specifically, the gradient of the desired class with respect to the feature maps at a specified layer is used as the \(\alpha^{c}_{k}\) weights in Eq. 1. This eliminated the need for a specific network architecture and demonstrated improved performance over the original CAM computation. GradCAM++ is a variation of GradCAM which leverages a weighted average of first-order gradients based on higher-order gradients [1]. While these two methods are consistently used across studies and are highly popularised, certain critiques (_e.g._[4]) have led to the development of further CAM variants.
**GradCAM Variants:** Different GradCAM variants have been proposed to further improve the performance of GradCAM and GradCAM++ and address some of their limitations. Fu _et al._ argue that there is little theoretical explanation to justify averaging gradients in GradCAM [5]. Instead, they propose XGradCAM, which calculates a weighted average of gradients determined through an optimization problem based on axiom constraints of sensitivity and conservation. EigenGradCAM is another GradCAM variant, implemented by Gildenblat as a class-discriminative variation of EigenCAM [6, 12]. In this case,
\(\alpha^{\mathrm{c}}_{k}\) remains as the gradient, but rather than multiplying this by the feature map activations \(A\), the principal components of the activations are instead used.
**Elementwise GradCAMs:** Rather than weighting feature maps by the average of first or second gradients, some methods suggest performing elementwise multiplication of gradients by feature maps. Draelos and Carin show that averaging gradients may cause certain image elements to falsely appear as regions of importance [4]. For example, negative elementwise gradients may become positive from averaging and therefore incorrectly highlight areas on their respective feature maps. To avoid this, HiResCAM multiplies feature maps by their elementwise gradients, which ensures each pixel is weighted for importance by their respective gradient value [4]. A similar elementwise CAM implementation, GradCAMElementwise, was proposed by Gildenblat [6]. In this case, after performing elementwise multiplication of feature maps by gradients, ReLU is employed prior to the summation of feature maps.
**LayerCAM:** Jiang _et al._ recognized that the final layers of a CNN produce coarse feature maps, which may cause the CAM visualizations to lose important fine-detailed information [9]. To counter this, LayerCAM feature map activations are weighted using spatial-specific gradients (elementwise gradients), using positive gradients only. This ensures detailed information can be captured from any layer in the CNN, and these layers can be summed together for more specific CAM visualizations.
**Image Perturbation-Based Approaches:** While high performance has been achieved using gradient-based CAM methods, studies suggest that gradient saturation may lead to noisy or diminished visualizations. Wang _et al._ also show that gradient-based methods may result in incorrectly weighted feature maps [18]. ScoreCAM and AblationCAM are two alternative approaches that use perturbations to identify feature map importance. ScoreCAM image perturbations are created by masking the original input with each feature map [18]. The importance weights are then based on the forward output score of the network for the perturbed image. Conversely, AblationCAM zeroes out image regions that are activated in feature maps [3]. The reduced performance based on the perturbed image is then used as the \(\alpha^{\mathrm{c}}_{k}\) importance weights for AblationCAM.
**Non-Discriminative Approaches:** In addition to gradient saturation, the calculation of gradients can be time-consuming and dependent on correct image classification. EigenCAM eliminates these problems and the need for class discrimination by computing the principal components of the feature maps (this method does not follow the general CAM formulation in Eq. 1) [12].
**Non-CAM Gradient-Based Approaches:** While our study proposes MetaCAM, the combination of any CAM-based approach, it can also be extended into non-CAM feature maps. We demonstrate this with the inclusion of FullGrad, which is a non-discriminative method that uses the summed gradients of all bias terms throughout a CNN [17]. We include FullGrad and any other visual explanation methods under the term 'CAM' throughout this paper.
### Quantitative Evaluation
In order to evaluate performance and accuracy of CAM methods, there have been numerous quantitative metrics proposed. Here, we summarize three of the most commonly used metrics for CAM evaluation.
**Perturbation Analysis:** One method of evaluating the quality of CAM visualizations is by perturbing the original image with the outputted activations. Some papers choose to mask the least-activated regions from CAM in the original image [1, 3, 17, 18], while others choose to perturb the most-activated regions [5, 9, 16, 17]. The perturbed image is then processed by the model, and the resulting classification score is used to determine the drop or increase in confidence specifically attributable to the perturbation.
**Object Localization and Segmentation:** Another CAM performance metric is based on object localization/segmentation, demonstrated in [1, 4, 9, 12, 16, 18]. Here, true bounding boxes or segmentation masks are provided to identify objects within an image. The most highly activated pixels in CAM feature maps are converted to a binary mask, and the mask is then compared to the desired bounding box or segmentation mask. This is commonly measured through the intersection over union (IoU).
**Human Evaluation and Class Discrimination:** One of the reasons CAM methods have been developed is to instill higher trust in CNN model decisions. Certain studies choose to evaluate this by asking human raters to identify which CAM image (among different CAM methods) is more reliable [1, 3, 4]. Similarly, to evaluate whether CAM methods can correctly identify classes, individuals are asked to select which class is best highlighted in the image [3, 5, 16].
Certain metrics must be used with caution when evaluating CAM performance. A CNN may use areas of an image outside the desired class object to make its prediction and, as such, there exists concern as to whether object localization and human evaluation/class discrimination metrics truly evaluate the accuracy of CAM methods. Draelos and Carin highlight this with the example that water in an image may help provide evidence for the classification of a boat [4]. Thus, IoU and class discrimination should not be used to determine whether CAM methods accurately identify the salient regions of an image used for model prediction.
## 3 Proposed Method
In this section, we describe MetaCAM's formulation; a conceptual overview of our proposed method is depicted in
Figure 1. First, we define the problem and outline several methods of averaging individual CAM methods to develop MetaCAM. Next, we describe how MetaCAM can formulated based on the consensus of CAM methods. Briefly, the top _k_% most highly activated pixels in agreement across all chosen CAMs are used to create MetaCAM. We also describe how this thresholding technique can be used to refine individual CAMs. Lastly, we define how CAMs are evaluated using the Remove and Debias (ROAD) method.
### Problem Formulation
Consider a number of feature map visualizations \(n\) generated by different CAM methods. The objective is to determine a method of combining these visualizations to improve upon individual CAM performance. MetaCAM should create an accurate visualization of salient regions used in prediction regardless of selected image, target class, or model.
### Averaging-Based MetaCAM
An obvious initial formulation of MetaCAM is to simply take an average of all individual CAM methods. Prior to combining CAM visualizations, all maps are normalized between 0 and 1. CAMs that produce an invalid output for a given image/model are removed prior to calculation.
\[L^{c}_{MetaCAM(A)}=\frac{\sum\limits_{n}(L^{c}_{CAM(A)_{n}})}{n} \tag{2}\]
where \(n\) is the number of individual CAMs. If all CAMs included in the formulation provide highly accurate visualizations, averaging across CAMs should refine the activated regions and improve performance. However, if any of the included CAMs activate 'incorrect' regions of the image, equally weighting all CAMs will reduce the performance of MetaCAM and yield lower performance than other top-performing CAMs. To diminish the influence of poor-performing CAMs, a weighted average can be used:
\[L^{c}_{MetaCAM(A)}=\frac{\sum\limits_{n}(w_{n}\cdot L^{c}_{CAM(A)_{n}})}{ \sum\limits_{n}w_{n}} \tag{3}\]
The weights of each CAM can be determined by any quantitative measure of CAM performance. In this study, we choose the ROAD value (section 3.4) to numerically quantify and compare CAMs. In addition to using the ROAD values as weights themselves, transformations can be applied to augment the difference in performance between CAMs. We experiment with min-max, softmax (with initial amplification of ROAD weights to a minimum of \(10^{1}\)), and exponential normalization of weights.
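The sketch below illustrates the averaging-based formulations of Eqs. (2)-(3), assuming each component CAM is supplied as a 2D activation map and its ROAD score as a scalar weight; the function name is illustrative and the softmax branch omits the initial amplification step described above.

```python
import numpy as np

def weighted_meta_cam(cam_maps, road_scores, transform=None):
    """Sketch of Eq. (3): a ROAD-weighted average of normalized CAM maps.
    `cam_maps` is a list of HxW activation maps, `road_scores` their ROAD values."""
    weights = np.asarray(road_scores, dtype=np.float64)
    if transform == "minmax":
        weights = (weights - weights.min()) / (weights.max() - weights.min() + 1e-12)
    elif transform == "softmax":
        weights = np.exp(weights - weights.max())
        weights = weights / weights.sum()
    elif transform == "exp":
        weights = np.exp(weights)

    meta = np.zeros_like(cam_maps[0], dtype=np.float64)
    for cam, w in zip(cam_maps, weights):
        cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-12)   # normalize to [0, 1]
        meta += w * cam
    return meta / (weights.sum() + 1e-12)

# Equal weighting (Eq. 2) is recovered by passing unit ROAD scores:
# weighted_meta_cam([cam_a, cam_b], road_scores=[1.0, 1.0])
```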
While a weighted average of CAMs may improve MetaCAM performance over equal-weighting, poor-performing CAMs are not entirely removed from the overall formulation of MetaCAM and may still negatively affect performance. For this reason, we opt for a consensus-based MetaCAM formulation.
### Adaptive Thresholding-Based MetaCAM
To reduce the impact of poor-performing CAMs, we leverage a consensus-based formulation of MetaCAM. Here, rather than averaging CAMs, MetaCAM is generated using the top-_k_% of pixels in agreement across all CAM methods. Given the consensus of pixel activations across all methods, regions from individual CAMs that are incorrectly activated will be excluded from the final MetaCAM formulation. The activation maps from all CAM methods are summed, and a threshold is applied where the top-_k_% of summed activations are used as MetaCAM's activations, with all other activations set to zero:
\[L^{c}_{MetaCAM(A_{i,j})}=\left\{\begin{array}{ll}\sum\limits_{n}(L^{c}_{CAM(A_{i,j})_{n}})&\text{if }\sum\limits_{n}(L^{c}_{CAM(A_{i,j})_{n}})\geq t\\ 0&\text{otherwise}\end{array}\right.\]
where \(t\) is the chosen threshold and _i,j_ represent pixel locations. The best-performing threshold for MetaCAM is dependent on a given image, target class, and model.
Figure 1: Conceptual overview of the proposed MetaCAM visual explanation method. For clarity, \(w=1\) when thresholding is used (\(k<100\)%) and used otherwise when \(k=100\)%.
We therefore implement adaptive thresholding, which computes the ROAD performance of MetaCAM at different thresholds and returns the highest-performing resultant activation map. For a fair comparison with individual CAM methods, we extend this adaptive thresholding approach for each component CAM method. Thus, ROAD percentiles are calculated using the top-\(k\)% of highly activated pixels in each activation map.
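A minimal sketch of the consensus formulation and adaptive thresholding follows; `road_fn` stands in for any user-supplied ROAD scoring routine, and the candidate range of \(k\) values is illustrative.

```python
import numpy as np

def consensus_meta_cam(cam_maps, top_k_percent):
    """Consensus-based MetaCAM sketch: keep only the top-k% most highly activated
    pixels of the summed (normalized) component CAMs, zeroing the rest."""
    summed = np.zeros_like(cam_maps[0], dtype=np.float64)
    for cam in cam_maps:
        summed += (cam - cam.min()) / (cam.max() - cam.min() + 1e-12)
    threshold = np.percentile(summed, 100 - top_k_percent)   # the threshold t
    return np.where(summed >= threshold, summed, 0.0)

def adaptive_threshold(cam_maps, road_fn, candidate_ks=range(15, 46, 5)):
    """Adaptive thresholding sketch: score each candidate top-k% with a user-supplied
    ROAD function and return the best-performing activation map."""
    scored = [(road_fn(consensus_meta_cam(cam_maps, k)), k) for k in candidate_ks]
    best_score, best_k = max(scored)
    return consensus_meta_cam(cam_maps, best_k), best_k, best_score
```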
### MetaCAM Evaluation
To quantify all CAM methods considered in this study we implement a pixel perturbation method, Remove and Debias (ROAD) [14]. ROAD addresses issues of data leakage found in other pixel perturbation methods and improves computational efficiency [14]. To determine the performance of a CAM visualization, ROAD uses noisy linear imputations to perturb either the most or least activated image pixels. The imputations are applied to individual pixels based on neighbouring pixel values, creating a blurred perturbation of the original image as opposed to masking the pixels entirely. The perturbed image is then evaluated by the network to determine the increase or decrease in prediction confidence. Prediction confidence varies depending on the percent of pixels perturbed; for this reason, we choose to evaluate ROAD at 20%, 40%, 60%, and 80% of perturbation, taking an average across all percentiles for robust evaluation. We also consider Gildenblat's proposition to combine the confidence drop when the most activated pixels are perturbed, and the confidence increase when the least activated pixels are perturbed [6]. Our final ROAD evaluation score is thus:
\[ROAD(L_{CAM}^{c})=\sum_{p}\frac{(C_{L_{LRP}}^{p}-C_{L_{MRP}}^{p})}{2} \tag{4}\]
where \(p\) is the percentile, \(C\) is the confidence, _LRP_ represents least relevant pixels, and _MRP_ represents most relevant pixels.
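The following sketch computes Eq. (4) literally (a sum over the four percentiles of half the confidence gap); adding a \(1/4\) factor would yield the per-percentile average described in the text. The confidence values shown are hypothetical.

```python
def combined_road_score(confidence_lrp, confidence_mrp):
    """Sketch of Eq. (4): for each perturbation percentile, take half the gap between
    the model confidence when the least relevant pixels are perturbed (C_LRP) and when
    the most relevant pixels are perturbed (C_MRP), then sum over percentiles.
    Both inputs map percentile -> confidence for the target class."""
    percentiles = (20, 40, 60, 80)
    return sum((confidence_lrp[p] - confidence_mrp[p]) / 2.0 for p in percentiles)

# Hypothetical confidences obtained from ROAD's noisy linear imputation at each percentile.
c_lrp = {20: 0.91, 40: 0.88, 60: 0.84, 80: 0.75}
c_mrp = {20: 0.40, 40: 0.22, 60: 0.10, 80: 0.05}
print(combined_road_score(c_lrp, c_mrp))
```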
## 4 Experiments
In this section, we outline the experiments used to evaluate MetaCAM. Briefly, we first describe the image dataset, element classes and models, and experimental setup used to evaluate MetaCAM's performance in sections 4.1 and 4.2. Next, we test different combinations of CAMs (CAM-sets) to determine both the optimal formulation of MetaCAM and top-\(k\) pixel threshold, evaluated using a novel method denoted Cumulative Residual Effect (CRE) and normalized ROAD scores in sections 4.3, 4.4, 4.5, 4.6, 4.7. Lastly, we compare MetaCAM to individual CAMs both quantitatively and qualitatively in sections 4.8, 4.9.
### Datasets & Pre-Trained Models
To compare MetaCAM performance to other individual CAM methods we selected various images from the ImageNet ILSVRC 2012 validation dataset [2] in addition to sample images commonly considered as benchmarks for CAM method comparison. Each of the 2D RGB natural images with ground truth segmentation maps has a corresponding numerical class ID. Images are preprocessed by reshaping to an initial size of \(256\times 256\) and center cropping to \(224\times 224\), followed by normalization. For simplicity, a given image (_e.g._\(x\)), class (_e.g._\(c\)), and model (_e.g.\(f(\cdot)\)_) are hereafter denoted as an \((x,c,f(\cdot))\) 3-tuple. We use ResNet152 [7] and DenseNet161 [8] models (unless otherwise specified) pre-trained on the ImageNet-1K dataset from PyTorch [11], comprised of 1,000 possible classes.
### Experimental Setup
To experimentally determine what ideal combination of individual CAMs produces the best ROAD score for a given \((x,c,f(\cdot))\) we might trivially consider all available CAMs and/or vision explanation methods. While an initial starting point, such an approach comes at a maximal computational expense and does not provably guarantee optimal results. Rather, as with much ensemble-based research, the optimal combination of components must be determined experimentally. To that end, we leveraged a systematic approach for determining what CAMs to include/exclude in a particular MetaCAM application to a given \((x,c,f(\cdot))\).
Figure 2: CAM inclusion study curves summarising the adaptive thresholding performance across \(m=64\) binary experiments. The mean performance is banded by threshold-wise 95% confidence intervals.
### Systematically Determining the Ideal CAM-set for MetaCAM
The systematic and unbiased methodology to determine what CAM combination produces the optimal MetaCAM result can be determined experimentally. To that end, we leverage high-performance-computing (HPC) infrastructure to uniquely explore the hyper-parameterized inclusion-/exclusion-space of all available CAM methods.
For the \(n=11\) individual CAM methods under consideration, we opted for six unique groupings of methodologies to reduce the computational expense of our systematic experiments (\(m=2^{11}\) unique experiments is computationally taxing). The groupings are based on methodological similarity, as reflected in the introduction; certain CAMs were excluded from these large-scale experiments given their individual computational expense. Consequently, the included CAM methods form \(n=6\) groups, yielding \(m=2^{n}=64\) individual experiments that represent the inclusion/exclusion of specific groups:
A : [HiResCAM, GradCAMElementwise], B : [GradCAM, GradCAM++], C : [XGradCAM], D : [AblationCAM, ScoreCAM], E : [LayerCAM], F : [FullGrad]
Thus, each systematically determined experiment is summarized by a binary number in which each position indicates the inclusion (1) or exclusion (0) of one group; for example, \(100000\) represents only the CAMs of group A. Similarly, 111111 represents the inclusion of all CAMs in groups ABCDEF. Our work quantifies the ROAD performance of all \(m=64\) individual experiments for a given \((x,c,f(\cdot))\). The maximum achievable ROAD score is determined across all \(m=64\) experiments, and a relative ranking of CAM inclusions/exclusions is further explored in subsequent sections. Individual experiments were allocated \(4\times\) cores, 64GB of RAM, and either an NVIDIA P100 Pascal or V100 Volta GPU through the Digital Research Alliance HPC infrastructure.
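The enumeration of these binary inclusion/exclusion experiments can be sketched as follows; this is only an illustrative Python outline, and `evaluate_cam_set` is a placeholder for building the MetaCAM consensus from the selected CAMs and computing its ROAD score.

```python
# Illustrative sketch: enumerate the m = 2^6 = 64 inclusion/exclusion codes over
# the six CAM groups. The scoring function is a placeholder, not the real evaluation.
from itertools import product

GROUPS = {
    "A": ["HiResCAM", "GradCAMElementwise"],
    "B": ["GradCAM", "GradCAM++"],
    "C": ["XGradCAM"],
    "D": ["AblationCAM", "ScoreCAM"],
    "E": ["LayerCAM"],
    "F": ["FullGrad"],
}

def evaluate_cam_set(cams):
    return 0.0   # placeholder for MetaCAM construction + ROAD scoring

results = {}
for bits in product("10", repeat=len(GROUPS)):
    code = "".join(bits)                                   # e.g. "100000" = group A only
    cams = [cam for b, members in zip(bits, GROUPS.values()) if b == "1"
            for cam in members]
    if cams:                                               # skip the empty CAM-set "000000"
        results[code] = evaluate_cam_set(cams)
```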
### Experimentally Determining Ideal MetaCAM Top-\(k\)% Pixel Threshold
The proposed MetaCAM method leverages top-\(k\) pixel adaptive thresholding that can be experimentally determined across the \(m=64\) individual experiments. For a given range of values \(k\) (typically \(k\in[15,45]\), determined experimentally) the mean ROAD score is depicted for various \((x,c,f(\cdot))\) in Figure 2. We note that there does not exist a universally applicable threshold range and that in some
Figure 4: Cumulative Residual Effect summarizing group-wise impact on MetaCAM performance
Figure 5: Comparison of normalized inter-experiment maximum ROAD scores.
Figure 3: Frequency for which a given top-\(k\)% pixel threshold produced the maximum ROAD score across \(m=64\) binary experiments.
cases (_e.g._ E) there does not exist a definitive ideal threshold, whereas for others (_e.g._ A,B,C,D,G) there are peak thresholds or ranges providing dramatic increases in ROAD score.
To further investigate the influence of MetaCAM's top-\(k\) thresholding, Figure 3 depicts a sample of four experiments and the frequency at which a given value \(k\) produced the top-1 maximum ROAD score. Interestingly, certain \((x,c,f(\cdot))\) produce a definitive \(k\) (_e.g._ A,C) while others suggest a general range for \(k\) (_e.g._ B,D). Intuitively, the complexity of the contents of a given image and the classes represented therein are strong determinants of the ideal \(k\); images with a clear distinction between class-specific pixels and the remainder of the image do not overly benefit from an adaptive threshold (Figure 2E), while all others suggest tuning \(k\). As a rule of thumb, a modest amount of top-\(k\) thresholding appears to consistently improve the ROAD score, often by a considerable margin.
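A minimal sketch of the adaptive top-\(k\)% pixel thresholding step is shown below (an illustrative outline, not the authors' code): only the \(k\)% most highly activated pixels of a (consensus) activation map are retained.

```python
# Illustrative sketch: keep only the top-k% most highly activated pixels of an
# activation map and zero out the rest.
import numpy as np

def top_k_threshold(activation_map, k):
    """k in (0, 100]: percentage of most highly activated pixels to keep."""
    cutoff = np.percentile(activation_map, 100 - k)
    return np.where(activation_map >= cutoff, activation_map, 0.0)

rng = np.random.default_rng(0)
cam = rng.random((7, 7))                 # stand-in for a consensus activation map
print(top_k_threshold(cam, 15))          # top 15% of pixels survive
```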
### Cumulative Residual Effect
The \(m=64\) binary CAM-group inclusion/exclusion experiments for a given \((x,c,f(\cdot))\) produce a unique relative ranking of experiment-specific binary numbers according to the MetaCAM ROAD score. To quantify the influence a given CAM group had on the resultant MetaCAM ROAD score we propose the Cumulative Residual Effect (CRE) method. CRE determines the relative positive/negative effect of each CAM group by taking the residual of the individual MetaCAM score with respect to the median of all scores over the \(m=64\) experiments, and summing this residual (either a positive or negative value) into each contributing CAM group within that experiment. This produces a group-wise summary representing the relative impact of including/excluding that CAM group; every CAM group is included in exactly 32 experiments and excluded in exactly 32 experiments.
Since this method leverages residuals, it has an additional useful property of being aggregated with other experiments; that is, inter-experiment CREs are obtained by summing intra-experiment group-wise values. In Figure 4 we illustrate four example CRE plots that may be roughly interpreted in a way analogous to SHAP force plots [10]. The inclusion of certain CAM groups will positively improve MetaCAM ROAD score (_e.g._ A,B) relative to the median of all experiments, and in other cases, their inclusion negatively influences overall ROAD scores (_e.g._ C,D; it is critical to note that this quantification is relative to the
Figure 6: Visualization of the MetaCAMs from the best performing threshold across various images, classes, & model architectures.
Figure 7: MetaCAM outperforms individual methods even when augmented through adaptive thresholding.
experiment-wide median ROAD score).
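The CRE computation itself reduces to a few lines; the sketch below is illustrative (the scores are made up), but it follows the description above: each experiment's residual with respect to the median ROAD score is accumulated into every CAM group included in that experiment.

```python
# Illustrative CRE sketch: residuals w.r.t. the median ROAD score are summed
# into every CAM group included in the corresponding experiment.
import statistics

experiments = {"100000": 0.21, "110000": 0.27, "111111": 0.25, "000001": 0.12}  # toy scores
groups = "ABCDEF"

median_score = statistics.median(experiments.values())
cre = {g: 0.0 for g in groups}
for code, score in experiments.items():
    residual = score - median_score
    for bit, g in zip(code, groups):
        if bit == "1":
            cre[g] += residual

print(cre)   # positive: including the group tends to help; negative: it tends to hurt
```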
### Comparison of MetaCAMs across Images, Classes, & Models
To fully depict the MetaCAM output across varying \((x,c,f(\cdot))\), we visualize the best-performing MetaCAMs by ROAD score in Figure 6. We note that our consensus-based approach reliably detects the target class in all cases, with dramatic improvements in ROAD score over individual CAMs (section 4.8).
### "Bad" CAMs & RandomCAM can Improve Performance
Surprisingly, the inclusion of individually poor-performing CAMs (such as EigenCAM, which often resulted in low or negative ROAD scores) and random noise (_e.g._ RandomCAM, an activation map generated from a random uniform distribution between -1 and 1 [6]) resulted in improved MetaCAM performance. Our initial top-performing MetaCAM combination using DenseNet161 to predict the cat in the catdog image achieved a ROAD performance of 0.295. However, the inclusion of EigenCAM and RandomCAM improved ROAD performance to 0.393 and 0.314, respectively. In addition, the highest-performing threshold decreased from 19 to 10 with EigenCAM, and to 15 with RandomCAM. We posit that the inclusion of poor-performing CAMs or random noise forces MetaCAM to further refine its output to only the highest-consensus pixels at lower top-\(k\) threshold values. Giving random or incorrect regions higher activations results in a refinement of the specific area used in model predictions and contributes to the exclusion of other classes within the same image. Interestingly, future work might consider incorporating various "bad"/random-based CAMs to experimentally determine their impact on overall MetaCAM performance.
### MetaCAM Outperforms Individual Methods
To compare the performance of MetaCAM against other visual explanation methods, we applied adaptive thresholds to each individual visualization, displayed in Figure 7. MetaCAM outperforms all individual CAM methods, shown by the largest peak at \(k=10\). Most CAMs reach peak performance using a threshold between the top 10% and 30% of most highly activated pixels. This indicates that adaptive thresholding is able to improve the performance of all visual explanation methods by selecting the most relevant pixels for a given \((x,c,f(\cdot))\). In addition, Figure 7 also shows the original performance of CAMs without adaptive thresholding (\(k=100\)). Here, ROAD performance ranges between -0.101 and 0.172, further demonstrating that MetaCAM (ROAD=0.393) provides a dramatic increase in performance compared to the original CAMs.
### Comparison of CAMs across different \(k\)% Pixel Thresholds
Figure 8 displays our adaptive thresholding technique applied to all 11 CAMs explored in this study, in addition to MetaCAM. CAMs are shown using thresholds of the top 15%, 30%, and 45% activated pixels, as well as the original CAM visualization. These qualitative results present the usefulness of thresholding; reducing the number of activated pixels focuses on the most salient regions of each image. Many of the original CAM visualizations activate large regions of the image, including both the cat and dog despite only using the cat class ID (281) as the target. Adaptive thresholding is able to refine the activations of all CAMs to focus on the desired target class. It is important to note that thresholding is not performing object localization; it is focusing on the regions of an image used by a model for
Figure 8: Comparison of individual CAMs using a pre-trained DenseNet161 and the cat class ID (281).
prediction, as measured by ROAD.
Interestingly, EigenCAM incorrectly highlights the dog in the image, instead of the desired cat class. This explains the negative ROAD value for EigenCAM in Figure 7. EigenCAM is a non-discriminative CAM method that uses principal components to create activation maps. However, when there are multiple classes within the same image, the order of principal components must be specified (_e.g.,_ first principal component vs. second principal component). EigenCAM performs well on images with a single subject, but otherwise requires a user to determine the number and rank of the various components within an image to perform successfully. This requires a level of hand-engineering and introduces a risk of data leakage. Despite the potential of highlighting 'incorrect' classes in an image, EigenCAM may still be beneficial to the performance of MetaCAM, as described previously.
## 5 Conclusion
In this study, we propose MetaCAM, a consensus-based combination of any number of existing CAM formulations using the top \(k\)% of pixels in agreement across all methods. Our experiments demonstrate that MetaCAM is able to outperform existing CAM methods, both with and without adaptive thresholding. We expect MetaCAM to be of particular use in high-criticality fields.
## Acknowledgment
The authors would like to acknowledge Dr. Katherine Muldoon for her support of this work. The authors also acknowledge that this study took place on unceded Algonquin Anishinabe territory.
|
2309.09237 | Human Movement Forecasting with Loose Clothing | Human motion prediction and trajectory forecasting are essential in human
motion analysis. Nowadays, sensors can be seamlessly integrated into clothing
using cutting-edge electronic textile (e-textile) technology, allowing
long-term recording of human movements outside the laboratory. Motivated by the
recent findings that clothing-attached sensors can achieve higher activity
recognition accuracy than body-attached sensors. This work investigates the
performance of human motion prediction using clothing-attached sensors compared
with body-attached sensors. It reports experiments in which statistical models
learnt from the movement of loose clothing are used to predict motion patterns
of the body of robotically simulated and real human behaviours.
Counterintuitively, the results show that fabric-attached sensors can have
better motion prediction performance than rigid-attached sensors. Specifically,
The fabric-attached sensor can improve the accuracy up to 40% and requires up
to 80% less duration of the past trajectory to achieve high prediction accuracy
(i.e., 95%) compared to the rigid-attached sensor. | Tianchen Shen, Irene Di Giulio, Matthew Howard | 2023-09-17T10:56:06Z | http://arxiv.org/abs/2309.09237v3 | # Trajectory Forecasting with Loose Clothing Using Left-to-Right Hidden Markov Model
###### Abstract
Trajectory forecasting has become an interesting research area driven by advancements in wearable sensing technology. Sensors can be seamlessly integrated into clothing using cutting-edge electronic textiles technology, allowing long-term recording of human movements outside the laboratory. Motivated by the recent findings that clothing-attached sensors can achieve _higher_ activity recognition accuracy than body-attached sensors, this work investigates motion prediction and trajectory forecasting using rigid-attached and clothing-attached sensors. The future trajectory is forecasted from the probabilistic trajectory model formulated by left-to-right hidden Markov model (LR-HMM) and motion prediction accuracy is computed by the classification rule. Surprisingly, the results show that clothing-attached sensors can forecast the future trajectory and have _better_ performance than body-attached sensors in terms of motion prediction accuracy. In some cases, the clothing-attached sensor can _enhance_ accuracy by \(45\%\) compared to the body-attached sensor and requires approximately \(80\%\)_less_ duration of the historical trajectory to achieve the same level of accuracy as the body-attached sensor.
## I Introduction
Human trajectory forecasting (HTF) is crucial for understanding human motion in various research areas [1], ranging from human-robot interaction (_e.g.,_ service robots [2]), human-robot collaboration in manufacturing [3] to rehabilitation devices (_e.g.,_ exoskeleton robots [4]). With the latest e-textiles technology, sensors can be embedded in clothing [5], ensuring comfortable and unobtrusive wear for users. This allows for the recording of human movement outside the laboratory for a long period [6].
However, a challenge in recording human movement using e-textiles is the potential inclusion of motion artifacts caused by the movement of the clothing with respect to the body. Many different approaches have been used to address this issue. For example, (i) sensors have been tightly affixed to the body using tape [7, 8], (ii) attached to tight-fitting clothing [9] or (iii) statistical machine learning/signal processing methods have been employed to reduce artifacts [10, 11, 12]. In contrast, an increasing body of work [13, 14, 15, 16, 17] suggests fabric motion could _help_ human motion analysis. Taking inspiration from these works, this paper focuses on _trajectory forecasting using loose clothing_.
Trajectory forecasting is defined as the task of predicting the future movement of objects (_e.g.,_ the human body) [1]. Fig. 1 shows the diagram of trajectory forecasting using the clothing-attached sensor while the human is walking. Clothing-attached sensor records clothing motion (solid line) and the future trajectory of the human movement (dashed line and shaded area) is forecasted.
To this end, this paper solves the trajectory forecasting problem using time-series analysis techniques. A Hidden Markov Model (HMM) is adopted, since HMMs are widely used for modelling various types of time-series data [18, 19]. The classification rule is utilised to evaluate the performance of motion recognition based on a short historical trajectory. The results suggest that clothing-attached sensors can not only forecast a future trajectory but also lead to _higher_ prediction accuracy compared to rigidly-attached sensors. Furthermore, this paper computes the statistical distance (_i.e.,_ cross-fitness distance) between two LR-HMMs to estimate the discrimination information, which is used to understand this physical phenomenon. To the best of the authors' knowledge, this is the first paper using fabric movement to forecast the future trajectory.
Fig. 1: Illustration of trajectory forecasting (dashed line and shaded area) based on the historical trajectory (solid line) using the clothing-attached sensor when the human is walking.
## II Problem Definition
The following introduces the problems that need to be addressed when solving the trajectory forecasting task.
The objective of this task is to forecast future movement observations within the context of wearable sensing. The body/clothing-attached sensors collect \(t\) time steps as a person performs a specific movement with \(\mathcal{K}\) repetitions. The observation \(\mathbf{Y}\) consists of sensor readings. It can be formed as \(\mathbf{Y}^{(\mathcal{K})}=(\mathbf{y}_{1},\mathbf{y}_{2},\mathbf{y}_{3}\dots \mathbf{y}_{t})\). The sensor may collect \(\mathcal{M}\)-dimensional readings at every time step \(t\). Each element in observation \(\mathbf{Y}\) can be denoted as \(\mathbf{y}_{t}\in\mathbb{R}^{\mathcal{M}}\). Human/robot movements are assumed to belong to one of a finite, discrete set of classes so the task is to predict the movement's class label (_i.e.,_\(c_{i}\) with \(i\in\{1,2\}\)) 1 from \(\mathbf{Y}\) and, from this, forecast the future trajectory. The future trajectory \(\mathbf{\tilde{Y}}\) can be represented as \(\mathbf{\tilde{Y}}=(\mathbf{\tilde{y}}_{t+1},\mathbf{\tilde{y}}_{t+2},\mathbf{ \tilde{y}}_{t+3}\dots\mathbf{\tilde{y}}_{T})\). Each element can also be \(\mathcal{M}\) dimensional \(\mathbf{\tilde{y}}_{T}\in\mathbb{R}^{\mathcal{M}}\).
Footnote 1: Throughout the paper, without loss of generality, all prediction tasks are assumed to be binary.
This paper uses HMMs to solve this task as follows [20]. Consider an \(\mathcal{N}\)-state Markov chain where the hidden states are denoted as \(S=\{s_{1},s_{2},\dots,s_{\mathcal{N}}\}\). The number of hidden states is set to be equal to the number of time steps in each observation sequence \(\mathbf{Y}\) (_i.e.,_\(\mathcal{N}=\mathcal{T}\)). This implies that each hidden state represents a specific time step in observation \(\mathbf{Y}\). The transitions between these are defined by the transition probability matrix \(\mathbf{A}\), whose elements are
\[A_{ij}=P(\mathbf{y}_{t}=j|\mathbf{y}_{t-1}=i),0<i\leq\mathcal{N},0<j\leq \mathcal{N}. \tag{1}\]
\(A_{ij}\) represents the probability of transitioning to state \(j\) at time \(t\), conditional on being in state \(i\) at time \(t-1\). The relationship between observations \(\mathbf{Y}\) and hidden states \(S\) at each time step \(t\) of the sensor readings is described by state-observation probability \(\mathbf{B}_{t}\), which is treated as a Gaussian distribution
\[\begin{split} B_{j}(t)&=P(\mathbf{y}_{t}|s_{t}=s_{ j})\\ &=\mathcal{N}(\mu_{j},\Sigma_{j}),0<t\leq\mathcal{N},0<j\leq \mathcal{N}.\end{split} \tag{2}\]
\(B_{j}(t)\) is the probability that the observation is \(\mathbf{y}_{t}\) when the state is \(s_{j}\) at time \(t\). The probabilities of each state for the observation at the first time step of the sensor readings (_i.e.,_\(t=1\)) is
\[\pi_{i}=P(\mathbf{y}_{1}=s_{i}),\quad 0<i\leq\mathcal{N}. \tag{3}\]
This type of HMM is also known as a LR-HMM [21]. A compact notation of LR-HMM parameter is \(\boldsymbol{\theta}=\{\pi,\mathbf{A},\mathbf{B}\}\).
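As an illustration of these definitions, the following Python fragment (an assumed construction, not taken from the paper) builds a simple left-to-right initial distribution \(\pi\) and transition matrix \(\mathbf{A}\), in which the chain starts in the first state and can only remain in a state or advance to the next one; the emission part \(\mathbf{B}\) would consist of one Gaussian \((\mu_{j},\Sigma_{j})\) per state, as in Eq. (2).

```python
# Illustrative sketch of left-to-right LR-HMM parameters pi and A (Eqs. (1), (3)).
# The "stay or advance by one" structure is an assumption made for this example;
# the paper only requires N = T states.
import numpy as np

def left_to_right_params(num_states):
    pi = np.zeros(num_states)
    pi[0] = 1.0                                   # the chain starts in state s_1
    A = 0.5 * (np.eye(num_states) + np.eye(num_states, k=1))
    A[-1, -1] = 1.0                               # last state is absorbing
    return pi, A

pi, A = left_to_right_params(5)
print(A)
```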
In this paper, given the observation sequences (_i.e.,_ sensor readings) of two categories of movements \(\mathbf{Y}_{c=1}\), \(\mathbf{Y}_{c=2}\), the HMM parameter \(\boldsymbol{\theta}_{c=1}\), \(\boldsymbol{\theta}_{c=2}\) are estimated using the Baum-Welch algorithm. The historical movement (red line) is then recognised (_i.e.,_\(c=1\) or \(c=2\) is determined) using the forward algorithm and the classification rule, and based on this, the probabilistic trajectory model is used to forecast the future trajectory (dashed red line and shaded area). To formulate the probabilistic trajectory model, the most likely state sequence \(S^{*}=\{s_{1}^{*},s_{2}^{*},\dots,s_{\mathcal{N}}^{*}\}\) is estimated using the Viterbi algorithm. The framework is illustrated in Fig. 2.
## III Methodology
This section introduces the methods (_i.e.,_ Baum-Welch algorithm, the forward algorithm, the classification rule and the Viterbi algorithm) to address the problem defined in SSII.
### _Left-to-right hidden Markov model parameter \(\boldsymbol{\theta}\) estimation_
The first step is to estimate the HMM parameters \(\boldsymbol{\theta}\), by maximising \(P(\mathbf{Y}^{(\mathcal{K})}|\boldsymbol{\theta})\) given observations \(\mathbf{Y}^{(\mathcal{K})}\)_i.e.,_
\[\tilde{\boldsymbol{\theta}}=\operatorname*{argmax}_{\boldsymbol{\theta}}\sum_{k=1}^{\mathcal{K}}\log P(\mathbf{Y}^{(k)}|\boldsymbol{\theta}). \tag{4}\]
Baum-Welch algorithm is used to solve this problem [22]. The following describes each step.
* The initial values of HMM parameters \(\boldsymbol{\theta}^{0}=\{\pi^{0},A^{0},B^{0}\}\) are chosen randomly from observation \(\mathbf{Y}^{(\mathcal{K})}\).
* E-step: \(\gamma_{t}(i)\) is defined as the probability of being in state \(s_{i}\) at time \(t\), given the observation sequence \(\mathbf{Y}\) and model \(\boldsymbol{\theta}^{l}\). It can be shown \[\gamma_{t}(i)=P(s_{t}=s_{i}|\mathbf{Y},\boldsymbol{\theta}^{l})=\frac{\alpha_{ t}(i)\beta_{t}(i)}{\sum_{j=1}^{\mathcal{N}}\alpha_{t}(j)\beta_{t}(j)},\] (5) where \[\alpha_{t}(i)=B_{i}(\mathbf{y}_{t})\sum_{j=1}^{\mathcal{N}}\alpha_{t-1}(j)A_{ji},\] (6) and \[\beta_{t}(i)=\sum_{j=1}^{\mathcal{N}}\beta_{t+1}(j)A_{ij}B_{j}(\mathbf{y}_{t+1 }).\] (7)
* \(\xi_{t}(i,j)\) is defined as the probability of being in state \(i\) at time \(t\), and state \(j\) at time \(t+1\), given the HMM parameter \(\boldsymbol{\theta}^{l}\) and observation \(\mathbf{Y}\): \[\begin{split}\xi_{t}(i,j)&=P(s_{t}=i,s_{t+1}=j| \mathbf{Y},\boldsymbol{\theta}^{l})\\ &=\frac{\alpha_{t}(i)A_{ij}B_{j}(\mathbf{y}_{t+1})\beta_{t+1}(j)}{ \sum_{i=1}^{\mathcal{N}}\alpha_{t}(i)\beta_{t}(i)}.\end{split}\] (8) \(\gamma_{t}^{(\mathcal{K})}(i)\) and \(\xi_{t}^{(\mathcal{K})}(i,j)\) for each observation sequence \(\mathcal{K}\) can be computed repetitively using equation (5)(8). The new \(\boldsymbol{\theta}^{l+1}\) can be updated as below [23].
* M-step: \[\pi_{i}^{l+1}=\frac{\sum_{k=1}^{\mathcal{K}}\gamma_{t=1}^{(k)}(i)}{\mathcal{K}}.\] (9) \[A_{ij}^{l+1}=\frac{\sum_{k=1}^{\mathcal{K}}\sum_{t=1}^{\mathcal{T}-1}\xi_{t}^{(k)}(i,j)}{\sum_{k=1}^{\mathcal{K}}\sum_{t=1}^{\mathcal{T}-1}\gamma_{t}^{(k)}(i)},\] (10) \[B_{i}^{l+1}(\nu_{k})=\frac{\sum_{k=1}^{\mathcal{K}}\sum_{t=1}^{\mathcal{T}}1_{\mathbf{y}_{t}^{(k)}=\nu^{(k)}}\gamma_{t}^{(k)}(i)}{\sum_{k=1}^{\mathcal{K}}\sum_{t=1}^{\mathcal{T}}\gamma_{t}^{(k)}(i)},\] (11) where \[1_{\mathbf{y}_{t}^{(k)}=\nu^{(k)}}=\begin{cases}1&\text{if }\mathbf{y}_{t}^{(k)}=\nu^{(k)}\\ 0&\text{otherwise}\end{cases}\] (12) is an indicator function.
* The E-step and M-step are iterated until HMM parameter \(\boldsymbol{\theta}\) converges.
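A rough Python analogue of this estimation step is sketched below. It assumes the third-party `hmmlearn` package (the paper itself relies on Kevin Murphy's MATLAB toolbox), and the left-to-right structure is imposed simply by initialising the start and transition probabilities with zeros, which the EM updates preserve.

```python
# Illustrative sketch (assumes the third-party `hmmlearn` package): fit one
# Gaussian-emission HMM per movement class with Baum-Welch (EM). Zeros placed in
# the initial start/transition probabilities encode the left-to-right structure.
import numpy as np
from hmmlearn.hmm import GaussianHMM

def fit_lr_hmm(sequences, num_states):
    """sequences: list of (T_k, M) observation arrays for one movement class."""
    X = np.concatenate(sequences)
    lengths = [len(s) for s in sequences]
    model = GaussianHMM(n_components=num_states, covariance_type="diag",
                        n_iter=50, init_params="mc")
    model.startprob_ = np.r_[1.0, np.zeros(num_states - 1)]
    A = 0.5 * (np.eye(num_states) + np.eye(num_states, k=1))
    A[-1, -1] = 1.0
    model.transmat_ = A
    model.fit(X, lengths)          # Baum-Welch / EM iterations
    return model
```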
### _Forward algorithm_
In this paper, trajectory forecasting is based on the movement class of the historical trajectory; therefore it is important to assign the correct class label to it. To this end, the forward algorithm is used to compute the probability of the historical trajectory (_i.e.,_ the likelihood) given the LR-HMMs \(\mathbf{\theta}_{c}\). The forward variable is denoted as:
\[\alpha_{t}(i)=P(\mathbf{y}_{1},\mathbf{y}_{2}\ldots,s_{t}=s_{i}|\mathbf{\theta}_{c }). \tag{13}\]
The steps of the forward algorithm are:
* Initialisation: the forward probability of the historical observation at the first time step (_i.e.,_\(t=1\)), is \[\alpha_{1}(i)=\pi_{i}B_{i}(\mathbf{y}_{1}).\] (14)
* Induction: the recursive formula for the forward probability of the historical observations at time \(t\) is (6).
* Termination: Summing the forward probabilities \(\alpha_{t}\) of all possible states at the end time \(t\) \[P(\mathbf{y}_{1},\mathbf{y}_{2},\ldots,\mathbf{y}_{t}|\mathbf{\theta}_{c})=\sum_{ i=1}^{\mathcal{N}}\alpha_{t}(i).\] (15)
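The three steps above can be sketched numerically as follows (in the log domain, for numerical stability); `log_b[t, i]` is assumed to hold \(\log B_{i}(\mathbf{y}_{t})\), e.g. the log-density of the state-specific Gaussian evaluated at the observation.

```python
# Illustrative log-domain sketch of the forward algorithm (Eqs. (14), (6), (15)).
import numpy as np
from scipy.special import logsumexp

def forward_loglikelihood(log_pi, log_A, log_b):
    """Return log P(y_1, ..., y_T | theta) for one observation sequence."""
    T, N = log_b.shape
    log_alpha = log_pi + log_b[0]                                  # initialisation
    for t in range(1, T):                                          # induction
        log_alpha = logsumexp(log_alpha[:, None] + log_A, axis=0) + log_b[t]
    return logsumexp(log_alpha)                                    # termination
```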
### _The classification rule_
The classification rule, as described in [24] is used to make decisions by comparing likelihoods based on two LR-HMM parameters, \(\mathbf{\theta}_{c=1}\) and \(\mathbf{\theta}_{c=2}\). _i.e.,_
\[c=\operatorname*{argmax}_{c}\log P(\mathbf{Y}|\mathbf{\theta}_{c}),c\in\{1,2\}. \tag{16}\]
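In code, the rule amounts to an argmax over per-class log-likelihoods (obtained, e.g., with the forward-algorithm sketch above); the numbers below are placeholders.

```python
# Illustrative sketch of Eq. (16): pick the class whose LR-HMM explains the
# historical trajectory best.
def classify(log_likelihoods):
    """log_likelihoods: dict mapping class label c -> log P(Y | theta_c)."""
    return max(log_likelihoods, key=log_likelihoods.get)

print(classify({1: -152.3, 2: -148.9}))   # -> 2
```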
### _Viterbi algorithm_
In the LR-HMM, each hidden state corresponds to a specific time step. However, it is not explicitly determined which state corresponds to each time step. We therefore need to find the state sequence that corresponds to the observation sequence with the highest probability. To do so, the Viterbi algorithm is used to find the most likely state sequence, as described in [25]. The most likely state sequence is denoted as \(S^{*}=\{s_{1}^{*},s_{2}^{*},\ldots,s_{\mathcal{N}}^{*}\}\). This problem can be described as below
\[S^{*}_{t}=\max_{s_{1}^{*},s_{2}^{*},\ldots,s_{t-1}^{*}}P(s_{1}^{*},s_{2}^{*}, \ldots,s_{t}^{*}=i,\mathbf{y}_{1},\mathbf{y}_{2},\ldots,\mathbf{y}_{t}|\mathbf{ \theta}). \tag{17}\]
\(\delta_{t}(i)\) is defined as the highest probability along a single path at time \(t\). \(\Psi_{t}(i)\) is used to keep track of the argument which maximised it. \(S^{*}_{t}\) is estimated using the following steps.
* Initialisation: \[\begin{split}\delta_{1}(i)&=\pi_{i}B_{i}(\mathbf{y }_{1}).\\ \Psi_{1}(i)&=0.\end{split}\] (18)
* Recursion: \[\begin{split}\delta_{t}(j)&=\max_{i}(\delta_{t-1}(i)+A_{ij})+B_{j}(\mathbf{y}_{t}).\\ \Psi_{t}(j)&=\operatorname*{argmax}_{i}(\delta_{t-1}(i)+A_{ij}).\end{split}\] (19)
* Termination \[\begin{split} P^{*}&=\max(\delta_{\mathcal{T}}(i)). \\ S^{*}_{\mathcal{T}}&=\operatorname*{argmax}(\delta_{ \mathcal{T}}(i)).\end{split}\] (20)
* Path (state sequence) backtracking: \[S^{*}_{t}=\Psi_{t+1}(s^{*}_{t+1}),t=\mathcal{T}-1,\mathcal{T}-2,\ldots,1.\] (21)
The state-observation probability \(\mathbf{B}\) and the most likely state sequence \(S^{*}\) can be used to form the probabilistic trajectory model (_i.e.,_\(B_{S^{*}_{1}},B_{S^{*}_{2}}\ldots B_{S^{*}_{\mathcal{T}}}\)).
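A compact log-domain sketch of this procedure is given below; as before, `log_b[t, i]` is assumed to hold \(\log B_{i}(\mathbf{y}_{t})\), which matches the additive form of the recursion in (19).

```python
# Illustrative Viterbi sketch (Eqs. (18)-(21)) returning the most likely state
# sequence S* in the log domain.
import numpy as np

def viterbi(log_pi, log_A, log_b):
    T, N = log_b.shape
    delta = np.zeros((T, N))
    psi = np.zeros((T, N), dtype=int)
    delta[0] = log_pi + log_b[0]                      # initialisation
    for t in range(1, T):                             # recursion
        scores = delta[t - 1][:, None] + log_A        # score of moving i -> j
        psi[t] = np.argmax(scores, axis=0)
        delta[t] = np.max(scores, axis=0) + log_b[t]
    states = np.zeros(T, dtype=int)                   # backtracking
    states[-1] = int(np.argmax(delta[-1]))
    for t in range(T - 2, -1, -1):
        states[t] = psi[t + 1, states[t + 1]]
    return states
```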
### _Statistical distance between HMM models_
The statistical distance is a measure of discrimination [26]. This paper computes the statistical distance between LR-HMMs \(\mathbf{\theta}_{c=1}\), \(\mathbf{\theta}_{c=2}\) to measure discrimination information between two categories of movements \(\mathbf{Y}_{c=1}\),\(\mathbf{Y}_{c=2}\). It is helpful to understand the increased prediction accuracy of clothing-attached sensors compared to rigid-attached sensors.
Fig. 2: The framework of trajectory forecasting using LR-HMM.
The cross-fitness distance \(D\) between two LR-HMMs is computed as proposed in [27]. The formula is shown below:
\[D(\mathbf{Y}_{c=1}||\mathbf{Y}_{c=2}) =\log P(\mathbf{Y}_{c=1}|\boldsymbol{\theta}_{c=1})+\log P(\mathbf{ Y}_{c=2}|\boldsymbol{\theta}_{c=2}) \tag{22}\] \[-\log P(\mathbf{Y}_{c=1}|\boldsymbol{\theta}_{c=2})-\log P( \mathbf{Y}_{c=2}|\boldsymbol{\theta}_{c=1}).\]
The forward algorithm can be used to compute each term in equation (22), which is introduced in SSIII-B.
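Given the four log-likelihood terms, the distance itself is a one-line computation; the numbers in the sketch below are placeholders.

```python
# Illustrative sketch of Eq. (22); each term log P(Y_c | theta_c') would be
# obtained with the forward algorithm (summed over the sequences of class c).
def cross_fitness_distance(ll_11, ll_22, ll_12, ll_21):
    return ll_11 + ll_22 - ll_12 - ll_21

print(cross_fitness_distance(-1.2e3, -1.1e3, -4.5e3, -4.1e3))
```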
## IV Experiments
In this section, the future trajectories are forecasted based on the historical trajectory collected from rigid- and fabric-attached sensors. Additionally, the performance of motion prediction using rigid- and fabric-attached sensors is compared. To achieve this, two physical mechanisms are used to perform periodic (_i.e.,_ simple harmonic motion2) and non-periodic movements (_i.e.,_ linear and curved point-to-point (PTP) movement3), respectively.
Footnote 2: Data is available online at [https://doi.org/10.18742/22182358](https://doi.org/10.18742/22182358).
Footnote 3: The data for PTP movement will be made available on acceptance.
### _Case Study 1: Simple Harmonic Motion_
#### Iv-A1 Materials and Methods
An instrumented scotch yoke is used to physically implement simple harmonic motion [28]. The device consists of a sliding yoke, a rigid rod, two bearing blocks, a rotating disk of diameter \(20\,\mathrm{cm}\) and a DC motor (\(30:1\), 37D gear-motor, Pololu Corporation, USA) fixed at the fulcrum. A piece of fabric (a \(30\,\mathrm{cm}{\times}5\,\mathrm{cm}\) strip of woven cotton) is attached at the tip of the yoke. Four sensors (NDI Aurora Magnetic Tracking device, NDI, Canada) are used. One is attached at the tip of the rigid yoke, \(10\,\mathrm{cm}\) away from the fulcrum, (i) namely \(R_{1}\). The other three sensors are attached to the fabric. Specifically, they are attached along the length of the fabric at (ii) \(20\,\mathrm{cm}\) (\(F_{2}\)), (iii) \(30\,\mathrm{cm}\) (\(F_{3}\)) and (iv) \(40\,\mathrm{cm}\) (\(F_{4}\)) from the fulcrum (_i.e.,_ at the tip of the fabric). The scotch yoke moves at two frequencies (\(\omega_{1}=1.05\pi\,\mathrm{rad}\,\mathrm{s}^{-1}\) and \(\omega_{2}=1.48\pi\,\mathrm{rad}\,\mathrm{s}^{-1}\)). With this set-up, \(\mathcal{K}=30\) sequences of length \(T=5\,\mathrm{s}\), with random starting positions, are recorded at each frequency. Each data point is assigned a label \(c\in\{1,2\}\) corresponding to the low frequency \(\omega_{1}\) and the high frequency \(\omega_{2}\), respectively. They are denoted as \(\mathbf{Y}=\{(y_{1},c_{1}),\ldots,(y_{T},c_{T})\}\). More details about the hardware set-up and data collection can be found in [17, 29].
The horizontal component of each sensor's movement is used for further analysis (_i.e.,_\(\mathcal{M}=1\)). The value of the sensor reading is divided by \(100\). Additionally, the starting positions of the \(R_{1}\) movements in each trajectory should be the same; in other words, each trajectory needs to be time-aligned. \(29\) trajectories are randomly chosen from each category of movement \(\mathbf{Y}_{c}\) to estimate the HMM, denoted \(\boldsymbol{\theta}_{c}\), where \(c\in\{1,2\}\), using the method described in SSIII-A. The remaining trajectory is reserved for testing purposes. This process is repeated \(100\) times.
To forecast the future trajectory, the initial step involves estimating the HMM parameters \(\boldsymbol{\theta}_{c=1}\), \(\boldsymbol{\theta}_{c=2}\) using Baum-Welch algorithm as described in SSIII-A. The class label of the historical trajectory is decided by the forward algorithm and the classification rule as introduced in SSIII-B and SSIII-C, respectively. To evaluate the motion prediction performance, the accuracy is defined as the ratio of correct predictions to the total number of predictions, which in this case is \(200\). The duration of the test trajectory is examined from the first time step until the time when a distinct difference between the two categories of trajectory is observed. After predicting the class label correctly, the future trajectory can be forecasted from the probabilistic trajectory model. To formulate it, the most likely state sequence \(S^{*}\) needs to be estimated using the Viterbi algorithm as introduced in SSIII-D. The cross-fitness distance is repeatedly computed \(10\) times using equation (22).
Kevin Murphy's Matlab Toolbox [30] is used to implement the Baum-Welch algorithm, the forward algorithm and the Viterbi algorithm in this paper.
#### Iv-A2 Results
Fig. 3 shows the prediction accuracy for discriminating the low frequency \(\omega_{1}\) from the high frequency \(\omega_{2}\) given the test trajectory from the initial time step to \(0.4\,\mathrm{s}\). As can be seen, clothing-attached sensors perform better than the rigid-attached sensor. Specifically, the accuracy of the fabric-attached sensor (_i.e.,_\(F_{4}\)) can be up to \(45\%\) higher, while the rigid sensor (_i.e.,_\(R_{1}\)) requires up to \(80\%\) more time to reach the same level of accuracy. The accuracy is higher when the position of the clothing-attached sensor is farther from the point of attachment, particularly during the initial time steps. This is attributed to the fact that the sensor readings are recorded while the scotch yoke is continuously moving. Therefore, motion artifacts are present even during the initial time steps, which simplifies the prediction task.
The second column of Table I shows the cross-fitness distance between the HMM parameters \(\boldsymbol{\theta}_{\omega_{1}}\), \(\boldsymbol{\theta}_{\omega_{2}}\) for each sensor in this experiment. The cross-fitness distance is higher when the position of the clothing-attached sensor is farther from the point of attachment. It shows motion artifacts actually provide discrimination information that makes the prediction task easier. Motion prediction accuracy results can be verified by examining the cross-fitness distance results, which indicate an increase when the clothing-attached sensor is positioned farther from the point of attachment.
### _Case Study 2: Linear and curved point-to-point movements_
#### Iv-B1 Materials and Methods
KUKA's LBR iiwa robot arm (KUKA, Germany) is used to execute PTP movement encompassing both linear (SLIN) and curved (SCIRC) motion, using KUKA Sunrise.OS \(1.11\). Fig. 4(a) shows the experimental set up. The trajectory of the rigid-attached sensor (_i.e.,_\(R_{1}\)) and the fabric-attached sensor (_i.e.,_\(F_{4}\)) during linear and circular motions are illustrated in Fig. 4(b)(c), respectively.
The experimental setup is identical to the one described in SSIV-A, with the only difference being that the fabric is attached to the end effector of the robot arm in this experiment. The end effector of the robot arm moves at
\(2.25\mathrm{cm\,s^{-1}}\) in linear and circle motion. A three-dimensional translation movement of each sensor reading is used for the prediction task (_i.e.,_\(\mathcal{M}=3\)). With this set up, \(\mathcal{K}=30\) sequences of length \(T=5\,\mathrm{s}\) are recorded. Motion prediction and trajectory forecasting use the methods described in SSIV-A.
#### Iv-B2 Results
Fig. 5 and Fig. 6 illustrate the prediction accuracy between the two classes of movement for linear and circular motions, respectively. As can be observed, the accuracy exhibits higher values when the clothing-attached sensor is positioned farther from the point of attachment. However, this increase is relatively small during the first few time steps. This is due to the fact that the robot arm accelerates from a state of rest to the set velocity, which takes approximately \(0.25\,\mathrm{s}\). Motion artifacts are not noticeable during low-velocity movement. When the velocity increases, motion artifacts become noticeable and can lead to higher prediction accuracy.
The third and fourth column of Table I shows the cross-fitness distance between two HMMs parameters \(\mathbf{\theta}_{c=1}\), \(\mathbf{\theta}_{c=2}\) for each sensor in the case of linear and circular motion, respectively. The cross-fitness distance of the fabric-attached sensor is higher than the rigid-attached sensor. It indicates motion artifacts have discrimination information to simplify the prediction task.
Fig. 7 (a)(b) show the future trajectory (dashed line and shaded area) forecasted from the historical trajectory (solid line) when the robot arm is performing linear and circle motion, respectively. The dashed line is the future trajectory with the highest probability. The shaded area represents the probability of the surrounding trajectories: the further away from the dashed line, the less likely the trajectory is to occur. Blue and red represent the two categories of movement. The thick and light lines are the sensor \(R_{1}\) and \(F_{4}\) movements, respectively 4.
Footnote 4: The trajectory forecasting results for other sensors’ reading and §IV-A results are not included in this paper due to page limitations.
## V Discussion
This work presents a framework for trajectory forecasting using loose clothing. Surprisingly, (i) the performance of motion prediction improves as the fabric becomes looser, _i.e.,_ as the position of the clothing-attached sensor moves farther from the point of fabric attachment, and (ii) clothing-attached sensors require a shorter duration of the historical trajectory to achieve the same level of accuracy as rigidly-attached sensors. (iii) This phenomenon is explained by computing the statistical distance (_i.e.,_ cross-fitness distance).
Fig. 4: (a) Experiment setup. The robot arm follows predefined trajectories with a piece of fabric attached to the end effector. The actual moving trajectories of the rigid-attached sensor (_i.e.,_\(R_{1}\) thick and dark line) and the fabric-attached sensor (\(F_{4}\) thin and light line) with (b) linear and (c) circle motion. The blue and red lines represent two trajectories that are used for prediction.
Fig. 3: The accuracy of motion prediction for various durations of the historical trajectory in the contexts of simple harmonic motion.
Motion artifacts can increase the statistical distance, which contains more discrimination information and makes it easier to distinguish the two movements.
This finding suggests human motion trajectory can be forecasted with higher accuracy using clothing-attached sensors and needs a shorter historical trajectory duration. More broadly, the ability to analytically model these effects with a simple model opens up the possibility of improving the design and analysis of motion capture systems that utilise everyday garments, providing a high level of comfort and user acceptance.
In turn, this finding could have implications for many applications in robotics and automation. Such as, rehabilitation devices (_e.g.,_ control of exoskeleton robots [31] or prostheses [32]), and human-robot collaboration (_e.g.,_ assist the worker in the industry [33, 34], the mobile robot could assist the staff in the retail environment [35]). Future work may explore more complex human movements, such as multi-joint upper limb movement or gait.
| Sensor | Simple harmonic motion | Linear motion | Circle motion |
| --- | --- | --- | --- |
| \(R_{1}\) | \(3.4\times 10^{4}\) | \(1.55\times 10^{5}\) | \(7.7\times 10^{4}\) |
| \(F_{2}\) | \(3.8\times 10^{4}\) | \(2\times 10^{5}\) | \(1.3\times 10^{5}\) |
| \(F_{3}\) | \(4.4\times 10^{4}\) | \(1.8\times 10^{5}\) | \(9.2\times 10^{4}\) |
| \(F_{4}\) | \(6.7\times 10^{4}\) | \(1.7\times 10^{5}\) | \(1.8\times 10^{5}\) |

TABLE I: Cross-fitness distance between the two movements for each sensor across various types of motion. Reported are the mean values.
Fig. 5: The accuracy of motion prediction for various durations of the historical trajectory in the contexts of linear motion.
Fig. 6: The accuracy of motion prediction for various durations of the historical trajectory in the contexts of circle motion.
Fig. 7: The future trajectory (dashed line and shaded area) is forecasted based on the historical trajectory (solid line) in the contexts of (a) linear and (b) circle motion. The thick and light lines are the sensor \(R_{1}\) and \(F_{4}\) movements, respectively. Reported are the mean values \(\pm\) s.d. (including the three-dimensional movement). |
2310.00416 | Refutation of Shapley Values for XAI -- Additional Evidence | Recent work demonstrated the inadequacy of Shapley values for explainable
artificial intelligence (XAI). Although to disprove a theory a single
counterexample suffices, a possible criticism of earlier work is that the focus
was solely on Boolean classifiers. To address such possible criticism, this
paper demonstrates the inadequacy of Shapley values for families of classifiers
where features are not boolean, but also for families of classifiers for which
multiple classes can be picked. Furthermore, the paper shows that the features
changed in any minimal $l_0$ distance adversarial examples do not include
irrelevant features, thus offering further arguments regarding the inadequacy
of Shapley values for XAI. | Xuanxiang Huang, Joao Marques-Silva | 2023-09-30T15:44:06Z | http://arxiv.org/abs/2310.00416v1 | # Refutation of Shapley Values for XAI - Additional Evidence
###### Abstract
Recent work demonstrated the inadequacy of Shapley values for explainable artificial intelligence (XAI). Although to disprove a theory a single counterexample suffices, a possible criticism of earlier work is that the focus was solely on Boolean classifiers. To address such possible criticism, this paper demonstrates the inadequacy of Shapley values for families of classifiers where features are not boolean, but also for families of classifiers for which multiple classes can be picked. Furthermore, the paper shows that the features changed in any minimal \(l_{0}\) distance adversarial examples do not include irrelevant features, thus offering further arguments regarding the inadequacy of Shapley values for XAI.
Explainable AI, Shapley values, Abductive reasoning
## 1. Introduction
A number of recent reports [Huang and Marques-Silva 2023b,c; Marques-Silva and Huang 2023] has demonstrated that, for some classifiers, Shapley values for XAI [Arenas et al. 2021b, 2023; den Broeck et al. 2021, 2022; Lipovetsky and Conklin 2001; Lundberg and Lee 2017; Strumbelj and Kononenko 2010, 2014] produce measures of relative feature importance that are uncorrelated with measures of feature relevancy, as proposed in the context of abductive reasoning [Eiter and Gottlob 1995; Huang et al. 2023, 2021].
Methods of XAI can be broadly characterized as based on _feature attribution_, as exemplified by the use of Shapley values [Lundberg and Lee 2017], or based on _feature selection_. Methods of feature selection include informal approaches [Ribeiro et al. 2018], but also formal logic-based approaches [Ignatiev et al. 2019a]. Whereas feature attribution assigns a score to each feature as a measure of its effective importance to a prediction, feature selection identifies a subset of features which are deemed sufficient for a prediction. Abductive explanations [Ignatiev et al. 2019a] provide a rigorous, model-accurate, method for computing explanations based on feature selection. Abductive explanations are grounded on logic-based abduction [Eiter and Gottlob 1995], which can be traced to the seminal work of Peirce on abduction [Hartshorne and Weiss 1931].
The results mentioned above [Huang and Marques-Silva 2023b,c; Marques-Silva and Huang 2023] can be restated as follows: for some classifiers, formal definitions of feature importance based on feature selection are uncorrelated with axiomatic definitions of feature importance based on feature attribution as exemplified by Shapley values for XAI. More concretely, it has been shown that [Huang and Marques-Silva 2023b,c; Marques-Silva and Huang 2023]: i) features that are _irrelevant_ for a prediction can be assigned feature importance of greater absolute value than features that are _relevant_ for that prediction, and ii) features that are _relevant_ for a prediction can be assigned _no_ importance even when _irrelevant_ features assigned _some_ importance. (Recall that a feature is relevant if it occurs in some abductive explanation; otherwise it is irrelevant [Eiter and Gottlob 1995; Huang et al. 2023, 2021].)
An immediate corollary of these recent results is that relative measures of feature importance based on feature selection (and defined using the concept of feature relevancy in abductive reasoning [Eiter and Gottlob 1995]) cannot in general be related with relative measures of feature importance based on feature attribution (as obtained with Shapley values for XAI).
One might contend that the fact that the two measures of relative feature importance cannot be compared is not a major issue per se. However, earlier work [Huang and Marques-Silva 2023b,c; Marques-Silva and Huang 2023] argued that irrelevant features should be deemed as having no feature importance, and that relevant features should have some sort of feature importance. This report provides additional insights on how to make this argument more intuitive. Furthermore, this report evaluates the role of features in finding minimal-distance adversarial examples, and shows that irrelevant features need never be changed for finding adversarial examples, i.e. those features do not occur in minimal Hamming (\(l_{0}\)) distance adversarial examples1. These observations further support earlier arguments about Shapley values for XAI providing misleading information about relative feature importance [Huang and Marques-Silva 2023b,c; Marques-Silva and Huang 2023]. A conclusion of the arguments proposed in earlier work and in this paper is that Shapley values for XAI can offer human decision-makers misleading information regarding relative feature importance.
Footnote 1: In this paper, features are assumed not to be real-valued. Other distances could be considered for real-valued features. Furthermore, and similarly to earlier work [Carlini et al. 2017; He et al. 2017; Kim et al. 2021; Kurakin et al. 2016; Papernot et al. 2016; Ruan et al. 2019] we will seek adversarial examples offering guarantees of minimality, either cardinality or subset-minimality.
Finally, one possible drawback of earlier results is that the obtained counterexamples consist of (arbitrary many) boolean classifiers. Hence, a natural question is whether earlier results [Huang and Marques-Silva 2023b,c; Marques-Silva and Huang 2023] extend beyond boolean classifiers. This paper studies a number of non-boolean classifiers, and shows that the conclusions of earlier work also apply to those non-boolean classifiers.
The paper is organized as follows. Section 2 overviews the notation and definitions introduced in earlier work [Huang and Marques-Silva 2023b,c; Marques-Silva and Huang 2023]. Section 3 relates minimal-distance adversarial examples with feature (ir)relevancy. Section 4 analyzes example classifiers defined with generalized tabular representations. Section 5 summarizes results obtained on publicly available decision trees. Moreover, Section 6 summarizes results in the case of OMDD (ordered multi-valued decision diagram) classifiers. Section 7 discusses a number of suggested threats to the validity of the results in this report. Section 8 concludes the paper.
## 2. Preliminaries
We consider the notation and definitions used in earlier work [Arenas et al. 2021b, 2023; Huang and Marques-Silva 2023b,c; Ignatiev et al. 2019a; Marques-Silva 2022; Marques-Silva and Huang 2023; Marques-Silva and Ignatiev 2022]. These are briefly overviewed next, and borrow extensively from [Marques-Silva and Huang 2023].
### Classification Problems
A classification problem is defined on a set of features \(\mathcal{F}=\{1,\ldots,m\}\), and a set of classes \(\mathcal{K}=\{c_{1},\ldots,c_{K}\}\). Each feature \(i\in\mathcal{F}\) takes values from a domain \(\mathbb{D}_{i}\). Domains can be ordinal (e.g. real- or integer-valued) or categorical. Feature space is defined by the cartesian product of the domains of the features: \(\mathbb{F}=\mathbb{D}_{1}\times\cdots\times\mathbb{D}_{m}\). A classifier \(\mathcal{M}\) computes a (non-constant) classification function: \(\kappa:\mathbb{F}\rightarrow\mathcal{K}\)2. A classifier \(\mathcal{M}\) is associated with a tuple \((\mathcal{F},\mathbb{F},\mathcal{K},\kappa)\). For the purposes of this paper, we restrict \(\kappa\) to be a non-constant boolean function. This restriction does not in any way impact the validity of our results.
Footnote 2: A classifier that computes a constant function, i.e. the same prediction for all points in feature space, is of course uninteresting, and so it is explicitly disallowed.
Given a classifier \(\mathcal{M}\), and a point \(\mathbf{v}\in\mathbb{F}\), with \(c=\kappa(\mathbf{v})\) and \(c\in\mathcal{K}\), \((\mathbf{v},c)\) is referred to as an _instance_ (or sample). An explanation problem \(\mathcal{E}\) is associated with a tuple \((\mathcal{M},(\mathbf{v},c))\). As a result, \(\mathbf{v}\) represents a concrete point in feature space, whereas \(\mathbf{x}\in\mathbb{F}\) represents an arbitrary point in feature space.
### Formal Explanations
The presentation of formal explanations follows recent accounts (Marques-Silva, 2022). In the context of XAI, abductive explanations (AXp's) have been studied since 2018 (Ignatiev et al., 2019; Shih et al., 2018)3. Similar to other heuristic approaches, e.g. Anchors (Ribeiro et al., 2018), abductive explanations are an example of explainability by feature selection, i.e. a subset of features is selected as the explanation. AXp's represent a rigorous example of explainability by feature selection, and can be viewed as the answer to a "_Why (the prediction, given \(v\))?_" question. An AXp is defined as a subset-minimal (or irreducible) set of features \(\mathcal{X}\subseteq\mathcal{F}\) such that the features in \(\mathcal{X}\) are sufficient for the prediction, given \(\mathbf{v}\). This is to say that, if the features in \(\mathcal{X}\) are fixed to the values determined by \(\mathbf{v}\), then the prediction is guaranteed to be \(c=\kappa(\mathbf{v})\). The sufficiency for the prediction can be stated formally:
Footnote 3: Initial work considered prime implicants of boolean classifiers (Shih et al., 2018). Later work (Ignatiev et al., 2019) formulated explanations in terms of abductive reasoning, and considered a much wider range of classifiers.
\[\forall(\mathbf{x}\in\mathbb{F}).\left[\bigwedge_{i\in\mathcal{X}}(x_{i}=v_{i })\right]\rightarrow(\kappa(\mathbf{x})=\kappa(\mathbf{v})) \tag{1}\]
For simplicity, we associate a predicate WAXp with (1), such that WAXp\((\mathcal{X})\) holds if and only if (1) holds.
Observe that (1) is monotone on \(\mathcal{X}\), and so the two conditions for a set \(\mathcal{X}\subseteq\mathcal{F}\) to be an AXp (i.e. sufficiency for prediction and subset-minimality), can be stated as follows:
\[\forall(\mathbf{x}\in\mathbb{F}).\left[\bigwedge\nolimits_{i\in\mathcal{X}}(x_{i}=v_{i})\right]\rightarrow(\kappa(\mathbf{x})=\kappa(\mathbf{v}))\wedge \tag{2}\] \[\forall(t\in\mathcal{X}).\exists(\mathbf{x}\in\mathbb{F}).\left[\bigwedge\nolimits_{i\in\mathcal{X}\setminus\{t\}}(x_{i}=v_{i})\right]\wedge(\kappa(\mathbf{x})\neq\kappa(\mathbf{v}))\]
Moreover, a predicate \(\text{AXp}:2^{\mathcal{F}}\rightarrow\{0,1\}\) is associated with (2), such that \(\text{AXp}(\mathcal{X};\mathcal{E})\) holds true if and only if (2) holds true4.
Footnote 4: When defining concepts, we will show the necessary parameterizations. However, in later uses, those parameterizations will be omitted, for simplicity.
An AXp can be interpreted as a logic rule of the form:
\[\text{IF}\quad\left[\bigwedge_{i\in\mathcal{X}}(x_{i}=v_{i})\right]\quad\text{ THEN}\quad(\kappa(\mathbf{x})=c) \tag{3}\]
where \(c=\kappa(\mathbf{v})\). It should be noted that informal XAI methods have also proposed the use of IF-THEN rules (Ribeiro et al., 2018) which, in the case of Anchors (Ribeiro et al., 2018) may or may not be sound (Ignatiev, 2020; Ignatiev et al., 2019). In contrast, rules obtained from AXp's are logically sound.
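For small discrete feature spaces, the predicate WAXp in (1) can be checked by exhaustive enumeration, and one AXp can then be extracted by greedily removing redundant features (which yields a subset-minimal set because WAXp is monotone). The sketch below uses a toy classifier chosen for illustration only; it is not one of the classifiers studied in this paper.

```python
# Illustrative brute-force sketch of WAXp (Eq. (1)) and of extracting one AXp
# (Eq. (2)) by greedy deletion. Feasible only for tiny discrete feature spaces.
from itertools import product

domains = [(0, 1), (0, 1), (0, 1)]                  # toy domains, three features
kappa = lambda x: int(x[0] and (x[1] or x[2]))      # toy classifier (not from the paper)
v = (1, 1, 0)
c = kappa(v)

def waxp(X):
    """Fixing the features in X to their values in v forces the prediction c."""
    return all(kappa(x) == c for x in product(*domains)
               if all(x[i] == v[i] for i in X))

def one_axp():
    X = set(range(len(domains)))
    for i in sorted(X):                             # drop features that are not needed
        if waxp(X - {i}):
            X.remove(i)
    return X                                        # subset-minimal by monotonicity

print(one_axp())                                    # {0, 1} for this toy instance
```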
Alternatively, contrastive explanations (CXp's) represent a type of explanation that differs from AXp's, in that CXp's answer a "_Why Not (some other prediction, given \(\mathbf{v}\))?_" question (Ignatiev et al., 2020; Miller, 2019), again given \(\mathbf{v}\). Given a set \(\mathcal{Y}\subseteq\mathcal{F}\), sufficiency for changing the prediction can be stated formally:
\[\exists(\mathbf{x}\in\mathbb{F}).\left[\bigwedge\nolimits_{i\in\mathcal{F}\setminus\mathcal{Y}}(x_{i}=v_{i})\right]\wedge(\kappa(\mathbf{x})\neq\kappa(\mathbf{v})) \tag{4}\]
For simplicity, we associate a predicate WCXp with (4), such that WCXp\((\mathcal{Y})\) holds if and only if (4) holds.
A CXp is a subset-minimal set of features which, if allowed to take a value other than the value determined by \(\mathbf{v}\), then the prediction can be changed by choosing suitable values to those features.
Similarly to the case of AXp's, for CXp's (4) is monotone on \(\mathcal{Y}\), and so the two conditions (sufficiency for changing the prediction and subset-minimality) can be stated formally as follows:
\[\exists(\mathbf{x}\in\mathbb{F}).\left[\bigwedge\nolimits_{i\in\mathcal{F}\setminus\mathcal{Y}}(x_{i}=v_{i})\right]\wedge(\kappa(\mathbf{x})\neq\kappa(\mathbf{v}))\wedge \tag{5}\] \[\forall(t\in\mathcal{Y}).\forall(\mathbf{x}\in\mathbb{F}).\left[\bigwedge\nolimits_{i\in\mathcal{F}\setminus(\mathcal{Y}\setminus\{t\})}(x_{i}=v_{i})\right]\rightarrow(\kappa(\mathbf{x})=\kappa(\mathbf{v}))\]
A predicate \(\text{CXp}:2^{\mathcal{F}}\to\{0,1\}\) is associated with (5), such that \(\text{CXp}(\mathcal{Y};\mathcal{E})\) holds true if and only if (5) holds true.
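Analogously to the AXp case, WCXp in (4) can be checked by brute force on small discrete feature spaces; the toy classifier below is the same illustrative example used earlier and is not taken from the paper.

```python
# Illustrative brute-force sketch of WCXp (Eq. (4)): freeing the features in Y
# (and fixing the rest to v) allows the prediction to change.
from itertools import product

domains = [(0, 1), (0, 1), (0, 1)]
kappa = lambda x: int(x[0] and (x[1] or x[2]))
v = (1, 1, 0)
c = kappa(v)

def wcxp(Y):
    return any(kappa(x) != c for x in product(*domains)
               if all(x[i] == v[i] for i in range(len(domains)) if i not in Y))

print(wcxp({0}), wcxp({2}))   # True, False for this toy instance
```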
Algorithms for computing AXp's and CXp's for different families of classifiers have been proposed in recent years ([16] provides a recent account of the progress observed in computing formal explanations). These algorithms include the use of automated reasoners (e.g. SAT, SMT or MILP solvers), or dedicated algorithms for families of classifiers for which computing one explanation is tractable.
Given an explanation problem \(\mathcal{E}\), the sets of AXp's and CXp's are represented by:
\[\mathbb{A}(\mathcal{E}) =\{\mathcal{X}\subseteq\mathcal{F}\,|\,\text{AXp}(\mathcal{X}; \mathcal{E})\} \tag{6}\] \[\mathbb{C}(\mathcal{E}) =\{\mathcal{Y}\subseteq\mathcal{F}\,|\,\text{CXp}(\mathcal{Y}; \mathcal{E})\} \tag{7}\]
For example, \(\mathbb{A}(\mathcal{E})\) represents the set of all logic rules that predict \(c=\kappa(\mathbf{v})\), which are consistent with \(\mathbf{v}\), and which are irreducible (i.e. no literal \(x_{i}=v_{i}\) can be discarded).
Furthermore, it has been proved [14] that (i) a set \(\mathcal{X}\subseteq\mathcal{F}\) is an AXp if and only if it is a minimal hitting set (MHS) of the set of CXp's; and (ii) a set \(\mathcal{Y}\subseteq\mathcal{F}\) is a CXp if and only if it is an MHS of the set of AXp's. This property is referred to as MHS duality, and can be traced back to the seminal work of R. Reiter [10] in model-based diagnosis. Moreover, MHS duality has been shown to be instrumental for the enumeration of AXp's and CXp's, but also for answering other explainability queries [16].
### Shapley Values for XAI
Shapley values were proposed in the 1950s, in the context of game theory [17], and find a wealth of uses [11]. More recently, starting in 2001, Shapley values have been extensively used for explaining the predictions of ML models, e.g. [15, 16, 17, 18, 19, 20, 21, 22], among a vast number of recent examples (see [16, 17, 23], among a vast number of recent examples (see [16] for a more comprehensive list of references). Shapley values represent one example of explainability by feature attribution, i.e. some score is assigned to each feature as a form of explanation. The complexity of computing Shapley values (as proposed in SHAP [15]) has been studied in recent years [1, 20, 21]. This section provides a brief overview of how Shapley values for explainability are computed. Throughout, we build on the notation used in recent work [16, 20], which builds on the work of [15].
Let \(\Upsilon:2^{\mathcal{F}}\to 2^{\mathcal{F}}\) be defined by,
\[\Upsilon(\mathcal{S};\mathbf{v})=\{\mathbf{x}\in\mathbb{F}\,|\,\,\wedge_{i \in\mathcal{S}}\,x_{i}=v_{i}\} \tag{8}\]
i.e. for a given set \(\mathcal{S}\) of features, and parameterized by the point \(\mathbf{v}\) in feature space, \(\Upsilon(\mathcal{S};\mathbf{v})\) denotes all the points in feature space that have in common with \(\mathbf{v}\) the values of the features specified by \(\mathcal{S}\).
Also, let \(\phi:2^{\mathcal{F}}\to\mathbb{R}\) be defined by,
\[\phi(\mathcal{S};\mathcal{M},\mathbf{v})=\frac{1}{2^{|\mathcal{F}\setminus\mathcal{S}|}}\sum_{\mathbf{x}\in\Upsilon(\mathcal{S};\mathbf{v})}\kappa(\mathbf{x}) \tag{9}\]
Thus, given a set \(\mathcal{S}\) of features, \(\phi(\mathcal{S};\mathcal{M},\mathbf{v})\) represents the average value of the classifier over the points of feature space represented by \(\Upsilon(\mathcal{S};\mathbf{v})\). The formulation presented in earlier work [16, 17] allows for different input distributions when computing the average values. For the purposes of this paper, it suffices to consider solely a uniform input distribution, and so the dependency on the input distribution is not accounted for.
To simplify the notation, the following definitions are used throughout,
\[\Delta(i,\mathcal{S};\mathcal{M},\mathbf{v}) =(\phi(\mathcal{S}\cup\{i\};\mathcal{M},\mathbf{v})-\phi(\mathcal{S};\mathcal{M},\mathbf{v})) \tag{10}\] \[\varsigma(\mathcal{S};\mathcal{M},\mathbf{v}) =|\mathcal{S}|!\,(|\mathcal{F}|-|\mathcal{S}|-1)!\,/\,|\mathcal{F}|! \tag{11}\]
Finally, let \(\text{Sv}:\mathcal{F}\to\mathbb{R}\), i.e. the Shapley value for feature \(i\), be defined by,
\[\text{Sv}(i;\mathcal{M},\mathbf{v})=\sum_{\mathcal{S}\subseteq\{\mathcal{F}\, \{i\}\}}\varsigma(\mathcal{S};\mathcal{M},\mathbf{v})\times\Delta(i,\mathcal{ S};\mathcal{M},\mathbf{v}) \tag{12}\]
Given an instance \((\mathbf{v},c)\), the Shapley value assigned to each feature measures the _contribution_ of that feature with respect to the prediction.
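For intuition, the sketch below computes the Shapley values of (12) exactly by enumerating all subsets \(\mathcal{S}\subseteq\mathcal{F}\setminus\{i\}\), which is only feasible for very small feature spaces; it reuses the toy classifier from the earlier sketches and assumes a uniform input distribution, as in the text.

```python
# Illustrative exact computation of Eqs. (8)-(12) by subset enumeration (only
# practical for tiny feature spaces). Toy classifier; uniform input distribution.
from itertools import product, combinations
from math import factorial

domains = [(0, 1), (0, 1), (0, 1)]
kappa = lambda x: int(x[0] and (x[1] or x[2]))
v = (1, 1, 0)
m = len(domains)

def phi(S):                                        # Eq. (9): average of kappa over Upsilon(S; v)
    pts = [x for x in product(*domains) if all(x[i] == v[i] for i in S)]
    return sum(kappa(x) for x in pts) / len(pts)

def shapley(i):                                    # Eqs. (10)-(12)
    others = [j for j in range(m) if j != i]
    total = 0.0
    for k in range(len(others) + 1):
        for S in combinations(others, k):
            weight = factorial(len(S)) * factorial(m - len(S) - 1) / factorial(m)
            total += weight * (phi(set(S) | {i}) - phi(set(S)))
    return total

print([round(shapley(i), 3) for i in range(m)])
```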
Throughout this paper, we use the term _Shapley values_ to refer to the SHAP scores studied in earlier work [16, 20, 21, 22, 15, 16, 17]. This is to emphasize the different between Shapley values for XAI (or SHAP scores in earlier work), and the values computed by the tool SHAP [15]. As demonstrated in recent work [16], there can exist significant differences between Shapley values and the results produced by the tool SHAP.
### Feature (Ir)relevancy & Necessity
Given (6) and (7), we can aggregate the features that occur in AXp's and CXp's:
\[\mathcal{F}_{A(\mathcal{E})} =\bigcup\nolimits_{\mathcal{X}\in\mathbb{A}(\mathcal{E})}\mathcal{X} \tag{13}\] \[\mathcal{F}_{\mathbb{C}(\mathcal{E})} =\bigcup\nolimits_{\mathcal{Y}\in\mathbb{C}(\mathcal{E})}\mathcal{Y} \tag{14}\]
Moreover, MHS duality between the sets of AXp's and CXp's allows proving that: \(\mathcal{F}_{A(\mathcal{E})}=\mathcal{F}_{\mathbb{C}(\mathcal{E})}\). Hence, we just refer to \(\mathcal{F}_{A(\mathcal{E})}\) as the set of features that are contained in some AXp (or CXp).
A feature \(i\in\mathcal{F}\) is relevant if it is contained in some AXp, i.e. \(i\in\mathcal{F}_{A(\mathcal{E})}=\mathcal{F}_{\mathbb{C}(\mathcal{E})}\); otherwise it is irrelevant, i.e. \(i\not\in\mathcal{F}_{A(\mathcal{E})}\). A feature is necessary if it is contained in all AXp's5.
Footnote 5: It should be noted that feature relevancy and necessity mirror the concepts of relevancy and necessity studied in logic-based abduction (Eiter and Gottlob, 1995).
We will use the predicate \(\mathsf{Relevant}(i)\) to denote that feature \(i\) is relevant, and predicate \(\mathsf{Irrelevant}(i)\) to denote that feature \(i\) is irrelevant.
Relevant and irrelevant features provide a fine-grained characterization of feature importance, in that irrelevant features play no role whatsoever in prediction sufficiency. In fact, if \(p\in\mathcal{F}\) is an irrelevant feature, then we can write:
\[\forall(\mathcal{X}\in\mathbb{A}(\mathcal{E})).\forall(u_{p}\in\mathbb{D}_{p}).\forall(\mathbf{x}\in\mathbb{F}).\left[\bigwedge\nolimits_{i\in\mathcal{X}}(x_{i}=v_{i})\wedge(x_{p}=u_{p})\right]\rightarrow(\kappa(\mathbf{x})=\kappa(\mathbf{v})) \tag{15}\]
The logic statement above clearly states that, if we fix the values of the features identified by any AXp then, no matter the value picked for feature \(p\), the prediction is guaranteed to be \(c=\kappa(\mathbf{v})\). The bottom line is that an irrelevant feature \(p\) is absolutely unimportant for the prediction, and so there is no reason to include it in a logic rule consistent with the instance.
As argued in earlier work (Huang and Marques-Silva, 2023b,c; Marques-Silva and Huang, 2023), the fact that irrelevant features are not considered in explanations means their value is absolutely unimportant for either keeping or changing the prediction.
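For small classifiers, feature (ir)relevancy can likewise be decided by brute force: it suffices to enumerate the subset-minimal sets of fixed features that are sufficient for the prediction, i.e. the AXp's, and to collect the features that occur in at least one of them. The sketch below is our own illustration of this procedure (the helper names are ours); by MHS duality, the same set of relevant features would be obtained from the CXp's.

```python
from itertools import combinations, product

def is_sufficient(kappa, domains, v, S):
    """WeakAXp check: fixing the features in S to their values in v forces
    the prediction kappa(v) for every completion of the remaining features."""
    c = kappa(v)
    free = [d if i not in S else [v[i]] for i, d in enumerate(domains)]
    return all(kappa(x) == c for x in product(*free))

def axps(kappa, domains, v):
    """All AXp's (subset-minimal sufficient sets), with 1-based feature numbers."""
    m = len(domains)
    suff = [set(S) for k in range(m + 1) for S in combinations(range(m), k)
            if is_sufficient(kappa, domains, v, S)]
    minimal = [S for S in suff if not any(T < S for T in suff)]
    return [{i + 1 for i in S} for S in minimal]

def relevant_features(kappa, domains, v):
    """Features occurring in some AXp (equivalently, by MHS duality, in some CXp)."""
    return sorted(set().union(set(), *axps(kappa, domains, v)))
```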
## 3. Adversarial Examples vs (Ir)relevant Features
This section develops a number of results regarding the non-importance of irrelevant features for adversarial examples. These results offer further support to the claims of inadequacy of Shapley values in this and earlier reports (Huang and Marques-Silva, 2023b,c; Marques-Silva and Huang, 2023).
As indicated earlier, we consider categorical or discrete features and Hamming distance as a measure of distance between points in feature space. The Hamming distance is also referred to as the \(l_{0}\) measure of distance, and it is defined as follows:
\[||\mathbf{x}-\mathbf{y}||_{0}\triangleq\sum\limits_{i=1}^{m}\mathrm{ITE}(x_{i }\neq y_{i},1,0) \tag{16}\]
Given a point \(\mathbf{v}\) in feature space, an adversarial example (AE) is some other point \(\mathbf{x}\) in feature space that changes the prediction and such that the measure of distance \(l_{p}\) between the two points is small enough:
\[\exists(\mathbf{x}\in\mathbb{F}).||\mathbf{x}-\mathbf{v}||_{p}\leq\epsilon \wedge(\kappa(\mathbf{x})\neq\kappa(\mathbf{v})) \tag{17}\]
(in our case, we consider solely \(p=0\).) Although we could consider specific values of \(\epsilon\), as proposed in (Huang and Marques-Silva, 2023), we will opt in this paper for allowing \(\epsilon=+\infty\), and then asking for adversarial examples respecting some criterion of minimality.
The features that are changed for a given AE in (17) are denoted by \(\mathcal{A}\subseteq\mathcal{F}\). Thus, if we say that \(\mathcal{A}\) is an adversarial example, then (17) holds true for some \(\mathbf{x}\) such that \(\mathbf{x}\) and \(\mathbf{v}\) differ in the values of the features included in \(\mathcal{A}\).
Since we can represent AEs as sets (of the features that change their value), we will consider subset-minimal AEs, i.e. sets of features that represent adversarial examples, and no proper subset represents an adversarial example.
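The subset-minimal adversarial examples, viewed as sets of changed features, can also be enumerated directly from the definition for the small classifiers used here. The following sketch is again our own illustration, with hypothetical helper names: it checks, for each candidate set of features, whether some reassignment of those features changes the prediction, and then keeps only the subset-minimal sets.

```python
from itertools import combinations, product

def can_flip(kappa, domains, v, A):
    """True if some reassignment of the features in A changes the prediction."""
    c = kappa(v)
    choices = [domains[i] if i in A else [v[i]] for i in range(len(domains))]
    return any(kappa(x) != c for x in product(*choices))

def minimal_aes(kappa, domains, v):
    """Subset-minimal adversarial examples, as 1-based sets of changed features."""
    m = len(domains)
    flips = [set(A) for k in range(1, m + 1) for A in combinations(range(m), k)
             if can_flip(kappa, domains, v, A)]
    minimal = [A for A in flips if not any(B < A for B in flips)]
    return [{i + 1 for i in A} for A in minimal]
```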
**Proposition 1**.: Given an instance \((\mathbf{v},c)\), if \(\mathcal{A}\) is an AE and \(j\in\mathcal{A}\) is an irrelevant feature, then there exists another AE \(\mathcal{B}\) with \(\mathcal{B}\subseteq\mathcal{A}\), with \(j\not\in\mathcal{B}\).
**Corollary 1**.: Subset- or cardinality-minimal AEs do not contain irrelevant features.
We can strengthen the above results, by analyzing instead feature relevancy.
**Proposition 2**.: A feature \(j\in\mathcal{F}\) is included in some (minimal) adversarial example iff feature \(j\) is relevant.
**Remark 1**.: Thus, we conclude that there is a tight relationship between adversarial examples and feature relevancy, and so with abductive and contrastive explanations, all of which relate with either keeping or changing the prediction. In contrast, the relative order of importance provided by Shapley values is not related neither with abductive explanations, nor with contrastive explanations. Finally, given the results in this and earlier reports (Huang and Marques-Silva, 2023b, c; Marques-Silva and Huang, 2023), Shapley values for XAI are also not related with \(l_{0}\)-minimal adversarial examples.
## 4. Classifiers Defined by Tabular Representations
Building on earlier work (Huang and Marques-Silva, 2023b, c; Marques-Silva and Huang, 2023), this section analyzes several additional examples, further extending the earlier results on the inadequacy of Shapley values for XAI.
### Example of Multi-Valued Classifier
**Classifier.** We consider the following multi-valued classifier, defined on boolean features, with \(\mathbb{D}_{i}=\mathbb{B}=\{0,1\}\), with \(1\leq i\leq m\):
\[\kappa_{1}(x_{1},x_{2},\ldots,x_{m})=\left\{\begin{array}{ll}1&\text{if }x_{1}=1 \\ \max\{i\,|\,x_{i}>0\wedge 1<i\leq m\}&\text{otherwise}\end{array}\right.\]
Although the classifier is defined on \(m\) features, throughout we will consider \(m=3\), to facilitate the analysis of the main claims. Thus, \(\mathbb{F}=\mathbb{B}^{3}\). Also, we consider the instance \(((1,0,0),1)\). The classifier is depicted in Figure 1, with a (multi-valued) tabular representation (TR) shown in Figure 1a and a decision tree (DT) shown in Figure 1b. Given the TR/DT, we set \(\mathcal{K}=\{0,1,2,3\}\).
**Feature influence in predicted class.** Recall that the instance is \(((1,0,0),1)\), and so the predicted class is 1. By inspection of the DT, it is simple to conclude that, for any point in feature space, the predicted class is 1 if and only if \(x_{1}=1\), and that the predicted class is other than 1 if and only if \(x_{1}=0\). These statements hold true _independently_ of the values assigned to features 2 and 3. Thus, to keep the predicted class only the value of feature 1 matters. Similarly, to change the predicted class, only the value of feature 1 matters.
**Formal explanations & feature relevancy.** Table 1 (see Page 7) illustrates the role of each set of features in terms of explanation sufficiency and irredundancy. The computed explanations also serve for deciding feature (ir)relevancy. Unsurprisingly, feature 1 is shown to be relevant (and necessary), and features 2 and 3 are shown to be irrelevant. As can be concluded, feature 1 is sufficient for ensuring that the predicted class is 1. In contrast, when feature 1 takes value 1, the other features can be assigned _any_ value from their domain, since that does not change the predicted class. By subset-minimality (and so invoking Occam's razor), features 2 and 3 are never included in formal explanations.
**Features in adversarial examples.** Table 2 summarizes the possible adversarial examples for the classifier given instance \(((1,0,0),1)\). An adversarial example is a point \(\mathbf{y}\in\mathbb{F}\) that causes the prediction to change, and for which the Hamming (\(l_{0}\)) distance between the two points is minimized. (As noted earlier, we opt for subset-minimality.) As can be observed, it suffices to change the value of \(x_{1}\) to ensure that the prediction changes.
**Shapley values & feature importance.** Table 3 summarizes the computation of Shapley values (for XAI) (Arenas et al., 2021, 2023; Lundberg and Lee, 2017) for the classifier of Figure 1 and for the instance \(((1,0,0),1)\). As can be concluded, the relative order of feature importance is 3, 2, 1.
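These conclusions are easy to reproduce mechanically. Reusing the exhaustive `shapley_values` and `axps` helpers sketched in Section 2, the snippet below (ours) confirms that \(\{1\}\) is the only AXp of \(\kappa_{1}\) for the instance \(((1,0,0),1)\), while the exact Shapley values come out as \(\mathsf{Sv}(1)=-\nicefrac{{1}}{{24}}\), \(\mathsf{Sv}(2)=-\nicefrac{{1}}{{6}}\) and \(\mathsf{Sv}(3)=-\nicefrac{{7}}{{24}}\), i.e. exactly the order of importance 3, 2, 1 discussed above.

```python
# Reuses shapley_values() and axps() from the sketches given in Section 2.
def kappa1(x):
    # Classifier of Figure 1, on three boolean features.
    return 1 if x[0] == 1 else max([i + 1 for i in (1, 2) if x[i] > 0], default=0)

domains, v = [(0, 1)] * 3, (1, 0, 0)
print(axps(kappa1, domains, v))            # [{1}] -> the single AXp {1}
print(shapley_values(kappa1, domains, v))  # {1: -1/24, 2: -1/6, 3: -7/24} -> order 3, 2, 1
```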
Figure 1. Multi-valued classifier. The DT path \(\langle 1,3\rangle\), which is consistent with instance \(((1,0,0),1)\), is highlighted.
**Assessment.** The following observations substantiate our claim that assigning importance to feature 2 or 3 is misleading for the classifier of Figure 1:
1. As shown in Table 1, any subset- (or cardinality-) minimal set of features that is sufficient for the prediction does not contain either feature 2 or feature 3.
2. Motivated by the duality between abductive and contrastive explanations (Ignatiev et al., 2020), any subset- (or cardinality-) minimal subset of features sufficient for changing the prediction does not include either feature 2 or feature 3.
3. A related observation, that offers a somewhat different perspective, is that given the relationship between (distance-restricted) abductive explanations and adversarial examples (Huang and Marques-Silva, 2023; Ignatiev et al., 2019), it is simple to prove that any (subset- or cardinality-) minimal \(l_{0}\) distance adversarial example will not include either feature 2 or feature 3.
### Example of Discrete Classifier
**Classifier.** We consider the following discrete classifier, defined on discrete features, with \(\mathbb{D}_{1}=\mathbb{B}=\{0,1\},\mathbb{D}_{i}=\{0,1,2\},i=2,3\).
\[\kappa_{2}(x_{1},x_{2},\ldots,x_{m})=\left\{\begin{array}{ll}1&\quad\text{if }x_{1}=1\\ 2&\quad\text{if }(x_{1}=0)\wedge(x_{2}=2)\wedge(x_{3}=2)\\ 0&\quad\text{otherwise}\end{array}\right.\]
Given the domains of the features, we have \(\mathbb{F}=\mathbb{B}\times\mathbb{D}_{2}\times\mathbb{D}_{3}\). Furthermore, we consider the instance \(((1,2,2),1)\). The classifier is shown in Figure 2, consisting of a tabular representation (see Figure 2a) and a decision tree (see Figure 2b). Given the classifier's description, we set \(\mathcal{K}=\{0,1,2\}\). Furthermore, Table 1 (see Page 7) illustrates the role of each set of features in terms of explanation sufficiency and irredundancy.
**Feature influence on predicted class.** Similar to the example in Section 4.1, for the concrete instance \(((1,2,2),1)\), we conclude that the value of feature 1 determines the predicted class when the prediction is 1 (see Figure 2c).
| Xp set \(\mathcal{S}\) | \(\mathcal{S}\) sufficient? | \(\mathcal{S}\) irreducible? | Pick \(\mathcal{X}\subseteq\mathcal{S}\), \(\mathcal{X}\) sufficient & irreducible | Meaning of \(\mathcal{X}\) relative to \(\mathcal{S}\) |
|---|---|---|---|---|
| \(\emptyset\) | ✗ | – | – | – |
| \(\{1\}\) | ✓ | ✓ | \(\{1\}\) | \(\mathcal{X}=\mathcal{S}\) is an AXp |
| \(\{2\}\) | ✗ | – | – | – |
| \(\{3\}\) | ✗ | – | – | – |
| \(\{1,2\}\) | ✓ | ✗ | \(\{1\}\) | \(\forall(u_{2}\in\mathbb{D}_{2}).\forall(\mathbf{x}\in\mathbb{F}).\left[(x_{1}=1)\wedge(x_{2}=u_{2})\right]\rightarrow(\kappa_{1}(\mathbf{x})=1)\) |
| \(\{1,3\}\) | ✓ | ✗ | \(\{1\}\) | \(\forall(u_{3}\in\mathbb{D}_{3}).\forall(\mathbf{x}\in\mathbb{F}).\left[(x_{1}=1)\wedge(x_{3}=u_{3})\right]\rightarrow(\kappa_{1}(\mathbf{x})=1)\) |
| \(\{2,3\}\) | ✗ | – | – | – |
| \(\{1,2,3\}\) | ✓ | ✗ | \(\{1\}\) | \(\forall(u_{2}\in\mathbb{D}_{2}).\forall(u_{3}\in\mathbb{D}_{3}).\forall(\mathbf{x}\in\mathbb{F}).\left[(x_{1}=1)\wedge(x_{2}=u_{2})\wedge(x_{3}=u_{3})\right]\rightarrow(\kappa_{1}(\mathbf{x})=1)\) |

Table 1. Detailed analysis of the classifiers in Figure 1 (and in Figures 2 and 4). The analysis is exactly the same for all these example classifiers.
| \(x_{1}\) | \(x_{2}\) | \(x_{3}\) | \(\kappa_{1}(\mathbf{x})\) | \(\kappa_{1}(\mathbf{x})\neq\kappa_{1}(\mathbf{v})\)? | \(l_{0}\) distance | AE? |
|---|---|---|---|---|---|---|
| 0 | 0 | 0 | 0 | ✓ | 1 | ✓ |
| 0 | 0 | 1 | 3 | ✓ | 2 | ✗ |
| 0 | 1 | 0 | 2 | ✓ | 2 | ✗ |
| 0 | 1 | 1 | 3 | ✓ | 3 | ✗ |
| 1 | 0 | 0 | 1 | ✗ | – | – |
| 1 | 0 | 1 | 1 | ✗ | – | – |
| 1 | 1 | 0 | 1 | ✗ | – | – |
| 1 | 1 | 1 | 1 | ✗ | – | – |

Table 2. AE for \(\kappa_{1}\) on instance \(((1,0,0),1)\) for the classifier from Figure 1a.
**Formal explanations & feature relevancy.** The computation of formal explanations mimics the one for Figure 1, as shown in Table 1. As a result, we once again conclude that feature 1 is relevant (and necessary), and that features 2 and 3 are irrelevant.
| \(i\) | \(\mathcal{S}_{i}\) | \(\phi(\mathcal{S}_{i})\) | \(\phi(\mathcal{S}_{i}\cup\{i\})\) | \(\Delta(i,\mathcal{S}_{i})\) | \(\varsigma(\mathcal{S}_{i})\) | \(\mathsf{Sv}(i)\) |
|---|---|---|---|---|---|---|
| 1 | \(\emptyset\) | 3/2 | 1 | −1/2 | 1/3 | – |
| 1 | \(\{2\}\) | 5/4 | 1 | −1/4 | 1/6 | – |
| 1 | \(\{3\}\) | 1 | 1 | 0 | 1/6 | – |
| 1 | \(\{2,3\}\) | 1/2 | 1 | 1/2 | 1/3 | −1/24 |
| 2 | \(\emptyset\) | 3/2 | 5/4 | −1/4 | 1/3 | – |
| 2 | \(\{1\}\) | 1 | 1 | 0 | 1/6 | – |
| 2 | \(\{3\}\) | 1 | 1/2 | −1/2 | 1/6 | – |
| 2 | \(\{1,3\}\) | 1 | 1 | 0 | 1/3 | −1/6 |
| 3 | \(\emptyset\) | 3/2 | 1 | −1/2 | 1/3 | – |
| 3 | \(\{1\}\) | 1 | 1 | 0 | 1/6 | – |
| 3 | \(\{2\}\) | 5/4 | 1/2 | −3/4 | 1/6 | – |
| 3 | \(\{1,2\}\) | 1 | 1 | 0 | 1/3 | −7/24 |

Table 3. Computation of Shapley values for the classifier \(\kappa_{1}\) of Figure 1.
Figure 2. Example classifier \(\kappa_{2}\). The DT path \(\langle 1,3\rangle\), which is consistent with the instance \(((1,2,2),1)\), is highlighted.
**Features in adversarial examples.** By using the TR/DT in Figure 2, an analysis similar to that in Table 2 allows concluding that the only minimal \(l_{0}\) AE includes just feature 1, as the feature that must change value for the prediction to change.
**Shapley values & feature importance.** The computation of Shapley values for the classifier of Figure 2, and for the instance \(((1,2,2),1)\), is shown in Table 4. As can be observed, the relative order of feature importance obtained is: 2, 3, 1 (or 3, 2, 1). The interpretation that can be made is: feature 2 (or 3) is more important for the prediction than feature 3 (or 2), and feature 3 (or 2) is more important for the prediction than feature 1. This interpretation is in complete disagreement with the analysis of feature influence, with the analysis of feature relevancy, and with the analysis of adversarial examples. The bottom line is that the features that bear no influence in predicting class 1 are deemed the most important according to the computed Shapley values.
**Assessment.** As before in Section 4.1, we devised a classifier and an instance for which the only relevant feature, and the feature that bears some influence on the predicted class, is assigned an absolute Shapley value that is smaller than the absolute Shapley values of two other features, which are irrelevant for the prediction and which clearly do not influence the prediction.
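The same mechanical check applies to \(\kappa_{2}\). Reusing the `shapley_values` helper sketched in Section 2, the snippet below (ours) yields \(\mathsf{Sv}(1)=\nicefrac{{1}}{{54}}\) and \(\mathsf{Sv}(2)=\mathsf{Sv}(3)=\nicefrac{{5}}{{27}}\) for the instance \(((1,2,2),1)\), confirming that the only feature with any influence on the prediction receives the smallest absolute Shapley value.

```python
# Reuses shapley_values() from the sketch given in Section 2.
def kappa2(x):
    # Classifier of Figure 2: D1 = {0,1}, D2 = D3 = {0,1,2}.
    if x[0] == 1:
        return 1
    return 2 if (x[1] == 2 and x[2] == 2) else 0

print(shapley_values(kappa2, [(0, 1), (0, 1, 2), (0, 1, 2)], (1, 2, 2)))
# {1: Fraction(1, 54), 2: Fraction(5, 27), 3: Fraction(5, 27)} -> order 2, 3, 1
```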
### A Simple Discrete Parameterized Classifier
**Classifier.** Aiming to extend the conclusions of the previous sections, we now consider a discrete classifier, defined on boolean features, with \(\mathbb{D}_{i}=\mathbb{B}=\{0,1\}\), with \(1\leq i\leq m\). For simplicity, we set \(m=2\), and just represent the classifier with a tabular representation as shown in Figure 3.
| \(i\) | \(\mathcal{S}_{i}\) | \(\phi(\mathcal{S}_{i})\) | \(\phi(\mathcal{S}_{i}\cup\{i\})\) | \(\Delta(i,\mathcal{S}_{i})\) | \(\varsigma(\mathcal{S}_{i})\) | \(\mathsf{Sv}(i)\) |
|---|---|---|---|---|---|---|
| 1 | \(\emptyset\) | 11/18 | 1 | 7/18 | 1/3 | – |
| 1 | \(\{2\}\) | 5/6 | 1 | 1/6 | 1/6 | – |
| 1 | \(\{3\}\) | 5/6 | 1 | 1/6 | 1/6 | – |
| 1 | \(\{2,3\}\) | 3/2 | 1 | −1/2 | 1/3 | 1/54 |
| 2 | \(\emptyset\) | 11/18 | 5/6 | 2/9 | 1/3 | – |
| 2 | \(\{1\}\) | 1 | 1 | 0 | 1/6 | – |
| 2 | \(\{3\}\) | 5/6 | 3/2 | 2/3 | 1/6 | – |
| 2 | \(\{1,3\}\) | 1 | 1 | 0 | 1/3 | 5/27 |
| 3 | \(\emptyset\) | 11/18 | 5/6 | 2/9 | 1/3 | – |
| 3 | \(\{1\}\) | 1 | 1 | 0 | 1/6 | – |
| 3 | \(\{2\}\) | 5/6 | 3/2 | 2/3 | 1/6 | – |
| 3 | \(\{1,2\}\) | 1 | 1 | 0 | 1/3 | 5/27 |

Table 4. Computation of Shapley values for the classifier \(\kappa_{2}\) of Figure 2.
Figure 3. Tabular representation for \(\kappa_{a}\)
For Figure 3, \(\mathbb{F}=\mathbb{B}^{2}\), and we let \(\alpha,\beta,\gamma\in\mathbb{Z}\). Moreover, we consider the instance \(((1,1),\alpha)\). It is easy to conclude that, as long as \(\alpha\neq\gamma\wedge\alpha\neq\beta\) and \(\delta=\alpha\), then feature 1 is relevant, and feature 2 is irrelevant. Given the table above, we set \(\mathcal{K}=\{\gamma,\beta,\alpha\}\), since we opt to pick \(\delta=\alpha\). We also impose \(\gamma\neq\alpha\wedge\beta\neq\alpha\), as pointed out above. As a result, throughout the remainder of this section, it will be the case that \(\delta=\alpha\), and that \(\gamma\neq\alpha\wedge\beta\neq\alpha\).
**Feature influence on predicted class.** For the instance \(((1,1),\alpha)\), with \(\delta=\alpha,\gamma\neq\alpha,\beta\neq\alpha\), it is clear that the predicted class is \(\alpha\) if and only if feature 1 is assigned value 1, and that the predicted class is other than \(\alpha\) if and only if feature 1 is assigned value 0.
**Formal explanations & feature relevancy.** Building on the examples in earlier sections, it is plain to conclude that feature 1 is relevant and feature 2 is irrelevant. Observe that feature 2 is never necessary, neither as one of the features required for keeping the prediction (i.e. included in some AXp), nor as one of the features required for changing the prediction (i.e. included in some CXp).
**Features in adversarial examples.** As noted above, the predicted class changes if and only if the value of feature 1 changes. No constraint is imposed on feature 2. Hence, minimal adversarial examples only require changing the value of feature 1.
**Shapley values & feature importance.** As will be argued below, and under the stated assumptions, the goal is for feature 1 to have a Shapley value of 0 and feature 2 to have a non-zero Shapley value. This way, the information provided by Shapley values is evidently misleading. For the classifier of Figure 3, and instance \(((1,1),\alpha)\), we can now compute the Shapley values as shown in Table 6 (see [Marques-Silva and Huang 2023] for the definitions).
**Instantiation.** Now, to achieve the goal of having \(\mathsf{Sv}(1)=0\) with feature 1 relevant, and \(\mathsf{Sv}(2)\neq 0\) with feature 2 irrelevant, we must have \(\nicefrac{{\alpha}}{{2}}-\nicefrac{{3\beta}}{{8}}-\nicefrac{{\gamma}}{{8}}=0\) and \(\nicefrac{{\beta}}{{8}}-\nicefrac{{\gamma}}{{8}}\neq 0\), together with the initial constraint that \(\gamma\neq\alpha\). As an example, it is plain to conclude that \(\alpha=3,\beta=4,\gamma=0\) satisfies the constraints. Hence, we manage to have a classifier with two features, and an example instance such that feature 1 is relevant with a Shapley value of 0, and feature 2 is irrelevant with a non-zero Shapley value. Perhaps more importantly, for any pick of \(\beta,\gamma\in\mathbb{Z}\) with \(\beta\neq\gamma\) (and \(3\beta+\gamma\) a multiple of 4, so that \(\alpha\in\mathbb{Z}\)), it suffices to set \(\alpha=\nicefrac{{3\beta}}{{4}}+\nicefrac{{\gamma}}{{4}}\). Hence, we have arbitrarily many discrete classifiers, for each of which a relevant feature has a Shapley value of 0, and an irrelevant feature has a Shapley value with a non-zero value. (Therefore, this example represents in fact a family of discrete classifiers.)
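This instantiation is simple to verify mechanically. The snippet below (ours) reuses the `shapley_values` helper sketched in Section 2; since Figure 3 is not reproduced here, the tabular definition of \(\kappa_{a}\) in the code is our reading of it, consistent with Table 6: \(\kappa_{a}(0,0)=\gamma\), \(\kappa_{a}(0,1)=\beta\) and \(\kappa_{a}(1,\cdot)=\delta=\alpha\).

```python
# Reuses shapley_values() from the sketch given in Section 2.  The tabular
# definition of kappa_a below is our reading of Figure 3 (with delta = alpha).
ALPHA, BETA, GAMMA = 3, 4, 0

def kappa_a(x):
    if x[0] == 1:
        return ALPHA                # rows (1,0) and (1,1): delta = alpha
    return BETA if x[1] == 1 else GAMMA

print(shapley_values(kappa_a, [(0, 1), (0, 1)], (1, 1)))
# {1: Fraction(0, 1), 2: Fraction(1, 2)}: the relevant feature gets Sv = 0,
# while the irrelevant feature gets a non-zero Shapley value.
```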
**Assessment.** Whereas for the previous examples, the absolute Shapley value of the relevant feature was small but non-zero, this example shows how one can create classifiers where information provided by Shapley values is completely misleading, i.e. with respect to feature 2, but also with respect to feature 1.
| \(i\) | \(\mathcal{S}_{i}\) | \(\Delta(i,\mathcal{S}_{i})\) | \(\varsigma(\mathcal{S}_{i})\) | \(\mathsf{Sv}(i)\) |
|---|---|---|---|---|
| 1 | \(\emptyset\) | \((2\alpha-\beta-\gamma)/4\) | \(1/2\) | – |
| 1 | \(\{2\}\) | \((\alpha-\beta)/2\) | \(1/2\) | \(\alpha/2-3\beta/8-\gamma/8\) |
| 2 | \(\emptyset\) | \((\beta-\gamma)/4\) | \(1/2\) | – |
| 2 | \(\{1\}\) | \(0\) | \(1/2\) | \(\beta/8-\gamma/8\) |

Table 6. Shapley values for \(\kappa_{a}\) and instance \(((1,1),\alpha)\)
### Another Parameterized Discrete Classifier
**Classifier.** Motivated by the conclusions of the previous section, we consider in this section a somewhat more complex discrete classifier, defined on boolean features, with \(\mathbb{D}_{i}=\mathbb{B}=\{0,1\}\), with \(1\leq i\leq m\). For simplicity, we set \(m=3\), and just represent the classifier by a tabular representation, as shown in Figure 4.
For Figure 4, \(\mathbb{F}=\mathbb{B}^{3}\), and we let \(\alpha,\sigma_{j}\in\mathbb{Z}\), with \(j\in\{1,2,3,4\}\). Moreover, we consider the instance \(((1,1,1),\alpha)\). Given Figure 4, we set \(\mathcal{K}=\{\alpha,\sigma_{1},\ldots,\sigma_{4}\}\). Thus, depending on the actual values assigned to \(\alpha,\sigma_{i},\,1\leq i\leq 4\), it holds that \(|\mathcal{K}|\leq 5\). We also impose \(\sigma_{i}\neq\alpha,i=1,\ldots,4\), to ensure feature relevancy as intended and as discussed in earlier examples.
**Feature influence on predicted class.** From Figure 4, and based on the analysis of earlier examples, it is plain to conclude that, for the instance \(((1,1,1),\alpha)\), the predicted class is \(\alpha\) if and only if feature \(1\) is assigned value \(1\), and the predicted class is other than \(\alpha\) if and only if feature \(1\) is assigned value \(0\). As before, features \(2\) and \(3\) bear no relevance in predicting class \(\alpha\), or in changing the predicted class \(\alpha\) to something else.
**Formal explanations & feature relevancy.** In a similar way, it is immediate to conclude that, with \(\alpha\neq\sigma_{j},\,j=1,\ldots,4\), there exists a single AXp \(\{1\}\) and a single CXp \(\{1\}\), which agrees with the analysis of the influence of each feature on the predicted class \(\alpha\).
**Features in adversarial examples.** From Figure 4, it is also plain that, for the parameterized classifier of Figure 4, any minimal \(l_{0}\) adversarial example must include feature \(1\), whereas features \(2\) and \(3\) serve no purpose in changing the predicted class.
**Shapley values & feature importance.** Building on the approach adopted in earlier sections, we can compute the Shapley values for the parameterized classifier, as summarized in Table 7.
**Instantiations.** Given Table 7, and as before, our goal is to obtain \(\mathrm{Sv}(1)=0\), with feature \(1\) relevant, and \(\mathrm{Sv}(2)\neq 0\wedge\mathrm{Sv}(3)\neq 0\), with features \(2\) and \(3\) irrelevant. As a result, from Table 7 we get,
\[\begin{array}{c}\alpha/2-\sigma_{1}/24-\sigma_{2}/12-\sigma_{3}/12-7\sigma_{4}/24=0\\ -\sigma_{1}/24-\sigma_{2}/12+\sigma_{3}/24+\sigma_{4}/12\neq 0\\ -\sigma_{1}/24+\sigma_{2}/24-\sigma_{3}/12+\sigma_{4}/12\neq 0\end{array}\]
As an example, these conditions can be satisfied by setting \(\sigma_{1}=\sigma_{4}=0\), \(\sigma_{2}=\sigma_{3}=3\) and \(\alpha=1\). By plugging in these values in the expressions for the different Shapley values, we then get \(\mathrm{Sv}(1)=0,\mathrm{Sv}(2)=\mathrm{Sv}(3)=-\nicefrac{{1}}{{8}}\). It is simple to make the difference in Shapley values more significant by setting for example \(\sigma_{1}=\sigma_{4}=0\), \(\sigma_{2}=\sigma_{3}=12\) and \(\alpha=4\). In this case, we get \(\mathrm{Sv}(1)=0,\mathrm{Sv}(2)=\mathrm{Sv}(3)=-\nicefrac{{1}}{{2}}\).
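The second of these instantiations can be checked in the same way. The snippet below (ours) reuses the `shapley_values` helper sketched in Section 2 and assumes that \(\sigma_{1},\ldots,\sigma_{4}\) label the \(x_{1}=0\) rows of Figure 4 in lexicographic order of \((x_{2},x_{3})\) — an assumption on our part, since the figure is not reproduced here.

```python
# Reuses shapley_values() from the sketch given in Section 2.  We assume the
# x1 = 0 rows of Figure 4 carry sigma_1..sigma_4 in lexicographic (x2, x3) order.
ALPHA, SIGMA = 4, {(0, 0): 0, (0, 1): 12, (1, 0): 12, (1, 1): 0}

def kappa_b(x):
    return ALPHA if x[0] == 1 else SIGMA[(x[1], x[2])]

print(shapley_values(kappa_b, [(0, 1)] * 3, (1, 1, 1)))
# {1: Fraction(0, 1), 2: Fraction(-1, 2), 3: Fraction(-1, 2)}
```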
**Assessment.** Clearly, by suitably selecting the values of \(\sigma_{1},\sigma_{2},\sigma_{3},\sigma_{4}\), we are able to find arbitrarily many examples of multi-valued classifiers defined on \(\mathbb{B}^{3}\), such that \(\mathrm{Sv}(1)=0\) and \(\mathrm{Sv}(2)\neq 0\wedge\mathrm{Sv}(3)\neq 0\), and where feature \(1\) is relevant and features \(2\) and \(3\) are irrelevant. (Therefore, and similarly to Figure 3, this example represents in fact a family of multi-valued classifiers.) Finally, we should note that, although the selected instance was \(((1,1,1),\alpha)\), we could have considered other instances and/or function definitions, as long as the computed values/classes were changed accordingly.
### A More Complex Parameterized Discrete Example
**Classifier.** This section studies a parameterized discrete classifier that encompasses the classifier of Figure 2. This parameterized classifier is shown in Figure 5.
From the table, we can conclude that \(\mathcal{F}=\{1,2,3\}\), \(\mathbb{D}_{1}=\{0,1\}\), \(\mathbb{D}_{2}=\mathbb{D}_{3}=\{0,1,2\}\), and so \(\mathbb{F}=\{0,1\}\times\{0,1,2\}^{2}\). Moreover, we also have \(\mathcal{K}=\{\alpha\}\cup\{\sigma_{j}\,|\,j=1,\ldots,9\}\). We will require that \(\alpha\neq\sigma_{j},\,j=1,\ldots,9\); this constraint will be clarified below. Finally, the target instance is \(((1,2,2),\alpha)\).
Figure 4. Tabular representation for \(\kappa_{b}\)
**Feature influence on predicted class.** It is simple to conclude that the analysis applied to the previous examples also holds in this case. Hence, for any point in feature space, the predicted class is \(\alpha\) if and only if feature 1 is assigned value 1. The remaining features have no influence in predicting class \(\alpha\) or in changing the predicted class to some other class different from \(\alpha\).
Table 7: Computation of Shapley values for \(\kappa_{b}\) in Figure 4
Figure 5: Example parameterized classifier \(\kappa_{c}\)
**Formal explanations & feature relevancy.** Figure 6 summarizes the computation of AXps and CXps for the parameterized classifier in Figure 5a. (In this case, we opt to also highlight the computation of contrastive explanations.) As can be concluded, feature 1 is relevant (and necessary), whereas features 2 and 3 are irrelevant.
**Features in adversarial examples.** As in the previous examples, the prediction changes if and only if the value of feature 1 changes, and so any subset-minimal \(l_{0}\) adversarial example consists solely of feature 1; features 2 and 3 serve no purpose in changing the predicted class.
**Shapley values & feature importance.** Given the average values for each possible set \(\mathcal{S}\) shown in Figure 5b, the computation of Shapley values (for XAI) is summarized in Figure 7.
Given the computation of the Shapley values in Figure 7, and the goal of having \(\text{Sv}(1)=0\), \(\text{Sv}(2)\neq 0\) and \(\text{Sv}(3)\neq 0\), we obtain the following constraints:
\[\alpha=(2\sigma_{1}+2\sigma_{2}+5\sigma_{3}+2\sigma_{4}+2\sigma_{5}+5\sigma_{6}+5\sigma_{7}+5\sigma_{8}+26\sigma_{9})/54 \tag{18}\]
\[(-2(\sigma_{1}+\sigma_{2}+\sigma_{4}+\sigma_{5})-5\sigma_{3}-5\sigma_{6}+4\sigma_{7}+4\sigma_{8}+10\sigma_{9})/108\neq 0 \tag{19}\]
\[(-2(\sigma_{1}+\sigma_{2}+\sigma_{4}+\sigma_{5})+4\sigma_{3}+4\sigma_{6}-5\sigma_{7}-5\sigma_{8}+10\sigma_{9})/108\neq 0 \tag{20}\]
Any pick of values of \(\alpha\), \(\sigma_{j}\), \(j=1,\ldots,9\) that satisfies the constraints above will represent a classifier where the relative order of feature importance obtained with Shapley values is misleading.
**Instantiation.** Let us pick \(\sigma_{1}=\sigma_{3}=\sigma_{4}=\sigma_{6}=\sigma_{7}=\sigma_{9}=0\), \(\sigma_{2}=2\), \(\sigma_{5}=5\) and \(\sigma_{8}=8\), such that \(\alpha=1\). It is easy to conclude that these values satisfy (18), (19), (20). Figures 8a and 8b show the resulting tabular representation and decision tree for the classifier \(\kappa_{c,1}\). In a similar way, we can pick \(\sigma_{4}=\sigma_{5}=\sigma_{6}=\sigma_{7}=\sigma_{8}=\sigma_{9}=0\), \(\sigma_{1}=3\), \(\sigma_{2}=4\) and \(\sigma_{3}=8\), such that \(\alpha=1\). It is again easy to conclude that these values satisfy (18), (19), (20). Figures 8a and 8c show the resulting tabular representation and decision tree for the classifier \(\kappa_{c,2}\).
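The first instantiation, \(\kappa_{c,1}\), can also be verified mechanically. The snippet below (ours) reuses the `shapley_values` helper sketched in Section 2 and assumes that \(\sigma_{1},\ldots,\sigma_{9}\) label the \(x_{1}=0\) rows of Figure 5 in lexicographic order of \((x_{2},x_{3})\) — our reading of the figure, consistent with constraints (18)–(20).

```python
# Reuses shapley_values() from the sketch given in Section 2.  We read Figure 5
# as assigning sigma_1..sigma_9 to the x1 = 0 rows in lexicographic (x2, x3) order.
ALPHA = 1
SIGMA = {(0, 0): 0, (0, 1): 2, (0, 2): 0,      # kappa_{c,1}: sigma_2 = 2,
         (1, 0): 0, (1, 1): 5, (1, 2): 0,      # sigma_5 = 5, sigma_8 = 8,
         (2, 0): 0, (2, 1): 8, (2, 2): 0}      # all remaining sigmas equal 0

def kappa_c1(x):
    return ALPHA if x[0] == 1 else SIGMA[(x[1], x[2])]

print(shapley_values(kappa_c1, [(0, 1), (0, 1, 2), (0, 1, 2)], (1, 2, 2)))
# {1: Fraction(0, 1), 2: Fraction(1, 6), 3: Fraction(-1, 2)}: Sv(1) = 0 while
# both irrelevant features receive non-zero Shapley values.
```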
**Assessment.** As the instantiated examples of Figure 8 illustrate, it is simple to generate arbitrarily many classifiers, given instance \(((1,2,2),\alpha)\), for which only feature 1 bears some influence in predicting class \(\alpha\), only feature 1 is deemed relevant in terms of explanations, only feature 1 occurs in adversarial examples, but such that the computed Shapley value (for XAI) is 0, and such that the remaining features, which bear no influence in predicting class \(\alpha\), that are irrelevant in terms of explanations, and that do not occur in (minimal) adversarial examples, are assigned non-zero Shapley values.
### Discussion
As the examples presented in this section reveal, it is straightforward to devise very simple classifiers, and specific instances, for which the computed Shapley values bear no relationship whatsoever with the effective contribution of some features to the predicted class.
Figure 6. Computing AXp’s/CXp’s for the example parameterized classifier shown in Figure 5 and instance \((\mathbf{v},c)=((1,1,2),\alpha)\). All subsets of features are considered. For computing AXp’s, and for some set \(\mathcal{S}\), the features in \(\mathcal{S}\) are fixed to their values as determined by \(\mathbf{v}\). The picked rows, i.e. rows(\(\mathcal{S}\)), are the rows consistent with those fixed values. For example, if \(\mathcal{S}=\{1,2\}\), then only rows 10, 11 and 12 are consistent with having features 1 and 2 assigned value 1. Similarly, for computing CXp’s, and for some set \(\mathcal{S}\), the features in \(\mathcal{F}\setminus\mathcal{S}\) are fixed to their values as determined by \(\mathbf{v}\). The picked rows are again the rows consistent with those fixed values. For example, if \(\mathcal{S}=\{2\}\), then \(\mathcal{F}\setminus\mathcal{S}=\{1,3\}\), and so only rows 9 and 12 are consistent with having feature 1 assigned value 1 and feature 3 assigned value 2. An AXp is an irreducible set of features that is sufficient for the prediction. In this example, only \(\{1\}\) respects the criteria. Moreover, a CXp is an irreducible set of features which, if allowed to take any value from their domain, the prediction changes value. For this example, \(\{1\}\) respect the criteria, i.e. by only changing feature \(\{1\}\), we are able to change the prediction.
In contrast with the contrived examples proposed in this section, the next sections analyze published decision trees, but also OMDD classifiers (which represent a special case of graph-based classifiers) (Huang et al., 2021).
## 5. Classifiers Defined by Decision Trees
This section studies two example DTs. However, in contrast with the classifiers studied earlier in this document, the two DTs have been studied in earlier works (Lelis et al., 2020; Zhou, 2021), and represent concrete use cases. The choice of DTs is motivated by their size, i.e. the DTs are not small and so are not trivial to analyze, and by the fact that they exhibit some of the issues with Shapley values that have been studied in this and earlier reports (Huang and Marques-Silva, 2023, 2023; Marques-Silva and Huang, 2023).
For both DTs, we investigate whether there are instances exhibiting the following issue: \(\mathsf{Irrelevant}(i)\wedge\mathsf{Relevant}(j)\wedge(|\mathsf{Sv}(i)|>|\mathsf{Sv}(j)|)\). For that, we use the polynomial-time algorithm for computing Shapley values for d-DNNFs proposed in recent work (Arenas et al., 2023). Computation of explanations is based on earlier work as well (Huang et al., 2021; Izza et al., 2022). The experiments were performed on a MacBook Pro with a 6-Core Intel Core i7 2.6 GHz processor with 16 GByte RAM, running macOS Ventura.
**Example Decision Trees.** We consider two publicly available decision trees with discrete features and classes, one adapted from (Lelis et al., 2020, Figure 9) and the other from (Zhou, 2021, Figure 4.8). The DTs are shown in Figures 9 and 10. For simplicity, the DTs use set notation for the literals, as proposed in recent work (Izza et al., 2022). Table 8 shows the feature domains of the DT in Figure 9, while Table 9 shows the feature domains of the DT in Figure 10.
**Summary of results.** For each instance, all AXps are enumerated. This serves to decide which features are relevant and which are irrelevant. Then we compute the Shapley values for each feature and analyze whether the issue \(\mathsf{Irrelevant}(i)\wedge\mathsf{Relevant}(j)\wedge(|\mathsf{Sv}(i)|>|\mathsf{Sv}(j)|)\) occurs. If an instance exhibits such an issue, we plot a pair of values \((v_{i},v_{j})\). More specifically, \(v_{i}=\max\{|\mathsf{Sv}(k)|\mid k\notin\mathcal{F}_{\mathbb{A}(\mathcal{E})}\}\) and \(v_{j}=\min\{|\mathsf{Sv}(k)|\mid k\in\mathcal{F}_{\mathbb{A}(\mathcal{E})}\}\). (Observe that this means that the relative order of feature importance will be misleading.) We then plot \(v_{i}\) in yellow and \(v_{j}\) in blue; these pairs of values are depicted in Figures 11 and 12. Another observation is that the occurrence of issues with Shapley values is non-negligible. For the DT in Figure 9, 151 out of 768
Figure 7. Computation of Shapley values for the example parameterized classifier shown in Figure 5 and instance \(((1,1,2),\alpha)\). For each feature \(i\), the sets to consider are all the sets that do not include the feature. The average values are obtained by summing up the values of the classifier in the rows consistent with \(\mathcal{S}\) and dividing by the total number of rows.
instances exhibit the aforementioned issue, i.e. 19.7% of the total. Moreover, for the DT in Figure 10, 82 out of 486 instances exhibit the same issue, i.e. 16.8% of the total.
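The per-instance test just described can be summarised by the following sketch (ours). It assumes a function `classify(x)` implementing the DT under analysis together with its feature `domains` (e.g. Table 8) — neither is reproduced here — and it reuses the exhaustive `shapley_values` and `relevant_features` helpers sketched in Section 2; the actual experiments used the polynomial-time algorithm of Arenas et al. (2023) instead of brute force.

```python
from itertools import product

# Reuses shapley_values() and relevant_features() from the sketches in Section 2.
# `classify` is assumed to implement the DT under analysis (e.g. Figure 9) and
# `domains` its feature domains (Table 8); neither is reproduced here.

def exhibits_issue(classify, domains, v):
    """True iff some irrelevant feature gets a larger absolute Shapley value
    than some relevant feature, for the instance (v, classify(v))."""
    sv = shapley_values(classify, domains, v)
    rel = set(relevant_features(classify, domains, v))
    irr = set(sv) - rel
    if not rel or not irr:
        return False
    v_i = max(abs(sv[k]) for k in irr)   # best-scoring irrelevant feature
    v_j = min(abs(sv[k]) for k in rel)   # worst-scoring relevant feature
    return v_i > v_j

def issue_rate(classify, domains):
    instances = list(product(*domains))
    bad = sum(exhibits_issue(classify, domains, v) for v in instances)
    return bad, len(instances)
```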
Moreover, for the DT in Figure 9, we found that for the instance \(((1,0,0,0,0,0,1,1,1),1)\), there exist two AXps: \(\{1,5\}\) and \(\{1,4\}\), and the Shapley values are: \(\text{Sv}(1)=0.3572\), \(\text{Sv}(2)=-0.1428\), \(\text{Sv}(3)=-0.0178\), \(\text{Sv}(4)=0.0449\), \(\text{Sv}(5)=0.0449\), \(\text{Sv}(6)=-0.0029\), \(\text{Sv}(7)=-0.002\), \(\text{Sv}(8)=0.0005\), \(\text{Sv}(9)=0.0005\). As can be concluded, for this instance, feature 2 is irrelevant and features 3 and 4
Figure 8. Example DTs for instantiated classifiers, given parameterized classifier \(\kappa_{c}\). The paths \((1,3)\) in both DTs, which are consistent with the instance \(((1,2,2),1)\), are highlighted.
| Feature Name | Short Name | Original Domain | Feature Number \(i\) | Mapped Domain |
|---|---|---|---|---|
| Age | \(A\) | \(\{A\leq 5,A>5\}\) | 1 | \(\{0,1\}\) |
| Petechiae | \(P\) | \(\{\text{no},\text{yes}\}\) | 2 | \(\{0,1\}\) |
| Neck Stiffness | \(N\) | \(\{\text{no},\text{yes}\}\) | 3 | \(\{0,1\}\) |
| Vomiting | \(V\) | \(\{\text{no},\text{yes}\}\) | 4 | \(\{0,1\}\) |
| Zone | \(Z\) | \(\{\text{rural},\text{peri-urban},\text{urban}\}\) | 5 | \(\{0,1,2\}\) |
| Seizures | \(S\) | \(\{\text{no},\text{yes}\}\) | 6 | \(\{0,1\}\) |
| Headache | \(H\) | \(\{\text{no},\text{yes}\}\) | 7 | \(\{0,1\}\) |
| Coma | \(C\) | \(\{\text{no},\text{yes}\}\) | 8 | \(\{0,1\}\) |
| Gender | \(G\) | \(\{\text{female},\text{male}\}\) | 9 | \(\{0,1\}\) |
Table 8. Mapping of original features for the DT from (Lelis et al., 2020). The original classes \(\{\text{MD},\text{Non-MD}\}\) are mapped to \(\{\mathbf{Y},\mathbf{N}\}\).
Table 9. Mapping of original features for the DT from (Zhou, 2021). The original classes \(\{\text{ripe},\text{unripe}\}\) are mapped to \(\{\mathbf{Y},\mathbf{N}\}\).
are relevant. However, we have \(|\text{Sv}(2)|>|\text{Sv}(3)|\) and \(|\text{Sv}(2)|>|\text{Sv}(4)|\). Additionally, for the same DT, we found two instances such that relevant features are assigned a Shapley value of 0. Specifically, for the instance \(((1,1,1,0,2,1,1,0,1),1)\), we can compute four AXps: \(\{2\}\), \(\{1,5,6,7\}\), \(\{1,4\}\), and \(\{1,3\}\). The Shapley values are: \(\text{Sv}(1)=0.1172\), \(\text{Sv}(2)=0.1373\), \(\text{Sv}(3)=0.0123\), \(\text{Sv}(4)=0.0123\), \(\text{Sv}(5)=0\), \(\text{Sv}(6)=0.0016\), \(\text{Sv}(7)=0.0016\), \(\text{Sv}(8)=-0.0003\), \(\text{Sv}(9)=0.0004\). Clearly, feature 5 is relevant but its Shapley value is 0. For another instance \(((1,1,1,0,2,1,1,0),1)\), we can compute four AXps: \(\{2\}\), \(\{1,5,6,7\}\), \(\{1,4\}\), and \(\{1,3\}\). The Shapley values are: \(\text{Sv}(1)=0.1172\), \(\text{Sv}(2)=0.1373\), \(\text{Sv}(3)=0.0123\), \(\text{Sv}(4)=0.0123\), \(\text{Sv}(5)=0\), \(\text{Sv}(6)=0.0016\), \(\text{Sv}(7)=0.0016\), \(\text{Sv}(8)=0.0004\), \(\text{Sv}(9)=-0.0003\). Clearly, the relevant feature 5 again has a Shapley value of 0.
## 6. Classifiers Defined by OMDDs
In this section, we consider five publicly available datasets and analyze whether there are instances that exhibit the issue \(\mathsf{Irrelevant}(i)\wedge\mathsf{Relevant}(j)\wedge(|\mathsf{Sv}(i)|>|\mathsf{Sv}(j)|)\). These five datasets are from the Penn Machine Learning Benchmarks (Olson et al., 2017), with discrete features and classes. For each dataset, we picked a consistent subset of samples (i.e. no two instances are contradictory) for building Ordered Multi-Valued Decision Diagrams (OMDDs) (Kam and Brayton, 1990). For example, for the dataset _postoperative_patient_data_, there are only 88 instances, and a consistent subset of samples includes 66 instances. OMDDs were built heuristically using the publicly available package MEDDLY 6, which is implemented in C/C++. For computing
Figure 9. Example DT, adapted from (Lelis et al., 2020)
Shapley values, we assumed a uniform data distribution for each dataset. Besides, for each dataset we test 200 randomly picked instances, or all instances if there are fewer than 200 rows in the dataset.
For all the five OMDDs, we investigate whether there are instances exhibiting the following issue: \(\mathsf{Irrelevant}(i)\wedge\mathsf{Relevant}(j)\wedge(|\mathsf{Sv}(i)|>|\mathsf{Sv}(j)|)\). The method for computing Shapley values is based on Equation (12). However, it is known that OMDDs (Niveau et al., 2011) are _deterministic_ and _decomposable_. Moreover, they also support the query _polytime model counting_, and the transformation _polytime conditioning_ (Kam and Brayton, 1990; Niveau et al., 2011). This means the algorithm proposed in (Arenas et al., 2023) for computing Shapley values of d-DNNFs can be extended to the case of OMDDs. Computation of explanations is based on earlier work as well (Huang et al., 2021; Izza et al., 2022). The experiments were performed on a MacBook Pro with a 6-Core Intel Core i7 2.6 GHz processor with 16 GByte RAM, running macOS Ventura.
**Description of the datasets.** Table 10 shows the description of the five OMDDs used in the experiment.
Figure 11. Whether there exist irrelevant features (dots in yellow) with higher scores than relevant features (dots in blue) in absolute value, for the DT in Figure 9.
Figure 10. Example DT, adapted from (Zhou, 2021)
**Summary of results.** For the case of OMDDs, we repeat the experiment conducted in Section 5 and plot the results. These results are depicted in Figure 13.
An observation is that the occurrence of issues with Shapley values is non-negligible. For the OMDD in Figure 13a, 23 out of 200 instances (i.e. 11.5%) exhibit the aforementioned issue. For the OMDD in Figure 13b, 49 out of 200 instances (i.e. 24.5%) exhibit the same issue. For the OMDDs in Figures 13c and 13d, 64 out of 200 instances (i.e. 32%) exhibit the same issue. And for the OMDD in Figure 13e, 22 out of 66 instances (i.e. 33.3%) exhibit the same issue.
## 7. Apparent Threats to Validity of Results & Their Rebuttal
This section addresses and rebuts a number of possible criticisms to the results presented in this and earlier reports [Huang and Marques-Silva 2023b,c; Marques-Silva and Huang 2023]7.
Footnote 7: In fact, some of the apparent threats to validity discussed in this section represent comments that were made with respect to earlier reports [Huang and Marques-Silva 2023b,c; Marques-Silva and Huang 2023].
**Definition of (ir)relevant features.** Our definition of (ir)relevant features mirrors the one proposed and studied in logic-based abduction [Eiter and Gottlob 1995] since the early and mid 90s. (Logic-based abduction formalizes the concept of abduction, studied in logic and philosophy for more than a century [Hartshorne and Weiss 1931].) Nevertheless, we explicitly consider subset-minimality for the definition of (abductive) explanation, whereas logic-based abduction contemplates other possible definitions [Eiter and Gottlob 1995]. For example, there are other definitions of (minimal) explanation which involve a user indicating some sort of preference among hypotheses (or features), that can involve some sort of prioritization or penalization [Eiter and Gottlob 1995]. Since Shapley values are not defined in terms of user-specified preferences, this sort of
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Dataset & Number of Features & Feature Domains & Number of Classes & Number of OMDD Nodes \\ \hline car & 6 & \(4\times 4\times 4\times 3\times 3\times 3\) & 4 & 248 \\ monk1 & 6 & \(3\times 3\times 2\times 3\times 4\times 2\) & 2 & 68 \\ monk2 & 6 & \(3\times 3\times 2\times 3\times 4\times 2\) & 2 & 70 \\ monk3 & 6 & \(3\times 3\times 2\times 3\times 4\times 2\) & 2 & 74 \\ postoperative\_patient & 8 & \(3\times 3\times 2\times 3\times 2\times 3\times 3\times 5\) & 2 & 109 \\ \hline \hline \end{tabular}
\end{table}
Table 10. Description of the OMDDs.
Figure 12. Whether there exist irrelevant features (dots in yellow) with higher scores than relevant features (dots in blue) in absolute value, for the DT in Figure 10.
preference-minimal explanations are inapplicable in our setting. In addition, another definition of explanations involves those that are cardinality-minimal [Eiter and Gottlob 1995]. The following is a straightforward observation.
**Proposition 3**.: Any feature that is deemed irrelevant under a subset-minimal definition of explanation must also be an irrelevant feature under a cardinality-minimal definition of explanation.
Most of the examples in this and earlier reports [Huang and Marques-Silva 2023b,c; Marques-Silva and Huang 2023] already consider a single explanation which is necessarily cardinality-minimal. Hence, replacing a subset-minimal definition of explanation by a cardinality-minimal definition would not impact the implications of the results presented in this and earlier reports [Huang and Marques-Silva 2023b,c; Marques-Silva and Huang 2023] in terms of the inadequacy of Shapley values for XAI.
Furthermore, the results presented in this and earlier reports [Huang and Marques-Silva 2023b,c; Marques-Silva and Huang 2023] demonstrate that Shapley values for XAI do not correlate with the information obtained from adversarial examples. Moreover, some of results in this report demonstrate the inadequacy of Shapley values for XAI simply by analysis of the classifier's function.
**Definition of Shapley values for XAI.** Although this and earlier reports [Huang and Marques-Silva 2023b,c; Marques-Silva and Huang 2023] consider a well-established definition of Shapley values for XAI, specifically the one proposed in a number of well-known references [Arenas et al. 2021b, 2023; den Broeck et al. 2021, 2022; Lundberg and Lee 2017], one possible criticism to the results in this and earlier reports [Huang and Marques-Silva 2023b,c; Marques-Silva and Huang 2023] is that there are other definitions of Shapley values besides the one being used. One example is the use of baselines [Janzing et al. 2020; Sundararajan and Najmi 2020]. Our initial experiments suggest that the use of baselines is even more problematic than the original definitions of Shapley for XAI. Concretely, the percentages of detected issues for Boolean classifiers far exceed those reported in earlier work [Huang and Marques-Silva 2023b]. Future work will build on these initial experiments, and will document the issues that are also observed when using Shapley values for XAI based on baselines.
**Evidence from practical examples.** Since this and earlier reports [Huang and Marques-Silva 2023b,c; Marques-Silva and Huang 2023] study a restricted set of example classifiers, one possible criticism is that counterexamples to the theory of Shapley values for XAI should be drawn from practical examples, including from those representing complex classifiers. The previous two sections (see Sections 5 and 6) show results on practical DTs and OMDDs, thus confirming the existence of issues with Shapley values in practical classifiers. Moreover, given the complexity of computing Shapley values, in general and for XAI in particular, it is in practice completely unrealistic to obtain exact Shapley values in the case of the complex classifiers used in many practical applications. Nevertheless, such evidence would be beyond the point that is being made, in that _no_ sound theory can withstand a _single_ counterexample. The vast number of counterexamples that this and earlier reports [Huang and Marques-Silva 2023b,c; Marques-Silva and Huang 2023] have identified already serve as comprehensive evidence to the fact that Shapley values will necessarily provide human decision-makers with misleading information regarding relative feature importance, for arbitrarily many classifiers. If that were not to be the case, then future work should identify the families of classifiers for which Shapley values are provably guaranteed not to provide misleading information to human decision-makers. At present, that is an open research topic.
Furthermore, this report also includes initial experimental results, obtained on publicly available classifiers, that confirm that Shapley values for XAI can produce misleading information regarding relative feature importance.
**Shapley values for XAI unrelated with formal explanations.** One additional criticism to the results in this and earlier reports [Huang and Marques-Silva 2023b,c; Marques-Silva and Huang 2023] is that the fact that Shapley values for XAI do not capture feature relevancy is not problematic per se, and it might be the case that we could be talking about different and unrelated measures of feature importance, one provided by feature attribution and the other provided by feature selection. As shown in this report, we can construct classifiers with features that are of paramount importance for a prediction, but that are assigned a Shapley value of 0 (i.e. denoting no importance whatsoever for the prediction). Similarly, we can construct classifiers (actually the same classifier can be used!) with features that serve no purpose in terms of explanations, and that also serve no purpose in terms of creating adversarial examples, but which are assigned the largest absolute Shapley value. In such situations, it would mystify the authors of this report if there could exist some ascribed meaning to computed Shapley values such that the information they convey would not be misleading for human decision-makers. Furthermore, existing interpretations of Shapley values [Strumbelj and Kononenko 2010] are disproved by the results presented in this and earlier reports [Huang and Marques-Silva 2023b,c; Marques-Silva and Huang 2023]. Concretely, the uses of Shapley values in explainability have been justified by very significant claims. For example, from [Strumbelj and Kononenko 2010]:
* _"According to the 2nd axiom, if two features values have an identical influence on the prediction they are assigned contributions of equal size. The 3rd axiom says that if a feature has no influence on the prediction it is assigned a contribution of 0."_ (Note: the axioms above refer to the axiomatic characterization of Shapley values in [Strumbelj and Kononenko 2010].)
* _"When viewed together, these properties ensure that any effect the features might have on the classifiers output will be reflected in the generated contributions, which effectively deals with the issues of previous general explanation methods."_ Although it is the case that terms such as "influence" or "effect" are used in earlier work [22] without a formal definition, it is also the case that, by assuming commonly ascribed meanings to these terms, our results prove that Shapley values for XAI do not respect those meanings. Thus, assuming those commonly ascribed meanings, our results disprove the above claims.
## 8. Conclusions
This paper significantly extends earlier evidence on the inadequacy of Shapley values for XAI. Besides the boolean classifiers analyzed in earlier work [22, 23], this paper considers both multi-valued and discrete classifiers, exhibiting additional examples of the issues raised by the use of Shapley values for XAI. Perhaps more importantly, the inadequacy of Shapley values is also demonstrated for DTs published in recent years [19, 20, 21], as well as OMDD classifiers [22, 23].
Furthermore, the paper shows that the relative order of feature importance obtained with Shapley values for XAI does not correlate with the features that can serve for producing \(l_{0}\)-minimal adversarial examples, i.e. those that are sufficiently close to the original instance. Thus, besides Shapley values for XAI not being correlated with feature relevancy, it is also the case that Shapley values for XAI do not relate with adversarial examples.
**Acknowledgments.** This work was supported by the AI Interdisciplinary Institute ANITI, funded by the French program "Investing for the Future - PIA3" under Grant agreement no. ANR-19-PI3A-0004, and by the H2020-ICT38 project COALA "Cognitive Assisted agile manufacturing for a Labor force supported by trustworthy Artificial intelligence". This work was motivated in part by discussions with several colleagues including L. Bertossi, A. Ignatiev, N. Narodytska, M. Cooper, Y. Izza, R. Passos, J. Planes and N. Asher. JMS also acknowledges the incentive provided by the ERC who, by not funding this research nor a handful of other grant applications between 2012 and 2022, has had a lasting impact in framing the research presented in this paper.
|
2309.12165 | Analysis of the Error-Correcting Radius of a Renormalisation Decoder for
Kitaev's Toric Code | Kitaev's toric code is arguably the most studied quantum code and is expected
to be implemented in future generations of quantum computers. The
renormalisation decoders introduced by Duclos-Cianci and Poulin exhibit one of
the best trade-offs between efficiency and speed, but one question that was
left open is how they handle worst-case or adversarial errors, i.e. what is the
order of magnitude of the smallest weight of an error pattern that will be
wrongly decoded. We initiate such a study involving a simple hard-decision and
deterministic version of a renormalisation decoder. We exhibit an uncorrectable
error pattern whose weight scales like $d^{1/2}$ and prove that the decoder
corrects all error patterns of weight less than $\frac{5}{6}
d^{\log_{2}(6/5)}$, where $d$ is the minimum distance of the toric code. | Wouter Rozendaal, Gilles Zémor | 2023-09-21T15:23:41Z | http://arxiv.org/abs/2309.12165v1 | # Analysis of the Error-Correcting Radius of a Renormalisation Decoder for Kitaev's Toric Code
###### Abstract
Kitaev's toric code is arguably the most studied quantum code and is expected to be implemented in future generations of quantum computers. The renormalisation decoders introduced by Duclos-Cianci and Poulin exhibit one of the best trade-offs between efficiency and speed, but one question that was left open is how they handle worst-case or adversarial errors, i.e. what is the order of magnitude of the smallest weight of an error pattern that will be wrongly decoded. We initiate such a study involving a simple hard-decision and deterministic version of a renormalisation decoder. We exhibit an uncorrectable error pattern whose weight scales like \(d^{1/2}\) and prove that the decoder corrects all error patterns of weight less than \(\frac{5}{6}d^{\log_{2}(6/5)}\), where \(d\) is the minimum distance of the toric code.
## 1 Introduction
Quantum computers are of major interest as they are capable of efficiently solving certain computational problems that are considered difficult with classical computers. However, the physical implementation of a quantum computer still remains a problem due to decoherence errors. An important challenge in the realisation of a full-scale quantum computer therefore consists in finding ways to protect quantum information from errors.
So-called topological codes, and in particular Kitaev's toric code [14, 1], are expected to be at the core of the first generation of quantum computers that will incorporate error protection. Kitaev's code is one of the oldest quantum error-correcting codes, and by far the most studied one. Although it protects only two logical qubits, its appeal comes in part from its simple structure, its planar layout useful for physical implementations, and parity-check operators of low weight that only involve neighbouring qubits. Indeed, reliable syndrome measurements need low-weight check operators.
Quantum error-correcting codes need a fast decoder that will process the classical information obtained from quantum syndrome measurements, so as to be able to regularly put arrays of qubits back into their intended states. To make full use of the toric code, one therefore also requires an efficient decoding scheme. Many decoding algorithms have been put forward, for a comprehensive list see for example the introduction of [10].
The Minimum Weight Perfect Matching algorithm [1], based upon Edmonds' blossom algorithm [1], is the standard reference since it outputs optimal solutions on bit-flip channels. However, it has a high time-complexity of \(O(n^{3})\)[15], where \(n\) is the length of the toric quantum code. At the other end, the recent union-find decoder of Delfosse and Nickerson [10] is in quasi-linear time and is a remarkable compromise between performance and speed. However, the decoder of [10] does not lend itself to parallelisation, and in that respect, the decoder that stands out is arguably the renormalisation decoder of Duclos-Cianci and Poulin [14, 15] that can be made to run in time \(O(\log_{2}n)\), with overall complexity \(O(n\log_{2}n)\).
The focus of the present paper is the renormalisation decoder of Duclos-Cianci and Poulin, or rather decoders, since the renormalisation idea is quite versatile and lends itself to many variations. In the Kitaev code, qubits are indexed by the \(n\) edges of a square tiling of a torus. The renormalisation idea stems from the fact that this square tiling contains a tower of \(\frac{1}{2}\log_{2}n\) subtilings, each isomorphic to the original tiling. Each subtiling is contained in the previous one and has four times fewer edges than the latter. Local transformations are then applied to the error pattern, so that the original decoding problem becomes an instance of the decoding problem on a Kitaev code of length \(n/4\). A decoder then applies this procedure recursively \(\frac{1}{2}\log_{2}n\) times.
Renormalisation decoders were tested in [1, 1] over depolarising and bit-flip channels, and found to exhibit very good thresholds when combined with some message-passing techniques. One feature that remained a mystery however, is their behaviour over adversarial channels, i.e. their worst-case behaviour. Worst-case behaviour is not necessarily the prime feature of a decoder, but it is important nevertheless because it is difficult to establish a precise model for errors that occur during quantum computations and also because it governs the speed with which decoding error probabilities tend to zero in the low channel error regime.
In the present paper we study this worst-case behaviour issue. To this end we introduce a simple and relatively natural version of a renormalisation decoder, that in particular does not use a priori error probabilities and acts deterministically, basing its decisions purely on local syndrome values. We find that the renormalisation decoder allows for "fractal-like" error patterns that are wrongly decoded and scale as \(d^{1/2}\), where \(d\) is the code's minimum distance. We also prove a lower bound of the form \(\frac{5}{6}d^{\log_{2}(6/5)}\) for the weight of an uncorrectable error pattern. Finally, we argue that the sub-linear behaviour of the minimum distance will be a feature of any renormalisation decoder.
The paper is organised as follows. Section 2 recalls the construction of Kitaev's toric quantum code and its decoding. Section 3 describes the renormalisation decoder that will be studied throughout this paper. Section 4 analyses the decoder's performance on the bit-flip channel and gives a numerically established threshold probability. Section 5 is the core of the paper, bounding the error-correcting radius of the renormalisation decoder. Section 6 gives fractal-like wrongly decoded error patterns for more general deterministic renormalisation decoders.
## 2 Kitaev's toric quantum code
An important class of quantum error-correcting codes is the class of CSS codes introduced independently by Calderbank-Shor and Steane [14, 15]. A CSS code can be described by two binary matrices \(\mathbf{H}_{X}\) and \(\mathbf{H}_{Z}\) of size \(r_{X}\times n\) and \(r_{Z}\times n\) respectively, where \(r_{X},r_{Z}\) and \(n\) are positive integers. The row spaces of \(\mathbf{H}_{X}\) and \(\mathbf{H}_{Z}\) should be orthogonal, in other words \(\mathbf{H}_{X}\mathbf{H}_{Z}^{\mathsf{T}}=\mathbf{0}\). The matrices \(\mathbf{H}_{X}\) and \(\mathbf{H}_{Z}\) should be thought of as the parity-check matrices of two binary codes \(C_{X}\) and \(C_{Z}\) satisfying \(C_{Z}^{\perp}\subset C_{X}\). The length of the quantum code, i.e. the number of physical qubits of the code, equals \(n\). Its dimension, i.e. the number of logical qubits, equals \(n-\operatorname{rank}\mathbf{H}_{X}-\operatorname{rank}\mathbf{H}_{Z}\). The minimum distance \(d\) of the code is equal to \(\min(d_{X},d_{Z})\), where \(d_{X}\) (resp. \(d_{Z}\)) is equal to the smallest Hamming weight of a non-zero codeword of \(C_{X}\) (resp. \(C_{Z}\)) that is not in the row space of \(\mathbf{H}_{Z}\) (resp. \(\mathbf{H}_{X}\)).
The toric code [13, 1] proposed by Kitaev is a CSS code that encodes 2 logical qubits into \(n\) physical qubits and achieves a distance of \(\sqrt{n/2}\) (a variant of the toric code [1] exists with improved parameters \([[n,2,\sqrt{n}]]\)). We briefly review the construction of the Kitaev code and its decoding.
### The toric code
For \(m\in\mathbb{N}\), consider the graph \(\mathbf{T}=(V,E)\) that is the Cayley graph of the additive group \(\mathbb{Z}/m\mathbb{Z}\times\mathbb{Z}/m\mathbb{Z}\) with generators \((\pm 1,0)\) and \((0,\pm 1)\). Specifically, for \(m\geq 2\), this is a 4-regular graph that tiles a 2-dimensional torus by squares. The graph has \(|V|=m^{2}\) vertices and \(|E|=n=2m^{2}\) edges.
The two matrices \(\mathbf{H}_{X}\) and \(\mathbf{H}_{Z}\) are of size \(n/2\times n\) and their columns and rows should be thought of as indexed respectively by the edges and vertices of the graph \(\mathbf{T}\). The matrix \(\mathbf{H}_{X}\) is simply the vertex-edge incidence matrix of \(\mathbf{T}\) and its rows describe elementary cocycles of the graph. Spelling it out, every vertex \((x,y)\in\mathbb{Z}/m\mathbb{Z}\times\mathbb{Z}/m\mathbb{Z}\) gives rise to a row of \(\mathbf{H}_{X}\) that is a binary vector of weight 4, and whose 1-coordinates are indexed by the edges that connect \((x,y)\) to \((x+1,y),(x,y+1),(x-1,y),(x,y-1)\). The rows of \(\mathbf{H}_{Z}\) correspond to elementary cycles, or faces of the graph, meaning all 4-cycles of the form \((x,y)-(x,y+1)-(x+1,y+1)-(x+1,y)-(x,y)\).
Any elementary cocycle has an even (0 or 2) number of edges in common with a face, which means that every row of \(\mathbf{H}_{X}\) is orthogonal to every row of \(\mathbf{H}_{Z}\). This is exactly the property we need for \(\mathbf{H}_{X}\) and \(\mathbf{H}_{Z}\) to define a quantum CSS code.
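As an illustration, the short Python sketch below builds \(\mathbf{H}_{X}\) and \(\mathbf{H}_{Z}\) for a given \(m\) and checks this orthogonality numerically. The edge labelling (`h_edge`, `v_edge`) is our own choice for the sake of the example; any consistent labelling of the \(2m^{2}\) edges would do.

```python
import numpy as np

def toric_checks(m):
    """Parity-check matrices of the toric code on the m x m square tiling of the torus.
    Hypothetical edge labelling: horizontal edge (x,y)-(x+1,y) -> 2*(x*m + y),
    vertical edge (x,y)-(x,y+1) -> 2*(x*m + y) + 1."""
    n = 2 * m * m
    h_edge = lambda x, y: 2 * ((x % m) * m + (y % m))
    v_edge = lambda x, y: 2 * ((x % m) * m + (y % m)) + 1
    HX = np.zeros((m * m, n), dtype=np.uint8)   # rows = elementary cocycles (one per vertex)
    HZ = np.zeros((m * m, n), dtype=np.uint8)   # rows = elementary cycles (one per face)
    for x in range(m):
        for y in range(m):
            r = x * m + y
            for e in (h_edge(x, y), h_edge(x - 1, y), v_edge(x, y), v_edge(x, y - 1)):
                HX[r, e] = 1        # the four edges incident to the vertex (x, y)
            for e in (h_edge(x, y), h_edge(x, y + 1), v_edge(x, y), v_edge(x + 1, y)):
                HZ[r, e] = 1        # the four edges bounding the face with corner (x, y)
    return HX, HZ

HX, HZ = toric_checks(5)
assert not ((HX @ HZ.T) % 2).any()  # every cocycle meets every face in an even number of edges
```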
**Code dimension.** Since every column of \(\mathbf{H}_{X}\) has weight 2, the sum of all the rows of \(\mathbf{H}_{X}\) is 0. The same argument applies to \(\mathbf{H}_{Z}\) and one can check that the dimension of both the row spaces of \(\mathbf{H}_{X}\) and \(\mathbf{H}_{Z}\) is \(|V|-1\). This yields a dimension of \(|E|-2|V|+2=2\) for the Kitaev code.
**Minimum distance.** The \(X\)-minimum distance \(d_{X}\) is equal to the smallest length of a cycle of \(\mathbf{T}\) that is not a sum of elementary faces. The smallest length of such a homologically non-trivial cycle is seen to be \(m\) and is given by a cycle that keeps one coordinate constant. The \(Z\)-minimum distance \(d_{Z}\) is equal to \(d_{X}\) since elementary faces are elementary cocycles of the dual graph of \(\mathbf{T}\), which is isomorphic to \(\mathbf{T}\). Hence, the minimum distance of the code is \(d=m\).
Summarising, Kitaev's toric code is a quantum error-correcting CSS code with geometrically local parity-check operators of weight 4 and parameters \([[2m^{2},2,m]]=[[n,2,\sqrt{n/2}]]\).
### Decoding the toric code
The graph \(\mathbf{T}\) and its dual are isomorphic so the decoding problem for \(X\)-errors is identical as that for \(Z\)-errors, and we will just focus on one type of error. A \(Z\)-error pattern is described by a binary vector of length \(n\), \(\mathbf{e}\in\mathbb{F}_{2}^{n}\). A decoder takes as input the _syndrome_ of the error \(\mathbf{e}\), \(\sigma(\mathbf{e}):=\mathbf{H}_{X}\mathbf{e}^{\intercal}\), and outputs a binary vector \(\hat{\mathbf{e}}\in\mathbb{F}_{2}^{n}\). The decoder succeeds if \(\hat{\mathbf{e}}=\mathbf{e}+\mathbf{f}\), where \(\mathbf{f}\) is in the row space of \(\mathbf{H}_{Z}\), equivalently, if \(\mathbf{f}\) is a homologically trivial cycle, i.e. is a sum of faces of \(\mathbf{T}\).
Note that the decoder should not require further input, since we are interested in adversarial errors, for which there is no a priori information on the error \(\mathbf{e}\) which would come from a channel model for example.
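To make this success criterion concrete, here is a small Python check, reusing the hypothetical edge labelling of the sketch in Section 2.1: a decoder output \(\hat{\mathbf{e}}\) is a success exactly when \(\mathbf{e}+\hat{\mathbf{e}}\) has zero syndrome and crosses one fixed vertical cut and one fixed horizontal cut of the torus an even number of times, i.e. when it is a homologically trivial cycle.

```python
def decoding_succeeded(m, e, e_hat):
    """Success criterion of Section 2.2 (a sketch, for m >= 3): the decoder succeeds
    iff e + ê has zero syndrome and even intersection with one vertical and one
    horizontal cut of the torus.  Same hypothetical edge labelling as above."""
    h_edge = lambda x, y: 2 * ((x % m) * m + (y % m))        # (x,y)-(x+1,y)
    v_edge = lambda x, y: 2 * ((x % m) * m + (y % m)) + 1    # (x,y)-(x,y+1)
    c = [a ^ b for a, b in zip(e, e_hat)]                    # combined pattern e + ê
    for x in range(m):
        for y in range(m):                                   # zero syndrome at every vertex
            if c[h_edge(x, y)] ^ c[h_edge(x - 1, y)] ^ c[v_edge(x, y)] ^ c[v_edge(x, y - 1)]:
                return False
    wraps_h = sum(c[h_edge(0, y)] for y in range(m)) % 2     # crossings of a vertical cut
    wraps_v = sum(c[v_edge(x, 0)] for x in range(m)) % 2     # crossings of a horizontal cut
    return wraps_h == 0 and wraps_v == 0

m, n = 5, 50
e = [0] * n
e[0] = 1                                  # a single bit-flip error
print(decoding_succeeded(m, e, e))        # True: returning the error itself always succeeds
wrong = e[:]
for x in range(m):
    wrong[2 * (x * m)] ^= 1               # add a non-trivial horizontal cycle to the output
print(decoding_succeeded(m, e, wrong))    # False: ê differs from e by a logical operator
```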
One should keep in mind that the syndrome vector \(\mathbf{s}=\sigma(\mathbf{e})\) has its coordinates indexed by the vertex set \(V\) and can therefore be identified with a subset \(S\) of vertices, of even cardinality. The decoder output \(\hat{\mathbf{e}}\) can always be thought of, up to an addition of trivial cycles, as a union of edge-disjoint paths whose endpoints lie in \(S\). In particular, what the Minimum Weight Perfect Matching decoder does, is find \(\hat{\mathbf{e}}\) of minimum Hamming weight such that \(\sigma(\hat{\mathbf{e}})=\sigma(\mathbf{e})\). Renormalisation decoders use a different approach to output a solution \(\hat{\mathbf{e}}\). We describe their general strategy in the next section and introduce a simple hard-decision and deterministic version of a renormalisation decoder.
## 3 A renormalisation decoder
The renormalisation idea presented by Duclos-Cianci and Poulin in [1, 1] amounts to having a hierarchy of codes together with a simple procedure that passes the error model from one level to the next. One is then able to recursively solve the decoding problem by _renormalising_ the error model at each step. The decoders of Duclos-Cianci and Poulin are somewhat loosely defined in the sense that they allow local procedures to adapt to the channel model. To study worst-case behaviour, we need a deterministic version of the decoder that is defined independently of any channel. Such a renormalisation decoder is described below.
Figure 1: Cayley graph \(\mathbf{T}\) of the group \(\mathbb{Z}/5\mathbb{Z}\times\mathbb{Z}/5\mathbb{Z}\) tiling a 2-dimensional torus. The thick edges represent an elementary cocycle (left) and an elementary cycle (right).
### High-level description of the decoder
For \(k\in\mathbb{N}\), consider the group \(V_{k}=\mathbb{Z}/2^{k}\mathbb{Z}\times\mathbb{Z}/2^{k}\mathbb{Z}\) and the graph \(\mathbf{T}_{k}=(V_{k},E_{k})\) used to construct Kitaev's code. For \(i\in[[0,k-1]]\), consider the Cayley graph \(\mathbf{T}_{i}=(V_{i},E_{i})\), where \(V_{i}\) is the subgroup of \(\mathbb{Z}/2^{k}\mathbb{Z}\times\mathbb{Z}/2^{k}\mathbb{Z}\) generated by \((\pm 2^{k-i},0)\) and \((0,\pm 2^{k-i})\). The vertices \(V_{i}\) of \(\mathbf{T}_{i}\) can be thought of as a sublattice of \(V_{i+1}\), and if we connect vertices of \(V_{i}\) that are at distance \(2\) in the graph \(\mathbf{T}_{i+1}\), we obtain the graph \(\mathbf{T}_{i}\). In this process we have _renormalised_ paths of length \(2\) to paths of length \(1\).
Consider the syndrome vector \(\mathbf{s}_{k}=\sigma(\mathbf{e})\) of an error \(\mathbf{e}\), corresponding to a set \(S_{k}\subset V_{k}\) containing an even number of vertices of \(\mathbf{T}_{k}\). The decoding problem consists in finding a vector \(\hat{\mathbf{e}}\), identified by a set of edges of \(\mathbf{T}_{k}\), satisfying \(\sigma(\hat{\mathbf{e}})=\mathbf{s}_{k}\). Suppose first that \(S_{k}\) is a subset of \(V_{k-1}\). Since an edge of \(\mathbf{T}_{k-1}\) corresponds to a path of length \(2\) in \(\mathbf{T}_{k}\), if there exists an edge-pattern in \(\mathbf{T}_{k-1}\) that is consistent with \(\mathbf{s}_{k}\), then it induces an edge-pattern in \(\mathbf{T}_{k}\) whose characteristic vector \(\hat{\mathbf{e}}\) satisfies \(\sigma(\hat{\mathbf{e}})=\mathbf{s}_{k}\). As a result, we can reduce the original decoding problem to a decoding problem in the smaller graph \(\mathbf{T}_{k-1}\). Suppose now that \(S_{k}\) is not included in \(V_{k-1}\). Then we locally pair up certain vertices of \(S_{k}\) and shift others when needed so that the new error syndrome becomes a subset of \(V_{k-1}\).
The edges used in the reduction process are given by a vector \(\hat{\mathbf{e}}_{k}\) satisfying \(\mathbf{s}_{k-1}:=\sigma(\hat{\mathbf{e}}_{k})+\mathbf{s}_{k}\subset V_{k-1}\), where we identify the syndrome \(\mathbf{s}_{k-1}\) and its corresponding set of vertices. This reduction process allows us to recursively solve the decoding problem. The complete decoding procedure on the original graph \(\mathbf{T}_{k}\) is given by the following iterative algorithm.
```
Input: \(\mathbf{s}_{k}=\sigma(\mathbf{e})\in\mathbb{F}_{2}^{n}\), the syndrome of the error \(\mathbf{e}\)
Output: \(\hat{\mathbf{e}}\in\mathbb{F}_{2}^{n}\) satisfying \(\sigma(\hat{\mathbf{e}})=\mathbf{s}_{k}\)
Initialisation: set \(\hat{\mathbf{e}}\leftarrow\mathbf{0}\), \(\mathbf{s}\leftarrow\mathbf{s}_{k}\), \(i\gets k\)
while \(\mathbf{s}\neq\mathbf{0}\) and \(i>0\) do
    execute the reduction procedure on the graph \(\mathbf{T}_{i}\)
    get \(\hat{\mathbf{e}}_{i}\) satisfying \(\mathbf{s}_{i-1}:=\sigma(\hat{\mathbf{e}}_{i})+\mathbf{s}\subset V_{i-1}\)
    set \(\hat{\mathbf{e}}\leftarrow\hat{\mathbf{e}}+\hat{\mathbf{e}}_{i}\), \(\mathbf{s}\leftarrow\mathbf{s}_{i-1}\), \(i\gets i-1\)
end while
return \(\hat{\mathbf{e}}\)
```
**Algorithm 1** Renormalisation decoder
After \(k\) steps, the decoder outputs \(\hat{\mathbf{e}}=\sum_{i=1}^{k}\hat{\mathbf{e}}_{i}\). It remains to check that the output vector \(\hat{\mathbf{e}}\) has the same syndrome as the error vector \(\mathbf{e}\), that is to say \(\sigma(\hat{\mathbf{e}})=\mathbf{s}_{k}\). Observe that for every \(j\in[[1,k]]\),
\[\mathbf{s}_{j}+\sum_{i=1}^{j}\sigma(\hat{\mathbf{e}}_{i})=\mathbf{s}_{j}+\sigma (\hat{\mathbf{e}}_{j})+\sum_{i=1}^{j-1}\sigma(\hat{\mathbf{e}}_{i})=\mathbf{s} _{j-1}+\sum_{i=1}^{j-1}\sigma(\hat{\mathbf{e}}_{i}).\]
Hence, we have \(\mathbf{s}_{k}+\sigma(\hat{\mathbf{e}})=\mathbf{s}_{k}+\sum_{i=1}^{k}\sigma( \hat{\mathbf{e}}_{i})=\mathbf{s}_{0}\subset V_{0}\). The set \(V_{0}\) has only one vertex and since syndrome vectors have even weight, we conclude that \(\mathbf{s}_{0}=\mathbf{0}\) and so \(\sigma(\hat{\mathbf{e}})=\mathbf{s}_{k}\).
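For reference, the outer loop of Algorithm 1 can be transcribed almost verbatim; in the Python sketch below the reduction procedure of the next subsection and the syndrome map \(\sigma\) are passed in as assumed callables (these names are ours, not the paper's).

```python
import numpy as np
from typing import Callable

def renormalisation_decode(s_k: np.ndarray, n: int, k: int,
                           reduce_stage: Callable[[np.ndarray, int], np.ndarray],
                           syndrome: Callable[[np.ndarray], np.ndarray]) -> np.ndarray:
    """Outer loop of Algorithm 1 (a sketch).  `reduce_stage(s, i)` is assumed to
    implement the reduction procedure on T_i and return ê_i as a vector in F_2^n;
    `syndrome` is assumed to compute σ(.) = H_X (.)^T mod 2."""
    e_hat = np.zeros(n, dtype=np.uint8)
    s, i = s_k.copy() % 2, k
    while s.any() and i > 0:
        e_hat_i = reduce_stage(s, i)        # edges used at level i
        s = (s + syndrome(e_hat_i)) % 2     # s_{i-1} := σ(ê_i) + s, supported on V_{i-1}
        e_hat = (e_hat + e_hat_i) % 2       # accumulate ê = Σ ê_i
        i -= 1
    return e_hat
```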
### Precise description of the decoder
We now describe how to obtain the vectors \(\hat{\mathbf{e}}_{i}\) from the syndrome vectors \(\mathbf{s}_{i}\). The original procedure of [1, 1] relied on a priori probabilities for coordinates of the original error vector \(\mathbf{e}\). Since we are interested in worst-case behaviour, we want a purely deterministic procedure that will work independently of any channel model. We propose the algorithm described below, that is arguably close to the simplest possible from a computational complexity point of view.
**Blocks and cells.** In order to precisely define the reduction process of the decoder, we introduce the following notions of blocks and cells. A _cell_ of \(\mathbf{T}_{i}\) is defined as an elementary cycle, or face of the graph. A _block_ of \(\mathbf{T}_{i}\) is defined as a cell of the graph \(\mathbf{T}_{i-1}\). By construction of the graphs \(\mathbf{T}_{i}\), each block is composed of exactly \(4\) cells denoted \(A,B,C\) and \(D\). For each cell of the tiling, we denote by \(l,r,t\) and \(b\) the left, right, top and bottom edges of the cell. Moreover, we respectively denote by \(\alpha,\beta,\gamma\) and \(\delta\) the top-left, top-right, bottom-left and bottom-right vertices of the given cell. These definitions and their relations are shown explicitly in Figure 2.
Starting from the graph \(\mathbf{T}_{k}\), we can iteratively consider the blocks of each subtiling \(\mathbf{T}_{i}\) and view them as the cells of \(\mathbf{T}_{i-1}\). This allows one to perceive the tower of subtilings that lies at the heart of the renormalisation idea.
**Reduction procedure.** To reduce the decoding problem from a syndrome set \(S_{i}\subset V_{i}\) to a syndrome set \(S_{i-1}\subset V_{i-1}\), we proceed in three steps. We start by locally pairing up certain syndrome vertices so as to remove them from the syndrome set. Then, as there may remain vertices in \(V_{i}\setminus V_{i-1}\), we shift those syndrome vertices to move them to the next sublattice.
In step 1, we go through all the blocks of \(\mathbf{T}_{i}\) and locally pair up diagonally opposed syndrome vertices in \(D\) cells. This is depicted in Figure 3.
In step 2, we go through all the blocks of \(\mathbf{T}_{i}\) again and locally pair up certain neighbouring syndrome vertices in \(B\) and \(C\) cells as shown in Figure 4.
In step 3, we go through all the blocks of \(\mathbf{T}_{i}\) once more and shift any remaining syndrome vertex in \(A\) cells to their top-left corner as shown in Figure 5.
Note that the choice of edges that pair up diagonally opposed vertices in \(D\) cells or diagonally shift vertices in \(A\) cells is somewhat arbitrary. This doesn't matter however, since other shortest paths are equivalent to the ones we chose.
All the edges we used in these three steps are given by a vector \(\hat{\mathbf{e}}_{i}\in\mathbb{F}_{2}^{n}\). More precisely, starting from the zero vector, the construction of \(\hat{\mathbf{e}}_{i}\) is done by flipping the bits of \(\hat{\mathbf{e}}_{i}\) that are indexed by those edges.
Figure 4: Instructions in \(B\) and \(C\) cells. Syndrome vertices and their pairing are represented in thicker font. In the cell \(C\), if \(\alpha\) and \(\gamma\) are in \(S_{i}\), then pair them up via \(l\); if \(\beta\) and \(\delta\) are in \(S_{i}\), then pair them up via \(r\). In the cell \(B\), if \(\alpha\) and \(\beta\) are in \(S_{i}\), then pair them up via \(t\), and if \(\gamma\) and \(\delta\) are in \(S_{i}\), then pair them up via \(b\).
Figure 5: Instructions in \(A\) cells. Syndrome vertices and their travel path are represented in thicker font. If \(\beta\) is in \(S_{i}\), then shift it to \(\alpha\) via \(t\). If \(\gamma\) is in \(S_{i}\), then shift it to \(\alpha\) via \(l\). If \(\delta\) is in \(S_{i}\), then shift it to \(\alpha\) via \(l\) and \(b\).
Figure 3: Instructions in \(D\) cells. Syndrome vertices and their pairing are represented in thicker font. If \(\alpha\) and \(\delta\) are in \(S_{i}\), then pair them up via \(l\) and \(b\). If \(\beta\) and \(\gamma\) are in \(S_{i}\), then pair them up via \(b\) and \(r\).
Figure 2: Division of a graph \(\mathbf{T}_{i}\) into blocks and cells.
The reduction procedure is then given by the following iterative algorithm where we go through all the blocks of \(\mathbf{T}_{i}\) three times.
```
Input: \(\mathbf{s}_{i}\in\mathbb{F}_{2}^{n}\) corresponding to a set of syndrome vertices \(S_{i}\subset V_{i}\)
Output: \(\hat{\mathbf{e}}_{i}\in\mathbb{F}_{2}^{n}\) such that \(\mathbf{s}_{i-1}:=\sigma(\hat{\mathbf{e}}_{i})+\mathbf{s}_{i}\) corresponds to a set of vertices \(S_{i-1}\subset V_{i-1}\)
Initialisation: set \(\hat{\mathbf{e}}_{i}\leftarrow\mathbf{0}\), \(\mathbf{s}\leftarrow\mathbf{s}_{i}\) and let \(S\) be the vertex set corresponding to \(\mathbf{s}\)
for every block in \(\mathbf{T}_{i}\) do   \(\triangleright\) Step 1
    In the \(D\) cell:
        If \(\alpha\) and \(\delta\) are in \(S\), then flip the bits of \(\hat{\mathbf{e}}_{i}\) indexed by \(l\) and \(b\)
        If \(\beta\) and \(\gamma\) are in \(S\), then flip the bits of \(\hat{\mathbf{e}}_{i}\) indexed by \(b\) and \(r\)
        Remark: if \(\alpha,\beta,\gamma,\delta\) are all in \(S\), then we only flip the bits indexed by \(l\) and \(r\)
end for
Update the syndrome: add to \(\mathbf{s}\) the characteristic vectors of all the syndrome vertices that have been paired up in order to remove them from the syndrome set \(S\)
for every block in \(\mathbf{T}_{i}\) do   \(\triangleright\) Step 2
    In the cell \(C\):
        If \(\alpha\) and \(\gamma\) are in \(S\), then flip the bit of \(\hat{\mathbf{e}}_{i}\) indexed by \(l\)
        If \(\beta\) and \(\delta\) are in \(S\), then flip the bit of \(\hat{\mathbf{e}}_{i}\) indexed by \(r\)
    In the cell \(B\):
        If \(\alpha\) and \(\beta\) are in \(S\), then flip the bit of \(\hat{\mathbf{e}}_{i}\) indexed by \(t\)
        If \(\gamma\) and \(\delta\) are in \(S\), then flip the bit of \(\hat{\mathbf{e}}_{i}\) indexed by \(b\)
end for
Update the syndrome: add to \(\mathbf{s}\) the characteristic vectors of all the syndrome vertices that have been paired up in order to remove them from the syndrome set \(S\)
for every block in \(\mathbf{T}_{i}\) do   \(\triangleright\) Step 3
    In the \(A\) cell:
        If \(\beta\) is in \(S\), then flip the bit of \(\hat{\mathbf{e}}_{i}\) indexed by \(t\)
        If \(\gamma\) is in \(S\), then flip the bit of \(\hat{\mathbf{e}}_{i}\) indexed by \(l\)
        If \(\delta\) is in \(S\), then flip the bits of \(\hat{\mathbf{e}}_{i}\) indexed by \(l\) and \(b\)
end for
return \(\hat{\mathbf{e}}_{i}\)
```
**Algorithm 2** Reduction procedure
Note that at the start of each step, the current syndrome \(\mathbf{s}\) is equal to \(\sigma(\hat{\mathbf{e}}_{i})+\mathbf{s}_{i}\). This equality is obvious after the initialisation since \(\hat{\mathbf{e}}_{i}=\mathbf{0}\) and \(\mathbf{s}=\mathbf{s}_{i}\). After step 1, certain syndrome vertices have been paired up together and the edges that make up the paths are added to \(\hat{\mathbf{e}}_{i}\). The syndrome of \(\hat{\mathbf{e}}_{i}\) at this stage thus corresponds exactly to the set of paired up syndrome vertices. Hence, we have that \(\mathbf{s}\), which corresponds to the set of syndrome vertices minus the ones that have been paired up, is equal to \(\mathbf{s}_{i}+\sigma(\hat{\mathbf{e}}_{i})\). The same argument shows that the equality remains true after step 2.
**Proposition 3.1**.: _The reduction procedure is well-defined and reduces the decoding problem from a sub-tiling to the next one._
Proof.: To verify that the reduction process is indeed well-defined, we need to check that there are no conflicting instructions. More precisely, we need to verify that, given a syndrome set at the beginning of a step, we don't pair a syndrome vertex with more than one other vertex and that we don't shift it several times.
In step 1 the instructions are well-defined since \(D\) cells of different blocks share no vertices. Hence, we pair up a syndrome vertex at most once as shown in Figure 3. At step 2 one has to be a bit more careful since \(B\) and \(C\) cells share vertices within a block and also between different blocks. Suppose that a shared syndrome vertex \(v\) is affected by both instructions in a \(B\) and a \(C\) cell. This means that there exist two syndrome vertices adjacent to \(v\), \(v_{B}\) and \(v_{C}\), such that \(v_{B}\) belongs to \(B\) and \(v_{C}\) belongs to \(C\). Then \(v_{B}\) and \(v_{C}\) are two diagonally opposite syndrome vertices of a \(D\) cell. This is impossible since we removed all such syndrome vertices in step 1. The multi-step schedule thus ensures that the instructions are done as intended and we pair up a syndrome vertex at most once as shown in Figure 4. Finally, the instructions are also well-defined in step 3 since \(A\) cells of different blocks share no vertices. Thus, we shift a syndrome vertex only once as shown in Figure 5.
It now remains to verify that the reduction procedure reduces the decoding from a subtiling to the next one. We thus have to check that given the syndrome vector \(\mathbf{s}_{i}\), corresponding to a syndrome set in \(V_{i}\), we obtain an output vector \(\hat{\mathbf{e}}_{i}\) such that \(\mathbf{s}_{i-1}:=\sigma(\hat{\mathbf{e}}_{i})+\mathbf{s}_{i}\) corresponds to a syndrome set in \(V_{i-1}\).
After steps 1 and 2, certain syndrome vertices have been paired up together and the edges that make up the paths are added to \(\hat{\mathbf{e}}_{i}\). The syndrome of \(\hat{\mathbf{e}}_{i}\) at this stage thus corresponds exactly to the set of paired up syndrome vertices. Let us now see what happens in step 3. Observe that the set of all the vertices we consider in \(A\) cells corresponds exactly to \(V_{i}\setminus V_{i-1}\), i.e. to all the vertices that are not corner vertices of a block. In step 3, we thus shift any remaining syndrome vertex that is not in \(V_{i-1}\) to the top-left corner of its corresponding block. The edges that make up the paths that connect the syndrome vertices to their corresponding corner vertices are then added to \(\hat{\mathbf{e}}_{i}\). Since the syndrome vertices have all been pushed into the vertex set of the next subtiling, the sum \(\mathbf{s}_{i-1}:=\sigma(\hat{\mathbf{e}}_{i})+\mathbf{s}_{i}\) indeed corresponds to a vertex subset of \(V_{i-1}\).
Note that in the reduction procedure, we update the current syndrome between each step. One might ask why we don't update it immediately, after having paired up or shifted a syndrome vertex, or after having visited a block. While this doesn't affect the outcome of the algorithm, it is relevant from a computational complexity point of view and allows the procedure to be parallelised.
**Proposition 3.2**.: _The decoder's time-complexity is \(O(n\log_{2}n)\) and can be parallelised to \(O(\log_{2}n)\), where \(n\) is the length of the toric code._
Proof.: For every \(i\in[[1,k]]\), the reduction procedure considers every block of \(\mathbf{T}_{i}\) three times. Within a block, a constant number of operations are executed on the vertices, to test if they belong to the error syndrome, and on the edges, to flip the corresponding bits if needed. The total number of operations in each reduction procedure is therefore linearly bounded by \(n\), the size of the graph \(\mathbf{T}_{k}\). Since the recursion ends after \(\frac{1}{2}\log_{2}n\) iterations, the overall time-complexity is \(O(n\log_{2}n)\).
The multi-step schedule guarantees that the instructions of the reduction procedure act independently of one another. Moreover, since they are given locally inside blocks and since the current syndrome is updated only between steps of a reduction stage, the instructions can be executed simultaneously. As a result, the decoding scheme can be parallelised and one can achieve a running time proportional to \(\log_{2}n\), the number of reductions.
Finally, we remark that one should not define the decoder in too simple a way. For example, we could question the need for steps 1 and 2 in the reduction procedure since step 3 alone ensures the reduction process by pushing all improperly placed syndrome vertices of a block towards its top-left corner. The reason is that omitting steps 1 and 2 would allow a degenerate propagation of error patterns of small weight. We have not found a reduction procedure in less than 3 steps that avoids conflicting instructions between blocks and avoids wrongly decoded constant-weight error patterns.
For instance, suppose that we do not follow the instructions given in step 2. Consider the error \(\mathbf{e}\), in the graph \(\mathbf{T}_{k}\), consisting only of the top edge \(t\) of a \(B\) cell of some block. Then the syndrome set \(S_{k}\) consists of the end-vertices of that edge. Instead of pairing these two syndrome vertices together immediately, the instructions in step 3 shift the left end-vertex of the edge to the vertex \(\alpha\) of the \(A\) cell of that block. Hence, the sum of the error pattern and of the pattern given by the decoder is now composed of the two edges that connect the top vertices of the given block. This combined pattern has weight 2 and, after renormalisation, becomes a single edge of a cell in \(\mathbf{T}_{k-1}\). If this edge happens to be the top edge \(t\) of some \(B\) cell, then we can repeat the previous argument. We can thus exhibit an error pattern of weight 1 that, if placed in the correct position, doubles in size at every reduction step. This is problematic since it allows for uncorrectable error patterns of constant weight. Similarly, if we omit step 1 from the instructions, then one can find an example of an error pattern of weight 2 that expands and is wrongly decoded.
## 4 Analysis over the bit-flip channel
Before analysing our simple hard-decision renormalisation decoder over the adversarial channel, it is natural to ask how it performs against random errors. We test its performance on the bit-flip channel and numerically determine the threshold probability \(p_{0}\) of our decoder using Monte Carlo simulations.
The results of the simulations are presented in Figure 6 and we estimate that the threshold probability is \(p_{0}\sim 4.2\%\). As a comparison, the renormalisation group decoder of Duclos-Cianci and Poulin in its simplest form, without being enhanced by belief propagation techniques, yields a threshold of \(p_{0}\sim 7.8\%\) over the depolarisation channel [1], which translates into a threshold probability for the bit-flip channel of \(p_{0}\sim 5.2\%\). The performance of the present decoder is therefore reasonably close, and the discrepancy can be explained in part by the fact that our simplified decoder does not use the a priori transition probability of the channel. We conclude that our simple version has captured the essence of a renormalisation decoder.
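For completeness, the structure of the Monte Carlo estimation is sketched below; the decoder, the syndrome map and the success test (for instance the homology check sketched in Section 2.2) are assumed to be supplied as callables, and the names used here are ours.

```python
import random

def estimate_failure_rate(m, p, syndrome, decode, succeeded, trials=5000):
    """Monte Carlo estimate of the logical failure probability of a decoder over
    the bit-flip channel with flip probability p (a sketch; `syndrome`, `decode`
    and `succeeded` are assumed, hypothetical callables)."""
    n = 2 * m * m
    failures = 0
    for _ in range(trials):
        e = [1 if random.random() < p else 0 for _ in range(n)]   # i.i.d. bit-flips
        e_hat = decode(syndrome(e))                               # the decoder only sees σ(e)
        failures += 0 if succeeded(m, e, e_hat) else 1
    return failures / trials
```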
## 5 Analysis over the adversarial channel
We now turn to analysing worst-case errors. More precisely, we are interested in finding the _error-correcting radius_ of the decoder, i.e. the largest integer \(\omega\) such that any error pattern \(\mathbf{e}\) of Hamming weight \(\mathsf{wt}(\mathbf{e})\leq\omega\) is decoded correctly.
For \(k\in\mathbb{N}\), consider the tiling \(\mathbf{T}_{k}\) and for \(i\in[[0,k-1]]\), the subtilings \(\mathbf{T}_{i}\). Suppose we are given an error vector \(\mathbf{e}\) and consider its error syndrome \(\mathbf{s}_{k}\subset V_{k}\). Recall that the decoding algorithm defines \(k\) intermediate vectors \(\hat{\mathbf{e}}_{k},\dots,\hat{\mathbf{e}}_{1}\) which it then adds up to produce the output vector \(\hat{\mathbf{e}}\). Let us define \(\mathbf{e}_{k}=\mathbf{e}\) to be the original error vector, and inductively, for \(i=k-1,\dots,0\), \(\mathbf{e}_{i}:=\mathbf{e}_{i+1}+\hat{\mathbf{e}}_{i+1}\).
First note that one obtains \(\mathbf{e}_{i}\) from \(\mathbf{e}_{i+1}\) after one reduction stage since by definition \(\mathbf{e}_{i}=\mathbf{e}_{i+1}+\hat{\mathbf{e}}_{i+1}\). We say that \(\mathbf{e}_{i+1}\) is a _preimage_ of \(\mathbf{e}_{i}\). Secondly, remark that an "intermediate" error vector \(\mathbf{e}_{i}\) has its syndrome in the vertex set \(V_{i}\) since
\[\sigma(\mathbf{e}_{i})=\sigma(\mathbf{e}_{k}+\sum_{j=i+1}^{k}\hat{\mathbf{e} }_{j})=\mathbf{s}_{k}+\sum_{j=i+1}^{k}\sigma(\hat{\mathbf{e}}_{j})=\mathbf{s }_{i}\subset V_{i}.\]
Finally, note that the edges corresponding to \(\mathbf{e}_{0}=\mathbf{e}+\hat{\mathbf{e}}\) form a cycle since at the end of the decoding scheme, both the vectors \(\mathbf{e}\) and \(\hat{\mathbf{e}}\) have the same syndrome. Moreover, this cycle is homologically non-trivial if and only if the decoder wrongly decodes the original error vector \(\mathbf{e}_{k}\).
To analyse the error-correcting radius of the decoder, we will work by starting from a non-trivial cycle \(\mathbf{e}_{0}\) and track how slowly the weights of the errors \(\mathbf{e}_{i}\) can grow when we go back in time to reverse-engineer the decoder's actions (hence the choice of indexation).
Figure 6: Results of the Monte Carlo simulations over the bit-flip channel. The failure probability of our decoder is computed by simulating 5000 decoding cycles for each \(k\in\{4,5,6,7,8\}\) and for each \(p\in\{0.035,0.04,0.045,0.05,0.055\}\).
### A wrongly decoded error pattern of weight \(\sim d^{1/2}\)
We derive an upper bound on the error-correcting radius \(\omega\) by finding a small-weight error pattern \(\mathbf{e}_{k}\) that is decoded incorrectly, i.e. such that \(\mathbf{e}_{0}\) is a homologically non-trivial cycle.
It is somewhat natural to look for such an error pattern within a minimal cycle of the graph \(\mathbf{T}_{k}\), so we construct it on the first horizontal "line" of the graph. If we restrict ourselves to this line, we only consider steps 2 and 3 of the reduction process. Moreover, since the error syndrome will be a subset of the vertices of the first line, the decoder only adds edges of this line to the output vector \(\hat{\mathbf{e}}\).
The idea is to define the vector \(\mathbf{e}_{1}\) as half a cycle of \(\mathbf{T}_{1}\) (which is liable to be wrongly decoded by any decoder), then, as we move up the indexes \(i\), this half-cycle is expanded into half-cycles of \(\mathbf{T}_{i}\) but we may regularly introduce "holes" in them, creating a fractal-like structure. The construction of \(\mathbf{e}_{5}\) starting from a non-trivial cycle \(\mathbf{e}_{0}\) is shown in Figure 7 and it is easily checked that we indeed obtain \(\mathbf{e}_{0}\) when we apply the renormalisation decoder to \(\mathbf{e}_{5}\).
Precisely, the first steps of the construction are as follows. Start from the non-trivial cycle \(\mathbf{e}_{0}\) that is located on the first horizontal line of the graph \(\mathbf{T}_{k}\). This corresponds to the first horizontal edge of the graph \(\mathbf{T}_{0}\). The first horizontal line of \(\mathbf{T}_{1}\) is composed of 2 edges and we define \(\mathbf{e}_{1}\) by the \(1^{st}\) edge. Note that if we decode \(\mathbf{e}_{1}\) in the graph \(\mathbf{T}_{1}\), then we obtain \(\mathbf{e}_{0}\) since at step 2, the decoder pairs the two endpoints of \(\mathbf{e}_{1}\) together using the \(2^{nd}\) edge. The first horizontal line of \(\mathbf{T}_{2}\) is composed of 4 edges and we define \(\mathbf{e}_{2}\) by the first two edges. Observe that if we decode \(\mathbf{e}_{2}\) in the graph \(\mathbf{T}_{2}\), then we trivially obtain \(\mathbf{e}_{1}\) by renormalisation.
Let us now see how to construct the errors \(\mathbf{e}_{i}\) for \(i\geq 5\). Note that \(\mathbf{e}_{4}\) is composed of 2 paths, each of weight 2. By repeating the same method that is used to obtain \(\mathbf{e}_{3}\) from \(\mathbf{e}_{2}\) to each path of \(\mathbf{e}_{4}\), we obtain the error \(\mathbf{e}_{5}\). This results in a pattern on the first horizontal line of \(\mathbf{T}_{5}\) with 2 paths, each of weight 3. Similarly, we obtain \(\mathbf{e}_{6}\) by repeating the same procedure that is used to obtain \(\mathbf{e}_{4}\) from \(\mathbf{e}_{3}\) to each path of \(\mathbf{e}_{5}\). This results in a pattern on the first horizontal line of \(\mathbf{T}_{6}\) with 4 paths, each of weight 2. We thus can iteratively construct the following error patterns \(\mathbf{e}_{i}\) by transforming each path of weight 2 to a path of weight 3, and each path of weight 3 to 2 paths of weight 2.
Computing the weights of the constructed error patterns \(\mathbf{e}_{k}\) for \(k\in\mathbb{N}\) leads to the following statement.
**Proposition 5.1**.: _Consider \((u_{k})_{k\in\mathbb{N}}\) the sequence defined by \(u_{k}=k\) for \(k\in\{1,2,3\}\) and \(u_{k}=2u_{k-2}\) for \(k\geq 4\). For \(k\in\mathbb{N}\), let \(\omega_{k}\) be the largest integer such that any error pattern \(\mathbf{e}_{k}\) in the graph \(\mathbf{T}_{k}\) of weight \(\mathsf{wt}(\mathbf{e}_{k})\leq\omega_{k}\) is decoded correctly. Then \(\forall\,k\in\mathbb{N},\ \omega_{k}\leq u_{k}-1\)._
Proof.: We compute the weight of the errors \(\mathbf{e}_{k}\) constructed above and show by induction that \(\forall\,k\in\mathbb{N}\), \(\mathsf{wt}(\mathbf{e}_{k})=u_{k}\). Since the errors \(\mathbf{e}_{k}\) are wrongly decoded, it will follow that \(\forall\,k\in\mathbb{N}\), \(\omega_{k}<u_{k}\).
The equality \(\mathsf{wt}(\mathbf{e}_{k})=u_{k}\) is immediately verified for \(k\in\{1,2,3\}\). Now consider the constructed error \(\mathbf{e}_{k}\) for \(k\geq 4\). Observe that either \(\mathbf{e}_{k-2}\) is composed only of paths of weight 2 or \(\mathbf{e}_{k-2}\) is composed only of paths of weight 3. In the first case, we have that by construction every path is transformed into a path of weight 3, which is in turn transformed into 2 paths of weight 2. Hence, \(\mathbf{e}_{k}\) has exactly twice as many paths of weight 2 as \(\mathbf{e}_{k-2}\). In the second case, we observe similarly that \(\mathbf{e}_{k}\) has exactly twice as many paths of weight 3 as \(\mathbf{e}_{k-2}\). In both cases \(\mathsf{wt}(\mathbf{e}_{k})=2\mathsf{wt}(\mathbf{e}_{k-2})\) and so we can conclude that \(\mathsf{wt}(\mathbf{e}_{k})=2\mathsf{wt}(\mathbf{e}_{k-2})=2u_{k-2}=u_{k}\).
We observe that the sequence \((u_{k})_{k\in\mathbb{N}}\) scales as \((\sqrt{2^{k}})_{k\in\mathbb{N}}\) since for every \(k\geq 4,\ u_{k}=2u_{k-2}\). Given that \(\sqrt{2^{k}}=d^{1/2}\), the error-correcting radius \(\omega\) is bounded from above by a quantity that scales as \(d^{1/2}\).
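Solving the recursion in closed form makes the scaling explicit: writing \(d=2^{k}\) for the minimum distance of the code in which \(\mathbf{e}_{k}\) lives, one gets for every \(j\geq 1\)

\[u_{2j}=2^{j-1}u_{2}=2^{j}=d^{1/2}\qquad\text{and}\qquad u_{2j+1}=2^{j-1}u_{3}=3\cdot 2^{j-1}=\tfrac{3}{2\sqrt{2}}\,d^{1/2},\]

so that \(u_{k}=\Theta(d^{1/2})\).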
Figure 7: Construction of the wrongly decoded error pattern. For \(i\in[[0,5]]\), the error \(\mathbf{e}_{i}\) is shown in the first horizontal line of the graph \(\mathbf{T}_{i}\).
### \(1\)-dimensional study of the error-correcting radius
Finding a lower bound for the error-correcting radius \(\omega\) is not as straightforward as it might initially seem. Let us therefore start by considering the \(1\)-dimensional case where we only allow errors to be placed on a single line of the toric graphs.
In this study, cells are reduced to single edges and blocks to two neighbouring edges. Moreover, the renormalisation decoder is simplified and we only consider two steps in each reduction stage. The precise reduction procedure in the \(1\)-dimensional case is as follows. We first go through all the blocks of the graph and in each block, if the end vertices of the right edge of the block are in the error syndrome, then we pair them up with this right edge (Figure 4). We then go through all the blocks of the graph again and in each block, if the middle vertex of the block is in the error syndrome, then we link it to the left vertex of the block with the left edge of that block (Figure 5).
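The \(1\)-dimensional decoder is simple enough to implement in a few lines. The following Python sketch tracks only the renormalised combined pattern \(\mathbf{e}_{i}=\mathbf{e}+\sum_{j>i}\hat{\mathbf{e}}_{j}\) on a circle of \(2^{i}\) edges (edge \(j\) joins vertex \(j\) to vertex \(j+1\)); decoding succeeds exactly when the final pattern \(\mathbf{e}_{0}\) is empty. The brute-force check at the end reproduces the value \(\omega_{3}=u_{3}-1=2\) stated in Proposition 5.2 below; the edge and vertex numbering is our own.

```python
from itertools import combinations

def reduce_once(e):
    """One 1-dimensional reduction stage.  e is a 0/1 list over the L edges of a
    circle; edge j joins vertex j to vertex j+1 (mod L).  Blocks are the pairs of
    edges (2b, 2b+1); vertex 2b+1 is the middle vertex of block b.
    Returns the renormalised combined pattern on the circle with L/2 edges."""
    L = len(e)
    syn = [e[(v - 1) % L] ^ e[v] for v in range(L)]       # syndrome vertices
    ehat = [0] * L
    for b in range(L // 2):                               # step 1: pair up the end
        if syn[2 * b + 1] and syn[(2 * b + 2) % L]:       # vertices of the right edge
            ehat[2 * b + 1] = 1
            syn[2 * b + 1] = syn[(2 * b + 2) % L] = 0
    for b in range(L // 2):                               # step 2: shift a remaining middle
        if syn[2 * b + 1]:                                # vertex to the block's left vertex
            ehat[2 * b] ^= 1                              # via the left edge
    combined = [x ^ y for x, y in zip(e, ehat)]           # e_i + ê_i
    return [combined[2 * b] for b in range(L // 2)]       # coarse edge b <-> fine edges 2b, 2b+1

def decoded_correctly(e):
    while len(e) > 1:
        e = reduce_once(e)
    return e[0] == 0                                      # e_0 empty <=> trivial cycle

# brute force over the first line of T_3 (L = 8 edges): the smallest weight of a
# wrongly decoded 1-dimensional pattern is u_3 = 3, i.e. omega_3 = 2
L = 8
min_fail = min(w for w in range(1, L + 1)
                 for pos in combinations(range(L), w)
                 if not decoded_correctly([1 if j in pos else 0 for j in range(L)]))
print(min_fail)   # 3
```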
If we restrict ourselves to this \(1\)-dimensional case, then one can determine the exact value of the error-correcting radius \(\omega\).
**Proposition 5.2**.: _Consider \((u_{k})_{k\in\mathbb{N}}\) the sequence defined by \(u_{k}=k\) for \(k\in\{1,2,3\}\) and \(u_{k}=2u_{k-2}\) for \(k\geq 4\). For \(k\in\mathbb{N}\), let \(\omega_{k}\) be the largest integer such that any 1-dimensional error pattern \(\mathbf{e}_{k}\) in the graph \(\mathbf{T}_{k}\) of weight \(\mathsf{wt}(\mathbf{e}_{k})\leq\omega_{k}\) is decoded correctly. Then \(\forall k\in\mathbb{N},\ \omega_{k}=u_{k}-1\)._
By Proposition 5.1, \(\forall\,k\in\mathbb{N},\ \omega_{k}\leq u_{k}-1\) so it remains to show that we have \(\forall\,k\in\mathbb{N},\ \omega_{k}\geq u_{k}-1\). To achieve this lower bound on the error-correcting radius, recall the notion of _preimage_ of an error pattern \(\mathbf{e}_{i}\) on the graph \(\mathbf{T}_{i}\). A preimage of \(\mathbf{e}_{i}\) is an error pattern \(\mathbf{e}_{i+1}\) on the graph \(\mathbf{T}_{i+1}\) such that one obtains \(\mathbf{e}_{i}\) from \(\mathbf{e}_{i+1}\) after one reduction stage of the decoding process. The main idea is then to show that the weight of any \(1\)-dimensional second preimage of a given \(1\)-dimensional error is twice as large as the weight of the error itself. This is proved in the following three lemmas.
**Lemma 5.3**.: _Let \(i\in[[0,k-1]]\) and let \(\mathbf{e}_{i}\) be a 1-dimensional error pattern on the graph \(\mathbf{T}_{i}\). Let \(P_{i}\) denote the number of paths of \(\mathbf{e}_{i}\) and let \(\mathbf{e}_{i+1}\) be a 1-dimensional preimage of \(\mathbf{e}_{i}\). Then \(\mathsf{wt}(\mathbf{e}_{i+1})\geq\mathsf{wt}(\mathbf{e}_{i})+P_{i}\)._
Proof.: Observe that in the \(1\)-dimensional case, during the reduction process, the decoder modifies the edges of a block of \(\mathbf{T}_{i+1}\) only if the central vertex of that block is in the error syndrome. This happens if exactly one of the two edges of that block is an error edge, i.e. an edge of \(\mathbf{e}_{i+1}\). It follows that if no error edge is present in a given block of \(\mathbf{T}_{i+1}\), then after one reduction stage the corresponding edge of \(\mathbf{T}_{i}\) is not an error edge, i.e. not an edge of \(\mathbf{e}_{i}\). Since \(\mathbf{e}_{i+1}\) is a preimage of \(\mathbf{e}_{i}\), we thus need at least one error edge of \(\mathbf{e}_{i+1}\) in each block defined by an error edge of \(\mathbf{e}_{i}\). Hence, we have that \(\mathsf{wt}(\mathbf{e}_{i+1})\geq\mathsf{wt}(\mathbf{e}_{i})\).
Note that in the \(1\)-dimensional study, an error pattern \(\mathbf{e}_{i}\) is either a non-trivial cycle or a disjoint union of paths. If \(\mathbf{e}_{i}\) is a non-trivial cycle, then \(P_{i}=0\) and so the desired inequality is satisfied since we just showed that \(\mathsf{wt}(\mathbf{e}_{i+1})\geq\mathsf{wt}(\mathbf{e}_{i})\). Let us now suppose that \(\mathbf{e}_{i}\) is a disjoint union of paths. If \(\mathbf{e}_{i}\) consists of more than one path, then two distinct paths are separated by at least two edges in \(\mathbf{T}_{i+1}\) and an error edge of \(\mathbf{e}_{i+1}\) located in between the two paths contributes to at most one of them. It follows that we can partition the error edges of \(\mathbf{e}_{i+1}\) according to which of the paths of \(\mathbf{e}_{i}\) they contribute to. We may thus without loss of generality assume that \(\mathbf{e}_{i}\) consists of a single path and prove that \(\mathsf{wt}(\mathbf{e}_{i+1})\geq\mathsf{wt}(\mathbf{e}_{i})+1\).
Note that if we put exactly one error edge of \(\mathbf{e}_{i+1}\) in each block defined by an error edge of \(\mathbf{e}_{i}\) and no error edges elsewhere, then \(\mathbf{e}_{i+1}\) is not a preimage of \(\mathbf{e}_{i}\). Indeed, consider the error edge of \(\mathbf{e}_{i}\) that has no error edge to its right. Note that this edge exists and is unique since \(\mathbf{e}_{i}\) is not a cycle by assumption. Suppose that the corresponding block of the graph \(\mathbf{T}_{i+1}\) only contains one error edge of \(\mathbf{e}_{i+1}\). If this is the right edge of the block, then it is removed since the edge is isolated, i.e. there is no error edge to its left or to its right. There is no error edge to its left because we supposed that there is only one error edge in the block, and there is no error edge to its right because we supposed that there are no error edges outside the blocks defined by the error edges of \(\mathbf{e}_{i}\). If the error edge of \(\mathbf{e}_{i+1}\) is the left edge of the block, then it is also removed since its right vertex is in the error syndrome and has not been paired up with another vertex. This is because we supposed that it is the only error edge in the block and that there are no error edges outside the blocks defined by the error edges of \(\mathbf{e}_{i}\). Hence, we cannot put only one error edge of \(\mathbf{e}_{i+1}\) in each block defined by an error edge of \(\mathbf{e}_{i}\) and no error edges elsewhere. This means that we need at least one more error edge and so \(\mathsf{wt}(\mathbf{e}_{i+1})\geq\mathsf{wt}(\mathbf{e}_{i})+1\).
**Lemma 5.4**.: _Let \(i\in[[0,k-1]]\) and let \(\mathbf{e}_{i}\) be a 1-dimensional error pattern on the graph \(\mathbf{T}_{i}\). Let \(\mathbf{e}_{i+1}\) be a 1-dimensional preimage of \(\mathbf{e}_{i}\) and let \(P_{i+1}\) denote the number of its paths. Then \(\mathsf{wt}(\mathbf{e}_{i+1})+P_{i+1}\geq\mathsf{wt}(\mathbf{e}_{i})\)._
Proof.: Note that an error edge of \(\mathbf{e}_{i+1}\) that is not in a block defined by an error edge of \(\mathbf{e}_{i}\) disappears after the reduction process. If such an error edge doesn't contribute to the error \(\mathbf{e}_{i}\), then it can simply be removed to decrease both \(\mathsf{wt}(\mathbf{e}_{i+1})\) and \(P_{i+1}\). Similarly, if such an error edge does contribute to \(\mathbf{e}_{i}\), then it can be moved inside a block defined by an error edge of \(\mathbf{e}_{i}\) to decrease \(P_{i+1}\). We may thus assume that every error edge of \(\mathbf{e}_{i+1}\) is in a block defined by an error edge of \(\mathbf{e}_{i}\).
To prove the desired inequality, let us construct \(\mathbf{e}_{i+1}\) from \(\mathbf{e}_{i}\) and try to minimise \(\mathsf{wt}(\mathbf{e}_{i+1})+P_{i+1}\). We start by supposing that both edges of each block defined by an error edge of \(\mathbf{e}_{i}\) are error edges of \(\mathbf{e}_{i+1}\). Then \(\mathsf{wt}(\mathbf{e}_{i+1})+P_{i+1}\geq\mathsf{wt}(\mathbf{e}_{i+1})=2 \mathsf{wt}(\mathbf{e}_{i})\). We now consider the two following cases where either the error pattern \(\mathbf{e}_{i}\) is a non-trivial cycle or a disjoint union of paths, and see what happens if we start removing error edges from \(\mathbf{e}_{i+1}\).
Case 1: \(\mathbf{e}_{i}\) is a non-trivial cycle. Since \(\mathbf{e}_{i}\) is a non-trivial cycle, \(\mathbf{e}_{i+1}\) is also a non-trivial cycle at the start of our construction. We thus have \(\mathsf{wt}(\mathbf{e}_{i+1})+P_{i+1}=\mathsf{wt}(\mathbf{e}_{i+1})=2\mathsf{ wt}(\mathbf{e}_{i})\). Let us now see what happens if we start removing error edges from \(\mathbf{e}_{i+1}\). First, remove any error edge of \(\mathbf{e}_{i+1}\). Then the weight of \(\mathbf{e}_{i+1}\) is reduced by one but the number of paths \(P_{i+1}\) is increased by one since \(\mathbf{e}_{i+1}\) is now a path. Hence, the sum \(\mathsf{wt}(\mathbf{e}_{i+1})+P_{i+1}\) is unchanged. Note that we cannot remove the rightmost or leftmost error edge of the new error \(\mathbf{e}_{i+1}\), i.e. the error edges neighbouring the error edge that was just removed. Indeed, if the removed edge was the left edge of a block, then we cannot remove the edge to its right since there needs to be at least one error edge in the given block. We can also not remove the edge to its left since \(\mathbf{e}_{i+1}\) would no longer be a preimage of \(\mathbf{e}_{i}\). Similarly, if the removed edge was the right edge of a block, then we cannot remove its neighbouring edges for the same reasons. Consider now removing an error edge of \(\mathbf{e}_{i+1}\) that is not an outer edge, i.e. an error edge that has an error edge to its left and to its right. If we remove one of these edges, then the weight of \(\mathbf{e}_{i+1}\) is reduced by one but the number of paths \(P_{i+1}\) is increased by one. Hence, the sum \(\mathsf{wt}(\mathbf{e}_{i+1})+P_{i+1}\) is again unchanged. We thus have \(\mathsf{wt}(\mathbf{e}_{i+1})+P_{i+1}=2\mathsf{wt}(\mathbf{e}_{i})\).
Case 2: \(\mathbf{e}_{i}\) is a disjoint union of paths. As we assumed that every error edge of \(\mathbf{e}_{i+1}\) is in a block defined by an error edge of \(\mathbf{e}_{i}\), we can partition the error edges of \(\mathbf{e}_{i+1}\) according to which of the paths of \(\mathbf{e}_{i}\) they contribute to. We may thus without loss of generality assume that \(\mathbf{e}_{i}\) consists of a single path. Then \(\mathbf{e}_{i+1}\) consists also of a single path at the start of our construction and \(\mathsf{wt}(\mathbf{e}_{i+1})+P_{i+1}=2\mathsf{wt}(\mathbf{e}_{i})+1\). Let us now see what happens if we start removing error edges from \(\mathbf{e}_{i+1}\). First, consider the rightmost error edge of \(\mathbf{e}_{i+1}\), i.e. the one that has no error edge to its right. Note that this error edge exists and is unique since \(\mathbf{e}_{i+1}\) is not a cycle by assumption. We cannot remove this error edge since otherwise \(\mathbf{e}_{i+1}\) is no longer a preimage of \(\mathbf{e}_{i}\). Consider now the leftmost error edge of \(\mathbf{e}_{i+1}\), i.e. the one that has no error edge to its left. If we remove this error edge, then the weight of \(\mathbf{e}_{i+1}\) is reduced by one and the number of paths \(P_{i+1}\) is unchanged. Hence, we have that \(\mathsf{wt}(\mathbf{e}_{i+1})+P_{i+1}=2\mathsf{wt}(\mathbf{e}_{i})\). We cannot remove the leftmost error edge of our new \(\mathbf{e}_{i+1}\) since recall that we need at least one error edge of \(\mathbf{e}_{i+1}\) in each block defined by an error edge of \(\mathbf{e}_{i}\). Finally, consider an error edge of \(\mathbf{e}_{i+1}\) that is not an outer edge, i.e. an edge that has an error edge to its left and to its right. If we remove one of these edges, then the weight of \(\mathbf{e}_{i+1}\) is reduced by one but the number of paths \(P_{i+1}\) is increased by one. Hence, the sum \(\mathsf{wt}(\mathbf{e}_{i+1})+P_{i+1}\) is unchanged. We thus have \(\mathsf{wt}(\mathbf{e}_{i+1})+P_{i+1}\geq 2\mathsf{wt}(\mathbf{e}_{i})\).
**Lemma 5.5**.: _Let \(i\in[[0,k-2]]\) and let \(\mathbf{e}_{i}\) be a 1-dimensional error pattern on the graph \(\mathbf{T}_{i}\). Let \(\mathbf{e}_{i+2}\) be a 1-dimensional second preimage of the error \(\mathbf{e}_{i}\). Then \(\mathsf{wt}(\mathbf{e}_{i+2})\geq 2\mathsf{wt}(\mathbf{e}_{i})\)._
Proof.: Consider the image of the error \(\mathbf{e}_{i+2}\) after one reduction stage. This image is a 1-dimensional error pattern which we denote by \(\mathbf{e}_{i+1}\). Then \(\mathbf{e}_{i+2}\) is clearly a preimage of \(\mathbf{e}_{i+1}\) so by Lemma 5.3, \(\mathsf{wt}(\mathbf{e}_{i+2})\geq\mathsf{wt}(\mathbf{e}_{i+1})+P_{i+1}\). Now \(\mathbf{e}_{i+1}\) is also a preimage of \(\mathbf{e}_{i}\) so by Lemma 5.4, \(\mathsf{wt}(\mathbf{e}_{i+1})+P_{i+1}\geq 2\mathsf{wt}(\mathbf{e}_{i})\). Combining the two inequalities, we obtain \(\mathsf{wt}(\mathbf{e}_{i+2})\geq 2\mathsf{wt}(\mathbf{e}_{i})\).
We now are able to prove Proposition 5.2 that states that \(\forall k\in\mathbb{N},\ \omega_{k}=u_{k}-1\), where \((u_{k})_{k\in\mathbb{N}}\) is the sequence defined by \(u_{k}=k\) for \(k\in\{1,2,3\}\) and \(u_{k}=2u_{k-2}\) for \(k\geq 4\).
Proof of Proposition 5.2.: By Proposition 5.1, \(\forall\,k\in\mathbb{N},\ \omega_{k}\leq u_{k}-1\) so it remains to show that we have \(\forall\,k\in\mathbb{N},\ \omega_{k}\geq u_{k}-1\). We thus need to show that if a 1-dimensional error pattern \(\mathbf{e}_{k}\) in the graph \(\mathbf{T}_{k}\) has weight \(\mathsf{wt}(\mathbf{e}_{k})<u_{k}\), then \(\mathbf{e}_{k}\) is decoded correctly. Equivalently, consider a 1-dimensional error pattern \(\mathbf{e}_{k}\) in the graph \(\mathbf{T}_{k}\) that is decoded incorrectly and let us show by strong induction that \(\mathsf{wt}(\mathbf{e}_{k})\geq u_{k}\).
By exhausting all the possible \(1\)-dimensional patterns, one sees that for \(k\in\{1,2,3\}\) the minimal weight of an error pattern in \(\mathbf{T}_{k}\) that is decoded incorrectly is equal to \(u_{k}\). Hence, for every \(k\in\{1,2,3\}\), \(\mathsf{wt}(\mathbf{e}_{k})\geq u_{k}\). For \(k\geq 4\), consider a wrongly decoded \(1\)-dimensional error \(\mathbf{e}_{k}\) and the \(1\)-dimensional pattern \(\mathbf{e}_{k-2}\) obtained after two reduction stages. Since \(\mathbf{e}_{k-2}\) is also a wrongly decoded \(1\)-dimensional error pattern, \(\mathsf{wt}(\mathbf{e}_{k-2})\geq u_{k-2}\). By Lemma 5.5, we thus have \(\mathsf{wt}(\mathbf{e}_{k})\geq 2\mathsf{wt}(\mathbf{e}_{k-2})\geq 2u_{k-2}=u_{k}\).
### Lower bound on the error-correcting radius
In the \(2\)-dimensional case, we are not able to determine the exact value of the error-correcting radius but we prove the following lower bound which scales as \(\frac{5}{6}d^{\log_{2}\frac{6}{5}}\sim\frac{5}{6}d^{0.263}\).
**Theorem 5.6**.: _Consider \((v_{k})_{k\in\mathbb{N}}\) the sequence defined by \(v_{k}=\left(\frac{6}{5}\right)^{k-1}\). For \(k\in\mathbb{N}\), let \(\omega_{k}\) be the largest integer such that any error \(\mathbf{e}_{k}\) of weight \(\mathsf{wt}(\mathbf{e}_{k})\leq\omega_{k}\) is decoded correctly. Then \(\forall k\in\mathbb{N},\ \omega_{k}\geq v_{k}-1\)._
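Since \(d=2^{k}\), the bound of Theorem 5.6 can be rewritten in terms of the minimum distance, which is the form announced in the introduction:

\[v_{k}=\left(\tfrac{6}{5}\right)^{k-1}=\tfrac{5}{6}\left(\tfrac{6}{5}\right)^{k}=\tfrac{5}{6}\,2^{k\log_{2}(6/5)}=\tfrac{5}{6}\,d^{\log_{2}(6/5)}\approx\tfrac{5}{6}\,d^{0.263}.\]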
In the \(1\)-dimensional case, we showed in Section 5.2 that the weight of any second preimage of an error is twice as large as the weight of the error itself. This technique doesn't extend directly to the \(2\)-dimensional case because there exist error patterns whose preimages have smaller weight. To avoid this issue, we introduce a notion of _reduced weight_ for the intermediate errors \(\mathbf{e}_{i}\).
**Reduced weight.** To define the reduced weight of an error pattern, we first partition the original error \(\mathbf{e}_{k}\) into a union of edge-disjoint paths, whose endpoints are syndrome vertices, and into a union of edge-disjoint cycles. There may be several ways to partition the error \(\mathbf{e}_{k}\) but we just choose any one of them. In the first reduction stage, the decoder adds the edges given by \(\hat{\mathbf{e}}_{k}\) to the current error. This modifies the paths and cycles of the combined error \(\mathbf{e}_{k-1}\). For example, certain paths may be joined together, some may have their endpoints shifted, and others can disappear or, equivalently, form trivial cycles. Regardless, the added edges given by \(\hat{\mathbf{e}}_{k}\) induce a partition of \(\mathbf{e}_{k-1}\) into edge-disjoint paths and cycles. The decoder therefore inductively defines a sequence of partitions of the intermediate errors \(\mathbf{e}_{i}\). We now can define the reduced weight of an error pattern \(\mathbf{e}_{i}\). For any path \(\mathbf{p}\) in the partition of \(\mathbf{e}_{i}\) into paths, the reduced weight \(\mathsf{wt}_{\mathsf{r}}(\mathbf{p})\) of \(\mathbf{p}\) is the weight of a shortest path connecting the endpoints of \(\mathbf{p}\). For any cycle \(\mathbf{c}\) in the partition of \(\mathbf{e}_{i}\) into cycles, the reduced weight \(\mathsf{wt}_{\mathsf{r}}(\mathbf{c})\) of \(\mathbf{c}\) is the weight of a shortest equivalent cycle. Hence, the reduced weight of a trivial cycle is \(0\) and the reduced weight of a non-trivial cycle is \(d=2^{i}\), the girth of the graph \(\mathbf{T}_{i}\). The reduced weight \(\mathsf{wt}_{\mathsf{r}}(\mathbf{e}_{i})\) of \(\mathbf{e}_{i}\) is then defined as the sum of the reduced weights of all the paths and cycles composing \(\mathbf{e}_{i}\).
This notion is still not enough, for it is difficult to control the growth of the reduced weight \(\mathsf{wt}_{\mathsf{r}}(\mathbf{e}_{i})\) for increasing indexes. But whenever the increase from \(\mathsf{wt}_{\mathsf{r}}(\mathbf{e}_{i})\) to \(\mathsf{wt}_{\mathsf{r}}(\mathbf{e}_{i+1})\) is too small, it is because the number of paths that make up \(\mathbf{e}_{i+1}\) has increased significantly. What we therefore do is track the growth of the combined quantity \(\mathsf{wt}_{\mathsf{r}}(\mathbf{e}_{i})+P_{i}\) where \(P_{i}\) denotes the number of paths in the partition of \(\mathbf{e}_{i}\). The core of the proof of Theorem 5.6 lies then in the following lemma.
**Lemma 5.7**.: _For \(i\in[[0,k-1]]\), let \(\mathbf{e}_{i}\) be an error pattern on the graph \(\mathbf{T}_{i}\) and denote by \(P_{i}\) the number of its paths. Let \(\mathbf{e}_{i+1}\) be a preimage of \(\mathbf{e}_{i}\) and let \(P_{i+1}\) denote the number of its paths. Then \(\mathsf{wt}_{\mathsf{r}}(\mathbf{e}_{i+1})+P_{i+1}\geq\frac{6}{5}(\mathsf{wt }_{\mathsf{r}}(\mathbf{e}_{i})+P_{i})\)._
Proof.: Let us choose a partition of \(\mathbf{e}_{i+1}\) into a union of edge-disjoint paths and into a union of edge-disjoint cycles and consider the induced partition of \(\mathbf{e}_{i}\). Note that an error edge of \(\mathbf{e}_{i+1}\) may contribute to only one path or cycle of \(\mathbf{e}_{i}\) so we can partition the error edges of \(\mathbf{e}_{i+1}\) according to which of the paths or cycles of \(\mathbf{e}_{i}\) they contribute to. We may thus without loss of generality assume that \(\mathbf{e}_{i}\) either consists of a single path or of a single cycle.
Case 1: \(\mathbf{e}_{i}\) is a cycle. If \(\mathbf{e}_{i}\) is a homologically trivial cycle, then \(\mathsf{wt}_{\mathsf{r}}(\mathbf{e}_{i})=P_{i}=0\) and so the inequality \(\mathsf{wt}_{\mathsf{r}}(\mathbf{e}_{i+1})+P_{i+1}\geq\frac{6}{5}(\mathsf{wt}_{\mathsf{r}}(\mathbf{e}_{i})+P_{i})\) is clearly satisfied. Let us now suppose that \(\mathbf{e}_{i}\) is a non-trivial cycle. We want to prove that \(\mathsf{wt}_{\mathsf{r}}(\mathbf{e}_{i+1})+P_{i+1}\geq\frac{6}{5}\mathsf{wt}_{\mathsf{r}}(\mathbf{e}_{i})\). Note that \(\mathsf{wt}_{\mathsf{r}}(\mathbf{e}_{i})\) is the weight of a minimal non-trivial cycle on the graph \(\mathbf{T}_{i}\) equivalent to \(\mathbf{e}_{i}\). Hence, the inequality is independent of the cycle. We may thus think of \(\mathbf{e}_{i}\) as any non-trivial cycle that loops the torus along one of its two principal directions. To prove the desired inequality, let us construct the error pattern \(\mathbf{e}_{i+1}\) in such a way that \(\mathsf{wt}_{\mathsf{r}}(\mathbf{e}_{i+1})+P_{i+1}\) is minimal. To do this, we start from any minimal non-trivial cycle \(\mathbf{c}\) and construct \(\mathbf{e}_{i+1}\) by reversing the decoding process. Hence, we start with \(\mathsf{wt}_{\mathsf{r}}(\mathbf{e}_{i+1})=2\mathsf{wt}(\mathbf{c})\) and \(P_{i+1}=0\). The situation is now similar to the \(1\)-dimensional case since the decoder may only remove single-edges of \(\mathbf{c}\) (steps 2 and 3 of the decoder) or move its endpoints by one edge (step 3 of the decoder). The same arguments as the ones found in the proof of Lemma 5.4 then show that \(\mathsf{wt}_{\mathsf{r}}(\mathbf{e}_{i+1})+P_{i+1}\geq 2\mathsf{wt}_{\mathsf{r}}(\mathbf{e}_{i})\) and so the desired inequality is satisfied.
Case 2: \(\mathbf{e}_{i}\) is a single path joining its two endpoints, say \(a\) and \(b\). Let us prove that the inequality \(\mathsf{wt}_{\mathsf{r}}(\mathbf{e}_{i+1})+P_{i+1}\geq\frac{6}{5}(\mathsf{wt}_{\mathsf{r}}(\mathbf{e}_{i})+1)\) is satisfied. Note that \(\mathsf{wt}_{\mathsf{r}}(\mathbf{e}_{i})\) is just the graph distance between the vertices \(a\) and \(b\) in the graph \(\mathbf{T}_{i}\), denoted \(\mathsf{d}_{\mathsf{T}}(a,b)\). Hence, the inequality is independent of the actual path, as long as it joins \(a\) to \(b\). We may thus think of \(\mathbf{e}_{i}\) as any path from \(a\) to \(b\). To prove the desired inequality, let us construct the error pattern \(\mathbf{e}_{i+1}\) in such a way that \(\mathsf{wt}_{\mathsf{r}}(\mathbf{e}_{i+1})+P_{i+1}\) is minimal. To do this, we start from any minimal path \(\mathbf{p}\) joining \(a\) to \(b\) and we construct \(\mathbf{e}_{i+1}\) by reversing the decoding process. Hence, we start with \(\mathsf{wt}_{\mathsf{r}}(\mathbf{e}_{i+1})=2\mathsf{d}_{\mathsf{T}}(a,b)\) and \(P_{i+1}=1\). The decoder may remove double-edges of \(\mathbf{p}\) (steps 1 and 3 of the decoder), remove single-edges of \(\mathbf{p}\) (steps 2 and 3 of the decoder) and move its endpoints by one or two edges (step 3 of the decoder). If we remove a single-edge, then \(\mathsf{wt}_{\mathsf{r}}(\mathbf{e}_{i+1})\) decreases by 1 but \(P_{i+1}\) increases by 1. Hence, removing a single-edge doesn't affect \(\mathsf{wt}_{\mathsf{r}}(\mathbf{e}_{i+1})+P_{i+1}\) and so we don't use this to construct \(\mathbf{e}_{i+1}\). If we remove a double-edge, then \(\mathsf{wt}_{\mathsf{r}}(\mathbf{e}_{i+1})\) decreases by 2 but \(P_{i+1}\) increases by only 1. Hence, removing a double-edge decreases the sum \(\mathsf{wt}_{\mathsf{r}}(\mathbf{e}_{i+1})+P_{i+1}\) by 1. As a result, we try to remove as many double-edges of \(\mathbf{p}\) as we can. Let \(\delta\) denote the number of double-edges we remove. Note that if we remove a double-edge from \(\mathbf{p}\), we should at least keep in \(\mathbf{e}_{i+1}\) the two edges it is connected to, otherwise we would have removed at least three connected edges. Moreover, it must be that between two removed double-edges, we should keep at least two edges in \(\mathbf{e}_{i+1}\). Indeed, consider a section of the path \(\mathbf{p}\) where we only keep one edge between two removed double-edges. Two examples of this situation are shown in Figure 8. It is easily checked that these double-edges cannot be parallel (where we identify a double-edge with the pair of its endpoints). This contradicts the minimality of the path \(\mathbf{p}\) (we assumed that \(\mathbf{p}\) is a minimal path joining \(a\) to \(b\)). We could of course construct \(\mathbf{e}_{i+1}\) from a locally modified version of \(\mathbf{p}\) to account for this but one can verify that this does not lower \(\mathsf{wt}_{\mathsf{r}}(\mathbf{e}_{i+1})+P_{i+1}\).
If we move the endpoints of \(\mathbf{p}\), then we can change the distance \(2\mathsf{d}_{\mathsf{T}}(a,b)\) by at most 2 since the instructions in \(A\) cells move the endpoints in the same direction. We can use this to reduce \(\mathsf{wt}_{\mathsf{r}}(\mathbf{e}_{i+1})\) but moving the endpoints may also affect the number \(\delta\) of double-edges we can remove. Let \(\mu\in\{-2,-1,0,1,2\}\) denote by how much we have modified \(2\mathsf{d}_{\mathsf{T}}(a,b)\). For each value of \(\mu\) we show that \(\mathsf{wt}_{\mathsf{r}}(\mathbf{e}_{i+1})+P_{i+1}=2\mathsf{d}_{\mathsf{T}}(a,b )+\mu+1-\delta\geq\frac{6}{5}(\mathsf{wt}_{\mathsf{r}}(\mathbf{e}_{i})+1)\).
Case \(\mu=2\): we have moved the endpoints of \(\mathbf{p}\) further apart such that \(\mathsf{wt}_{\mathsf{r}}(\mathbf{e}_{i+1})\) is increased by 2. Since we keep at least two edges in \(\mathbf{e}_{i+1}\) between two removed double-edges, \(4\delta\leq 2\mathsf{d}_{\mathsf{T}}(a,b)+2\). Hence,
\[\mathsf{wt}_{\mathsf{r}}(\mathbf{e}_{i+1})+P_{i+1}=2\mathsf{d}_{\mathsf{T}}(a, b)+2+1-\delta\geq\frac{3}{2}\mathsf{d}_{\mathsf{T}}(a,b)+\frac{5}{2}=\frac{3}{2} \mathsf{wt}_{\mathsf{r}}(\mathbf{e}_{i})+\frac{5}{2}\geq\frac{6}{5}(\mathsf{ wt}_{\mathsf{r}}(\mathbf{e}_{i})+1).\]
Case \(\mu=1\): we have moved the endpoints of \(\mathbf{p}\) further apart such that \(\mathsf{wt}_{\mathsf{r}}(\mathbf{e}_{i+1})\) is increased by 1. The same argument as above shows that \(4\delta\leq 2\mathsf{d}_{\mathsf{T}}(a,b)+1\) and so
\[\mathsf{wt}_{\mathsf{r}}(\mathbf{e}_{i+1})+P_{i+1}=2\mathsf{d}_{\mathsf{T}}(a, b)+1+1-\delta\geq\frac{3}{2}\mathsf{d}_{\mathsf{T}}(a,b)+\frac{7}{4}=\frac{3}{2} \mathsf{wt}_{\mathsf{r}}(\mathbf{e}_{i})+\frac{7}{4}\geq\frac{6}{5}(\mathsf{ wt}_{\mathsf{r}}(\mathbf{e}_{i})+1).\]
Case \(\mu=0\): the reduced weight \(\mathsf{wt}_{\mathsf{r}}(\mathbf{e}_{i+1})\) is unchanged. We have that \(4\delta\leq 2\mathsf{d}_{\mathsf{T}}(a,b)\) and so
\[\mathsf{wt}_{\mathsf{r}}(\mathbf{e}_{i+1})+P_{i+1}=2\mathsf{d}_{\mathsf{T}}(a, b)+1-\delta\geq\frac{3}{2}\mathsf{d}_{\mathsf{T}}(a,b)+1\geq\frac{5}{4}(\mathsf{ wt}_{\mathsf{r}}(\mathbf{e}_{i})+1)\geq\frac{6}{5}(\mathsf{wt}_{\mathsf{r}}( \mathbf{e}_{i})+1).\]
Indeed, the function \(\mathsf{wt}_{\mathsf{r}}(\mathbf{e}_{i})\mapsto(\frac{3}{2}\mathsf{wt}_{\mathsf{r}}(\mathbf{e}_{i})+1)(\mathsf{wt}_{\mathsf{r}}(\mathbf{e}_{i})+1)^{-1}\) is minimal for \(\mathsf{wt}_{\mathsf{r}}(\mathbf{e}_{i})=1\) and equals \(\frac{5}{4}\).
Case \(\mu=-1\): we have moved the endpoints of \(\mathbf{p}\) closer together such that \(\mathsf{wt}_{\mathsf{r}}(\mathbf{e}_{i+1})\) is decreased by 1. Note that this requires that \(\mathsf{wt}_{\mathsf{r}}(\mathbf{e}_{i})\) is at least 2. To achieve the desired bound, one has to be more careful when deriving an upper bound for \(\delta\). First, observe that by a parity argument, we have that \(4\delta\) is bounded by \(2\mathsf{d}_{\mathsf{T}}(a,b)-2\) instead of \(2\mathsf{d}_{\mathsf{T}}(a,b)-1\). Secondly, since \(\delta\) is a non-negative integer, we actually have \(\delta\leq\lfloor\frac{\mathsf{d}_{\mathsf{T}}(a,b)-1}{2}\rfloor\). Therefore, we obtain
Figure 8: Sections of paths where we only keep one edge between two removed double-edges (dashed edges). The removed double-edges are not parallel, resulting in non-minimal paths.
\[\mathsf{wt}_{\mathsf{r}}(\mathbf{e}_{i+1})+P_{i+1} =2\mathsf{d}_{\mathsf{T}}(a,b)-1+1-\delta\geq 2\mathsf{d}_{ \mathsf{T}}(a,b)-\lfloor\frac{\mathsf{d}_{\mathsf{T}}(a,b)-1}{2}\rfloor\] \[=2\mathsf{wt}_{\mathsf{r}}(\mathbf{e}_{i})-\lfloor\frac{\mathsf{ wt}_{\mathsf{r}}(\mathbf{e}_{i})-1}{2}\rfloor\geq\frac{5}{4}(\mathsf{wt}_{ \mathsf{r}}(\mathbf{e}_{i})+1)\geq\frac{6}{5}(\mathsf{wt}_{\mathsf{r}}( \mathbf{e}_{i})+1).\]
Indeed, \(\mathsf{wt}_{\mathsf{r}}(\mathbf{e}_{i})\mapsto(2\mathsf{wt}_{\mathsf{r}}( \mathbf{e}_{i})-\lfloor\frac{\mathsf{wt}_{\mathsf{r}}(\mathbf{e}_{i})-1}{2} \rfloor)(\mathsf{wt}_{\mathsf{r}}(\mathbf{e}_{i})+1)^{-1}\) is minimal for \(\mathsf{wt}_{\mathsf{r}}(\mathbf{e}_{i})=3\) and equals \(\frac{5}{4}\).
Case \(\mu=-2\): we have moved the endpoints of \(\mathbf{p}\) closer together such that \(\mathsf{wt}_{\mathsf{r}}(\mathbf{e}_{i+1})\) is decreased by \(2\). Note that this requires that \(\mathsf{wt}_{\mathsf{r}}(\mathbf{e}_{i})\) is at least \(3\). To achieve the desired bound, one should again be careful when bounding \(\delta\). To avoid edges that are going in opposite directions, we cannot remove a double-edge near one of the extremities of the path \(\mathbf{p}\). It follows that \(\delta\) is bounded above by \(\lfloor\frac{\mathsf{d}_{\mathsf{T}}(a,b)}{2}\rfloor-1\). Hence,
\[\mathsf{wt}_{\mathsf{r}}(\mathbf{e}_{i+1})+P_{i+1} =2\mathsf{d}_{\mathsf{T}}(a,b)-2+1-\delta\geq 2 \mathsf{d}_{\mathsf{T}}(a,b)-1-\lfloor\frac{\mathsf{d}_{\mathsf{T}}(a,b)}{2} \rfloor+1\] \[=2\mathsf{wt}_{\mathsf{r}}(\mathbf{e}_{i})-\lfloor\frac{\mathsf{ wt}_{\mathsf{r}}(\mathbf{e}_{i})}{2}\rfloor\geq\frac{6}{5}(\mathsf{wt}_{ \mathsf{r}}(\mathbf{e}_{i})+1).\]
Indeed, \(\mathsf{wt}_{\mathsf{r}}(\mathbf{e}_{i})\mapsto(2\mathsf{wt}_{\mathsf{r}}( \mathbf{e}_{i})-\lfloor\frac{\mathsf{wt}_{\mathsf{r}}(\mathbf{e}_{i})}{2} \rfloor)(\mathsf{wt}_{\mathsf{r}}(\mathbf{e}_{i})+1)^{-1}\) is minimal for \(\mathsf{wt}_{\mathsf{r}}(\mathbf{e}_{i})=4\) and equals \(\frac{6}{5}\).
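As an illustrative sanity check (not part of the proof), the following short Python snippet evaluates the bounding ratios used in the cases \(\mu=0\), \(\mu=-1\) and \(\mu=-2\) over a range of reduced weights and recovers the stated minima of \(\frac{5}{4}\), \(\frac{5}{4}\) and \(\frac{6}{5}\); the function names are ours.

```python
from math import floor

def case_mu_0(x):   # (3/2)x + 1, valid for wt_r >= 1
    return 1.5 * x + 1

def case_mu_m1(x):  # 2x - floor((x - 1)/2), valid for wt_r >= 2
    return 2 * x - floor((x - 1) / 2)

def case_mu_m2(x):  # 2x - floor(x/2), valid for wt_r >= 3
    return 2 * x - floor(x / 2)

# Ratio of each bound to (wt_r + 1); the cases claim minima of 5/4, 5/4 and 6/5.
for name, f, start in [("mu=0", case_mu_0, 1),
                       ("mu=-1", case_mu_m1, 2),
                       ("mu=-2", case_mu_m2, 3)]:
    ratios = [(x, f(x) / (x + 1)) for x in range(start, 50)]
    x_min, r_min = min(ratios, key=lambda pair: pair[1])
    print(f"{name}: minimal ratio {r_min:.4f} attained at wt_r = {x_min}")
```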
Note that the inequality in Lemma 5.7 cannot be improved upon since it is actually possible to find error patterns \(\mathbf{e}_{i}\) and their preimages such that equality is achieved. Two examples are shown in Figure 9.
We now prove that the error-correcting radius \(\omega\) is bounded below in terms of \(d^{\log_{2}(6/5)}\).
Proof of Theorem 5.6.: We need to show that if an error pattern \(\mathbf{e}_{k}\) in the graph \(\mathbf{T}_{k}\) is decoded incorrectly, then its weight satisfies \(\mathsf{wt}(\mathbf{e}_{k})\geq v_{k}\), where \((v_{k})_{k\in\mathbb{N}}\) is the sequence defined by \(v_{k}=\left(\frac{6}{5}\right)^{k-1}\).
By exhausting all the possible error patterns, one can show that for \(k=1\), the minimal weight of an error pattern in \(\mathbf{T}_{1}\) that is decoded incorrectly is equal to \(v_{1}\). Since \(\mathbf{e}_{1}\) is decoded incorrectly, we thus have that \(\mathsf{wt}(\mathbf{e}_{1})\geq v_{1}\). Suppose now that \(k\geq 2\). By Lemma 5.7, we have that \(\forall i\in[[1,k-1]]\), \(\mathsf{wt}_{\mathsf{r}}(\mathbf{e}_{i+1})+P_{i+1}\geq\frac{6}{5}(\mathsf{wt}_{\mathsf{r}}(\mathbf{e}_{i})+P_{i})\). Hence, by repeatedly applying Lemma 5.7, we obtain
\[\mathsf{wt}_{\mathsf{r}}(\mathbf{e}_{k})+P_{k}\geq\left(\frac{6}{5}\right)^{k -1}(\mathsf{wt}_{\mathsf{r}}(\mathbf{e}_{1})+P_{1})\geq\ 2\left(\frac{6}{5}\right)^{k-1}.\]
Since each path has at least one edge, it follows that the Hamming weight of the error \(\mathbf{e}_{k}\) is at least as large as its number of paths, \(\mathsf{wt}(\mathbf{e}_{k})\geq P_{k}\). Moreover, the Hamming weight of the error \(\mathbf{e}_{k}\) is greater than or equal to its reduced weight, \(\mathsf{wt}(\mathbf{e}_{k})\geq\mathsf{wt}_{\mathsf{r}}(\mathbf{e}_{k})\). Hence, we have that \(\mathsf{wt}(\mathbf{e}_{k})\geq\frac{\mathsf{wt}_{\mathsf{r}}(\mathbf{e}_{k})+P_{k}}{2}\geq\left(\frac{6}{5}\right)^{k-1}=v_{k}\).
## 6 Fractal-like errors for more general renormalisation decoders
The analysis of the error-correcting radius of the renormalisation decoder depends of course on the precise instructions defined in Section 3.2. We would like to stress that the existence of fractal-like wrongly decoded sublinear error patterns similar to that of Section 5.1 will be a feature of any deterministic renormalisation decoder and not just of our particular decoder choice. In this section, we consider more general deterministic and hard-decision renormalisation decoders and show they suffer from wrongly decoded error patterns of weight \(d^{\alpha}\) inside minimal non-trivial cycles.
Figure 9: Two error patterns (dashed and thick edges) and their preimages (thick edges) such that, in both cases, we have \(\mathsf{wt}_{\mathsf{r}}(\mathbf{e}_{i})+P_{i}=4+1=5\) and \(\mathsf{wt}_{\mathsf{r}}(\mathbf{e}_{i+1})+P_{i+1}=4+2=6\).
### Renormalisation decoders with larger blocks
A first approach to generalise the decoder we introduced in Section 3 is to subdivide the toric graphs into larger blocks. If \(b\geq 2\) denotes the size of the blocks of the code, then the code has length \(n=2b^{2k}\) and distance \(d=b^{k}\).
We could define new sets of instructions for these decoders, but to illustrate our point we only require the following natural property: if a single vertex syndrome is present inside a block, then the decoder shifts it to the closest corner vertex of the block.
We show the existence of fractal-like wrongly decoded errors for \(b=3\). The first 3 steps of a wrongly decoded error inside a minimal non-trivial cycle are shown in Figure 10. One constructs the following errors \(\mathbf{e}_{i}\) for \(i\geq 4\) by repeating the same method that is used to obtain \(\mathbf{e}_{3}\) from \(\mathbf{e}_{2}\) to each path of \(\mathbf{e}_{i-1}\). By computing the weight of the errors, we then have \(\mathsf{wt}(\mathbf{e}_{k})=2^{k}=d^{\log_{3}(2)}\simeq d^{0.63}\), where \(d=3^{k}\).
More generally, for a renormalisation decoder with blocks of size \(b\), there exists a fractal-like error pattern of weight \(\mathsf{wt}(\mathbf{e}_{k})=\lfloor\frac{b}{2}+1\rfloor^{k}=d^{\log_{b}(\lfloor\frac{b}{2}+1\rfloor)}\), where \(d=b^{k}\). Note that for large values of \(b\), the result isn't particularly meaningful since the only condition we imposed on the decoders is weak. Indeed, assuming that only a single vertex syndrome can be present in a block is restrictive, since it forces long chains of error edges.
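For concreteness, the sketch below evaluates this weight and the corresponding exponent for a few block sizes; the chosen values of \(b\) and \(k\) are arbitrary and only serve as an illustration.

```python
from math import floor, log

def fractal_error_weight(b: int, k: int) -> int:
    """Weight floor(b/2 + 1)**k of the fractal-like wrongly decoded error."""
    return floor(b / 2 + 1) ** k

def exponent(b: int) -> float:
    """Exponent alpha such that the weight scales as d**alpha with d = b**k."""
    return log(floor(b / 2 + 1), b)

for b in (3, 5, 7):
    k = 6
    d = b ** k
    print(f"b={b}: d={d}, weight={fractal_error_weight(b, k)}, alpha={exponent(b):.3f}")
```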
### Renormalisation decoders with message-passing
A second approach to generalise the decoder we introduced in Section 3 is to add message-passing. This means that the decoder will not only base its decisions on a block in the reduction procedure, but it will also consider the blocks around it. If \(m\) denotes the depth of the message-passing, then we make our decisions considering all the blocks in a radius \(m\).
We could define new sets of instructions to explain how the message-passing aids the decoders, but to illustrate our point we again only require the following natural property: if a single vertex syndrome is present inside a radius of \(m\) blocks, then the decoder shifts it to the closest corner vertex of the block.
Let us start by showing the existence of fractal-like wrongly decoded errors for \(m=2\). The first 5 steps of a wrongly decoded error pattern inside a minimal non-trivial cycle are shown in Figure 11. One constructs the following errors \(\mathbf{e}_{i}\) for \(i\geq 6\) by repeating the same method that is used to obtain \(\mathbf{e}_{4}\) from \(\mathbf{e}_{3}\) to each path of the error \(\mathbf{e}_{i-1}\) of weight 4, and the same method that is used to obtain \(\mathbf{e}_{3}\) from \(\mathbf{e}_{2}\) to each path of the error \(\mathbf{e}_{i-1}\) of weight 2. We note that the paths of length 2 have a preimage consisting of one path of length 4, and the paths of length 4 have a preimage consisting of one path of length 2 and one of length 4. By computing the weight of the errors, we then have, for \(k\geq 2\), \(\mathsf{wt}(\mathbf{e}_{k})=2F_{k}\), where \((F_{k})_{k\in\mathbb{N}}\) is the Fibonacci sequence. Since for \(k\geq 1\), \(F_{k}\leq\phi^{k-1}\), where \(\phi=\frac{1+\sqrt{5}}{2}\), we have that \(\mathsf{wt}(\mathbf{e}_{k})\leq\frac{2}{\phi}\phi^{k}=\frac{2}{\phi}d^{\log_{2}(\phi)}\simeq\frac{2}{\phi}d^{0.69}\).
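The weight bookkeeping described above is easy to reproduce numerically; the following sketch assumes the stated relation \(\mathsf{wt}(\mathbf{e}_{k})=2F_{k}\) with \(d=2^{k}\) and compares the weights against the bound \(\frac{2}{\phi}d^{\log_{2}(\phi)}\).

```python
from math import log

phi = (1 + 5 ** 0.5) / 2

def fib(k: int) -> int:
    """Fibonacci numbers with F_1 = F_2 = 1."""
    a, b = 1, 1
    for _ in range(k - 1):
        a, b = b, a + b
    return a

for k in range(2, 10):
    d = 2 ** k                            # code distance at level k
    weight = 2 * fib(k)                   # wt(e_k) = 2 F_k
    bound = (2 / phi) * d ** log(phi, 2)  # (2/phi) * d^{log_2(phi)}
    print(f"k={k}: d={d}, wt={weight}, bound={bound:.2f}")
```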
Figure 11: Construction of the wrongly decoded error pattern for a renormalisation decoder with message-passing of depth \(m=1\). For \(i\in[[0,5]]\), the error \(\mathbf{e}_{i}\) is shown in the first horizontal line of the graph \(\mathbf{T}_{i}\).
Figure 10: Construction of the wrongly decoded error pattern for a renormalisation decoder with block length \(b=3\). For \(i\in[[0,3]]\), the error \(\mathbf{e}_{i}\) is shown in the first horizontal line of the graph \(\mathbf{T}_{i}\).
More generally, it is possible to find fractal-like wrongly decoded error patterns of weight \(d^{\alpha}\), even though increasing the depth of the message-passing increases the exponent \(\alpha\).
## 7 Concluding comments
We conjecture that the error-correcting radius \(\omega\) scales as \(d^{1/2}\). However, the natural expectation that minimal wrongly decoded errors are purely 1-dimensional is false. It is possible to find 2-dimensional wrongly decoded error patterns that have smaller weight than the ones introduced in Section 5.1. Nevertheless, the weight of these error patterns still scales as \(d^{1/2}\). The situation is therefore not so clear-cut.
## Acknowledgements
We acknowledge support from the Plan France 2030 through the project NISQ2LSQ, ANR-22-PETQ-0006. GZ acknowledges support from the ANR through the project QUDATA, ANR-18-CE47-0010.
|
2309.14514 | Accurate and Interactive Visual-Inertial Sensor Calibration with
Next-Best-View and Next-Best-Trajectory Suggestion | Visual-Inertial (VI) sensors are popular in robotics, self-driving vehicles,
and augmented and virtual reality applications. In order to use them for any
computer vision or state-estimation task, a good calibration is essential.
However, collecting informative calibration data in order to render the
calibration parameters observable is not trivial for a non-expert. In this
work, we introduce a novel VI calibration pipeline that guides a non-expert
with the use of a graphical user interface and information theory in collecting
informative calibration data with Next-Best-View and Next-Best-Trajectory
suggestions to calibrate the intrinsics, extrinsics, and temporal misalignment
of a VI sensor. We show through experiments that our method is faster, more
accurate, and more consistent than state-of-the-art alternatives. Specifically,
we show how calibrations with our proposed method achieve higher accuracy
estimation results when used by state-of-the-art VI Odometry as well as VI-SLAM
approaches. The source code of our software can be found on:
https://github.com/chutsu/yac. | Christopher L. Choi, Binbin Xu, Stefan Leutenegger | 2023-09-25T20:22:16Z | http://arxiv.org/abs/2309.14514v1 | Accurate and Interactive Visual-Inertial Sensor Calibration with Next-Best-View and Next-Best-Trajectory Suggestion
###### Abstract
Visual-Inertial (VI) sensors are popular in robotics, self-driving vehicles, and augmented and virtual reality applications. In order to use them for any computer vision or state-estimation task, a good calibration is essential. However, collecting _informative_ calibration data in order to render the calibration parameters observable is not trivial for a non-expert. In this work, we introduce a novel VI calibration pipeline that guides a non-expert with the use of a graphical user interface and information theory in collecting _informative_ calibration data with Next-Best-View and Next-Best-Trajectory suggestions to calibrate the intrinsics, extrinsics, and temporal misalignment of a VI sensor. We show through experiments that our method is faster, more accurate, and more consistent than state-of-the-art alternatives. Specifically, we show how calibrations with our proposed method achieve higher accuracy estimation results when used by state-of-the-art VI Odometry as well as VI-SLAM approaches. The source code of our software can be found on: [https://github.com/chutsu/yac](https://github.com/chutsu/yac).
## I Introduction
In order to use Visual-Inertial (VI) sensors in computer vision or state-estimation tasks, the calibration parameters must first be obtained. Conventionally, VI sensors are calibrated by an expert who collects calibration data by positioning and moving the sensors in front of a calibration target such as a checkerboard or a grid of fiducial markers, and then uses an offline calibration tool such as Kalibr [1] to estimate the sensor calibration parameters. Good calibration results, however, may only be achieved if the right kind and the right amount of data is collected. More specifically, two practical issues arise during data capture: first, the choice of calibration views and the range of motions needed is not immediately clear to the non-expert. Second, the amount of data the user has to collect for calibration is also unclear, so users often collect too much or too little data. A common practice to address these issues is to collect _multiple_ calibration data sequences; however, this is impractical in the field, and identifying which calibration is optimal becomes a tedious and time-consuming task.
A straightforward solution to this problem would be to mount the VI sensor on a robot arm and perform a rehearsed or optimal calibration "dance", such as in [2]. However, this requires extra hardware and is not a practical solution for many applications. As an alternative to classic offline calibration methods, one can estimate the calibration parameters within a state-estimation framework such as OKVIS [3], VINS-MONO [4], and OpenVINS [5] in real-time. Note, however, that all of these frameworks require some form of sufficiently accurate initial calibration, as well as sufficient visual features and motion excitation, therefore suffering from similar issues as offline calibration. Furthermore, natural keypoints and the lack of precise knowledge of the corresponding 3D positions may not produce the best possible results.
In this work, we present an interactive VI sensor calibration pipeline that guides a non-expert in collecting _informative_ calibration data for a VI sensor _once_, through Next-Best-View (NBV) and Next-Best-Trajectory (NBT) suggestions (as shown in Fig. 1), in order to efficiently obtain sound calibrations. We show through extensive quantitative experiments on calibration sequences and several self-collected VICON real-world datasets that calibration parameters optimised through our system are more accurate and consistent than those obtained with Kalibr, by testing them with the state-of-the-art VI-SLAM system ORBSLAM3 [6]. In summary, our contributions are:
* A complete and open-sourced interactive VI-camera calibration tool that supports any number of cameras;
* An information-theoretic procedure to identify the most informative Next-Best-View (NBV) and Next-Best-Trajectory (NBT) among a pre-defined set of viewpoints and trajectory primitives;
* An interactive graphical user interface for guiding the user through the calibration data collection process;
* Through experiments we show that our proposed method is faster, more accurate and more reliable compared to state-of-the-art traditional _non-guided_ calibration methods, such as Kalibr [1], even when used by _novices_.
Fig. 1: Our system interactively suggests next-best-actions to collect calibration data.
## II Related Work
**Offline Methods**. In the robotics community, early works in VI-sensor calibration methods such as [7, 8, 9, 10] showed that it is possible to calibrate the extrinsics between a camera and IMU, with Kalibr [1] regarded as the current state-of-the-art tool. It is an offline method capable of calibrating a multi-camera system, as well as a VI system. However, the use of this tool requires expert knowledge, as the result is highly dependent on the quality of the calibration data captured. Therefore, the calibration process may in practice have to be repeated until desired results are reached.
**Online Methods**. State-of-the-art state-estimation frameworks such as OKVIS [3], VINS-MONO [4], and OpenVINS [5] can in practice estimate the calibration parameters in real-time. However, these frameworks require sufficiently accurate initial calibrations, as well as sufficient visual features and motion excitation in order to operate accurately.
**Reinforcement Learning Methods**: There has been a growing interest in using reinforcement learning for calibration [2, 11, 12], where the goal is to learn informative trajectories that render the VI-calibration parameters observable. However, the requirement of a robot arm to perform these motions is not always practical in the field. Further, these works do not provide quantitative results through a SLAM system to verify the optimality of the calibrated parameters.
**Information-Theoretic Methods**. The first calibration tool with an emphasis in guiding the user through capturing a good calibration sequence for a monocular camera is AprilCal [13]. The method used a quality metric to find and suggest the NBV in real-time during the camera calibration process. AprilCal, however, only supports calibrating the intrinsics of a single monocular camera.
A more recent work that uses an information-theoretic approach for VI sensor calibration is that of [14, 15], which proposed a segment-based method for calibrating a VI sensor system in AR/VR headset and self-driving car settings. The idea is to extract informative data during online state-estimation using an information-theoretic metric, and then perform a full-batch optimisation to update the calibration parameters offline. This approach, however, relies on the VI sensors being well calibrated initially. Moreover, the available data does not guarantee informative segments for calibration.
In this paper, we place heavy emphasis on collecting _informative_ calibration data by using an information-theoretic metric to find the NBV and NBT in real-time, and by _interactively_ guiding the user in collecting them in order to calibrate the intrinsics, extrinsics, and time shift of a VI sensor. This is in contrast to current state-of-the-art calibration tools such as Kalibr [1] that assume the collected calibration data has sufficient views and range of motion.
## III Notation
We employ the following notation throughout this work. Let \(\boldsymbol{\mathcal{F}}_{W}\) denote the world reference frame. A 3D point \(P\) in the world frame \(\boldsymbol{\mathcal{F}}_{W}\) with respect to the origin is written as a position vector \({}_{W}\mathbf{r}_{WP}\). A rigid body transformation from the body frame, \(\boldsymbol{\mathcal{F}}_{B}\), to the world frame, \(\boldsymbol{\mathcal{F}}_{W}\), is represented by a homogeneous transformation matrix, \(\mathbf{T}_{WB}\). Its rotation matrix component is written as, \(\mathbf{C}_{WB}\), and the corresponding Hamiltonian quaternion is written as, \(\mathbf{q}_{WB}=[\boldsymbol{\eta}^{T},\epsilon]^{T}\in\mathcal{S}^{3}\), where \(\epsilon\) and \(\boldsymbol{\eta}\) are the real and imaginary parts.
In general, the state vector we will be estimating lives on a manifold, and thus we define an operator \(\boxplus\) that will be used to perturb the states in tangent space such that \(\mathbf{x}=\bar{\mathbf{x}}\boxplus\delta\mathbf{x}\), where \(\bar{\mathbf{x}}\) is the state estimate and \(\delta\mathbf{x}\) is the local perturbation. Vector quantities such as positions, velocities and biases are updated via standard vector addition. Rotation components, on the other hand, such as a quaternion, are updated via a combination of the group operator \(\otimes\) (quaternion multiplication) and the exponential map \(\text{Exp}(\cdot)\), such that \(\mathbf{q}\boxplus\delta\boldsymbol{\alpha}=\text{Exp}(\delta\boldsymbol{\alpha})\otimes\mathbf{q}\). As a result, we will be using a minimal coordinate representation approach similar to [3]. A comprehensive introduction to differential calculus is beyond the scope of this paper; the reader is therefore encouraged to review [16, 17] for a more detailed treatment of the subject.
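As a concrete illustration of the \(\boxplus\) update on the rotational part of the state, the following Python sketch applies a small perturbation \(\delta\boldsymbol{\alpha}\) to a quaternion via the exponential map. It uses SciPy's rotation utilities, and the quaternion storage convention in the snippet is our own choice, not necessarily the one used internally by the calibration tool.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def boxplus(q_wxyz: np.ndarray, delta_alpha: np.ndarray) -> np.ndarray:
    """q boxplus delta_alpha = Exp(delta_alpha) (x) q, quaternions stored as [w, x, y, z]."""
    q = R.from_quat(np.roll(q_wxyz, -1))   # SciPy expects [x, y, z, w]
    dq = R.from_rotvec(delta_alpha)        # exponential map of the small rotation vector
    q_new = dq * q                         # left multiplication, i.e. Exp(delta_alpha) (x) q
    return np.roll(q_new.as_quat(), 1)     # back to [w, x, y, z]

# Example: perturb the identity orientation by 0.01 rad about the z-axis.
q0 = np.array([1.0, 0.0, 0.0, 0.0])
print(boxplus(q0, np.array([0.0, 0.0, 0.01])))
```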
## IV Background
In robotics, the maximum a posteriori (MAP) estimator is commonly used to solve the camera and VI calibration problem,
\[\hat{\mathbf{x}}=\operatorname*{argmax}_{\mathbf{x}}\;p(\mathbf{x}|\mathbf{z }), \tag{1}\]
where \(\mathbf{x}\) is the state vector, which may comprise poses, velocities, IMU biases and the calibration parameters we are interested in jointly estimating, given the measurements \(\mathbf{z}\).
\[\sum_{i}\mathbf{E}_{i}^{T}\mathbf{W}_{i}\mathbf{E}_{i}\;\Delta\mathbf{x}= \sum_{i}-\mathbf{E}_{i}^{T}\mathbf{W}_{i}\mathbf{e}_{i}(\mathbf{x}), \tag{2}\]
where \(\Delta\mathbf{x}\) is the update vector, \(\mathbf{e}_{i}(\mathbf{x})\) is the \(i^{\text{th}}\) error term evaluated at the current estimate \(\mathbf{x}\), \(\mathbf{E}_{i}\) is the Jacobian matrix of the error term and \(\mathbf{W}_{i}\) the measurement information.
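For intuition, a single Gauss-Newton step of the form in Eq. (2) can be sketched as follows; this is a generic, illustrative implementation in which `residual_fns`, `jacobian_fns` and `weights` stand in for the stacked error terms, their Jacobians and the corresponding information matrices.

```python
import numpy as np

def gauss_newton_step(x, residual_fns, jacobian_fns, weights):
    """One undamped Gauss-Newton update: solve sum_i E_i^T W_i E_i dx = -sum_i E_i^T W_i e_i."""
    n = x.shape[0]
    H = np.zeros((n, n))
    b = np.zeros(n)
    for e_fn, J_fn, W in zip(residual_fns, jacobian_fns, weights):
        e = e_fn(x)
        E = J_fn(x)
        H += E.T @ W @ E
        b += -E.T @ W @ e
    dx = np.linalg.solve(H, b)
    return x + dx   # plain vector update; on-manifold states would use the box-plus operator

# Toy usage: fit a scalar x to two measurements with equal weights.
z1, z2 = 1.0, 3.0
res = [lambda x: np.array([x[0] - z1]), lambda x: np.array([x[0] - z2])]
jac = [lambda x: np.array([[1.0]]), lambda x: np.array([[1.0]])]
W = [np.array([[1.0]]), np.array([[1.0]])]
print(gauss_newton_step(np.array([0.0]), res, jac, W))   # -> [2.0], the weighted mean
```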
At convergence of the optimisation, we may approximate the posterior distribution as a Gaussian with mean \(\mathbf{x}\) and find the covariance matrix \(\hat{\mathbf{\Sigma}}_{\mathbf{x}}\) by inverting the quantity \(\sum_{i}\mathbf{E}_{i}^{T}\mathbf{W}_{i}\mathbf{E}_{i}\), also known as the Fisher Information matrix. However, recall that the state vector \(\mathbf{x}\) not only contains calibration parameters \(\boldsymbol{\theta}\), but also other state variables not related to the calibration parameters which we denote as \(\boldsymbol{\gamma}\). In the context of calibration, we are only interested in the estimated calibration parameters, \(\boldsymbol{\theta}\), and the covariance of the calibration parameters \(\boldsymbol{\Sigma}_{\boldsymbol{\theta}\boldsymbol{\theta}}\). Expressing \(\mathbf{x}\) and \(\mathbf{\Sigma}_{\mathbf{x}}\) in partition form,
\[\mathbf{x}=\begin{bmatrix}\boldsymbol{\theta}\\ \boldsymbol{\gamma}\end{bmatrix},\;\;\mathbf{\Sigma}_{\mathbf{x}}=\begin{bmatrix} \mathbf{\Sigma}_{\boldsymbol{\theta}\boldsymbol{\theta}}&\mathbf{\Sigma}_{ \boldsymbol{\theta}\boldsymbol{\gamma}}\\ \mathbf{\Sigma}_{\boldsymbol{\gamma}\boldsymbol{\theta}}&\mathbf{\Sigma}_{ \boldsymbol{\gamma}\boldsymbol{\gamma}}\end{bmatrix}, \tag{3}\]
we can employ marginalisation on Normal distributions to get \(p(\boldsymbol{\theta}|\mathbf{z})=\mathcal{N}(\boldsymbol{\theta},\mathbf{ \Sigma}_{\boldsymbol{\theta}\boldsymbol{\theta}})\), by extracting the corresponding blocks in Eq. (3).
To objectively quantify whether the next VI measurements are informative for the VI calibration problem, we used the Mutual Information (MI) defined in [18],
\[I(\mathbf{\theta}_{1};\tilde{\mathbf{z}}_{2})=\frac{1}{2}\log\frac{|\mathbf{\Sigma}_{\mathbf{ \theta}_{1}\mathbf{\theta}_{1}}|}{|\mathbf{\Sigma}_{\mathbf{\theta}_{1}\mathbf{\theta}_{1}| \mathbf{z}_{2}}|}, \tag{4}\]
where \(\mathbf{\Sigma}_{\mathbf{\theta}_{1}\mathbf{\theta}_{1}}\) is the covariance estimate of \(\mathbf{\theta}\) using measurements \(\mathbf{z}_{1}\) alone, and \(\mathbf{\Sigma}_{\mathbf{\theta}_{1}\mathbf{\theta}_{1}|\mathbf{z}_{2}}\) is the covariance estimate of \(\mathbf{\theta}\) using measurements \(\mathbf{z}_{1}\) and \(\mathbf{z}_{2}\), finally \(|\cdot|\) is the matrix determinant. In summary, with Eq. (4) we can measure the amount of information \(\mathbf{z}_{2}\) (next VI-sensor measurements) conveys to our current estimate \(\mathbf{\theta}|\mathbf{z}_{1}\).
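Eq. (4) translates directly into code; the sketch below extracts the calibration block from a full covariance matrix as in Eq. (3) and evaluates the MI via log-determinants for numerical stability. The toy matrices are placeholders for the covariances obtained from the calibration problem.

```python
import numpy as np

def calib_covariance(sigma_x: np.ndarray, n_theta: int) -> np.ndarray:
    """Marginalise out the non-calibration states: keep the top-left block of Sigma_x."""
    return sigma_x[:n_theta, :n_theta]

def mutual_information(sigma_prior: np.ndarray, sigma_posterior: np.ndarray) -> float:
    """I = 0.5 * log(|Sigma_prior| / |Sigma_posterior|), evaluated with slogdet (natural log)."""
    _, logdet_prior = np.linalg.slogdet(sigma_prior)
    _, logdet_post = np.linalg.slogdet(sigma_posterior)
    return 0.5 * (logdet_prior - logdet_post)

# Toy example: 3 calibration parameters whose variance is halved by the new measurements.
sigma_x_prior = np.diag([4.0, 4.0, 4.0, 1.0, 1.0])   # calibration block followed by other states
sigma_x_post = np.diag([2.0, 2.0, 2.0, 1.0, 1.0])
prior = calib_covariance(sigma_x_prior, n_theta=3)
post = calib_covariance(sigma_x_post, n_theta=3)
print(mutual_information(prior, post))                # 1.5 * ln(2) ~ 1.04
```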
## V System Overview
An overview of our proposed calibration system is illustrated in Fig. 2. It consists of two stages. The first stage aims to perform vision-only camera intrinsics and extrinsics calibration employing Next-Best-View (NBV) feedback. In the second stage the camera-IMU extrinsics are found by using Next-Best-Trajectory (NBT) feedback, with the camera intrinsics and extrinsics obtained in the previous stage fixed. Both stages of the calibration process require the use of a static fiducial marker grid of known size as a calibration target. Specifically, we use a planar calibration target grid of AprilTags [13] introduced by Kalibr [1]. Throughout this work, the VI sensor to be calibrated is assumed to capture images and inertial measurements with the same clock source.
Footnote 1: The pipeline is demonstrated in detail in the supplementary video.
## VI Camera Intrinsics and Extrinsics Calibration
In the following, we detail our approach of using Mutual Information (MI) and Next-Best-View (NBV) to calibrate intrinsics and extrinsics of all cameras.
### _States_
For the camera calibration problem, the states to be estimated consist of the camera poses relative to the fiducial target coordinate frame \(\mathbf{\mathcal{F}}_{F}\) as \(\mathbf{x}_{FC_{1}}\), camera extrinsics relative to reference camera 1, \(\mathbf{x}_{C_{1}C_{i}}\), and camera intrinsics, \(\mathbf{x}_{C_{i}}\), of the form:
\[\begin{split}\mathbf{x}_{FC_{1}}&=\begin{bmatrix} _{F}\mathbf{r}_{FC_{1}}^{T}&\mathbf{q}_{FC_{1}}^{T}\end{bmatrix}^{T} \in\mathbb{R}^{3}\times\mathcal{S}^{3},\\ \mathbf{x}_{C_{1}C_{i}}&=\begin{bmatrix}_{C_{1}}\mathbf{r}_{C_{1}C_{i}}^{T}& \mathbf{q}_{C_{1}C_{i}}^{T}\end{bmatrix}^{T}\in\mathbb{R}^{3}\times\mathcal{ S}^{3},\\ \mathbf{x}_{C_{i}}&=\begin{bmatrix}f_{x}&f_{y}&c_{x}&c_{y}&k_{1}&k_{2}&p_{1}&p_{2} \end{bmatrix}^{T}\in\mathbb{R}^{8},\end{split} \tag{5}\]
where \(\mathbf{\mathcal{F}}_{C_{i}}\) denotes the coordinate frame of the \(i^{\text{th}}\) camera on the sensor assembly. We used the Radial-Tangential camera model consisting of focal lengths \(f_{x},f_{y}\), centre \(c_{x},c_{y}\), radial distortion parameters \(k_{1},k_{2}\), and tangential distortion parameters \(p_{1},p_{2}\) as the camera intrinsics. Note that any other projection model could be supported in principle. The full state vector for camera calibration thus becomes,
\[\mathbf{x}=\begin{bmatrix}\underbrace{\mathbf{x}_{FC_{1}}^{T,1}\ldots\mathbf{ x}_{FC_{1}}^{T,k}}_{\text{Reference Camera 1 Poses}}&\underbrace{\mathbf{x}_{C_{1}C_{1}}^{T}\ldots\mathbf{x}_{C_{1}C_{j}}^{T}}_{ \text{Camera Extrinsics}}&\underbrace{\mathbf{x}_{C_{1}}^{T}\ldots\mathbf{x}_{ C_{j}}^{T}}_{\text{Camera Intrinsics}}\end{bmatrix}^{T}. \tag{6}\]
### _Calibration Formulation_
To estimate the camera calibration parameters we used a nonlinear least squares framework to minimise the cost function, \(J_{\text{camera}}\), containing reprojection errors, \(\mathbf{e}_{r}\), and the information matrix of the respective camera measurement, \(\mathbf{W}_{r}\). The cost function has the form:
\[J_{\text{camera}}(\mathbf{x})=\ \frac{1}{2}\sum_{i=1}^{I}\sum_{k=1}^{K}\sum_{j \in\mathcal{J}(i,k)}\mathbf{e}_{r}^{i,j,k^{T}}\mathbf{W}_{r}^{i,j,k}\mathbf{e }_{r}^{i,j,k}, \tag{7}\]
where \(i\) is the camera index, \(k\) denotes the camera frame index, and \(j\) denotes the fiducial target corner index. Finally, \(\mathcal{J}(i,k)\) denotes the set of observable fiducial corner indices in the \(i^{\text{th}}\) camera index and \(k^{\text{th}}\) camera frame index.
Here, the standard reprojection error, \(\mathbf{e}_{r}\), was used:
\[\mathbf{e}_{r}^{i,j,k}=\tilde{\mathbf{z}}^{i,j,k}-\mathbf{h}_{i}(\mathbf{T}_{C_{i}C_{1}}\ \mathbf{T}_{FC_{1}}^{-1}\ {}_{F}\mathbf{r}_{FF_{j}},\ \mathbf{x}_{C_{i}}), \tag{8}\]
whereby \(\mathbf{h}_{i}(\cdot)\) denotes the camera projection and distortion model. It needs as an input the fiducial corner, \({}_{F}\mathbf{r}_{FF_{j}}\), camera pose, \(\mathbf{T}_{FC_{1}}\), camera extrinsics, \(\mathbf{T}_{C_{i}C_{1}}\), and camera intrinsics \(\mathbf{x}_{C_{i}}\). Lastly, \(\tilde{\mathbf{z}}^{i,j,k}\) is the observed fiducial corner measurement.
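To make the camera model and the residual of Eq. (8) concrete, the sketch below implements a standard Radial-Tangential (pinhole plus distortion) projection and the corresponding reprojection error for a single corner. The intrinsic values at the bottom are made-up placeholders, and the code is a generic illustration rather than the exact implementation of the tool.

```python
import numpy as np

def project_radtan(p_C: np.ndarray, intrinsics: np.ndarray) -> np.ndarray:
    """Project a 3D point in the camera frame with fx, fy, cx, cy, k1, k2, p1, p2."""
    fx, fy, cx, cy, k1, k2, p1, p2 = intrinsics
    x, y = p_C[0] / p_C[2], p_C[1] / p_C[2]
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    x_d = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    y_d = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return np.array([fx * x_d + cx, fy * y_d + cy])

def reprojection_error(z, T_CiC1, T_FC1, p_F, intrinsics):
    """e_r = z - h_i(T_CiC1 * T_FC1^{-1} * p_F), using 4x4 homogeneous transforms."""
    p_Ci = (T_CiC1 @ np.linalg.inv(T_FC1) @ np.append(p_F, 1.0))[:3]
    return z - project_radtan(p_Ci, intrinsics)

intr = np.array([460.0, 460.0, 320.0, 240.0, -0.28, 0.07, 1e-4, 1e-5])  # placeholder intrinsics
T_I = np.eye(4)
corner_F = np.array([0.1, -0.05, 1.0])
z_meas = project_radtan(corner_F, intr)
print(reprojection_error(z_meas, T_I, T_I, corner_F, intr))   # ~[0, 0] with identity transforms
```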
### _Real-time Estimation_
Since Eq. (7) will grow in complexity with every camera frame added, it cannot be solved in real-time as the problem size increases. We therefore adopted a fixed-lag sliding window scheme similar to [3], whereby the sliding window is bounded by marginalising out old camera poses \(\mathbf{x}_{FC_{1}}\) with the Schur Complement, leading to a respective linear prior that enters the cost. Note that this is only needed for the real-time feedback to the user, and we still solve the full batch problem offline for the final calibration solution.
Fig. 2: An overview of our VI calibration pipeline.
### _Camera Calibration With Next-Best-View_
In contrast to standard full-batch camera calibration, where the calibration data is first collected and the problem is then solved in a two-step process, our method takes a more integrated approach, whereby data collection and solving the calibration problem are performed incrementally, until the addition of new data is no longer informative to the camera calibration problem (see Fig. 3).
First, the camera intrinsics and extrinsics are initialised with the first \(N\) camera frames of a static fiducial marker of known size by minimising the cost function in Eq. (7). Once the camera parameters are initialised, the user is guided to maximise the calibration target measurement coverage over the image space. The information content of each camera view is evaluated using Eq. (4). Views which contain a MI score below the user-defined threshold, \(I_{\text{threshold}}\) (\(I_{\text{threshold}}=0.2\), same as in Kalibr [1]), are removed from the calibration problem. If, however, the new candidate views are not informative enough (no new views added to the calibration problem in the last 3 frames), the calibration tool enters into "Find Next-Best-View" mode where it evaluates a set of possible NBVs. Similar to [13], NBVs are pre-determined by an expert ahead of time in order to reduce the search space and make the computation feasible in real-time (see Fig. 4). Using Eq. (4), the NBV is the one that has the highest mutual information. Once the NBV is determined, the calibration tool will guide the user to the NBV interactively through the graphical user-interface in capturing that view. If the mutual information of the NBV is found to be below \(I_{\text{threshold}}\) the calibration tool stops capturing further measurements and proceeds to performing a final full batch optimisation to estimate the final calibration parameters.
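Schematically, the NBV selection step reduces to scoring each candidate view with Eq. (4) and accepting the best one only if it clears the information threshold; in the sketch below, `mi_of_view` is a placeholder callback for the actual MI evaluation, and the candidate names and scores are made up.

```python
def select_next_best_view(candidate_views, mi_of_view, mi_threshold=0.2):
    """Return (best_view, mi) if the most informative candidate exceeds the threshold, else (None, mi)."""
    scored = [(view, mi_of_view(view)) for view in candidate_views]
    best_view, best_mi = max(scored, key=lambda pair: pair[1])
    if best_mi < mi_threshold:
        return None, best_mi   # stop capturing and run the final full-batch calibration
    return best_view, best_mi

# Toy usage with made-up scores.
views = ["left-close", "right-close", "top-far"]
scores = {"left-close": 0.15, "right-close": 0.42, "top-far": 0.08}
print(select_next_best_view(views, scores.get))   # ('right-close', 0.42)
```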
## VII Camera-IMU Extrinsics Calibration
Once the camera intrinsics and extrinsics are known (from Sec. VI), we proceed to, without loss of generality, calibrate the extrinsics between the reference camera 1 and IMU, \(\mathbf{T}_{SC_{1}}\), and camera-IMU delay, \(t_{d}\), of a VI-sensor.
### _States_
The variables to be estimated are the VI sensor pose at discrete camera frame index \(k\), \(\mathbf{x}_{WS}^{k}\), fiducial target pose in the inertial frame \(\mathbf{x}_{WF}\), extrinsics between reference camera 1 and IMU \(\mathbf{x}_{SC_{1}}\), and camera-IMU time delay \(x_{d}\):
\[\begin{split}\mathbf{x}_{WS}&=\left[{}_{W}\mathbf{ r}_{WS}^{T}\ \mathbf{q}_{WS}^{T}\ \mathbf{w}_{WS}^{T}\ \mathbf{b}_{g}^{T}\ \mathbf{b}_{a}^{T}\right]^{T}\in\mathbb{R}^{3}\times \mathcal{S}^{3}\times\mathbb{R}^{9},\\ \mathbf{x}_{WF}&=\left[{}_{W}\mathbf{r}_{WF}^{T}\ \ \mathbf{q}_{WF}^{T}\right]^{T}\in\mathbb{R}^{3}\times\mathcal{S}^{3},\\ \mathbf{x}_{SC_{1}}&=\left[{}_{S}\mathbf{r}_{SC_{1}}^ {T}\ \ \mathbf{q}_{SC_{1}}^{T}\right]^{T}\in\mathbb{R}^{3}\times\mathcal{S}^{3},\\ x_{d}&=t_{d}\in\mathbb{R},\end{split} \tag{9}\]
where the state vector \(\mathbf{x}_{WS}\) holds the VI sensor position in the inertial frame \({}_{W}\mathbf{r}_{WS}\), the body orientation represented by a quaternion \(\mathbf{q}_{WS}\), the velocity expressed in the sensor frame \(\mathbf{v}_{WS}\), as well as the gyroscope and accelerometer biases \(\mathbf{b}_{g}\) and \(\mathbf{b}_{a}\). The state vectors \(\mathbf{x}_{SC_{1}}\) and \(\mathbf{x}_{WF}\) hold the sensor-camera relative pose and fiducial pose, respectively. The full state vector for camera-IMU calibration thus becomes,
\[\mathbf{x}=\left[\underbrace{\mathbf{x}_{WS_{1}}^{T,1}\dots\mathbf{x}_{WS_{1} }^{T,k}}_{\begin{subarray}{c}\text{Sensor}\\ \text{Poses}\end{subarray}}\underbrace{\mathbf{x}_{WF}^{T}}_{\begin{subarray}{ c}\text{Fiducial}\\ \text{Pose}\end{subarray}}\underbrace{\mathbf{x}_{SC_{1}}^{T}}_{\begin{subarray}{ c}\text{Camera-IMU}\\ \text{Extrinsics}\end{subarray}}\underbrace{x_{d}}_{\begin{subarray}{c}\text{Camera-IMU}\\ \text{Time-Delay}\end{subarray}}\right]^{T}. \tag{10}\]
Fig. 4: NBV candidate poses in-front of the calibration target
Fig. 5: Camera-IMU Calibration Pipeline
Fig. 3: Camera Calibration Pipeline
### _Calibration Formulation_
Similar to Sec. VI, we seek to formulate the VI calibration problem as one joint nonlinear-optimisation of a cost function \(J_{\text{imu-cam}}(\mathbf{x})\) containing both (weighted) reprojection errors \(\mathbf{e}_{r}\) and (weighted) temporal error term from the IMU \(\mathbf{e}_{s}\):
\[J_{\text{imu-cam}}(\mathbf{x})= \underbrace{\frac{1}{2}\sum_{i=1}^{I}\sum_{k=1}^{K}\sum_{j\in \mathcal{J}(i,k)}\mathbf{e}_{r}^{i,j,k^{T}}\mathbf{W}_{r}^{i,j,k}\mathbf{e}_{r }^{i,j,k}} \tag{11}\] \[+\underbrace{\frac{1}{2}\sum_{k=1}^{K-1}\mathbf{e}_{s}^{k^{T}} \mathbf{W}_{s}^{k}\mathbf{e}_{s}^{k}}_{\text{inertial}},\]
where \(i\) is the camera index of the VI sensor, \(k\) denotes the camera frame index, and \(j\) denotes the fiducial target corner index. The set \(\mathcal{J}(i,k)\) represents the indices of fiducial target corners observed in the _k_th frame and the _i_th camera.
The reprojection error was used to estimate the camera-IMU extrinsics \(\mathbf{T}_{SC_{1}}\), sensor pose in the world frame \(\mathbf{T}_{WS}\) and fiducial target in the world frame \(\mathbf{T}_{WF}\):
\[\mathbf{e}_{r}=\tilde{\mathbf{z}}^{i,j,k}-\mathbf{h}_{i}(\mathbf{T}_{C_{1}C_{i}}^{-1}\mathbf{T}_{SC_{1}}^{-1}\mathbf{T}_{SW}^{k}\mathbf{T}_{WF}\ {}_{F}\mathbf{r}_{FF_{j}},\;\mathbf{x}_{C_{i}}), \tag{12}\]
where \(\mathbf{h}_{i}(\cdot)\) denotes the _i_th camera projection model which includes distortion, \({}_{F}\mathbf{r}_{FF_{j}}\) denotes the _j_th fiducial target corner point and \(\tilde{\mathbf{z}}^{i,j,k}\) denotes the corresponding measurement seen in camera \(i\) and image frame \(k\) in image coordinates. The camera-intrinsics \(\mathbf{x}_{C_{i}}\) and camera-extrinsics \(\mathbf{T}_{C_{1}C_{i}}\) estimated in Sec. VI are fixed.
The fiducial target pose in the world frame \(\mathbf{T}_{WF}\) is first initialised using initial measurements from the IMU and camera assuming low acceleration, where the measured acceleration vector corresponds to the (inverse) acceleration due to gravity, yielding the camera pose \(\mathbf{T}_{WC_{i}}\). Without loss of generality, we set the camera position and yaw around the world-z axis to zero. Next, the relative pose between the fiducial target and the _i_th camera, \(\mathbf{T}_{FC_{i}}\), is computed with fiducial corner measurements using 3D-2D RANSAC and bundle adjustment, after which we can compose \(\mathbf{T}_{WF}=\mathbf{T}_{WC_{i}}\mathbf{T}_{C_{i}F}\).
For the IMU error term, we adopted the pre-integration scheme in [19], where the error is the difference between the predicted relative state and the actual relative state, with the exception of orientation, where a simple multiplicative minimal error was used:
\[\mathbf{e}_{S}^{k}(\mathbf{x}_{S}^{k},\mathbf{x}_{S}^{k+1},\tilde{\mathbf{z}}_{S}^{k})=\begin{bmatrix}{}_{W}\hat{\mathbf{r}}_{WS}^{k+1}(t_{d})-{}_{W}\mathbf{r}_{WS}^{k+1}\\ 2\left[\hat{\mathbf{q}}_{WS}^{k+1}(t_{d})\otimes\left(\mathbf{q}_{WS}^{k+1}\right)^{-1}\right]_{1:3}\\ \hat{\mathbf{v}}_{WS}^{k+1}(t_{d})-\mathbf{v}_{WS}^{k+1}\\ \mathbf{b}_{g}^{k}-\mathbf{b}_{g}^{k+1}\\ \mathbf{b}_{a}^{k}-\mathbf{b}_{a}^{k+1}\end{bmatrix}\in\mathbb{R}^{15}. \tag{13}\]
In addition to estimating the relative state, we further include the camera-IMU time delay scalar \(t_{d}\). Since it is only a one-dimensional parameter, the \(15\times 1\) Jacobian was obtained through central finite differences by perturbing the IMU timestamps.
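A central-difference Jacobian with respect to the time delay can be sketched as follows; `imu_error` is a placeholder for the actual preintegration-based residual, and the step size is an assumed value.

```python
import numpy as np

def time_delay_jacobian(imu_error, t_d: float, h: float = 1e-4) -> np.ndarray:
    """15x1 Jacobian d e_S / d t_d via central finite differences: (e(t_d + h) - e(t_d - h)) / (2h)."""
    return (imu_error(t_d + h) - imu_error(t_d - h)) / (2.0 * h)

# Toy residual whose dependence on t_d is linear, so the finite-difference Jacobian is exact.
slope = np.linspace(0.1, 1.5, 15)
toy_error = lambda t_d: slope * t_d
print(time_delay_jacobian(toy_error, t_d=0.003))   # ~ slope
```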
### _Real-time Estimation_
To keep the problem in Eq. (11) bounded for real-time operation, we used the same approach as in Sec. VI-C and adopted a fixed-lag sliding window scheme, marginalising out old sensor poses \(\mathbf{T}_{WS}\), velocities \({}_{W}\mathbf{v}_{WS}\), accelerometer biases \(\mathbf{b_{a}}\) and gyroscope biases \(\mathbf{b}_{g}\). A full batch optimisation using all measurements will be performed to obtain the final calibration solution. The camera-IMU time delay parameter is fixed during online guidance, and estimated in the final full batch optimisation.
### _Next-Best-Trajectories_
Similar to [12], given that our goal is to provide intuitive, easy and real-time feedback for a non-expert user to calibrate the VI-sensor, we discretized the continuous search space and used the results of [20] to design 6 non-degenerate NBTs that are computationally feasible in real-time and easy to display and to follow by the user (see Fig. 6). Our NBTs are observable, as the Fisher information matrix has to be invertible in order to evaluate the information gain [12].
Inspired by the Lissajous curve equations, each NBT is parameterised as:
\[\begin{split} x&=w_{\text{traj}}\sin(at+\delta)+0.5w_{ \text{calib}},\\ y&=h_{\text{traj}}\cos(bt)+0.5h_{\text{calib}},\\ z&=\sqrt{d_{\text{nbt}}-x^{2}-y^{2}},\end{split} \tag{14}\]
where \(x\), \(y\) and \(z\) are the trajectory positions relative to the fiducial target frame \(\boldsymbol{\mathcal{F}}_{F}\) to form \({}_{F}\mathbf{r}_{FS}\), \(d_{\text{nbt}}\) is the distance away from the fiducial target center, \(w_{\text{traj}}\) and \(h_{\text{traj}}\) are the trajectory max width and height, \(w_{\text{calib}}\) and \(h_{\text{calib}}\) are the fiducial target width and height, \(\delta\) represents the phase angle offset, and finally \(a\) and \(b\) are constants that determine the shape of the trajectory (e.g. a ratio of \(\frac{a}{b}=2\) forms a figure of 8). Finally, the sensor's orientation are parameterised as Euler angles and designed such that it is always pointing towards the center of the calibration target:
\[\begin{split}\phi&=\phi_{\text{bound}}\sin(2\pi t)+\pi, \\ \theta&=\theta_{\text{bound}}\sin(2\pi t),\\ \psi&=0.0,\end{split} \tag{15}\]
where \(\phi\), \(\theta\) and \(\psi\) are Euler angles around the x, y and z-axis to form \(\mathbf{C}_{FS}\), respectively, and \(\phi_{\text{bound}}\) and \(\theta_{\text{bound}}\) are the maximum rotation around x and y-axis, respectively.
To ensure the velocity and angular velocity are realistic, we parameterise \(t\) in Eq. (14) and Eq. (15) as a function of \(k\in[0,t_{\text{nbt}}]\) such that the first derivatives of both equations (the velocity and angular velocity) start and end at 0, \(t(k)=\sin^{2}\left(\frac{\pi k}{2\,t_{\text{nbt}}}\right)\), where \(t_{\text{nbt}}\) is the time to complete an NBT. Differentiating both Eq. (14) and Eq. (15) enables us to simulate the camera and IMU measurements for evaluating NBTs using Eq. (4).
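The trajectory parameterisation of Eq. (14), together with the time-warping of \(t\), can be generated with a few lines of code; the amplitudes, target size and distance used below are placeholder values rather than the ones used in the experiments.

```python
import numpy as np

def nbt_positions(a, b, delta, n_samples=200, t_nbt=3.0,
                  w_traj=0.3, h_traj=0.3, w_calib=0.6, h_calib=0.6, d_nbt=1.2):
    """Sample positions of one NBT candidate in the fiducial target frame F."""
    k = np.linspace(0.0, t_nbt, n_samples)
    t = np.sin(np.pi * k / (2.0 * t_nbt)) ** 2          # warped time: zero start/end velocity
    x = w_traj * np.sin(a * t + delta) + 0.5 * w_calib
    y = h_traj * np.cos(b * t) + 0.5 * h_calib
    z = np.sqrt(np.maximum(d_nbt - x ** 2 - y ** 2, 0.0))
    return np.stack([x, y, z], axis=1)

# A figure-of-eight-like candidate (ratio a/b = 2).
traj = nbt_positions(a=2.0 * np.pi, b=np.pi, delta=0.0)
print(traj.shape, traj[0], traj[-1])
```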
### _Camera-IMU Calibration With Next-Best-Trajectory_
The Camera-IMU calibration begins with two separate processes running in parallel, a real-time VI estimator solving Eq. (11) and an NBT evaluator (see Fig. 5). The real-time
VI camera parameters are initialised using the parameters optimised in Section VI and are fixed throughout. The fiducial pose \(\mathbf{T}_{WF}\) and the camera-IMU extrinsics \(\mathbf{T}_{SC_{1}}\), on the other hand, are initialised by solving Eq. (11) with the first \(N\) camera frames and the IMU measurements between the first and last camera frame timestamps.
As the real-time VI estimator is solving the camera-IMU calibration problem, it periodically sends the calibration problem data to the NBT evaluator process. The NBT evaluator in turn uses the data to evaluate the MI of a set of pre-defined NBTs (Section VII-D) using Eq. (4), find the NBT with the highest MI and guide the user in executing the NBT in order to render the calibration parameters optimally observable, i.e. reducing the expected uncertainty on the estimated camera-IMU extrinsics. If none of the candidate NBTs satisfies \(I_{\text{mutual}}>I_{\text{threshold}}\) (\(I_{\text{threshold}}=0.2\)), then the NBT evaluator sends a "finish" message to communicate to the real-time VI process that it should proceed to perform a final full-batch calibration.
## VIII Experiments
To evaluate our method, we conducted two sets of experiments. First, we evaluated our calibration pipeline in offline mode with the EuRoC [21] dataset to verify that our calibration _accuracy_ is competitive with that of Kalibr _without the interactive component_ of our system, despite the different approaches to solving the camera-IMU calibration problem: Kalibr uses a continuous-time full-batch optimisation, in contrast to our method, which uses a discrete-time full-batch optimisation. This is _independent_ of our contributions regarding interactivity. With this we wanted to highlight that our calibration tool without interactivity is at least as good as Kalibr.
Since the main motivation in this work is to provide non-experts with good calibration results for VIO/VI-SLAM systems, we further conducted experiments involving a small batch of graduate students to prove that our system can _efficiently_ and _reliably_ calibrate VI sensors, achieving superior performance for existing VIO and VI-SLAM systems. To compare the calibrations we used them in ORBSLAM3 [6] and evaluated the accuracy using the evaluation scheme of [22] with RMSE Absolute Trajectory Error (ATE) by aligning the estimated trajectory with the ground-truth.
All experiments were conducted on a Lenovo P52 Thinkpad laptop containing an Intel Core i7-8750H CPU at 2.2 Ghz with 16GB of memory running Ubuntu 20.04 and ROS Melodic. The experiments with the graduate students were conducted with the aim of calibrating an Intel RealSense D435i which contains a stereo IR global shutter depth sensor, a monocular RGB rolling shutter sensor, and additionally an IMU sensor running at 15Hz, 15Hz and 400Hz respectively. For our purposes, we _do not_ use the RGB rolling shutter sensor. We have instead disabled the IR projector and used the stereo IR global shutter depth sensors as a standard gray-scale stereo camera.
During the camera and VI calibrations the default settings for Kalibr [1] were used to generate their results, whereas in our method we used Cauchy loss (\(s=1.5\)) on the reprojection errors and a fixed-lag smoothing window size of 10 and 3 for the camera calibration and camera-IMU calibration stages respectively. The IMU parameters used for the camera-IMU calibration are: \(\sigma_{a}=2.52\times 10^{-2}\frac{\mathrm{m}}{\mathrm{s}^{2}}\frac{1}{ \sqrt{\mathrm{Hz}}}\) for the accelerometer noise density, \(\sigma_{ba}=4.41\times 10^{-3}\frac{\mathrm{m}}{\mathrm{s}^{2}}\frac{1}{ \sqrt{\mathrm{Hz}}}\) for the accelerometer drift noise density, \(\sigma_{g}=2.78\times 10^{-3}\frac{\mathrm{rad}}{\mathrm{s}}\frac{1}{ \sqrt{\mathrm{Hz}}}\) for gyroscope noise density, and \(\sigma_{bg}=1.65\times 10^{-5}\frac{\mathrm{rad}}{\mathrm{s}^{2}}\frac{1}{ \sqrt{\mathrm{Hz}}}\) gyroscope drift noise density.
### _Calibration Results on EuRoC Dataset_
To assess our approach in offline mode, we used the calibration sequences from the EuRoC dataset [21] to calibrate the VI-sensor. The calibration process is split into two stages. First, the camera intrinsics and camera extrinsics are estimated. Then in the second stage, only the camera-IMU extrinsics and time-delay are estimated, with the camera intrinsics and camera extrinsics estimated in the first phase fixed.
The results show comparable calibration reprojection errors, where in the camera calibration stage our method obtained an RMSE reprojection error of \(0.6042\) pixels compared to Kalibr's \(0.6087\) pixels, and in the camera-IMU calibration stage the RMSE reprojection errors are \(0.5569\) pixels and \(0.5775\) pixels for our method and Kalibr respectively. Fig. 7 and Fig. 8 report RMSE ATE after running ORBSLAM3 [6] on the EuRoC dataset sequences 10 times
Fig. 6: Next-Best-Trajectory (NBT) candidates in-front of a calibration target
in Stereo-VO mode and Stereo-VIO mode, respectively. We did not change the ORB-SLAM3 EuRoC configuration that was originally tuned for Kalibr calibration parameters. Both figures show that the calibrations produced by our method yielded better results on most sequences in Stereo-VO mode and all sequences in VIO mode, compared to Kalibr.
To verify our camera-IMU time delay estimation, we assumed the EuRoC dataset [21] has a camera-IMU time delay of \(\approx 7\mu s\), as reported in [23], and perturbed the imu_april IMU timestamps with 100ms, 10ms and 1ms time offsets. With our offline camera-IMU calibration _without interactivity_ we were able to recover the time offsets \(100ms\), \(9.97ms\) and \(0.987ms\) respectively, thus showing that our offline camera-IMU calibrator is capable of accurately estimating the camera-IMU time delay.
### _Trials with Graduate Students_
To evaluate our calibration method, we conducted a series of tests involving 16 graduate students to measure the effectiveness of our approach compared to the state-of-the-art calibrator, Kalibr [1]. Our test-subjects were postgraduate students at Imperial College London. Of the 16 students, 4 reported some previous experience with camera calibration, and only 2 reported some previous experience with camera-IMU calibration. Each participant was asked to calibrate the same Intel RealSense D435i sensor by first collecting two calibration sequences for Kalibr (one for camera calibration and the second for camera-IMU calibration), and then another two with our calibration method.
Because we do not have ground truth for the calibration parameters, we evaluated the estimated calibration parameters by applying them in ORBSLAM3 [6] running in odometry mode (with loop-closure switched off) on 10 custom-collected Vicon room sequences where ground-truth poses were recorded with various motions.
Our study shows that novices who have little to no experience in calibrating a VI sensor can obtain better calibrations using our approach compared to Kalibr. Out of 10 Vicon room sequences, the RMSE ATE error is lowest across all sequences using calibration parameters obtained through our method (see Fig. 9). Our calibration parameters also yielded overall smaller RMSE ATE variances, showing more consistent and reliable odometry accuracy, regardless of the experience of calibration users. The estimated camera-IMU time delay with our method is \(3.07\pm 0.932ms\), and Kalibr's estimate is \(4.81\pm 0.981ms\). Since ground-truth is not available we can only conclude our method is more consistent compared to Kalibr's result. The break-down of the median total time taken to calibrate the VI sensor between Kalibr and our method is shown in Fig. 10, where our method's median is 381.11 seconds compared to Kalibr's 455.44 seconds.
In addition to showing that our method yields better SLAM results and calibrations faster, by inspecting the Shannon Entropy of the calibration parameters, a metric
Fig. 8: Comparison of ORBSLAM3 using calibrations from Kalibr and ours in Stereo-VIO mode on EuRoC Dataset
Fig. 7: Comparison of ORBSLAM3 using calibrations from Kalibr and ours in Stereo-VO mode on EuRoC Dataset
Fig. 9: Comparing calibrations by graduate students across 10 different evaluation VICON room sequences by running ORBSLAM3 in odometry mode
used to measure the uncertainty of information content [24], we also observe a lower entropy (more certainty) with our method compared to Kalibr (see Fig. (a) and Fig. (b)). This means that our method can successfully guide a novice to collect a more informative calibration dataset for a good calibration.
## IX Conclusions
The success of SoTA computer-vision and state-estimation algorithms often hinges on good VI calibrations. However, collecting high-quality VI calibration data is not trivial, especially since most existing calibration tools do not provide interactive live feedback to the user, which ultimately increases the risk of poor calibrations. In this work, we have introduced a novel visual-inertial calibration guidance system that provides real-time NBV and NBT suggestions to guide users in collecting informative calibration data. It achieves competitive calibration results against the SoTA offline calibrator, Kalibr [1], and produces faster, more accurate and more reliable calibrations for existing SoTA visual and VI SLAM systems, even when used by novices.
## Acknowledgment
We thank members from the Smart Robotics Lab, Robot Learning Lab, Dyson Robotics Lab and Adaptive and Intelligent Robots Lab for participating in experiments, Ying Xu for graphic design, and especially Sotiris Papatheodorou for his fruitful advice in this project. This research is supported by Imperial College London, Technical University of Munich, EPSRC grant ORCA Stream B - Towards Resident Robots, and the EPSRC grant Aerial ABM EP/N018494/1.
|
2307.16877 | Evaluating Correctness and Faithfulness of Instruction-Following Models
for Question Answering | Retriever-augmented instruction-following models are attractive alternatives
to fine-tuned approaches for information-seeking tasks such as question
answering (QA). By simply prepending retrieved documents in its input along
with an instruction, these models can be adapted to various information domains
and tasks without additional fine-tuning. While the model responses tend to be
natural and fluent, the additional verbosity makes traditional QA evaluation
metrics such as exact match (EM) and F1 unreliable for accurately quantifying
model performance.
In this work, we investigate the performance of instruction-following models
across three information-seeking QA tasks. We use both automatic and human
evaluation to evaluate these models along two dimensions: 1) how well they
satisfy the user's information need (correctness), and 2) whether they produce
a response based on the provided knowledge (faithfulness). Guided by human
evaluation and analysis, we highlight the shortcomings of traditional metrics
for both correctness and faithfulness. We then propose simple token-overlap
based and model-based metrics that reflect the true performance of these
models. Our analysis reveals that instruction-following models are competitive,
and sometimes even outperform fine-tuned models for correctness. However, these
models struggle to stick to the provided knowledge and often hallucinate in
their responses. We hope our work encourages a more holistic evaluation of
instruction-following models for QA. Our code and data is available at
https://github.com/McGill-NLP/instruct-qa | Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy | 2023-07-31T17:41:00Z | http://arxiv.org/abs/2307.16877v2 | # Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering
###### Abstract
Retriever-augmented instruction-following models are attractive alternatives to fine-tuned approaches for information-seeking tasks such as question answering (QA). By simply prepending retrieved documents in its input along with an instruction, these models can be adapted to various information domains and tasks without additional fine-tuning. While the model responses tend to be natural and fluent, the additional verbosity makes traditional QA evaluation metrics such as exact match (EM) and F1 unreliable for accurately quantifying model performance.
In this work, we investigate the performance of instruction-following models across three information-seeking QA tasks. We use both automatic and human evaluation to evaluate these models along two dimensions: 1) how well they satisfy the user's information need (correctness), and 2) whether they produce a response based on the provided knowledge (faithfulness). Guided by human evaluation and analysis, we highlight the shortcomings of traditional metrics for both correctness and faithfulness. We then propose simple token-overlap based and model-based metrics that reflect the true performance of these models. Our analysis reveals that instruction-following models are competitive, and sometimes even outperform fine-tuned models for correctness. However, these models struggle to stick to the provided knowledge and often hallucinate in their responses. We hope our work encourages a more holistic evaluation of instruction-following models for QA. Our code and data is available at [https://github.com/McGill-NLP/instruct-qa](https://github.com/McGill-NLP/instruct-qa)
## 1 Introduction
One of the goals of natural language processing (NLP) is to enable systems to perform tasks based on natural language instructions as this would empower users to interact in an intuitive and flexible manner. Instruction-following models are a type of language models that aim to achieve this goal. Training these models usually involves exposing large language models (LLMs; Brown et al., 2020; Zhang et al., 2022; Thoppilan et al., 2022; Rae et al., 2022; Touvron et al., 2023a) to thousands of tasks formulated as natural language instructions through supervised examples Sanh et al. (2022); Wei et al. (2022); Chung et al. (2022); Ouyang et al. (2022); Iyer et al. (2023); Touvron et al. (2023b) or other forms of supervision Ouyang et al. (2022); Wang et al. (2022); Taori et al. (2023); Peng et al. (2023). These are known to generalize to many tasks with little exposure to examples of those tasks Mishra et al. (2022). In this paper, we evaluate instruction-following models for their ability to perform question-answering (QA) on a given set of text passages.
Figure 1: Sample response generated by GPT-3.5. The model response is correct w.r.t information need but only partially faithful w.r.t knowledge as only one of the two locations mentioned in the response can be found in the knowledge (truncated for readability). Recall (§4.2) and K-Precision (§5.1) are automatic metrics that approximate human judgment.
Instruction-following models can perform QA when provided with a prompt describing the task, the question, and relevant text passages, retrieved by a retriever, to reason upon (Chung et al., 2022). These model-generated answers are known to be natural, informative, and verbose, a useful trait that helps to build users' trust and engagement; however, these models also generate hallucinated information that can mislead users (Dziri et al., 2022; Chiesurin et al., 2023). Moreover, many QA datasets have short reference answers that render traditional evaluation metrics like exact match (EM) and F1 word overlap unreliable when evaluating these verbose answers (Kamalloo et al., 2023).
Consider, for instance, the scenario in Figure 1, where the user question is _"Where are One Direction from?"_. A comparison between the reference response _"London, England"_ and the first part of the model's response _"One Direction are from London, England"_ yields an EM score of 0 and an F1 score of only 0.5, despite both answers being effectively equivalent (the entire response receives an F1 score of only 0.36). Moreover, the second part of the response asserts that One Direction is from _Mullingar, Ireland_, a fact which, despite being correct, is not entailed by the provided knowledge. As EM and F1 only compare against reference answers, they are unsuitable for estimating the alignment of the model's response with the provided knowledge.
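To make this failure mode concrete, the sketch below computes EM and token-level F1 for the example above. The normalization (lowercasing, stripping punctuation and articles) follows the common SQuAD-style convention, which we assume here purely for illustration.

```python
import re
import string
from collections import Counter

def normalize(text):
    """SQuAD-style normalization: lowercase, drop punctuation/articles, collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in set(string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction, reference):
    return float(normalize(prediction) == normalize(reference))

def f1_score(prediction, reference):
    pred_tokens = normalize(prediction).split()
    ref_tokens = normalize(reference).split()
    common = Counter(pred_tokens) & Counter(ref_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

reference = "London, England"
response = "One Direction are from London, England"
print(exact_match(response, reference))          # 0.0
print(round(f1_score(response, reference), 2))   # 0.5, despite the answer being correct
```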
In this work, we advocate that the performance of instruction-following models for retrieval-augmented QA should be evaluated along two dimensions -- 1) _correctness w.r.t information need_, which measures a model's efficacy in satisfying a user's information needs, and 2) _faithfulness w.r.t provided knowledge_, which measures a model's capability to ground responses in provided knowledge. A model demonstrating robust performance across both these dimensions can potentially be considered useful and safe for the user in information-seeking scenarios.
Along these dimensions, we evaluate several recent instruction-following models such as Llama-2 (Touvron et al., 2023), GPT-3.5 (sibling model of Ouyang et al. 2022), Flan-T5 (Chung et al., 2022), and Alpaca (Taori et al., 2023) on three popular QA datasets that correspond to three diverse QA tasks -- Natural Questions (NQ; Kwiatkowski et al. 2019) for open-domain QA, HotpotQA (Yang et al., 2018) for multi-hop QA, and TopiOCQA (Adlakha et al., 2022) for conversational QA. We conduct a human analysis of 900 model responses and correlate them with several automatic metrics for correctness and faithfulness.
Our findings suggest that, for correctness, _recall_ - the proportion of tokens in the reference answer also present in the model response - correlates better with human judgments than other lexical overlap metrics like EM or F1. For faithfulness, _K-Precision_ - the proportion of model response tokens that appear in the knowledge snippet - correlates better with human judgments than any other token-overlap metric. Among model-based metrics, i.e., using a model to determine the correctness/faithfulness of an answer w.r.t. reference answer/knowledge, GPT-4 correlates the most but it is expensive and prone to systematic biases (Wang et al., 2023). However, we find that lexical overlap metrics are close to model-based metrics, allowing us to evaluate several instruction-following models at a large scale.
A faithful model should not only answer a question when relevant knowledge is provided, but it should also abstain from answering when irrelevant knowledge is provided. Hence, we also measure the model's ability to abstain from answering as an evaluation for faithfulness.
To summarize, our contributions are as follows:
* We evaluate four instruction-following models (Llama-2, GPT-3.5, Flan-T5, and Alpaca) in retrieval-augmented settings across three diverse QA tasks. We collect human annotations for both correctness and faithfulness.
* We analyze several metrics in relation to human judgments, finding GPT-4-based evaluation to be the most correlated for both correctness and faithfulness. Additionally, we analyze failures of traditional QA metrics and highlight that models are unfairly penalized for verbosity.
* We propose simple token-overlap based metrics (_recall_ for correctness and _K-Precision_ for faithfulness) and demonstrate their strong correlation with human judgments.
* Our results indicate that instruction-following models can surpass the performance of fine-tuned models in terms of correctness. However, these models struggle to be faithful to provided knowledge, often demonstrating a tradeoff between the ability to remain faithful to relevant and irrelevant knowledge.
## 2 Related Work
#### Instruction-Following Models
Fine-tuning pre-trained models on a collection of NLP tasks formatted as natural language instructions results in instruction-following models. These models can generalize to new, unseen tasks based solely on an instruction and, optionally, a few demonstrations, often outperforming LLMs in zero-shot and few-shot settings while being only a fraction of their size Mishra et al. (2022). Depending on the nature of the datasets used for training, these models can be broadly classified into three categories.
The majority of instruction-following models in the research community are trained on publicly available NLP datasets verbalized by human annotators Wei et al. (2022); Mishra et al. (2022); Wang et al. (2022); Chung et al. (2022); Iyer et al. (2023). The number of tasks ranges from a few tens (e.g. 62 in Wei et al. 2022) to well over a thousand (e.g. 1800+ in Iyer et al. 2023).
Ouyang et al. (2022) conjecture that public NLP datasets are limited in scope and lack sufficient diversity in user inputs. To address this, they train _InstructGPT_ on a mix of human-written prompts submitted to the OpenAI API and prompts created by expert labelers. The model is further fine-tuned with human feedback to align it more closely with human preferences (RLHF; Christiano et al. 2017). Llama-2 Touvron et al. (2023) is another recent model in this category, trained on a mix of public NLP datasets and high-quality expert annotations of dialogue-style instructions, followed by RLHF.
Finally, _self-instruct_ (Wang et al., 2022) is an alternative paradigm to reduce reliance on human-generated task instructions. Starting from a small manually-annotated task pool, an LLM is prompted to generate instructions and demonstrations of new tasks. The resultant synthetic dataset is used to train a language model to follow instructions Taori et al. (2023); Peng et al. (2023).
Datasets for instruction-tuning often contain several QA tasks. However, these tasks are either reading comprehension (i.e. answering a question about a provided passage) or closed-book QA (i.e., without using a large information source). In this work, we explore a more practical setting, where an instruction-following model is paired with a retriever, a paradigm known as retrieval-augmented generation (RAG; Lewis et al. 2020).
#### Retrieval-Augmented Generation
RAG entails using a _retriever_ to select relevant passages from an information source, which are subsequently passed to a _generator_ to produce a response. This two-step retrieve-generate process has been shown to reduce hallucinations Shuster et al. (2021), while lending interpretability and configurability to the model Lewis et al. (2020).
RAG is a dominant paradigm for several information-seeking QA tasks such as open-domain QA (Chen et al., 2017; Lee et al., 2019; Sachan et al., 2021, _inter alia_), multi-hop QA (Asai et al., 2020; Qi et al., 2021; Izacard et al., 2022, _inter alia_), and conversational QA (Anantha et al., 2021; Adlakha et al., 2022, _inter alia_). Various works differ on how to train the generator to utilize information from the retrieved passages, e.g., by extracting snippets Chen et al. (2017); Clark and Gardner (2018); Wang et al. (2019); Karpukhin et al. (2020) or by jointly attending to encoded passages and previously generated tokens (Fusion-in-Decoder; Izacard and Grave, 2021).
Recent works have also explored using off-the-shelf language models as generators in the RAG pipeline, alleviating the need to fine-tune or learn additional parameters. Lazaridou et al. (2022) demonstrated that few-shot prompting an LM conditioned on web search results outperforms a vanilla LM for several open-domain QA tasks. Shi et al. (2023) showcase that pairing LLMs like GPT-3 Brown et al. (2020) with retrievers improves language modeling performance as well. Separate from these works, we evaluate retrieval-augmented instruction-following models based only on natural language instructions. In the absence of training instances or demonstrations, these models do not learn the distribution of reference answers of the target QA dataset, raising new challenges for evaluation.
#### Evaluation in QA
Lexical matching between a set of reference answers and the model response remains a dominant approach for evaluation across multiple NLP tasks. As QA tasks generally consist of short reference answers, previous works have primarily relied on Exact Match (EM) and F1 to evaluate and benchmark models Rajpurkar et al. (2016); Reddy et al. (2019). For tasks that require generating longer sequences, such as summarization and translation, subsequence-based lexical matching is generally employed (Papineni et al., 2002; Banerjee and Lavie, 2005; Lin, 2004, _inter alia_).
A major shortcoming of lexical matching is that it depends on a set of reference answers which may be incomplete. To overcome this limitation, subsequent model-based metrics compute the semantic similarity between the reference answer and the model response using contextualized embeddings Zhang et al. (2020) or train a specialized classifier Bulian et al. (2022) to predict equivalence. More recently, several works resort to prompting LLMs like GPT-4 OpenAI (2023) to act as evaluators Chiang et al. (2023); Peng et al. (2023); Chiang and Lee (2023); Kamalloo et al. (2023); Liu et al. (2023). In this work, we explore evaluating both correctness and faithfulness using GPT-4.
Concurrent to our work, Kamalloo et al. (2023) evaluate the correctness of InstructGPT in zero-shot and few-shot settings along with several fine-tuned models for open-domain QA. They highlight the shortcomings of traditional QA metrics and propose BEM Bulian et al. (2022) and LLM-based evaluation as viable alternatives. However, they do not consider InstructGPT in _retrieval-augmented_ settings. In contrast to their work, we investigate both correctness and faithfulness of multiple instruction-following models across three diverse QA tasks and propose simple token-overlap based metrics that correlate highly with human judgments.
#### Faithfulness and Groundedness
Conversational models have been shown to produce factually incorrect or unsupported statements Rashkin et al. (2021); Dziri et al. (2022), known as _hallucinations_. To alleviate those issues, various works attempt to reduce hallucinations via methods such as iterative refinement Dziri et al. (2021), linguistic calibration Mielke et al. (2022); Lin et al. (2022), or by editing instances of hallucinations Dziri et al. (2022), thus improving the _faithfulness_ of these models. Several metrics have also been developed to measure faithfulness. Honovich et al. (2021) proposed \(Q^{2}\), an automatic faithfulness evaluation metric that checks for factual consistency based on automatic question generation and question answering. FaithCritic Dziri et al. (2022) is another model-based metric that predicts the degree of hallucination in a model's response.
For information-seeking, previous works have considered _groundedness_ -- the extent to which the generator relies on retrieved passages Paranjape et al. (2022), quantified using Knowledge-F1 (K-F1; Shuster et al., 2021). In this work, we consider a model response to be faithful if it is grounded in the passage _relevant_ to the user's information need. Concurrent to our work, Chiesurin et al. (2023) investigated hallucination of retrieval-augmented GPT-3 on the conversational QA task of Adlakha et al. (2022). They found that GPT-3 is likely to produce responses that appear trustworthy but are unfaithful.
## 3 Experimental Setup
### 3.1 Tasks
We evaluate our approach on the validation splits of three information-seeking QA tasks. The total number of questions and passages for each dataset is provided in Table 1. We describe the datasets used for each task below.
#### Open-domain QA
Natural Questions (NQ; Kwiatkowski et al. 2019) includes questions sourced from Google queries, with reference answers written by human annotators. We use the open version of NQ Lee et al. (2019) that consists of short answers based on 100-token passages from English Wikipedia (indexed in Dec. 2018).
#### Multi-hop QA
We use HotpotQA Yang et al. (2018) for this task, where each question requires reasoning across two Wikipedia passages. The passages are taken from the initial paragraphs of English Wikipedia articles (indexed in October 2017).
#### Conversational QA
We use TopiOCQA Adlakha et al. (2022) for this task, a dataset for open-domain information-seeking dialogue. At each turn of the conversation, an _agent_ responds to a _user_'s questions based on knowledge from Wikipedia. Each turn has an associated 200-token gold passage from English Wikipedia (indexed in Oct. 2020).
### 3.2 Instruction-following Models
To evaluate retrieval-augmented instruction-following language models, we present the models with an instruction, followed by the retrieved passages and the query. The prompt template for open-domain QA and multi-hop QA tasks is given in Figure 2, whereas conversational QA differs slightly, replacing the question with conversation history (Figure 3). We consider four instruction-following models that primarily differ based on the type of training data used. We use the same generation parameters for all instruction-following models, described in Appendix A.1.
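For illustration, the snippet below assembles a retrieval-augmented prompt in the spirit of Figure 2. The instruction wording, field labels, and separators are placeholders rather than the exact templates used in our experiments.

```python
def build_qa_prompt(question, passages, instruction=None):
    """Assemble an instruction, retrieved passages, and the question into one prompt string."""
    if instruction is None:
        # Placeholder wording; the actual template is shown in Figure 2.
        instruction = "Please answer the following question given the following passages:"
    passage_block = "\n".join(
        f"- title: {p['title']}\n  text: {p['text']}" for p in passages
    )
    return f"{instruction}\n{passage_block}\nQuestion: {question}\nAnswer:"

# Toy usage with a single retrieved passage.
prompt = build_qa_prompt(
    "where are One Direction from?",
    [{"title": "One Direction", "text": "One Direction are a pop band formed in London ..."}],
)
```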
#### Flan-T5
We use the 11B parameter version of T5 (Raffel et al., 2020), which has been trained by Chung et al. (2022) using the instruction fine-tuning methods proposed by Wei et al. (2022). Flan-T5 is trained on multiple publicly-available instruction-following datasets (Sanh et al., 2022; Wang et al., 2022; Wei et al., 2022). Together, these datasets encompass more than 1800 tasks, of which over 200 are QA tasks. Out of the three datasets on which we evaluate, the training splits of NQ and HotpotQA are included in Flan-T5's training regime.
#### GPT-3.5
We use the _turbo_ version of GPT-3.51 which is described2 as a sibling to the InstructGPT model (Ouyang et al., 2022). The model's training incorporates user data submitted to the OpenAI API as well as expert annotations; however, the exact distribution of training tasks and datasets is not publicly available.
Footnote 1: openai.com/blog/introducing-chatgpt-and-whisper-apis
Footnote 2: openai.com/blog/chatgpt
#### Alpaca
We use the 7B variant of Alpaca (Taori et al., 2023), a fine-tuned version of LLaMA (Touvron et al., 2023) trained on demonstrations generated using GPT-3 (Brown et al., 2020). The demonstrations were collected using the _self-instruct_ framework (Wang et al., 2022).
#### Llama-2
We use the 7B chat version of Llama-2 (Touvron et al., 2023). The model is initially bootstrapped on instruction-following datasets similar to those used for Flan-T5, followed by fine-tuning for dialogue-style instructions.
#### Fine-tuned Generators
To compare against instruction-following models, we select FiD (Izacard and Grave, 2021) as our fine-tuned baseline for all three tasks. This encoder-decoder model separately encodes each retrieved passage with the query, resulting in a set of vectors. The decoder then autoregressively generates the answer by attending to the input passages and the previously generated tokens. For NQ and TopiOCQA, we use the publicly available FiD checkpoints, while for HotpotQA, we fine-tune our own variant using the default hyperparameters.
## 4 Correctness w.r.t Information Need
In this section, we investigate if retrieval-augmented instruction-following models can produce responses that satisfy user information needs. We first describe our experimental setup by providing details of the retriever used in each task (§4.1) and the metrics used for evaluating model responses (§4.2). Next, we describe our human evaluation setup and present the results from our analysis (§4.3). Finally, equipped with a better understanding of evaluation metrics, we conduct a large-scale evaluation of instruction-following models and present the results (§4.4).
### 4.1 Retrieval
For each task, we use a task-specific variant of DPR (Dense Passage Retrieval; Karpukhin et al. 2020) as the retriever. The general architecture of DPR consists of a question and a passage encoder. The dot product between the dense vector representations of the passage and the query is used as a ranking function.
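The ranking step can be sketched as follows; the embeddings here are random stand-ins for the outputs of the task-specific DPR question and passage encoders.

```python
import numpy as np

def dpr_rank(question_vec, passage_matrix, top_k=5):
    """Rank passages by dot product between the question embedding and passage embeddings."""
    # question_vec: (d,) dense embedding from the question encoder
    # passage_matrix: (num_passages, d) dense embeddings from the passage encoder
    scores = passage_matrix @ question_vec
    top = np.argsort(-scores)[:top_k]
    return top, scores[top]

# Toy usage with random vectors standing in for encoder outputs.
rng = np.random.default_rng(0)
q = rng.normal(size=768)
P = rng.normal(size=(10_000, 768))
indices, scores = dpr_rank(q, P, top_k=3)
```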
| Dataset | # Questions | # Passages |
| --- | --- | --- |
| Natural Questions | 3,610 | 21,015,324 |
| HotpotQA | 7,405 | 5,233,329 |
| TopiOCQA | 2,514 | 25,700,593 |

Table 1: Statistics for datasets used in this work. We use the validation split from each dataset for our evaluation as the test sets are hidden.
Figure 3: Prompt template for conversational QA task.
Figure 2: The prompt template used for open-domain QA and multi-hop QA tasks.
For NQ, we adopt a pre-trained checkpoint from Karpukhin et al. (2020). This checkpoint was trained on four QA datasets -- NQ, TriviaQA Joshi et al. (2017), WebQuestions Berant et al. (2013), and CuratedTREC Baudis and Sedivy (2015). For HotpotQA, we utilize a multi-hop variant of DPR proposed by Xiong et al. (2021). This version retrieves reasoning chains iteratively, selecting subsequent passages based on the query and previously retrieved passages. For TopiOCQA, we utilize the checkpoint provided by Adlakha et al. (2022). This variant of DPR is uniquely suited for conversational QA tasks as it encodes the conversation history in the question encoder.
In all of the tasks, the retriever selects passages from the associated Wikipedia dump, as detailed in Section 3.1. The number of retrieved passages provided to instruction-following models and fine-tuned models for each task are provided in Appendix A.2.
### 4.2 Evaluation Metrics
Evaluation in QA usually involves comparing model responses to human-annotated gold answers. The metrics used for this comparison can be divided into two categories:
#### Lexical Match
These metrics score a model response based on its token overlap with the gold standard answer. While some metrics perform bag-of-words matching (e.g., Exact Match (EM), F1), others consider the order of the tokens by \(n\)-gram matching, such as METEOR Banerjee and Lavie (2005) and ROUGE Lin (2004).
In this work, we also consider _Recall_ -- the proportion of tokens in the reference answer that are present in the model response. Recall does not penalize verbose model response, as long as the response contains the reference answer tokens. Recent works that have evaluated the verbose responses generated by instruction-following models Liu et al. (2023); Mallen et al. (2022) have used a similar metric _accuracy_, whereby a model's response is considered correct if any reference answer appears as a substring within the model's response. This is a stricter version of recall that cannot handle small variations between reference answer and model response, such as if the reference answer is _John Kennedy_ and the model response is _John F Kennedy_. To avoid any confusion, we refer to this metric as _Recall (S)_, indicating it as a stricter version of token-level recall.
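A minimal sketch of token-level Recall and the stricter Recall (S) is shown below; the normalization helper is an illustrative simplification rather than the exact preprocessing used in our evaluation code.

```python
from collections import Counter

def _norm(text):
    # Lowercase and strip punctuation so small surface differences do not matter.
    return "".join(ch.lower() for ch in text if ch.isalnum() or ch.isspace()).split()

def recall(prediction, reference):
    """Token-level recall: fraction of reference-answer tokens present in the model response."""
    pred, ref = Counter(_norm(prediction)), Counter(_norm(reference))
    return sum((pred & ref).values()) / max(sum(ref.values()), 1)

def recall_strict(prediction, references):
    """Recall (S): 1 if any reference answer appears verbatim inside the response."""
    pred = " ".join(_norm(prediction))
    return float(any(" ".join(_norm(r)) in pred for r in references))

# The "John Kennedy" vs. "John F Kennedy" case from the text:
print(recall("John F Kennedy", "John Kennedy"))           # 1.0
print(recall_strict("John F Kennedy", ["John Kennedy"]))  # 0.0
```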
#### Semantic Similarity
Unlike the previous class of metrics, which face _strictness_ issues Kamalloo et al. (2023), _semantic similarity-based_ metrics typically leverage a trained model to predict if the model response is semantically equivalent to the gold answer. BERTScore Zhang et al. (2020), which we refer to as _BertS_, is a commonly used metric for text generation that computes precision, recall, and F1 based on token similarity between the model response and the reference gold answer using contextual BERT embeddings. Furthermore, _BEM_ (BERT matching; Bulian et al., 2022) employs a trained BERT model to evaluate question-answering models by predicting the semantic equivalence based on the question, reference gold answer, and model response. We extend BEM to the conversational QA task by providing the question from the last turn of the conversation as input. Moreover, we also consider an evaluation metric based on prompting LLMs (referred to as _GPT3.5-Eval_ and _GPT4-Eval_) to act as evaluation agents. In principle, the setup is similar to the one proposed by Kamalloo et al. (2023), however, with a different prompt, as described in Appendix B (Figure 7). Specifically, we prompt these models to act as evaluators by providing a natural language instruction along with the question (or conversation history), reference gold answer, and model response.
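The sketch below shows one way such an LLM-based correctness judgment can be framed. The prompt wording is a simplified stand-in for the instruction in Appendix B (Figure 7), and `call_llm` is a placeholder for whichever chat-completion client is used.

```python
def build_eval_prompt(question, reference_answer, model_response):
    """Format a correctness-judgment query for an LLM evaluator (wording is illustrative)."""
    return (
        "You are evaluating a question answering system.\n"
        f"Question: {question}\n"
        f"Reference answer: {reference_answer}\n"
        f"Candidate response: {model_response}\n"
        "Does the candidate response correctly answer the question? Reply with 'yes' or 'no'."
    )

def llm_eval(question, reference_answer, model_response, call_llm):
    """call_llm: a user-supplied function that sends a prompt to GPT-3.5/GPT-4 and returns text."""
    verdict = call_llm(build_eval_prompt(question, reference_answer, model_response))
    return verdict.strip().lower().startswith("yes")
```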
### 4.3 Human Evaluation
We conduct a human evaluation on a subset of responses generated by three instruction-following models - GPT-3.5, Flan-T5, and Alpaca - to establish a basis for comparing evaluation metrics. Specifically, we focus on cases where retrieved passages provided to the model include the gold passage. Therefore, any inaccuracies in the response can be attributed to the model's failures, rather than inaccurate retrieval. For every task, we collect annotations for 100 samples.
In our evaluation setup, the annotator is presented with the question or conversation history, the reference answer, and the anonymized model response. The annotator's task is to assess if the model response is _correct_, i.e. it satisfies the information need underlying the question. For each of the 100 samples, we collect annotations for three instruction-following models, resulting in 900 labeling tasks. Each task is completed by two different annotators (authors of the paper). The inter-annotator agreement achieved was 92.7%. In instances where the annotators disagreed, a third annotation is collected and a majority vote is taken.
The results of this human evaluation are presented in Table 8 (Appendix D), along with scores of automated metrics on this subset. Traditional QA evaluation metrics like EM and F1 tend to score model responses much lower than human assessments, highlighting the well-known problem of strictness in lexical matching (Min et al., 2021; Kamalloo et al., 2023).
#### Qualitative Analysis of Failure Cases
For a more granular understanding of the shortcomings of traditional QA metrics, we analyze the models' responses that have an F1 score of at most 0.3, but were deemed correct according to the human evaluations. This resulted in 296 samples out of 900. Our classification of errors is adapted from Kamalloo et al. (2023) (which itself was based on Min et al. 2021), modified to focus on instruction-following models. Specifically, we exclude some error classes relevant to fine-tuned models and include some classes for instruction-following models. The resultant categories are:
* **Semantic Equivalence:** Here, the model response is semantically similar to the reference answer. Sub-categories include **Multinomial entities**, e.g., _John Kennedy_ and _John F Kennedy_, **Synonymous Answers**, e.g., _from India_ and _Indian nationality_, and **More Elaborate Answers**, e.g., _yes_ and _yes, he is member of the band_.
* **Symbolic Equivalence:** This primarily refers to different possible representations of numeric quantities, e.g. _four seasons_ and _4 seasons_, or _3000 BC_ and _Early Dynastic Period_.
* **Intrinsic Ambiguity in Questions:** This refers to queries with multiple valid interpretations, leading to a range of correct answers, e.g. _Who is command sergeant major of the army?_ could be seeking the person's identity or a description of the position itself. This category also includes cases where the correct answer is dependent on the specific point in time being referenced, e.g. _Who won NFL football coach of the year?_.
* **Granularity Discrepancies:** The level of specificity in the model's response may not align with that in the reference answer. This discrepancy in granularity can be **Temporal**, e.g., _August 25, 1939_ and _1939_, or **Spatial**, e.g., for the question _Where's the tv show The Crossing filmed?_, _Vancouver_ and _British Columbia, Canada_ are both correct answers.
* **Incomplete Reference Answers:** These cases occur when the reference answers, despite their number, fail to cover the entire spectrum of correct responses. We break this category into two types -- **List of named entities**, which includes questions like the cast of a movie or members of the band, and **Open-ended questions**, which includes questions that can be answered in multiple different ways, all of which are not captured by reference answers, e.g., _What was the Watergate scandal?_.
* **Enumeration of Reference Answers:** This error happens especially in NQ samples, where the question asks for a list of entities (e.g., all states of a country), but each reference answer includes only one entity (e.g., a single state). The instruction-following models often generate all the entities in their response, which results in low overlap with each individual reference answer.
* **Satisfactory Subset Responses:** This category pertains to instances where the model's response, although containing less information than the reference answer, still provides an acceptable response to the user's query. For instance, for the question _"name some of her songs"_, the reference answer might list 5-6 song names, while the model response includes only 1-2. This situation is predominantly observed in the TopiOCQA dataset.

Figure 4: Failure cases of F1 metric. _More Elaborate Answers_ is the most common failure sub-category, followed by _Open-ended Questions_.
Figure 4 displays the distribution of error cases based on our classification. A significant portion of the errors (55.63%) fall under the _More Elaborate Answers_ category. This suggests that traditional QA metrics often penalize models unjustly due to the verbose nature of their responses. The next most common sub-category, _Open-ended Questions_ (13.99%), suggests that models are occasionally penalized for providing correct answers that were not included in the reference responses. The percentage share and exact count of all categories are reported in Table 7 (Appendix C).
In Figure 5, we provide qualitative examples of common failure modes, along with their associated evaluation metric scores. Recall appears to be an effective fix for sub-categories such as _More Elaborate Answers_ and _Enumeration of Reference Answers_. However, both lexical match based and semantic similarity based metrics struggle with _Open-ended Questions_. Although GPT4-Eval appears to be relatively robust based on examples in Figure 5, this metric has some failures, with the most common failure sub-category being _Open-ended Questions_. The complete distribution of failure cases according to sub-categories is reported in Figure 10, along with qualitative examples in Figure 11 (Appendix C).
Overall, the results of our human evaluation and analysis indicate that traditional metrics such as EM and F1, typically used in the literature for fine-tuned QA models, are not well-aligned with the verbose nature of instruction-following models. To determine more suitable metrics for these models, we analyze the correlation of each metric with human assessments.
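The correlations we report are standard rank statistics; a minimal version using SciPy is shown below, with hypothetical metric scores and human labels used purely for illustration.

```python
from scipy.stats import spearmanr, kendalltau

def metric_correlations(metric_scores, human_labels):
    """Correlate per-example automatic metric scores with binary human correctness judgments."""
    spearman, _ = spearmanr(metric_scores, human_labels)
    kendall, _ = kendalltau(metric_scores, human_labels)
    # Scaled by 100 to match the percentage-style values reported in the correlation tables.
    return 100 * spearman, 100 * kendall

# Hypothetical example: recall scores vs. human 0/1 correctness labels for a few responses.
recall_scores = [1.0, 0.4, 0.0, 1.0, 0.8]
human_labels = [1, 1, 0, 1, 0]
print(metric_correlations(recall_scores, human_labels))
```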
GPT4-Eval correlates the most with human judgments, achieving a Kendall correlation of 69.36, closely followed by GPT3.5-Eval. We speculate that the language comprehension capabilities and inherent world knowledge embedded in LLMs like GPT-3.5 and GPT-4 help them overcome many of the challenges associated with evaluating responses of instruction-following models that we identified in our human evaluation study.
After GPT4-Eval and GPT3.5-Eval, Recall achieves the highest correlation with human judgment. This simple token-overlap metric correlates better than other lexical matching-based metrics or more complex semantic similarity metrics like BERTScore and BEM, likely because it does not penalize verbosity in model responses.
Surprisingly, BERTScore fares worse than token-overlap F1, even when only considering the recall component of the metric. We hypothesize that the underlying issue is the poor quality of BERT token embeddings in short strings (Bommasani et al., 2020), a common characteristic of reference answers in QA datasets. For example, for the reference answer _yes, that is correct_, the model response _yes_ receives a BERTScore of 0.806 and _no_ receives a slightly higher score of 0.815. Although BEM performs better than F1, it still falls short of token-overlap recall. Given that BEM's training data includes model responses of QA systems trained on SQuAD (Rajpurkar et al., 2016), it probably does not generalize well to more verbose responses of instruction-following models.
Although LLM-based evaluation methods, such as GPT4-Eval and GPT3.5-Eval, have the highest correlation with human judgements on the selected subset of responses, they also have certain limitations. Accessing these proprietary models incurs substantial API costs, which renders them impractical for automatic evaluation on large-scale datasets. Moreover, the reliability of LLMs as evaluators is still unclear, as recent studies have shown that they may exhibit systematic bias (Wang et al., 2023) and can be sensitive to input instructions (Bowman, 2023). Given these considerations, we rely on Recall to compare model performance.
### 4.4 Automatic Correctness Evaluation
The performance of both instruction-following and fine-tuned models in a retrieval-augmented generation setup across multiple datasets is reported in Table 3 using several lexical matching and semantic similarity metrics. Unsurprisingly, traditional QA metrics like EM and F1 assign much lower scores to instruction-following models, compared to fine-tuned FiD. The only exception is Flan-T5, which outperforms FiD with a 17.72% gap. However, it should be noted that Flan-T5 is trained on a wide range of QA tasks, including NQ and HotpotQA (Section 3.2).
Based on our findings in Section 4.3, we consider Recall to obtain a truer estimate of model performance.
| Dataset | Model | EM | F1 | Recall | METEOR | Rouge-L | BertS (F1) | BEM |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| NQ | FiD | **46.57** | **53.93** | 54.45 | **42.94** | **54.33** | **92.57** | 58.81 |
| NQ | GPT-3.5 | 1.27 | 15.12 | **58.56** | 25.68 | 14.57 | 83.08 | **69.45** |
| NQ | Flan-T5 | 41.16 | 50.62 | 54.03 | 40.80 | 51.02 | 91.77 | 58.74 |
| NQ | Alpaca | 8.78 | 20.3 | 46.23 | 23.17 | 20.61 | 84.67 | 55.97 |
| NQ | Llama-2 | 0.61 | 11.85 | 52.37 | 21.16 | 11.38 | 82.58 | 62.30 |
| HotpotQA | FiD | 48.43 | 60.16 | 60.55 | 46.03 | 60.18 | 93.02 | 67.94 |
| HotpotQA | GPT-3.5 | 5.63 | 22.16 | 66.77 | 31.56 | 21.67 | 84.16 | **78.16** |
| HotpotQA | Flan-T5 | **58.12** | **71.14** | **71.28** | **53.44** | **71.16** | **94.37** | 76.19 |
| HotpotQA | Alpaca | 16.25 | 33.54 | 56.76 | 33.23 | 33.5 | 86.88 | 67.74 |
| HotpotQA | Llama-2 | 1.39 | 15.91 | 67.55 | 27.48 | 15.23 | 83.08 | 78.05 |
| TopiOCQA | FiD | **36.48** | **58.52** | 61.64 | **52.46** | **58.26** | **92.37** | 66.55 |
| TopiOCQA | GPT-3.5 | 2.63 | 36.07 | **66.72** | 47.35 | 33.81 | 88.14 | **69.34** |
| TopiOCQA | Flan-T5 | 18.34 | 43.17 | 52.54 | 42.42 | 42.88 | 89.42 | 56.57 |
| TopiOCQA | Alpaca | 5.85 | 28.61 | 41.3 | 31.08 | 27.75 | 87.07 | 46.41 |
| TopiOCQA | Llama-2 | 0.32 | 25.16 | 55.3 | 35.16 | 23.42 | 86.06 | 56.33 |

Table 3: Performance of retrieval-augmented instruction-following models on three diverse information-seeking QA tasks. Among the metrics reported, _Recall_ is most correlated with human judgements. Based on recall, instruction-following models outperform fine-tuned FiD on all three tasks.
Using recall, the performance gap between instruction-following and fine-tuned models narrows significantly, with some instruction-following models even outperforming FiD. Notably, GPT-3.5 outperforms the fine-tuned FiD across all three QA tasks - a 7.55% gap in NQ, 10.27% in HotpotQA, and 8.24% in TopiOCQA. These results suggest that in retrieval-augmented settings, instruction-following models are equally capable as, or even more capable than, fine-tuned generators in generating correct responses w.r.t user information needs.
## 5 Faithfulness w.r.t Provided Knowledge
As previously noted, instruction-following models often produce verbose responses. Consequently, responses from these models often contain supplementary information which can be hallucinated Rashkin et al. (2021); Dziri et al. (2022); Chiesurin et al. (2023). In this section, we conduct an analysis of the faithfulness of instruction-following models w.r.t knowledge provided as part of the input. We posit that an optimal generator's response should rely _solely_ on the knowledge relevant to the user information need. Based on this hypothesis, we split our analysis into two parts - 1) faithfulness w.r.t relevant knowledge, where we prompt the instruction-following model with the user question paired with the corresponding gold passage and evaluate the groundedness of the response in the provided knowledge, and 2) faithfulness w.r.t irrelevant knowledge, where we provide a related but irrelevant passage and measure how often the model refuses to answer.
In this section, we first describe the automatic faithfulness metrics (§5.1). Next, similar to correctness, we conduct a human evaluation and compute correlations for all metrics, followed by a large-scale evaluation of faithfulness w.r.t relevant knowledge (§5.2). Finally, we analyze the capabilities of models to refrain from answering in the presence of irrelevant knowledge (§5.3).
### 5.1 Faithfulness Metrics
Here we describe the metrics that we use for automatic evaluation in Section 5.2. Given the user question or the conversation history (denoted by \(\mathcal{H}\)), the gold passage \(\mathcal{K}\), and the model response \(u\), the goal is to check if \(u\) is grounded in \(\mathcal{K}\). We consider both faithfulness and groundedness metrics in the literature for this task.
#### K-F1
Knowledge-F1 (denoted K-F1) is a lexical overlap metric that checks for F1 overlap between the tokens of \(u\) and \(\mathcal{K}\). Although it has been widely used for knowledge-grounded dialogue Shuster et al. (2021); Dziri et al. (2022), we argue it is unsuitable for assessing groundedness in information-seeking tasks. In information-seeking, model responses tend to be shorter than the knowledge snippet. Hence, even if the model selects precise information from the knowledge, it is penalized for not utilizing the entire knowledge snippet by K-F1.
#### K-Precision
To counter the shortcomings of K-F1, we propose K-Precision - the proportion of tokens in the model response \(u\) that are present in \(\mathcal{K}\). The intuition behind this is that in information-seeking, grounding \(u\) in \(\mathcal{K}\) is inherently an asymmetric task, i.e., \(u\) can be a subset of \(\mathcal{K}\) but \(\mathcal{K}\) cannot be a subset of \(u\).
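A minimal sketch of both knowledge-overlap metrics, where the tokenizer is an illustrative simplification; `response` corresponds to \(u\) and `knowledge` to \(\mathcal{K}\).

```python
from collections import Counter

def _tokens(text):
    # Lowercase, replace punctuation with spaces, and split into tokens.
    return "".join(c.lower() if c.isalnum() or c.isspace() else " " for c in text).split()

def k_precision(response, knowledge):
    """Fraction of response tokens that also appear in the knowledge passage."""
    resp, know = Counter(_tokens(response)), Counter(_tokens(knowledge))
    return sum((resp & know).values()) / max(sum(resp.values()), 1)

def k_f1(response, knowledge):
    """Harmonic mean of K-Precision and knowledge recall (fraction of passage tokens covered)."""
    resp, know = Counter(_tokens(response)), Counter(_tokens(knowledge))
    overlap = sum((resp & know).values())
    if overlap == 0:
        return 0.0
    p = overlap / sum(resp.values())
    r = overlap / sum(know.values())
    return 2 * p * r / (p + r)
```

Because the passage is usually much longer than the response, K-F1 stays low even for fully grounded short answers, whereas K-Precision does not penalize brevity.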
#### K-BertS
Following Shuster et al. (2021) and Dziri et al. (2022), we make use of BERTScore to measure semantic similarity between \(\mathcal{K}\) and \(u\) based on contextual BERT token embeddings. We refer to this as **K-BertS** to differentiate it from BertS (Section 4).
#### FaithCritic
We use the hallucination critic model by Dziri et al. (2023) to evaluate whether a response entails a given passage.3 It outputs a score between 0 and 1 indicating how likely a given response is hallucinated. Here, lower scores are indicative of lesser hallucination within a model's responses, hence, more groundedness.
Footnote 3: RoBERTa-Large checkpoint: huggingface.co/McGill-NLP/roberta-large-faithcritic
\(\mathbf{Q^{2}}\) (Honovich et al., 2021) is an evaluation metric used to quantify factual consistency between responses and provided passages using automatic question generation, question answering, and natural language inference (NLI) models.
#### LLMCritic
Similar to correctness, we investigate prompting LLMs to act as evaluator for groundedness. More specifically, we prompt GPT-3.5 and GPT-4 to annotate whether a given response uses _only_ the knowledge present in the provided passage. The actual prompt is provided in Appendix B (Figure 8).
### 5.2 Faithfulness w.r.t Relevant Knowledge
In this section, we investigate the faithfulness of model responses when they are provided a passage relevant to the user query. We first conduct a human evaluation on a subset of samples, and use it to compare several evaluation metrics. Finally, we present the results of a large-scale automatic evaluation of instruction-following models.
We conduct experiments on all three information-seeking tasks. For HotpotQA and TopiOCQA, the gold passage(s) for each query is provided as part of the dataset. For NQ, we follow Karpukhin et al. (2020) and provide each question and reference answer as a query to BM25 and take the first ranked passage as the gold passage. For all instruction-following models, we use the prompt provided in Section 3.
#### Human Evaluation
For each example, we provide annotators with a question (or the conversation history), response, and retrieved passages and task them with determining whether the response is grounded in the provided passages. We allow annotators to provide two labels - 1) to determine if the provided passage is actually a relevant passage to the user's query, and 2) to determine if the model response is "completely," "partially," or "not" found in the presented passages. The model response is given a score of 1.0 if the label is "completely," 0.5 for "partially" and 0 for "not." We collect two annotations for each example and resolve all conflicting annotations by collecting a third annotation and taking the majority vote.
We randomly sample \(50\) examples from Natural Questions, HotpotQA, and TopiOCQA for evaluation. We first filter out annotations for which the passage is not relevant to the query. This resulted in 39 samples for NQ, 47 for HotpotQA, and 49 for TopiOCQA. The high number of non-relevant passages for NQ is probably due to the heuristic matching of the gold passage to the question. We consider three models - GPT-3.5, Flan-T5, and Alpaca, resulting in 405 samples. We compute scores from all evaluation metrics on this subset, including LLMCritic (for both GPT-3.5 and GPT-4). These are presented in Table 9 (Appendix D).
In Table 4, we present correlations between different automatic groundedness metrics and human evaluation. We find that LLMCritic based on GPT-4 correlates the most with human evaluation.
| Metric | Spearman | Kendall |
| --- | --- | --- |
| K-F1 | -2.67 | -2.074 |
| K-Precision | 46.482 | 41.536 |
| K-Recall | -4.258 | -3.388 |
| K-BertS (F1) | 3.583 | 3.009 |
| K-BertS (Precision) | 19.721 | 16.07 |
| K-BertS (Recall) | -10.3 | -8.22 |
| FaithCritic | 11.741 | 9.528 |
| \(Q^{2}\) (F1) | 27.883 | 23.932 |
| \(Q^{2}\) (NLI) | 27.524 | 24.228 |
| LLMCritic (GPT-3.5) | 27.189 | 26.789 |
| LLMCritic (GPT-4) | **50.485** | **49.742** |

Table 4: Correlation of evaluation metrics of faithfulness with human judgments. LLMCritic (GPT-4) is most correlated with human judgements. K-Precision is a close second.
Figure 6: Examples of non-faithful responses alongside relevant metric scores. Text in purple indicates hallucinated content, while text in teal indicates content grounded in the provided knowledge.
K-Precision, the token-overlap based metric that is invariant to the length of the knowledge snippet, is a close second, correlating better than other model-based faithfulness metrics like K-BertS, FaithCritic, and \(Q^{2}\). This indicates that models trained to detect hallucinations in knowledge-grounded dialogues do not generalize well to information-seeking QA tasks. We present some examples of model hallucinations in Figure 6, along with associated scores of evaluation metrics.
#### Automatic Evaluation
In Table 5, we present the results for faithfulness w.r.t relevant knowledge on NQ, HotpotQA, and TopiOCQA. Traditional faithfulness metrics such as K-F1, K-BertS, and FaithCritic rank either Llama-2 or GPT-3.5 as the most faithful model for all three tasks.
On the other hand, K-Precision, the metric most correlated with human judgments, denotes a completely different trend. GPT-3.5 is the _least_ faithful for NQ, while Llama-2 is least faithful for HotpotQA and TopiOCQA. K-Precision ranks Flan-T5 as the most faithful instruction-following model for all three tasks. We hypothesize that K-F1 faces a similar issue as F1 in correctness evaluation - there is a length mismatch between the model response and the provided knowledge snippet. Our preliminary examination of model responses reveals that Flan-T5 responses are generally short, which is probably why K-F1 assigns it a low score.
These findings further highlight that verbose responses from instruction-following models are often not grounded in provided passages. For example, in Figure 6, GPT-3.5 hallucinates by outputting numbers that are completely different from what was provided, whereas Alpaca fails to reason properly based on provided passages.
### 5.3 Faithfulness w.r.t Irrelevant Knowledge
In the retrieval-augmented setting, an ideal model should comprehend passage contents and avoid answering if the passage lacks relevant information. To test this, we provide the models with an irrelevant passage by selecting the 1001st-ranked passage from the list of retrieved passages.
#### Prompt Setup
Our preliminary experiments demonstrated that without an explicit instruction, Flan-T5 and Alpaca did not refrain from answering at all. Hence, we modified the prompt to make this behavior more explicit and instructed the model to output _I don't know_ if the passage is deemed irrelevant, as demonstrated in Figure 9 (Appendix B). We report the proportion of model responses that contain _I don't know_ and other observed synonymous expressions.4 Note that for these experiments, we only investigate whether a model refused to answer. We do not verify the correctness of any generated responses. Moreover, to measure the impact of this new instruction, we also experiment with providing the gold passage and report the proportion of model responses that do _not_ contain _I don't know_ and other synonymous expressions.
Footnote 4: e.g., "UNANSWERABLE"; "...passages do not contain..."
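A simple way to count such abstentions is sketched below; the phrase list is illustrative and merely extends the expressions noted in Footnote 4, not the exact matching rules used in our evaluation.

```python
ABSTAIN_PATTERNS = [
    "i don't know",
    "i do not know",
    "unanswerable",
    "passages do not contain",
]

def refuses_to_answer(response):
    """Return True if the response contains any of the abstention phrases."""
    text = response.lower()
    return any(pattern in text for pattern in ABSTAIN_PATTERNS)

def refusal_rate(responses):
    """Percentage of responses that abstain, in the style of Table 6."""
    return 100 * sum(refuses_to_answer(r) for r in responses) / max(len(responses), 1)
```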
#### Results
We present our results in Table 6. We find that when provided with an irrelevant passage, Llama-2 most often refuses to answer on open-domain and multi-hop QA datasets (more than 99% in NQ and HotpotQA). GPT-3.5 performs the best for TopiOCQA, refraining from answering on 88.15% of turns. However, for both of these models, the inclination to not answer also extends to cases when the gold passage is actually present. In comparison, Flan-T5 is well balanced on the datasets it was exposed to
| Dataset | Model | K-F1 \(\uparrow\) | K-Precision \(\uparrow\) | K-BertS (F1) \(\uparrow\) | \(Q^{2}\) (F1) \(\uparrow\) | \(Q^{2}\) (NLI) \(\uparrow\) | FaithCritic \(\downarrow\) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| NQ | GPT-3.5 | 19.66 | 65.78 | **85.34** | **38.17** | **43.07** | **19.37** |
| NQ | Flan-T5 | 5.84 | **94.04** | 80.9 | 36.54 | 38.27 | 82.42 |
| NQ | Alpaca | 13.29 | 70.44 | 83.40 | 30.18 | 33.46 | 69.92 |
| NQ | Llama-2 | **20.42** | 70.9 | 84.94 | – | – | 32.37 |
| HotpotQA | GPT-3.5 | 16.61 | 81.19 | **84.18** | **49.32** | **56.07** | 38.95 |
| HotpotQA | Flan-T5 | 3.26 | **92.12** | 78.57 | 36.03 | 37.97 | 64.31 |
| HotpotQA | Alpaca | 9.55 | 87.03 | 82.68 | 43.51 | 49.05 | 50.32 |
| HotpotQA | Llama-2 | **17.7** | 76.9 | 83.65 | – | – | **38.53** |
| TopiOCQA | GPT-3.5 | **26.82** | 71.96 | **87.01** | 54.74 | 60.44 | **30.71** |
| TopiOCQA | Flan-T5 | 23.74 | **86.37** | 86.42 | **61.30** | **64.75** | 44.89 |
| TopiOCQA | Alpaca | 19.26 | 66.91 | 84.96 | 40.21 | 44.83 | 58.28 |
| TopiOCQA | Llama-2 | 24.75 | 64.64 | 86.19 | 45.00 | 50.72 | 42.55 |

Table 5: Results for faithfulness w.r.t relevant knowledge. We report both token-based and model-based metrics. For all metrics except FaithCritic, higher scores indicate greater response groundedness.
during training, however, it remains overconfident on TopiOCQA, which was not included in the training. Alpaca adheres the least to the instruction and answers even if the passage is not relevant to the information need of the user. Appendix E demonstrates some failure examples of these models in both scenarios. Further research is required to optimally design and prompt models to better identify when to answer and when not to answer.
## 6 Discussion and Limitations
Below, we highlight several key findings of this paper and discuss some of its limitations.
#### Which Evaluation Metrics are Best?
Our analysis on correctness (§4) and faithfulness (§5) demonstrates that widely-used metrics are not suitable for evaluating correctness (due to errors such as elaborate answers, open-ended questions, and lists of named entities) or faithfulness (due to partially grounded responses). Correlating the metrics with human judgements (Table 2 and Table 4) reveals that **Recall** and **GPT4-Eval** are the best lexical and model-based metrics for correctness, and **K-Precision** and **LLMCritic (GPT-4)** are the best lexical and model-based metrics for faithfulness, respectively. However, these model-based metrics, especially the ones based on LLMs, are usually slow to run, expensive, difficult to reproduce, and may exhibit systematic biases.
While we propose that Recall and K-Precision are the most widely-accessible and human-aligned metrics for correctness and faithfulness, respectively, we emphasize that these simple lexical-based metrics are easy to hack. One model can copy all the retrieved knowledge as the output, leading to high Recall and K-Precision metrics. However, such a model will be penalized heavily when evaluated for faithfulness w.r.t. irrelevant knowledge.
#### Instruction-Following Models
According to the most human-aligned and easy-to-use metrics (i.e., Recall and K-Precision), we conclude that GPT-3.5 outperforms other models on the majority of the datasets in correctness w.r.t information need. However, when analyzing faithfulness w.r.t relevant knowledge, Flan-T5 is shown to be the best model on all three datasets. Moreover, our further analysis of the models' faithfulness w.r.t irrelevant knowledge demonstrates that models struggle to correctly identify whether the provided knowledge is relevant or not.
#### Limitations
It is worth mentioning that the experiments for evaluating the faithfulness of the models are conducted in a modified setting, where a relevant or irrelevant passage is provided in the prompt on purpose. This is different from the real-world scenario, where the retrieved passages can contain a mix of relevant and irrelevant knowledge.
Finally, it should also be noted that beyond qualitative investigation, we did not explore a wide range of prompts for the tasks studied in this work. Recent work has shown that the performance of instruction-following models can vary greatly depending upon the provided prompt Zhao et al. (2021); Liu et al. (2023). We leave it to future works to investigate better prompts for instruction-following models in a retrieval-augmented setting.
## 7 Conclusion
We extensively study the capability of instruction-following models to correctly and faithfully respond to questions in three QA settings (natural, multi-hop, and conversational). First, we uncover various issues with using traditional metrics, like F1 score, to evaluate the correctness of models. Through correlation with human judgement, we find that LLM-based metrics (e.g. GPT-4) and token-level Recall are promising metrics for evaluating the correctness w.r.t information need. Moreover, our further faithfulness analysis shows that LLM-based metrics like LLMCritic (GPT-4) and lexical-based K-Precision are more aligned with human judgements in evaluating the faithfulness of
| Dataset | Model | Incorrect Psg. \(\uparrow\) | Gold Psg. \(\downarrow\) |
| --- | --- | --- | --- |
| NQ | GPT-3.5 | 98.5 | 48.01 |
| NQ | Flan-T5 | 91.99 | 24.76 |
| NQ | Alpaca | 0.06 | **0.00** |
| NQ | Llama-2 | **99.34** | 75.84 |
| HotpotQA | GPT-3.5 | 98.54 | 26.39 |
| HotpotQA | Flan-T5 | 77.14 | 1.58 |
| HotpotQA | Alpaca | 0.09 | **0.11** |
| HotpotQA | Llama-2 | **99.16** | 76.96 |
| TopiOCQA | GPT-3.5 | **88.15** | 32.42 |
| TopiOCQA | Flan-T5 | 40.77 | 7.68 |
| TopiOCQA | Alpaca | 1.27 | **0.80** |
| TopiOCQA | Llama-2 | 87.59 | 61.77 |

Table 6: Percentage of model responses that contain _I don’t know_ and other synonymous expressions when provided with an incorrect passage (higher is better) or the gold passage (lower is better).
the models given the relevant knowledge.
Overall, we find that GPT-3.5 is better at providing correct responses for all tasks, whereas Flan-T5 comes out on top for faithfulness. However, all models struggle to accurately respond with "I don't know" given an irrelevant passage when explicitly instructed to do so.
While Recall and K-Precision are the most human-judgement-aligned and widely accessible alternative metrics, they are easy to hack. Therefore, we encourage the community to come up with more reliable metrics.
|
2306.17840 | Statler: State-Maintaining Language Models for Embodied Reasoning | There has been a significant research interest in employing large language
models to empower intelligent robots with complex reasoning. Existing work
focuses on harnessing their abilities to reason about the histories of their
actions and observations. In this paper, we explore a new dimension in which
large language models may benefit robotics planning. In particular, we propose
Statler, a framework in which large language models are prompted to maintain an
estimate of the world state, which are often unobservable, and track its
transition as new actions are taken. Our framework then conditions each action
on the estimate of the current world state. Despite being conceptually simple,
our Statler framework significantly outperforms strong competing methods (e.g.,
Code-as-Policies) on several robot planning tasks. Additionally, it has the
potential advantage of scaling up to more challenging long-horizon planning
tasks. | Takuma Yoneda, Jiading Fang, Peng Li, Huanyu Zhang, Tianchong Jiang, Shengjie Lin, Ben Picker, David Yunis, Hongyuan Mei, Matthew R. Walter | 2023-06-30T17:58:02Z | http://arxiv.org/abs/2306.17840v4 | # Statler: State-Maintaining Language Models
###### Abstract
Large language models (LLMs) provide a promising tool that enables robots to perform complex robot reasoning tasks. However, the limited context window of contemporary LLMs makes reasoning over long time horizons difficult. Embodied tasks such as those that one might expect a household robot to perform typically require that the planner consider information acquired a long time ago (e.g., properties of the many objects that the robot previously encountered in the environment). Attempts to capture the world state using an LLM's implicit internal representation are complicated by the paucity of task- and environment-relevant information available in a robot's action history, while methods that rely on the ability to convey information via the prompt to the LLM are subject to its limited context window. In this paper, we propose Statler, a framework that endows LLMs with an explicit representation of the world state as a form of "memory" that is maintained over time. Integral to Statler is its use of two instances of general LLMs--a world-model reader and a world-model writer--that interface with and maintain the world state. By providing access to this world state "memory", Statler improves the ability of existing LLMs to reason over longer time horizons without the constraint of context length. We evaluate the effectiveness of our approach on three simulated table-top manipulation domains and a real robot domain, and show that it improves the state-of-the-art in LLM-based robot reasoning. Project website: [https://statler-lm.github.io/](https://statler-lm.github.io/).
Keywords:Large language models, Long-horizon planning, World state model
## 1 Introduction
Large language models (LLMs) are capable of generating intricate free-form text and complex code with an impressive level of proficiency [1; 2; 3]. Recently, researchers have shown that the success of LLMs extends to robotics domains, where the capacity for LLMs to perform complex reasoning using language enables robots to perform tasks that require sophisticated planning and language understanding [4; 5; 6]. These methods either rely solely on the implicit in-context memory that is internal to the LLM [5] or they augment LLMs with scene information extracted from an ego-centric image captured at the current time step [4]. Both approaches have proven effective for difficult embodied reasoning tasks; however, they struggle when faced with tasks that require planning over long time horizons, due to the limited context window of contemporary LLMs. Although there have been recent efforts to enlarge the context window of LLMs [7], the size of the
context window remains fundamentally bounded. Further, providing the model with long-range context improves prediction accuracy only on a small number of tokens--LLMs struggle to exploit information conveyed in long-term context beyond what can be directly copied [8]. Meanwhile, reliance on the robot's current ego-centric view prohibits the language model from reasoning over aspects of the scene that are not directly observable, e.g., the fruit located in the (closed) kitchen refrigerator or an object in a room that the robot previously visited.
In this paper, we propose Statler (STATE-maintaining Language models for Embodied Reasoning), a framework that maintains an external world model as explicit memory to improve the long-term reasoning capabilities of LLMs for robot planning. Integral to our approach, as shown in Figure 1, it maintains and interfaces with this world model over time using two instances of general LLMs--a **world-model reader** and a **world-model writer**. The world-model reader interfaces with the world model to generate code that answers user queries. The world-model writer is responsible for predicting the next world state based on the current world state and a query given by the reader. We employ a structured representation of the world state, which has been found to improve the performance of LLMs [9; 10], particularly when the output is also structured, and has the advantage of being human-readable and concise for efficient processing. Note that while we individually tailor each world model's design to its general task type (see Prompts 12, 8, 7, and 9), the design is highly flexible because the reader and writer are both LLMs and are instructed with in-context-learning to understand how to parse and manipulate the world model. This is in contrast to domain-specific formal languages [11], where the designs are fixed and parsing and writing requires that specific rules be followed.
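The sketch below illustrates, at a high level, how the two LLM instances could interact at each step. The prompt wording, the state format, and the `llm` callable are illustrative placeholders rather than the exact prompts used by Statler (see Prompts 7-12 for the actual designs).

```python
def statler_step(world_state, user_instruction, llm):
    """One planning step: the reader produces robot code, the writer updates the explicit state.

    world_state: a structured (e.g., JSON-like) description of the scene maintained over time.
    llm: a callable that takes a prompt string and returns generated text.
    """
    # World-model reader: condition on the *current* explicit state to answer the query.
    robot_code = llm(
        f"state: {world_state}\nquery: {user_instruction}\n"
        "Write robot code that accomplishes the query given the state."
    )
    # World-model writer: predict the next state given the current state and the query.
    next_state = llm(
        f"state: {world_state}\nquery: {user_instruction}\n"
        "Update the state to reflect the effects of executing the query."
    )
    return robot_code, next_state

# The state is carried forward explicitly between steps, so reasoning does not
# depend on packing the full interaction history into the prompt.
```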
We evaluate Statler on a series of simulated and real-world robot manipulation domains. Experimental results demonstrate that Statler improves the long-term embodied reasoning capabilities of LLMs and that it outperforms the current state-of-the-art [5].
## 2 Motivational Example
As a demonstration of the challenges to temporal reasoning with LLMs, we consider a _three-cups-and-a-ball_ version of the classic shell game. In this game, three visually identical cups are placed upside down on a table with a ball hidden under one of the cups. At the start, the player knows under which of the three cups the ball lies. In each of the subsequent \(K\) rounds, the dealer swaps the position of two randomly selected cups. After the \(K\) rounds, the player is asked which of the three
Figure 1: Our Statler framework enables robots to carry out complex tasks specified in natural language that require reasoning over long time horizons. Integral to our model are its world model writer and world model reader, two instances of general LLMs that are responsible for maintaining the explicit world state and generating code that enables the robot to carry out the task.
cups contains the ball. Because the cups are visually indistinguishable, the player must keep track of the ball's location as the cups are swapped in order to successfully identify its final location.
We simulate this three-cups-and-a-ball game using text as the interface. Prompt 1 presents the setup of the game. In Line 2, the Boolean value indicates the location of the ball and the subsequent lines describe the sequence of dealer swaps. After providing the LLM with multiple in-context learning examples prior to the prompt, the model is then asked to identify the location of the ball by generating the list highlighted in green after the \(K\) swaps.
We evaluate three different approaches that attempt to solve this task: a vanilla LLM, an LLM with chain-of-thought (CoT) [12], and a state-maintaining LLM, a simplified version of our Statler model. The vanilla LLM (see Prompt 1) provides only the final location of the ball at the end of the game given the initial location and sequence of swaps. The LLM with CoT (see Prompt 2) generates the sequence of ball positions after the final swapping action. This triggers the model to reason over the state transitions (i.e., changes in the cup positions) that can help to identify the final location of the ball. The state-maintaining LLM (see Prompt 3) stores and updates a state representation at every step. In contrast to the other models, the state-maintaining LLM processes each query step-by-step conditioned on the previous (generated) state representation, and then updates the representation.
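For reference, a minimal Python sketch of the episode generator behind this game is shown below. This is our own illustration, not the paper's prompt code: the paper interacts with the LLM purely through text, and the function and variable names here are our choices. The sketch simply makes explicit the per-swap state update that the state-maintaining variant is asked to reproduce.

```python
import random

def play_shell_game(num_swaps: int, seed: int = 0):
    """Generate one episode: the swap sequence and the final ground-truth state."""
    rng = random.Random(seed)
    state = [True, False, False]                 # the ball starts under cup 0
    swaps = []
    for _ in range(num_swaps):
        i, j = rng.sample(range(3), 2)           # the dealer swaps two distinct cups
        swaps.append((i, j))
        state[i], state[j] = state[j], state[i]  # the explicit per-swap state update
    return swaps, state

swaps, final_state = play_shell_game(num_swaps=5)
print("swaps:", swaps)
print("the ball ends up under cup", final_state.index(True))
```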
We evaluate the accuracy with which these three models predict the location of the ball for different numbers of dealer swaps. We use the text-davinci-003 version of GPT-3 as our LLM using the OpenAI API.1 We prompt the LLM with \(30\) demonstration examples with a randomized number of swaps, and one final prompt for each episode. We evaluate the three models using \(100\) episodes, each of which involves querying the model for the location of the ball after every dealer swap. We terminate the episode if the response to the query is incorrect.
Footnote 1: [https://openai.com/api](https://openai.com/api)
Figure 2 visualizes the average absolute accuracy of each model as well as the accuracy relative to the model's one-swap accuracy. As we increase the number of swaps, the absolute accuracy of the vanilla LLM drops precipitously, reaching a near-zero value after only three swaps. This behavior is consistent with existing work that highlights the difficulty of maintaining the world state implicitly in LLMs [13, 14]. The LLM with CoT performs slightly better after one swap, but also experiences a pronounced decrease in absolute and relative accuracy. In contrast, the state-maintaining model consistently achieves higher absolute accuracy. More importantly, the relative accuracy of the state-maintaining model decreases far more gradually than that of the other methods, retaining more than \(75\%\) (absolute and relative) accuracy after five rounds of swaps.
Figure 2: The accuracies of different methods for different numbers of swaps in the three-cups-and-a-ball shell game. LLM w/ State is a simplified version of our proposed Statler framework. For each method, the solid line shows how its accuracy \(a(n)\) changes with the number of swaps \(n\). The dashed line is the _relative_ accuracy: \(r(n)=a(n)/a(1)\). Intuitively, it measures how fast the performance decreases from a _hypothetically perfect_ one-swap performance. Note that LLM w/ State indeed achieves \(a(1)=100\%\).
Next, we present our full method (Statler)--a generalized version of this simple state model--and demonstrate its ability to produce plans in the context of more realistic scenarios that require reasoning with significantly greater complexity.
## 3 Method
As exemplified in Section 2, the key to our approach is to allow the LLM to describe the next state while responding to each user query. The motivating example is simple in that the next state description _is_ the response. Instead, we consider a more challenging and arguably more realistic scenario, such as manipulating objects on a table as depicted in Figure 4. In this setting, there is a significant burden on the LLM to track the state updates as well as generate responses. Inspired by the concept of modularity, we propose to _split_ the burden across multiple different prompted LLMs. Precisely, we maintain a separate prompt that includes instructions and demonstrations for each subtask (state tracking or query responding) and then use the prompt to elicit an LLM to perform the particular subtask. As we will discuss shortly, our framework includes a **world-model reader** that responds to the user query and a **world-model writer** that is responsible for updating the state representation. Our framework, also shown in Figure 1, does not impose any limitation on the domains it can be applied to or on the number of subtasks. We note that our approach can be seen as an extension of Code-as-Policies, where the state-managing mechanism is additionally embedded without affecting the fundamental capability of Code-as-Policies (i.e., hierarchical code generation).
To give a better idea of how the world-model reader and writer operate, we show example prompts and what each model is expected to generate. Prompt 4 is an example of the input passed to the world-model reader. Given a user query "Put the cyan block on the yellow block" and the current state representation (Lines 1-12), the world-model reader is expected to generate the code that responds to the query, taking into account the current state. The expected code to be generated is highlighted in green. After generating the code, our model executes it to complete the query. When the state needs to be updated, the generated code contains an update_wm function, which triggers the world-model writer with the query specified in its argument. In Prompt 5, we show the corresponding example for the world-model writer. Similar to the world-model reader, we prepend the current state representation before the user query and the model generates the updated state representation
Figure 3: Examples of simulations that show the result of executing different natural language instructions using a vanilla LLM and our state-maintaining LLM.
(highlighted in green). Whenever the writer updates the state representation, we store it in external memory and refer to it as the current state representation.
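To make the division of labor concrete, the following sketch shows one way the reader/writer loop could be wired together. It is our illustration only: `call_llm`, `reader_prompt`, and `writer_prompt` are placeholders rather than the paper's actual prompts or API, and the real system additionally executes the generated code on the robot.

```python
import re

def call_llm(prompt: str) -> str:
    """Placeholder for a completion request to an LLM API."""
    raise NotImplementedError

def statler_step(state: str, user_query: str,
                 reader_prompt: str, writer_prompt: str) -> tuple[str, str]:
    """One interaction step: the reader generates code, the writer refreshes the state."""
    # 1) World-model reader: condition on the current state and the user query.
    code = call_llm(f"{reader_prompt}\n{state}\n# query: {user_query}\n")
    # 2) If the generated code requests a state update via update_wm(...),
    #    delegate that update query to the world-model writer.
    for update_query in re.findall(r"update_wm\([\"'](.+?)[\"']\)", code):
        state = call_llm(f"{writer_prompt}\n{state}\n# update: {update_query}\n")
    # 3) The caller executes `code` on the robot; `state` is stored as external memory.
    return code, state
```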
## 4 Experiments
To demonstrate the capability of our approach, we evaluate our method on three tabletop domains (shown in Figure 4): simple pick-and-place, block disinfection, and relative weight reasoning. For each domain, we designed a training prompt and consider \(20\) evaluation episodes, where each episode consists of between \(5\) and \(16\) consecutive steps of user queries. We ensure every episode contains at least one query that requires reasoning over the interaction history (i.e., it requires "memory" across steps). This section is organized as follows: First, we provide a description of the three evaluation domains. Second, we present the details of our prompt design. Third, we discuss the evaluation results and then provide qualitative analyses.
### Simulated Table-top Manipulation Domains
The **simple pick-and-place** domain involves scenarios that require a robot arm to sequentially pick up a block and place it onto another block, bowl, or the table. The model needs to remember and reason over the block locations. The example user queries are "Put the green block in the red bowl", "What is the color of the block under the pink block?", and "How many blocks are in the green bowl?"
In the **block disinfection** domain, we consider the scenario in which a block can be either _dirty_ or _clean_. When a clean block touches a dirty block (for example by stacking a dirty block on a clean block), it becomes dirty. There is a _disinfector_ on the table that cleans any block placed inside it. This scenario emulates a clean-up task in which you might ask a robot to put dirty dishes in a dishwasher or dirty clothes in a washing machine. The user query contains pick-and-place commands similar to those in the simple pick-and-place domain as well as textual utterances that require reasoning over which blocks are clean and dirty, such as "Put all the clean blocks in the green bowl." This domain presents a particular challenge as the model must effectively track the current cleanliness status of each block and accurately capture the state mutations that happen when a dirty block comes into contact with a clean block.
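As a rough illustration of the bookkeeping this domain demands of the world-model writer (our simplification; the actual state is a structured text representation maintained by an LLM, not hand-written rules), dirtiness spreads on contact and the disinfector resets it:

```python
def place_block(state: dict, block: str, target: str) -> dict:
    """Toy dirtiness bookkeeping: `state` maps each block to {'on': ..., 'dirty': bool}."""
    state = {name: dict(attrs) for name, attrs in state.items()}  # copy before mutating
    state[block]["on"] = target
    if target == "disinfector":
        state[block]["dirty"] = False                 # the disinfector cleans the block
    elif target in state:
        # contact between a dirty block and a clean block dirties both of them
        if state[block]["dirty"] or state[target]["dirty"]:
            state[block]["dirty"] = state[target]["dirty"] = True
    return state

state = {"red": {"on": "table", "dirty": True}, "blue": {"on": "table", "dirty": False}}
state = place_block(state, "red", "blue")
print(state["blue"]["dirty"])   # True: stacking the dirty red block dirtied the blue one
```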
**Relative weight reasoning** involves memorizing and reasoning over the relative weights of the blocks. User queries provide information about the weight of blocks (e.g., "The red block is twice the weight of the bronze block"), which are followed by queries that require reasoning over the weights (e.g., "Put blocks in the purple bowl so that their total weight becomes identical to what is in the gray bowl").
We run the baseline (Code-as-Policies) and our Statler state-maintaining model on each domain. Table 1 reports the success rates of each method as well as their step count until the first failed attempt to generate the correct code. We normalize the successful steps by the total number of steps for each episode.
| Method | Simple Pick-and-Place (successful steps / success rate) | Block Disinfection (successful steps / success rate) | Rel. Weight Reasoning (successful steps / success rate) |
| --- | --- | --- | --- |
| Code-as-Policies | 0.54 / 0.00 (0/20) | 0.68 / 0.00 (0/20) | 0.84 / 0.00 (0/20) |
| Statler (ours) | **0.88** / **0.50 (10/20)** | **0.82** / **0.40 (8/20)** | **0.93** / **0.55 (11/20)** |

Table 1: Number of successful steps until failure (normalized by episode length) and the success rate for each domain.
Figure 4: The simulated domains we consider include (a) pick-and-place; (b) block disinfection, where the translucent sphere around a block represents its dirtiness (this is not visible to the robot); and (c) relative weight reasoning, where the radius of the disk under each block provides an indication of its weight. These disks are rendered only as a visual aid.
We observe that the baseline Code-as-Policies model correctly processes most of the user queries that do not require reasoning over the past steps, such as "Put the red block on the blue block" or "The red block has the same weight as the blue block" (in this case, noop() is the correct code to generate). However, when it comes to the queries that require non-trivial operation of the memory, such as "Put all the dirty blocks in the pink bowl" and "What is the color of the block under the purple block?", the baseline model tends to generate incorrect code or often fails to generate any code at all (see Figure 5 (left)). In contrast, our Statler model successfully handles the majority of cases that require complex logical reasoning over the past history. In each of the three domains, we find that Statler outperforms the baseline in the majority of scenarios.
In order to better understand the behavior of Statler, we analyze the success rate of code generation based on the type of textual utterance. Specifically, we categorize each query as either _temporal_ or _non-temporal_ depending on whether it involves temporal reasoning. Table 2 summarizes the performance of Statler in comparison to Code-as-Policies on both types of queries. We note that we consider the sequence of steps up until the point at which the model fails to generate correct code, including the step on which it failed. The difference in denominator between the two models under the same setting results from the fact that the models fail at different steps in some episodes. We also report an alternative way to calculate the success rate in Table 2, by aligning the set of queries evaluated by both of the models.
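For clarity, the small helper below mirrors, in our own notation, how the per-category numbers in Table 2 can be tallied: each episode contributes its steps only up to and including the first failed step. This is an illustration of the metric, not the paper's evaluation code.

```python
def tally_until_first_failure(episode):
    """`episode` is an ordered list of (is_temporal, correct) pairs, one per step.
    Returns per-category [successes, attempts], truncating after the first failure."""
    counts = {"temporal": [0, 0], "non-temporal": [0, 0]}   # [successes, attempts]
    for is_temporal, correct in episode:
        key = "temporal" if is_temporal else "non-temporal"
        counts[key][1] += 1
        counts[key][0] += int(correct)
        if not correct:
            break                                           # include the failed step, then stop
    return counts

print(tally_until_first_failure([(False, True), (True, True), (True, False), (False, True)]))
# {'temporal': [1, 2], 'non-temporal': [1, 1]} -- the step after the failure is never counted
```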
Examining the failure cases reveals some interesting observations. Firstly, we find that both models generally handle the basic pick-and-place tasks successfully. However, the baseline model consistently fails to generate a response when presented with a non-trivial query that involves reasoning over the past. Secondly, thanks to its state-updating mechanism, our model demonstrated superior comprehension of complex queries, resulting in better performance. For instance, on queries like "Put the block in the golden bowl on the block in the silver bowl", our model executed flawlessly, whereas the baseline model consistently failed.
Despite its robustness, our model is not without errors. It occasionally generates incorrect responses and still suffers from hallucinations. For example, it hallucinates block conditions (clean or not)
| Domain | Non-temporal: Code-as-Policies | Non-temporal: Statler (ours) | Temporal: Code-as-Policies | Temporal: Statler (ours) |
| --- | --- | --- | --- | --- |
| Simple Pick-and-Place | 1.00 (62/62) | 1.00 (68/68) | 0.31 (9/29) | **0.83** (48/58) |
| Block Disinfection | 0.99 (148/149) | 0.98 (164/168) | 0.05 (1/20) | **0.65** (15/23) |
| Weight Reasoning | 1.00 (107/107) | 1.00 (107/107) | 0.00 (0/20) | **0.55** (11/20) |

Table 2: Success rates of Code-as-Policies and Statler for non-temporal and temporal queries, truncating at the first failure of each model.
Figure 5: Examples that show the result of querying language models with and without state maintenance for the environment depicted in the image. In the scenario depicted on the left, a standard language model fails to produce an answer, while our state-maintaining language model produces the correct response. On the right, one of the blocks is currently not visible and so a standard language model (Code-as-Policies) incorrectly identifies two blocks as not being in the bowls. By maintaining a persistent model of the world, our method is aware of the third block and correctly answers the query.
or locations when the cleanliness of the block is never explicitly described. Moreover, the model's reasoning strategy seems to predominantly focus on evaluating the weight relationships between blocks, e.g., contemplating whether a block is light or heavy, rather than executing mathematical computations. This weakness became evident when the model was asked to accumulate blocks in a bowl until their total weight surpassed another bowl's content, as the model underfilled the bowl. Additionally, our model makes other mistakes, such as struggling to comprehend ambiguous terms like "other" in queries such as "the other blocks are clean." In the disinfection domain, it wrongly inferred from the training prompt that a block _at the bottom_ becomes dirty when another block is placed on top of it, independent of the cleanliness of the placed block, rather than "a block becomes dirty when it is in contact with a dirty block."
### Real Robot Experiments
In order to validate our method on a real robot, we implement it on a UR5 arm in a tabletop domain similar to the simulated experiments. Because ground-truth positions of objects are not available, unlike in simulation, we use MDETR [15], an open-vocabulary segmentation model, to obtain segmentation masks for objects from an RGB camera on the gripper. Through camera transforms of the masks and a depth camera also located on the gripper, we obtain the \((x,y,z)\) positions for grasping and placement. Besides these details, all of the primitive functions are the same as in simulation. In this domain, the robot is asked to stack objects and to cover objects with different colored cups. At any point, an object is only permitted to be covered by a single object or cover. If the robot is asked to manipulate the bottom object, it must remove the top object. If it is asked to use a new cover, it must remove the old cover. In Figure 6, we provide a short example where the vanilla language model approach fails. The difficulty is in recognizing that the black cup must be removed in order to move the yellow block, which Statler correctly spots. Instead, the vanilla approach assumes that the object does not need to be uncovered, which leads MDETR to incorrectly detect a toy wheel that contains yellow as the yellow block.
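For context, the step from a segmentation mask and an aligned depth image to a graspable \((x,y,z)\) point can be sketched as below under a pinhole-camera assumption. This is our illustration only; the actual calibration, the MDETR interface, and the transform into the robot's base frame are not specified in this excerpt.

```python
import numpy as np

def mask_to_xyz(mask: np.ndarray, depth: np.ndarray,
                fx: float, fy: float, cx: float, cy: float) -> np.ndarray:
    """Back-project the masked pixels into the camera frame and return their centroid."""
    v, u = np.nonzero(mask)                 # pixel rows/columns inside the object mask
    z = depth[v, u].astype(float)           # metric depth at those pixels
    valid = z > 0
    if not np.any(valid):
        raise ValueError("mask has no valid depth readings")
    u, v, z = u[valid], v[valid], z[valid]
    x = (u - cx) * z / fx                   # pinhole back-projection
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1).mean(axis=0)   # centroid as a grasp candidate
```

A hand-eye calibration would still be needed to express this camera-frame point in the robot's base frame.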
Figure 6: A comparison of the resulting behavior for (top) Code-as-Policies and (bottom) our Statler model for the real robot experiments given the multi-sentence instruction “Put the black cup on the yellow block. Put the yellow block on the Rubik’s cube.” Frames are arranged with time increasing from left to right, and correspond to instances when the robot has placed a (possibly imaginary) object. In order to successfully carry out the instruction, the robot must remove the black cup after placing it above the yellow block in order to place the block on the Rubik’s cube. However, the baseline Code-as-Policies (top row, third frame) fails to move the black cup aside, leaving the yellow block covered, and instead places an imaginary object on top of the Rubik’s cube.
## 5 Related Work
**Language Understanding for Robotics** There is a large body of work on language understanding for robotic agents dating back several decades. A common approach involves symbol grounding [16], whereby words and phrases are mapped to symbols in the robot's world model. Early work [17; 18] relies upon hand-engineered rules to perform this mapping. More recent methods replace these rules with statistical models the parameters of which are trained on annotated corpora [19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30]. Other methods use neural network-based architectures to jointly reason over natural language utterances and the agent's (visual) observations of the scene [31; 32; 33; 34; 35].
**LLMs for Robotics** Since LLMs are trained on enormous Internet corpora, their infused common sense has been shown to help in the domain of robotics in terms of high-level planning from natural language instructions [4; 5; 36] for both object manipulation [37; 38] and navigation tasks [39; 40; 41; 42]. Combining LLMs with expressive visual-language embeddings also enables impressive capabilities [43]. This has led to efforts to push for general multi-modality embodied models [44; 45].
**Code Generation with LLMs** Code generation has been one of the most successful use cases for LLMs [2; 46; 47; 48; 49; 3]. Since code can connect with executable APIs for tasks including computation, vision, and manipulation, a large body of work has focused on code generation with different tools [50; 51; 52]. In particular, Code-as-Policies [5] was one of the first to use code generation within a robotics context.
**State Representation in Reasoning** State representation is a common formulation in robotics to summarize and provide necessary information for the agents to perform actions [53; 54]. For example, in a Markov chain, the state is constructed so that future predictions are independent of the past given the current state. This saves the agent from remembering all details of the history [55]. State representation has also been helpful in algorithmic reasoning tasks [56; 57]. Instead of using one forward pass to predict the execution result for the whole code snippet, Nye et al. [56] propose to spell out step-by-step intermediate outputs to help infer the final execution results. Also relevant are research efforts that aim to enhance language modeling by rolling out possible future tokens [58].
## 6 Conclusion
In this paper, we presented Statler, a state-maintaining language model that consists of a world-model reader and a writer. The world-model reader responds to a user query taking into account the current internal state, while offloading the state update to the world-model writer. Our model does not impose any limitations on how the state representation is formatted, as long as it is represented as a string, leaving flexibility in its design. We evaluated our approach on various simulated and real tasks. The experimental results suggest that our approach effectively maintains the state representation and handles non-trivial reasoning over past steps, whereas the baseline approach (Code-as-Policies) fails to generate correct code on such queries. Since the capability of the world-model reader depends directly on the language model behind it, our model has the potential to handle various challenging scenarios as well as various types of state representations, given a strong backbone LLM.
In addition, having separate models (i.e., the world-model reader and the world-model writer) suggests that it may be possible to use a lightweight language model for some components. For example, if the task for the world-model writer is much easier than the reader, one can utilize a smaller LLM with reduced API costs or one that is hosted locally to complete the task without sacrificing performance.
A potential extension of our work is to integrate numerical representation, such as coordinates and sizes of the objects, into the state. An ability to reason over these quantities will be an important step toward embodied intelligence.
## 7 Limitations
There are several limitations with the current approach. Firstly, although highly flexible, the world models are designed by hand individually for each task. Ideally, there should be an automatic way of generating them, perhaps from the LLMs themselves. Secondly, the current world models are still purely text-based, so they do not directly reason over visual information. It will be interesting to revisit this design when more capable multi-modal models become accessible. Thirdly, in this paper, we assume that the generated code executes successfully; thus, if there are issues in execution, the updated state will be incorrect. This could be alleviated by providing feedback from external modules such as image captioning models.
## Acknowledgements
We are grateful to the National Science Foundation for enabling this work under HDR TRIPODS (No. 2216899), and to Adobe for supporting the second-to-last author through an Adobe Research gift. We thank Luzhe Sun for his help with prompt writing at an early stage, and Richard Xu for helping to set up the simulator.
|
2309.08883 | Solving Satisfiability Modulo Counting for Symbolic and Statistical AI
Integration With Provable Guarantees | Satisfiability Modulo Counting (SMC) encompasses problems that require both
symbolic decision-making and statistical reasoning. Its general formulation
captures many real-world problems at the intersection of symbolic and
statistical Artificial Intelligence. SMC searches for policy interventions to
control probabilistic outcomes. Solving SMC is challenging because of its
highly intractable nature($\text{NP}^{\text{PP}}$-complete), incorporating
statistical inference and symbolic reasoning. Previous research on SMC solving
lacks provable guarantees and/or suffers from sub-optimal empirical
performance, especially when combinatorial constraints are present. We propose
XOR-SMC, a polynomial algorithm with access to NP-oracles, to solve highly
intractable SMC problems with constant approximation guarantees. XOR-SMC
transforms the highly intractable SMC into satisfiability problems, by
replacing the model counting in SMC with SAT formulae subject to randomized XOR
constraints. Experiments on solving important SMC problems in AI for social
good demonstrate that XOR-SMC finds solutions close to the true optimum,
outperforming several baselines which struggle to find good approximations for
the intractable model counting in SMC. | Jinzhao Li, Nan Jiang, Yexiang Xue | 2023-09-16T05:34:59Z | http://arxiv.org/abs/2309.08883v2 | Solving Satisfiability Modulo Counting for Symbolic and Statistical AI Integration With Provable Guarantees
###### Abstract
Satisfiability Modulo Counting (SMC) encompasses problems that require both symbolic decision-making and statistical reasoning. Its general formulation captures many real-world problems at the intersection of symbolic and statistical Artificial Intelligence. SMC searches for policy interventions to control probabilistic outcomes. Solving SMC is challenging because of its highly intractable nature (NP\({}^{\text{PP}}\)-complete), incorporating statistical inference and symbolic reasoning. Previous research on SMC solving lacks provable guarantees and/or suffers from sub-optimal empirical performance, especially when combinatorial constraints are present. We propose XOR-SMC, a polynomial algorithm with access to NP-oracles, to solve highly intractable SMC problems with constant approximation guarantees. XOR-SMC transforms the highly intractable SMC into satisfiability problems, by replacing the model counting in SMC with SAT formulae subject to randomized XOR constraints. Experiments on solving important SMC problems in AI for social good demonstrate that XOR-SMC finds solutions close to the true optimum, outperforming several baselines which struggle to find good approximations for the intractable model counting in SMC.
## 1 Introduction
Symbolic and statistical approaches are two fundamental driving forces of Artificial Intelligence (AI). Symbolic AI, exemplified by SATisfiability (SAT) and constraint programming, finds solutions satisfying constraints but requires rigid formulations and makes it difficult to include probabilities. Statistical AI captures uncertainty but often lacks constraint satisfaction. Integrating symbolic and statistical AI remains an open field and has gained research attention recently [1, 2, 3].
Satisfiability Modulo Counting (SMC) is an umbrella problem at the intersection of symbolic and statistical AI. It encompasses problems that carry out symbolic decision-making (satisfiability) _mixed with_ statistical reasoning (model counting). SMC searches for policy interventions to control probabilistic outcomes. Formally, SMC is an SAT problem involving predicates on model counts. Model counting computes the number of models (i.e., solutions) to an SAT formula. Its weighted form subsumes probabilistic inference on Machine Learning (ML) models.
As a motivating SMC application, stochastic connectivity optimization searches for the optimal plan to reinforce the network structure so that its connectivity is preserved under stochastic events - a central problem for a city planner who works on securing her residents multiple paths to emergency shelters in case of natural disasters. This problem is useful for disaster preparation [4], bio-diversity protection [5], internet resilience [6], social influence maximization [7], energy security [8], etc. It requires symbolic reasoning (satisfiability) to decide which roads to reinforce and where to place emergency shelters, and statistical
inference (model counting) to reason about the number of paths to shelters and the probabilities of natural disasters. Despite successes in many use cases, previous approaches [9, 10, 11, 12] found solutions _lacking certifiable guarantees_, which are unfortunately needed for policy adoption in this safety-related application. Besides, their surrogate approximations of connectivity may overlook important probabilistic scenarios. This results in _suboptimal quality_ of the generated plans. As application domains for SMC solvers, this paper considers emergency shelter placement and supply chain network management - two important stochastic connectivity optimization problems.
It is challenging to solve SMC because of its highly intractable nature (NP\({}^{\text{PP}}\)-complete) [13] - still intractable even with good satisfiability solvers [14, 15, 16] and model counters [17, 18, 19, 20, 21, 22, 23]. Previous research on SMC solves either a special case or domain-specific applications [24, 25, 26, 27, 28, 29, 30]. The special case is called the Marginal Maximum-A-Posteriori (MMAP) problem, whose decision version can be formulated as a special case of SMC [31, 32, 33, 34, 35, 36]. Both cases are solved by optimizing the surrogate representations of the intractable model counting in variational forms [37, 38], or via knowledge compilation [39, 36, 40], or via sample average approximation [41, 42, 43, 11, 44, 45, 46, 47].
Nevertheless, previous approaches either cannot quantify the quality of their solutions, or offer one-sided guarantees, or offer guarantees which can be arbitrarily loose. The lack of tight guarantees results in delayed policy adoption in safety-related applications such as the stochastic connectivity optimization considered in this paper. Second, optimizing surrogate objectives without quantifying the quality of approximation leads to sub-optimal behavior empirically. For example, previous stochastic connectivity optimization solvers occasionally produce suboptimal plans because their surrogate approximations overlook cases of significant probability. This problem is amplified when combinatorial constraints are present.
We propose XOR-SMC, _a polynomial algorithm accessing NP-oracles, to solve highly intractable SMC problems with constant approximation guarantees_. These guarantees hold with high (e.g. \(>99\%\)) probability. The strong guarantees enable policy adoption in safety-related domains and improve the empirical performance of SMC solving (e.g., eliminating sub-optimal behavior and providing constraint satisfaction guarantees). The constant approximation means that the solver can correctly decide the truth of an SMC formula if tightening or relaxing the bounds on the model count by a multiplicative constant does not change its truth value. The embedded algorithms allow us to find approximate solutions to beyond-\(NP\) SMC problems by querying \(NP\) oracles. This expands the applicability of state-of-the-art SAT solvers to highly intractable problems.
The high-level idea behind XOR-SMC is as follows. Imagine a magic that randomly filters out half of the models (solutions) of an SAT formula. Model counting can be approximated using this magic and an SAT solver - we confirm the SAT formula has more than \(2^{k}\) models if it is satisfiable after applying this magic \(k\) times. This magic can be implemented by introducing randomized constraints. The idea has been developed by a line of research [48, 49, 50, 17, 18, 51, 52, 19, 20, 53]. In these works, model counting is approximated with guarantees using polynomial algorithms accessing NP oracles. XOR-SMC notices that such polynomial algorithms can be encoded as SAT formulae. Hence, SAT-Modulo-Counting can be written as SAT-Modulo-SAT (or equivalently SAT), when we _embed_ the SAT formula compiled from algorithms that solve model counting into SMC. The constant approximation guarantee also carries over.
We evaluate the performance of XOR-SMC on real-world stochastic connectivity optimization problems. In particular, we consider applied problems of emergency shelter placement and supply chain management. For the shelter placement problem, our XOR-SMC finds shelter assignments of better quality with less computation time than competing baselines. For wheat supply chain management, the solutions found by our XOR-SMC are better than those found by baselines and are close to the optimal solutions.
Our contributions can be summarized as follows:
* We propose XOR-SMC to solve highly intractable Satisfiability Modulo Counting problems with a polynomial number of queries to NP-oracles.
* We prove the proposed XOR-SMC method has a high probability constant approximation guarantee.
* In experiments, we consider two important SMC problems: emergency shelter allocation and robust supply chain design. Our XOR-SMC method requires less time to find better solutions compared to several baselines.
## 2 Preliminaries
### Satisfiability Modulo Theories
Satisfiability Modulo Theory (SMT) determines the SATisfiability (SAT) of a Boolean formula, which contains predicates whose truth values are determined by the background theory. SMT represents a line of successful efforts to build general-purpose logic reasoning engines, encompassing complex expressions containing bit vectors, real numbers, integers, strings, etc. [54]. Over the years, many good SMT solvers have been built, such as Z3 [55, 56] and cvc5 [57]. They play a crucial role in automated theorem proving, program analysis [58], program verification [59], and software testing [60].
### Model Counting and Probabilistic Inference
Model counting computes the number of models (i.e., satisfying variable assignments) to an SAT formula. Consider a Boolean formula \(f(\mathbf{x})\), where the input \(\mathbf{x}\) is a vector of Boolean variables, and the output \(f\) is also Boolean. When we use 0 to represent false and 1 to represent true, \(\sum_{x}f(\mathbf{x})\) computes the model count. Model counting is closely related to probabilistic inference and machine learning because the marginal inference on a wide range of probabilistic models can be formulated as a weighted model counting problem [61, 62].
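As a concrete (if exponential) reference point, the quantity \(\sum_{\mathbf{x}}f(\mathbf{x})\) can be computed by direct enumeration for small formulas; the sketch below is our illustration of the definition, not a practical counter.

```python
from itertools import product

def model_count(f, num_vars: int) -> int:
    """Enumerate all 2^n assignments and count those satisfying f (feasible only for small n)."""
    return sum(f(a) for a in product([False, True], repeat=num_vars))

# Example: f(x1, x2, x3) = (x1 or x2) and not x3 has exactly 3 models.
print(model_count(lambda a: (a[0] or a[1]) and not a[2], 3))
```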
Exact approaches for probabilistic inference and model counting are often based on knowledge compilation [63, 21, 64, 65]. Approximate approaches include Variational methods and sampling. Variational methods [66, 67, 27, 68, 69] use tractable forms to approximate a complex probability distribution. Due to a tight relationship between counting and sampling [49], sampling-based approaches are important for model counting. Importance sampling-based techniques such as SampleSearch [70] is able to provide lower bounds. Markov Chain Monte Carlo is asymptotically accurate. However, they cannot provide guarantees except for a limited number of cases [71, 72]. The authors of [73, 74, 75] transform weighted integration into optimization queries using extreme value distribution, which today is often called the "Gumbel trick" [76, 77].
### XOR Counting
There is an interesting connection between model counting and solving satisfiability problems subject to randomized XOR constraints. To illustrate this, hold \(\mathbf{x}\) at \(\mathbf{x}_{0}\) and suppose we would like to know whether \(\sum_{\mathbf{y}\in\mathcal{Y}}f(\mathbf{x}_{0},\mathbf{y})\) exceeds \(2^{q}\). Consider the SAT formula:
\[f(\mathbf{x}_{0},\mathbf{y})\wedge\texttt{XOR}_{1}(\mathbf{y})\wedge\ldots \wedge\texttt{XOR}_{q}(\mathbf{y}). \tag{1}\]
Here, \(\texttt{XOR}_{1},\ldots,\texttt{XOR}_{q}\) are randomly sampled XOR constraints. \(\texttt{XOR}_{i}(\mathbf{y})\) is the logical XOR or the parity of a randomly-sampled subset of variables from \(\mathbf{y}\). In other words, \(\texttt{XOR}_{i}(\mathbf{y})\) is true if and only if an odd number of these randomly sampled variables in the subset are true.
Formula (1) is likely to be satisfiable if more than \(2^{q}\) different \(\mathbf{y}\) vectors render \(f(\mathbf{x}_{0},\mathbf{y})\) true. Conversely, Formula (1) is likely to be unsatisfiable if \(f(\mathbf{x}_{0},\mathbf{y})\) has less than \(2^{q}\) satisfying assignments. The significance of this fact is that it essentially transforms model counting (beyond NP) into satisfiability problems (within NP). An intuitive explanation of why this fact holds is that each satisfying assignment \(\mathbf{y}\) has 50% chance to satisfy a randomly sampled XOR constraint. In other words, each XOR constraint "filters out" half satisfying assignments. For example, the number of models satisfying \(f(\mathbf{x}_{0},\mathbf{y})\wedge\texttt{XOR}_{1}(\mathbf{y})\) is approximately half of that satisfying \(f(\mathbf{x}_{0},\mathbf{y})\). Continuing this chain of reasoning, if \(f(\mathbf{x}_{0},\mathbf{y})\) has more than \(2^{q}\) solutions, there are still satisfying assignments left after adding \(q\) XOR constraints; hence formula (1) is likely satisfiable. The reverse direction can be reasoned similarly. The precise mathematical argument of the constant approximation is in Lemma 1.
**Lemma 1**.: _[_49, 17, 78_]_ _Given a Boolean function \(f(\mathbf{x}_{0},\mathbf{y})\) as defined above,_
* _If_ \(\sum_{\mathbf{y}}f(\mathbf{x}_{0},\mathbf{y})\geq 2^{q_{0}}\)_, then for any_ \(q\leq q_{0}\)_, with probability_ \(1-\frac{2^{c}}{(2^{c}-1)^{2}}\)_,_ XOR-Binary__\((f,\mathbf{x}_{0},q-c)\) _returns true._
* _If_ \(\sum_{\mathbf{y}}f(\mathbf{x}_{0},\mathbf{y})\leq 2^{q_{0}}\)_, then for any_ \(q\geq q_{0}\)_, with probability_ \(1-\frac{2^{c}}{(2^{c}-1)^{2}}\)_,_ XOR-Binary__\((f,\mathbf{x}_{0},q+c)\) _returns false._
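Lemma 1 refers to an XOR-Binary procedure that is not reproduced in this excerpt. A minimal, brute-force reading of such a check, add \(q\) random parity constraints to the formula and test whether any satisfying assignment survives, is sketched below for illustration; a practical implementation would hand the augmented formula to a SAT solver instead of enumerating assignments, and the helper names are ours.

```python
import random
from itertools import product

def random_xor(num_vars: int, rng: random.Random):
    """Sample a random parity constraint over a random subset of variables, with a random sign."""
    subset = [i for i in range(num_vars) if rng.random() < 0.5]
    target = rng.random() < 0.5
    return lambda a, s=subset, t=target: (sum(a[i] for i in s) % 2 == 1) == t

def survives_q_xors(f, num_vars: int, q: int, seed: int = 0) -> bool:
    """True iff f AND q random XOR constraints is still satisfiable (checked by enumeration)."""
    rng = random.Random(seed)
    xors = [random_xor(num_vars, rng) for _ in range(q)]
    return any(f(a) and all(x(a) for x in xors)
               for a in product([0, 1], repeat=num_vars))
```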
This idea of transforming model counting problems into SAT problems subject to randomized constraints is rooted in Leslie Valiant's seminal work on unique SAT [48, 49] and has been developed by a rich line of work [50, 17, 18, 51, 52, 19, 20, 53]. This idea has recently gathered momentum thanks to the rapid progress in SAT solving [79, 16]. The contribution of this work extends the success of SAT solvers to problems with even higher complexity, namely, NP\({}^{\text{PP}}\)-complete SMC problems.
## 3 Problem Formulation
Satisfiability Modulo Counting (SMC) is Satisfiability Modulo Theory (SMT) [54] with model counting as the background theory. A canonical definition of the SMC problem is to determine if there exists \(\mathbf{x}=(x_{1},\ldots,x_{n})\in\mathcal{X}=\{0,1\}^{n}\) and \(\mathbf{b}=(b_{1},\ldots,b_{k})\in\{0,1\}^{k}\) that satisfies the formula:
\[\phi(\mathbf{x},\mathbf{b}),b_{i}\Leftrightarrow\left(\sum_{\mathbf{y}_{i}\in \mathcal{Y}_{i}}f_{i}(\mathbf{x},\mathbf{y}_{i})\geq 2^{q_{i}}\right),\forall i \in\{1..,k\}. \tag{2}\]
Here each \(b_{i}\) is a Boolean predicate that is true if and only if the corresponding model count exceeds a threshold. Bold symbols (i.e., \(\mathbf{x}\), \(\mathbf{y}_{i}\) and \(\mathbf{b}\)) are vectors of Boolean variables. \(\phi,f_{1},\ldots,f_{k}\) are Boolean functions (i.e., their input is Boolean vectors, and their outputs are also Boolean). We use \(0\) to represent false
and \(1\) to represent true. Hence \(\sum f_{i}\) computes the number of satisfying assignments (model counts) of \(f_{i}\). The directions of the inequalities do not matter much because one can always negate each \(f_{i}\).
Our XOR-SMC algorithm obtains a constant approximation guarantee for the following slightly relaxed SMC problem. The problem \(\texttt{SMC}(\phi,f_{1},\ldots,f_{k},\)\(q_{1},\ldots,q_{k})\) finds a satisfying assignment \((\mathbf{x},\mathbf{b})\) for:
\[\phi(\mathbf{x},\mathbf{b})\wedge\left[b_{1}\Rightarrow\left( \sum_{\mathbf{y}_{1}\in\mathcal{Y}_{1}}f_{1}(\mathbf{x},\mathbf{y}_{1})\geq 2 ^{q_{1}}\right)\right]\cdots\wedge\left[b_{k}\Rightarrow\left(\sum_{\mathbf{y }_{k}\in\mathcal{Y}_{k}}f_{k}(\mathbf{x},\mathbf{y}_{k})\geq 2^{q_{k}}\right) \right]. \tag{3}\]
The only difference compared to the full-scale problem in Eq. (2) is the replacement of \(\Leftrightarrow\) with \(\Rightarrow\). This change allows us to derive a concise constant approximation bound. We also mention that all the applied SMC problems considered in this paper can be formulated in this relaxed form.
## 4 The XOR-SMC Algorithm
The key motivation behind our proposed XOR-SMC algorithm is to notice that Algorithm 1 itself can be written as a Boolean formula due to the Cook-Levin reduction. When we embed this Boolean formula into Eq. (3), the Satisfiability-Modulo-Counting problem translates into a Satisfiability-Modulo-SAT problem, or equivalently, an SAT problem. This embedding also ensures a constant approximation guarantee (see Theorem 2).
To illustrate the high-level idea, let us consider replacing each \(\sum_{\mathbf{y}_{i}\in\mathcal{Y}_{i}}f_{i}(\mathbf{x},\mathbf{y}_{i})\geq 2 ^{q_{i}}\) in Eq. (3) with formula
\[f_{i}(\mathbf{x},\mathbf{y}_{i})\wedge\texttt{XOR}_{1}(\mathbf{ y}_{i})\wedge\ldots\wedge\texttt{XOR}_{q_{i}}(\mathbf{y}_{i}). \tag{4}\]
We denote the previous equation (4) as \(\gamma(f_{i},\mathbf{x},q_{i},\mathbf{y}_{i})\). This replacement results in the Boolean formula:
\[\phi(\mathbf{x},\mathbf{b})\wedge\left[b_{1}\Rightarrow\gamma(f_ {1},\mathbf{x},q_{1},\mathbf{y}_{1})\right]\wedge\cdots\wedge\left[b_{k} \Rightarrow\gamma(f_{k},\mathbf{x},q_{k},\mathbf{y}_{k})\right]. \tag{5}\]
We argue that the satisfiability of formula (5) should be closely related to that of formula (3) due to the connection between model counting and satisfiability testing subject to randomized constraints (discussed in Section 2.3). To see this, Eq. (5) is satisfiable if and only if there exists \((\mathbf{x},\mathbf{b},\mathbf{y}_{1},\ldots,\mathbf{y}_{k})\) that render Eq. (5) true (notice \(\mathbf{y}_{1},\ldots,\mathbf{y}_{k}\) are also its variables). Suppose \(\texttt{SMC}(\phi,f_{1},\ldots,f_{k},q_{1}+c,\ldots,q_{k}+c)\) is satisfiable
Figure 1: XOR-SMC (Algorithm 2) solves the intractable model counting with satisfiability problems subject to randomized XOR constraints and obtains constant approximation guarantees for SMC.
(i.e., Eq. (3) is satisfiable when \(q_{i}\) is replaced with \(q_{i}+c\)). Let \((\mathbf{x},\mathbf{b})\) be a satisfying assignment. For any \(b_{i}=1\) (true) in \(\mathbf{b}\), we must have \(\sum_{\mathbf{y}_{i}\in\mathcal{Y}_{i}}f_{i}(\mathbf{x},\mathbf{y}_{i})\geq 2^{q_{i}+c}\). This implies that, with a good chance, there exists a \(\mathbf{y}_{i}\) that renders \(\gamma(f_{i},\mathbf{x},q_{i},\mathbf{y}_{i})\) true. This is due to the discussed connection between model counting and SAT solving subject to randomized constraints. Hence \(b_{i}\Rightarrow\gamma(f_{i},\mathbf{x},q_{i},\mathbf{y}_{i})\) is true. For any \(b_{i}=0\) (false), the previous implication is true by default. Combining these two facts with the fact that \(\phi(\mathbf{x},\mathbf{b})\) is true, we see Eq. (5) is true.
Conversely, suppose \(\texttt{SMC}(\phi,f_{1},\ldots,f_{k},q_{1}-c,\ldots,q_{k}-c)\) is not satisfiable. This implies for every \((\mathbf{x},\mathbf{b})\), either \(\phi(\mathbf{x},\mathbf{b})\) is false, or there exists at least one \(j\) such that \(b_{j}\) is true, but \(\sum_{\mathbf{y}_{j}\in\mathcal{Y}_{j}}f_{j}(\mathbf{x},\mathbf{y}_{j})<2^{q _{j}-c}\). The first case implies Eq. (5) is false under the assignment. For the second case, \(\sum_{\mathbf{y}_{j}\in\mathcal{Y}_{j}}f_{j}(\mathbf{x},\mathbf{y}_{j})<2^{q _{j}-c}\) implies with a good chance there is no \(\mathbf{y}_{j}\) to make \(\gamma(f_{j},\mathbf{x},q_{j},\mathbf{y}_{j})\) true. Combining these two facts, with a good chance Eq. (5) is not satisfiable.
In practice, to reduce the error probability, the determination of the model count needs to rely on the majority satisfiability status of a series of formulas of the form (4) (instead of a single one). Hence we develop Algorithm 2, which is slightly more complex than the high-level idea discussed above. The idea is still to _transform the highly intractable SMC problem into solving an SAT problem of polynomial size_, while _ensuring a constant approximation guarantee_. Fig. 1 displays the encoding of Algorithm 2. We can see the core is still to replace the intractable model count with satisfiability problems subject to randomized constraints. We prove that XOR-SMC has a constant approximation guarantee in Theorem 2. We leave the implementation of XOR-SMC to Appendix B.
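A drastically simplified rendering of the majority test is sketched below, reusing the `survives_q_xors` helper from the Section 2.3 sketch. We stress this is our illustration for a single candidate \((\mathbf{x},\mathbf{b})\): the actual Algorithm 2 encodes the search over \((\mathbf{x},\mathbf{b})\) and the auxiliary \(\mathbf{y}\) variables into a single SAT instance, as depicted in Fig. 1.

```python
import random

def majority_check(phi, fs, qs, x, b, num_y_vars, T=11, seed=0) -> bool:
    """Accept the candidate (x, b) if phi(x, b) holds and the majority of T randomized
    formulas psi_t are satisfiable; each psi_t requires, for every asserted b_i, that
    some y_i survives q_i random XOR constraints."""
    if not phi(x, b):
        return False
    rng = random.Random(seed)
    hits = 0
    for _ in range(T):
        sat = all(survives_q_xors(lambda y, f=f: f(x, y), num_y_vars, q,
                                  seed=rng.randrange(10 ** 9))
                  for f, q, b_i in zip(fs, qs, b) if b_i)
        hits += int(sat)
    return hits >= T / 2
```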
**Theorem 2**.: _Let \(0<\eta<1\) and \(c\geq\log(k+1)+1\). Selecting \(T=\lceil((n+k)\ln 2-\ln\eta)/\alpha(c,k)\rceil\), we have_
* _Suppose there exists_ \(\mathbf{x}_{0}\in\{0,1\}^{n}\) _and_ \(\mathbf{b}_{0}\in\{0,1\}^{k}\)_, such that_ \(\texttt{SMC}(\phi,f_{1},\ldots,f_{k},q_{1}+c,\ldots,q_{k}+c)\) _is_
true. In other words,_
\[\phi(\mathbf{x}_{0},\mathbf{b}_{0})\wedge\left(\bigwedge_{i=1}^{k}\left(b_{i} \Rightarrow\sum_{\mathbf{y}_{i}}f_{i}(\mathbf{x}_{0},\mathbf{y}_{i})\geq 2^{q_{ i}+c}\right)\right),\]
_Then algorithm_ XOR-SMC _(\(\phi\), \(\{f_{i}\}_{i=1}^{k}\), \(\{q_{i}\}_{i=1}^{k}\), \(T\)) returns true with probability greater than \(1-\eta\)._
\(\bullet\) _Contrarily, suppose_ SMC\((\phi,f_{1},\ldots,f_{k},q_{1}-c,\ldots,q_{k}-c)\) _is not satisfiable. In other words, for all_ \(\mathbf{x}\in\{0,1\}^{n}\) _and_ \(\mathbf{b}\in\{0,1\}^{k}\)_,_
\[\neg\left(\phi(\mathbf{x},\mathbf{b})\wedge\left(\bigwedge_{i=1}^{k}\left(b_{ i}\Rightarrow\sum_{\mathbf{y}_{i}}f_{i}(\mathbf{x},\mathbf{y}_{i})\geq 2^{q_{ i}-c}\right)\right)\right),\]
_then_ XOR-SMC _(\(\phi\), \(\{f_{i}\}_{i=1}^{k}\), \(\{q_{i}\}_{i=1}^{k}\), \(T\)) returns false with probability greater than \(1-\eta\)._
Proof.: **Claim 1:** Suppose there exists \(\mathbf{x}_{0}=[x_{1},\ldots,x_{n}]\in\{0,1\}^{n}\) and \(\mathbf{b}_{0}=[b_{1},\ldots,b_{k}]\in\{0,1\}^{k}\), such that
\[\phi(\mathbf{x}_{0},\mathbf{b}_{0})\wedge\left(\bigwedge_{i=1}^{k}\left(b_{i} \Rightarrow\sum_{\mathbf{y}_{i}}f_{i}(\mathbf{x}_{0},\mathbf{y}_{i})\geq 2^{q _{i}+c}\right)\right) \tag{6}\]
holds true. Denote \(k_{0}\) as the number of non-zero bits in \(\mathbf{b}_{0}\). Without loss of generality, suppose those non-zero bits are the first \(k_{0}\) bits, i.e., \(b_{1}=b_{2}=\cdots=b_{k_{0}}=1\) and \(b_{i}=0,\forall i>k_{0}\). Then Eq. (6) can be simplified to:
\[\phi(\mathbf{x}_{0},\mathbf{b}_{0})\wedge\left(\bigwedge_{i=1}^{k_{0}}\left( \sum_{\mathbf{y}_{i}}f_{i}(\mathbf{x}_{0},\mathbf{y}_{i})\geq 2^{q_{i}+c}\right)\right) \tag{7}\]
Consider the Boolean formula \(\psi_{t}\) defined in the XOR-SMC algorithm (choosing any \(t\in\{1,\ldots,T\}\)). \(\psi_{t}\) can be simplified by substituting the values of \(\mathbf{x}_{0}\) and \(\mathbf{b}_{0}\). After simplification:
\[\psi_{t}= \left(f_{1}(\mathbf{x}_{0},\mathbf{y}_{1}^{(t)})\wedge\texttt{ XOR}_{1}(\mathbf{y}_{1}^{(t)})\wedge\cdots\wedge\texttt{XOR}_{q_{1}}(\mathbf{y}_{1}^{ (t)})\right)\wedge\ldots\] \[\wedge\left(f_{k_{0}}(\mathbf{x}_{0},\mathbf{y}_{k_{0}}^{(t)}) \wedge\texttt{XOR}_{1}(\mathbf{y}_{k_{0}}^{(t)})\wedge\cdots\wedge\texttt{ XOR}_{q_{k_{0}}}(\mathbf{y}_{k_{0}}^{(t)})\right)\]
Denote \(\gamma_{i}=\left(f_{i}(\mathbf{x}_{0},\mathbf{y}_{i}^{(t)})\wedge\texttt{ XOR}_{1}(\mathbf{y}_{i}^{(t)})\wedge\cdots\wedge\texttt{XOR}_{q_{i}}(\mathbf{y}_{i}^{ (t)})\right)\). Observing that \(\sum_{\mathbf{y}_{i}}f_{i}(\mathbf{x}_{0},\mathbf{y}_{i})\geq 2^{q_{i}+c},\forall i=1, \ldots,k_{0}\), according to Lemma 1, with probability at least \(1-\frac{2^{c}}{(2^{c}-1)^{2}}\), there exists \(\mathbf{y}_{i}^{(t)}\), such that \((\mathbf{x}_{0},\mathbf{y}_{i}^{(t)})\)
renders \(\gamma_{i}\) true. The probability that \(\psi_{t}\) is true under \((\mathbf{x}_{0},\mathbf{b}_{0},\mathbf{y}_{1}^{(t)},\ldots,\mathbf{y}_{k}^{(t)})\) is:
\[\mathbb{P}((\mathbf{x}_{0},\mathbf{b}_{0},\mathbf{y}_{1}^{(t)},\ldots,\mathbf{y}_{k}^{(t)})\text{ renders }\psi_{t}\text{ true})\] \[= \mathbb{P}\left(\bigwedge_{i=1}^{k_{0}}((\mathbf{x}_{0},\mathbf{y}_{i}^{(t)})\text{ renders }\gamma_{i}\text{ true})\right)\] \[= 1-\mathbb{P}\left(\bigvee_{i=1}^{k_{0}}((\mathbf{x}_{0},\mathbf{y}_{i}^{(t)})\text{ renders }\gamma_{i}\text{ false})\right)\] \[\geq 1-\sum_{i=1}^{k_{0}}\mathbb{P}((\mathbf{x}_{0},\mathbf{y}_{i}^{(t)})\text{ renders }\gamma_{i}\text{ false})\] \[\geq 1-\frac{k_{0}2^{c}}{(2^{c}-1)^{2}}\] \[\geq 1-\frac{k2^{c}}{(2^{c}-1)^{2}}.\]
Define \(\Gamma_{t}\) as a binary indicator variable where
\[\Gamma_{t}=\begin{cases}1&\text{if }(\mathbf{x}_{0},\mathbf{b}_{0},\mathbf{y}_{1 }^{(t)},\ldots,\mathbf{y}_{k}^{(t)})\text{ renders }\psi_{t}\text{ true}\\ 0&\text{otherwise}\end{cases}\]
Therefore \(\mathbb{P}(\Gamma_{t}=0)\leq\frac{k2^{c}}{(2^{c}-1)^{2}}\). \(\mathbb{P}(\Gamma_{t}=0)<\frac{1}{2}\) when \(c\geq\log_{2}(k+1)+1\). XOR-SMC returns true if the majority of \(\psi_{t},t=1,\ldots,T\) are true; that is, \(\sum_{t}\Gamma_{t}\geq\frac{T}{2}\). Let's define
\[\alpha(c,k)=D\left(\frac{1}{2}\|\frac{k2^{c}}{(2^{c}-1)^{2}}\right)=\frac{1}{ 2}\ln\frac{(2^{c}-1)^{2}}{k2^{c+1}}+\left(1-\frac{1}{2}\right)\ln\frac{2(2^{c }-1)^{2}}{(2^{c}-1)^{2}-k2^{c+1}}\]
When \(c\geq\log_{2}(k+1)+1\), observing that \(\alpha(c,k)>0\), we can apply the Chernoff-Hoeffding theorem to obtain:
\[\mathbb{P}\left(\sum_{t=1}^{T}\Gamma_{t}\geq\frac{T}{2}\right)=1-\mathbb{P} \left(\sum_{t=1}^{T}\Gamma_{t}<\frac{T}{2}\right)\geq 1-e^{-\alpha(c,k)T}\]
For \(T\geq\lceil\frac{((n+k)\ln 2-\ln\eta)}{\alpha(c,k)}\rceil\geq\frac{-\ln\eta}{ \alpha(c,k)}\), it follows that \(e^{-\alpha(c,k)T}\leq\eta\). Therefore, with a probability at least \(1-\eta\), we have \(\sum_{t}\Gamma_{t}\geq\frac{T}{2}\). In this scenario, XOR-SMC (\(\phi\), \(\{f_{i}\}_{i=1}^{k}\), \(\{q_{i}\}_{i=1}^{k}\), \(T\)) returns true as it discovers \(\mathbf{x}_{0}\), \(\mathbf{b}_{0}\), \((\mathbf{y}_{1}^{(t)},\ldots,\mathbf{y}_{k}^{(t)})\), for which the majority of Boolean formulae in \(\{\psi_{t}\}_{t=1}^{T}\) are true.
**Claim 2:** Suppose for all \(\mathbf{x}\in\{0,1\}^{n}\) and \(\mathbf{b}\in\{0,1\}^{k}\),
\[\neg\left(\phi(\mathbf{x},\mathbf{b})\wedge\left(\bigwedge_{i=1}^{k}\left(b_{ i}\Rightarrow\sum_{\mathbf{y}_{i}}f_{i}(\mathbf{x},\mathbf{y}_{i})\geq 2^{q_{i}-c} \right)\right)\right)\]
Consider a fixed \(\mathbf{x}_{1}\) and \(\mathbf{b}_{1}\); we show that, under the previous condition, most \(\psi_{t}\) in Algorithm 2 are rendered false with high probability. We can prove that the probability that the majority of \(\{\psi_{t}\}_{t=1}^{T}\) are true is sufficiently low that XOR-SMC returns false with high probability after examining all \(\mathbf{x}\) and \(\mathbf{b}\). The detailed proof is left to Appendix A.
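As a quick sanity check of the constants in Theorem 2, one can evaluate how many repetitions \(T\) the theorem prescribes. This is our own arithmetic using the divergence form of \(\alpha(c,k)\); the instance sizes (\(n=20\), \(k=2\)) are illustrative only.

```python
import math

def alpha(c: float, k: int) -> float:
    """alpha(c, k) = D(1/2 || k 2^c / (2^c - 1)^2), the divergence used in Theorem 2."""
    q = k * 2 ** c / (2 ** c - 1) ** 2        # requires q < 1/2, i.e. c >= log2(k + 1) + 1
    return 0.5 * math.log(0.5 / q) + 0.5 * math.log(0.5 / (1 - q))

def required_T(n: int, k: int, c: float, eta: float) -> int:
    """Number of repetitions T = ceil(((n + k) ln 2 - ln eta) / alpha(c, k))."""
    return math.ceil(((n + k) * math.log(2) - math.log(eta)) / alpha(c, k))

# e.g., n = 20 decision variables, k = 2 counting constraints, failure probability eta = 1%
print(required_T(20, 2, c=5.0, eta=0.01))               # a modest number of repetitions
print(required_T(20, 2, c=math.log2(3) + 1, eta=0.01))  # the minimal admissible c is far costlier
```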
## 5 Experiment: Locate Emergency Shelters
In this section, we show our XOR-SMC finds better shelter location assignments (more emergency vacate paths) than baseline approaches (in Fig. 3). It also needs less computational time (in Table 1).
### Problem Formulation
Disasters such as hurricanes and floods continue to endanger millions of lives. Shelters are safe zones that protect residents from possible damage, and evacuation routes are the paths from resident zones toward shelter areas. To enable the timely evacuation of resident zones, one must pick a set of _shelter locations_ with sufficient routing from residential areas. Given the unpredictability and chaos during natural disasters, it is crucial to guarantee multiple paths rather than a single path from residential areas to shelters. This ensures that even if one route is obstructed, residents have alternative paths to safety areas.
Current methods [80, 81] have considered finding shelter locations that have at least one path from a residential area and model the problem of finding an optimal assignment of shelters using the Mixed Integer Programming (MIP) framework [82, 83]. However, those methods cannot be generalized to solve the problem that requires sufficient alternative routes from each residential area to shelters, primarily because counting the number of paths is intractable. This complexity renders the problem nearly impossible to address directly through MIP, especially at large scales. Our proposed XOR-SMC can identify shelter location assignments that ensure, with high probability, that each residential area has a designated number of alternative paths.
Consider a map \(G=(V,E)\), where the nodes in \(V=\{v_{1},\ldots,v_{N}\}\) represent \(N\) areas and an edge \(e=(v_{i},v_{j})\in E\) indicates a road from \(v_{i}\) to \(v_{j}\). \(N\) and \(M\) denote the number of nodes and edges, respectively. Given a subset of nodes \(R=\{v_{r_{1}},\ldots,v_{r_{k}}\}\subseteq V\) that indicates the _residential areas_, the task is to choose at most \(m\) shelter nodes from the remaining nodes, such that the number of routes that can reach a shelter from each residential area is maximized. Fig. 2 gives an example with \(m=4\) shelters, where many roads connect the resident area to those shelters.
**SMC Formulation.** We transform this optimization problem into a decision problem by binary searching the thresholds on the number of paths. The decision problem decides whether there are at least \(2^{q_{r}}\) paths connecting any
Figure 2: Example assignment of shelters that guarantee sufficient alternative paths from the resident areas, at Hawaii Island. Every orange dot corresponds to shelters and the green dot indicates a resident area.
residential area with a shelter. The assigned shelters are represented by a vector \(\mathbf{b}=(b_{1},\ldots,b_{n})\in\{0,1\}^{n}\), where \(b_{i}=1\) implies node \(v_{i}\) is chosen as a shelter. Let \(\phi(\mathbf{b})=(\sum_{i=1}^{n}b_{i})\leq m\) represent that there are at most \(m\) shelters. Let \(f(v_{r},v_{s},E^{\prime})\) be an indicator function that returns one if and only if the selected edges \(E^{\prime}\) form a path from \(v_{r}\) to \(v_{s}\). The whole formula is:
\[\phi(\mathbf{b}),b_{i}\Rightarrow\left(\sum_{v_{s}\in S,E^{\prime}\subseteq E }f(v_{r},v_{s},E^{\prime})\geq 2^{q_{r}}\right)\text{for }1\leq i\leq n.\]
We leave the detailed implementation of \(f(v_{r},v_{s},E^{\prime})\) to Appendix C.1.
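Although the exact Boolean encoding of \(f(v_{r},v_{s},E^{\prime})\) is deferred to the appendix, a plain (non-SAT) reading of the indicator is sketched below for reference: it checks whether the selected edge set forms a simple directed path from the residential node to the shelter node. This is our illustration only and is not the paper's encoding.

```python
def is_simple_path(edges, v_r, v_s) -> bool:
    """True iff `edges` (directed (u, v) pairs) forms exactly one simple path v_r -> ... -> v_s."""
    edges = list(edges)
    if not edges:
        return v_r == v_s
    out_deg, in_deg = {}, {}
    for u, v in edges:
        out_deg[u] = out_deg.get(u, 0) + 1
        in_deg[v] = in_deg.get(v, 0) + 1
    # on a simple path, every node has in/out degree at most one,
    # the source has no incoming edge and the sink has no outgoing edge
    if any(d > 1 for d in out_deg.values()) or any(d > 1 for d in in_deg.values()):
        return False
    if in_deg.get(v_r, 0) != 0 or out_deg.get(v_s, 0) != 0:
        return False
    succ = {u: v for u, v in edges}
    cur, used = v_r, 0
    while cur in succ:                           # walk from the residential node
        cur, used = succ[cur], used + 1
    return cur == v_s and used == len(edges)     # every selected edge must be consumed
```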
### Empirical Experiment Analysis
**Experiment Setting.** We crawl the real-world dataset from the Hawaii Statewide GIS Program website. We extract the real Hawaii map with its major roads and manually label the resident areas on the map. We create problems of different scales by extracting different sub-regions from the map. Fig. 2 is an example of a labeled dataset with possible shelter assignments. Three major resident areas are picked as \(R\), and we set \(m=5\).
In terms of baselines, we consider the local search algorithm with shelter locations as the state and the number of paths between shelters and resident areas as the heuristic. Due to the intractability of path counting in our formulation, the heuristic is approximated by querying sampling oracles. 1) Gibbs sampling-based [84] Local Search (Gibbs-LS). 2) Uniform SAT sampler-based [85] Local Search (Unigen-LS). 3) Quick Sampler-based [86] Local Search (Quick-LS). Each baseline runs 5 times, and the best result is included.
For the evaluation metrics, 1) we consider the empirical running time to reach an assignment that guarantees each resident area has at least \(2^{4}\) paths to a shelter; the result is shown in Table 1. 2) By gradually increasing the path-count threshold \(q_{r}\), we can find the best shelter locations that guarantee the largest number of paths. Given a time limit of 5 hours, we evaluate the different algorithms as in Fig. 3.
**Result Analysis.** In terms of running time analysis, XOR-SMC takes less empirical running time than baselines for finding shelter location assignments over different graphs. Furthermore, we evaluate the quality of the identified shelter locations. Our XOR-SMC finds shelter locations that have a higher number of connecting paths than those found by baselines.
## 6 Experiment: Robust Supply Chain Design
In this section, we show our XOR-SMC resolves robust supply chain design problems against stochastic events. We show the solutions found by XOR-SMC outperform baselines and are near the optimal ones in real-world applications.
| Method | N = 121 | N = 183 | N = 388 |
| --- | --- | --- | --- |
| XOR-SMC (ours) | **0.04 h** | **0.11 h** | **0.16 h** |
| Gibbs-LS | 0.56 h | 0.66 h | 6.97 h |
| QuickSampler-LS | 0.31 h | 0.29 h | 0.62 h |
| Unigen-LS | 0.08 h | 0.07 h | 0.42 h |

Table 1: XOR-SMC takes less empirical running time than baselines for finding shelter location assignments over different graphs. Graph size is the number of nodes in the graph.
### Problem Formulation
Supply chain management is of central importance in logistics, operations research, and economics. The essence of supply chain management is to coordinate and integrate the flow of products, information, and finances across companies to enhance overall performance and maximize value to the consumer. Its importance is underscored by the increasing complexity of global business environments, where even minor inefficiencies can result in significant cost implications and jeopardize competitiveness.
Existing works in supply chain optimization often gravitate towards mathematical programming approaches. However, when delving into more complex scenarios involving stochastic events--such as uncertain demand or supply disruptions--the task becomes considerably more intricate. Specifically, formulating counting constraints, like guaranteeing a certain amount of supplies across multiple vendors under stochastic events, is notably intractable. These complexities necessitate innovative approaches that can capture the randomness and dynamism inherent in real-world supply chain systems without sacrificing optimality.
Suppose we have a supply chain of \(N\) suppliers that form a supply-demand network \((V,E)\), where each node \(v\in V\) represents a supplier and each edge \(e\in E\) represents a supply-to-demand trade. Each supplier \(v\) plays two roles: as a vendor to downstream suppliers, and as a buyer from upstream suppliers. There are \(M\) different kinds of products in the network. To guarantee substantial production, supplier \(v\) should order the necessary raw materials from upstream suppliers in advance. The trade between vendor \(u\) and buyer \(v\) costs \(f(u,v)\) for \(v\). The capacity (amount of goods) of the trade between \(u\) and \(v\) is \(c(u,v)\). \(v\) has a total budget of \(B(v)\) to get all of its materials ready. Due to unpredictable factors such as natural disasters, equipment failure, etc., the trade between \(u\) and \(v\) may fail. Denote \(\mathbf{\theta}=(\theta_{1},\ldots,\theta_{L})\in\{0,1\}^{L}\) as the state of \(L\) different stochastic events, where \(\theta_{l}=1\) indicates that event \(l\) occurs. The task is to determine the optimal trading suppliers to maximize expected production, accounting for stochastic influences.
**SMC Formulation.** The optimization problem can be formulated as a decision problem similar to the one in the previous section. The trading plan is denoted as \(\{x_{s,v}\}_{s,v\in V}\), where \(x_{s,v}=1\) indicates that \(v\) orders raw material from \(s\). For every node \(v\), the decision problem decides whether
\[\bigwedge_{v}\bigwedge_{d\in D(v)}\mathbb{E}_{\mathbf{\theta}}\left(\sum_{s\in S( d)}c(s,v)I(x_{s,v},\mathbf{\theta})\right)\geq 2^{q_{v,d}} \tag{8}\]
where \(D(v)\) denotes the set of materials required by \(v\), \(S(d)\) represents the set of suppliers that produce raw material \(d\), \(2^{q_{v,d}}\) is a threshold on the amount of product \(d\) that can be acquired by \(v\), and the indicator function \(I(x_{s,v},\mathbf{\theta})\) evaluates to one if the random events do not affect the trade between \(s\) and \(v\). Eq. (8)
Figure 3: XOR-SMC finds better shelter locations with way more paths from the resident area to the chosen shelters than competing baselines, across different graphs.
also needs to be accompanied by a budget constraint per supplier. We leave the details of implementation in Appendix C.2.
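As an illustration of how the counting constraints in Eq. (8) and the per-supplier budget constraint can be checked for a candidate trading plan, the following sketch approximates the expectation over \(\mathbf{\theta}\) by Monte Carlo sampling; all data structures and the `sample_theta`/`indicator` callables are assumptions made for illustration (the actual XOR-SMC encoding replaces this sampling estimate with XOR-constrained SAT formulae).

```python
def check_trading_plan(x, c, f, B, D, S, q, sample_theta, indicator, n_samples=10000):
    """Monte Carlo check of the counting constraints in Eq. (8) and of the
    per-supplier budget constraint. `x[(s, v)] = 1` means v orders from s."""
    for v, materials in D.items():
        # budget: total cost of v's orders must not exceed B(v)
        cost = sum(f[(s, w)] for (s, w), used in x.items() if w == v and used)
        if cost > B[v]:
            return False
        for d in materials:
            # estimate E_theta[ sum_{s in S(d)} c(s, v) * I(x_{s,v}, theta) ]
            total = 0.0
            for _ in range(n_samples):
                theta = sample_theta()
                total += sum(c[(s, v)] * indicator(x.get((s, v), 0), s, v, theta)
                             for s in S[d])
            if total / n_samples < 2 ** q[(v, d)]:
                return False
    return True
```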
### Empirical Experiment Analysis
**Experiment Setting.** We evaluate our algorithm on the wheat supply chain network from [87]. A detailed introduction is given in the appendix. Although the cost of trade, the capacity of transportation, the raw material demand, etc. can be directly extracted, the dataset lacks a model of stochastic events due to the limitation of the original algorithm. We generate 10 stochastic events (see Appendix C.2) over all supply-demand edges that jointly determine the trading, which makes the expectation intractable by enumeration.
For the baselines, we use Gibbs-CD [88] and Belief Propagation with Contrastive Divergence (BP-CD) [89] as two alternative methods for estimating the maximum expected productivity of every node, which corresponds to its trading strategy. As the metric, the expected total productivity (tons of bread) of all markets is evaluated in Table 2. For a broader comparison, we also include two synthetic networks with sizes of 20 and 100, in which the parameters are generated from a Gaussian distribution fitted to the real-world data.
**Result Analysis.** The productivity of bread is used in the evaluation. XOR-SMC finds the most optimized supply chain network among all methods. It also produces solutions close to the best possible ones.
## 7 Conclusion
We presented XOR-SMC, an algorithm with polynomial approximation guarantees to solve the highly intractable Satisfiability Modulo Counting (SMC) problems. SMC fuses statistical inference and symbolic reasoning. Solving it presents unique challenges. Prior work on SMC solving offers no or loose guarantees and may find suboptimal solutions. XOR-SMC transforms the intractable SMC problem into satisfiability problems by replacing intricate model counting with SAT formulae subject to randomized XOR constraints. XOR-SMC also obtains constant approximation guarantees on the solutions obtained. SMC solvers offer useful tools for many real-world problems at the nexus of symbolic and statistical AI. Extensive experiments on SMC problems for social good demonstrate that XOR-SMC outperforms other approaches in solution quality and also runs efficiently.
## Acknowledgements
This research was supported by NSF grants IIS-1850243, CCF-1918327.
\begin{table}
\begin{tabular}{c|c c c} \hline & \multicolumn{3}{c}{Supply Network Size} \\ & \(20\) (Synth) & \(44\) (Real) & \(100\) (Synth) \\ \hline XOR-SMC (ours) & \(\mathbf{947.61}\) & \(\mathbf{3467.82}\) & \(\mathbf{4753.78}\) \\ Gibbs-CD & \(911.43\) & \(3217.70\) & \(4190.39\) \\ BP-CD & \(917.66\) & \(3325.51\) & \(4374.66\) \\ \hline Best Possible & \(1055.22\) & \(4223.23\) & \(6046.48\) \\ \hline \end{tabular}
\end{table}
Table 2: Expectation of the total bread production (unit: ton) from the supply network. XOR-SMC proposes the best bread production supply chain that achieves the maximum total productivity and is close to the best possible solutions. |
2310.00364 | Third order nonlinear correlation of the electromagnetic vacuum at
near-infrared frequencies | In recent years, electro-optic sampling, which is based on Pockel's effect
between an electromagnetic mode and a copropagating, phase-matched ultrashort
probe, has been largely used for the investigation of broadband quantum states
of light, especially in the mid-infrared and terahertz frequency range. The use
of two mutually delayed femtosecond pulses at near-infrared frequencies allows
the measurement of quantum electromagnetic radiation in different space-time
points. Their correlation allows therefore direct access to the spectral
content of a broadband quantum state at terahertz frequencies after Fourier
transformation. In this work, we will prove experimentally and theoretically
that when using strongly focused coherent ultrashort probes, the electro-optic
sampling technique can be affected by the presence of a third-order nonlinear
mixing of the probes' electric field at near-infrared frequencies. Moreover, we
will show that these third-order nonlinear phenomena can also influence
correlation measurements of the quantum electromagnetic radiation. We will
prove that the four-wave mixing of the coherent probes' electric field with
their own electromagnetic vacuum at near-infrared frequencies results in the
generation of a higher-order nonlinear correlation term. The latter will be
characterized experimentally, proving its local nature requiring the physical
overlap of the two probes. The parameters regime where higher order nonlinear
correlation results predominant with respect to electro-optic correlation of
terahertz radiation is provided. | Francesca Fabiana Settembrini, Alexa Herter, Jèrôme Faist | 2023-09-30T13:03:14Z | http://arxiv.org/abs/2310.00364v1 | # Third order nonlinear correlation of the electromagnetic vacuum at near-infrared frequencies
###### Abstract
In recent years, electro-optic sampling, which is based on Pockel's effect between an electromagnetic mode and a copropagating, phase-matched ultrashort probe, has been largely used for the investigation of broadband quantum states of light, especially in the mid-infrared and terahertz frequency range. The use of two mutually delayed femtosecond pulses at near-infrared frequencies allows the measurement of quantum electromagnetic radiation in different space-time points. Their correlation allows therefore direct access to the spectral content of a broadband quantum state at THz frequencies after Fourier transformation. In this work, we will prove experimentally and theoretically that when using strongly focused coherent ultrashort probes, the electro-optic sampling technique can be affected by the presence of a third-order nonlinear mixing of the probes' electric field at near-infrared frequencies. Moreover, we will show that these third-order nonlinear phenomena can also influence correlation measurements of the quantum electromagnetic radiation. We will prove that the four-wave mixing of the coherent probes' electric field with their own electromagnetic vacuum at near-infrared frequencies results in the generation of a higher-order nonlinear correlation term. The latter will be characterized experimentally, proving its local nature requiring the physical overlap of the two probes. The parameters regime where higher order nonlinear correlation results predominant with respect to electro-optic correlation of terahertz radiation is provided.
## I Introduction
Nonlinear quantum optics [1] has been of paramount importance for recent technological developments in a multitude of research fields, from optical quantum communication [2], quantum computing [3] to quantum spectroscopy[4] and quantum metrology [5]. In particular, the optimization of nonlinear photon-photon interaction has been crucial for the implementation of new platforms for quantum logic integrated on-chip [6; 7; 8] and of building blocks for efficient entanglement and squeezed radiation generation, which is fundamental for improved performances in quantum sensing [9].
Nonlinear interaction of photons at different frequencies has been particularly significant for the metrological study of fundamental states of electromagnetic radiation, especially in the mid-infrared (MIR) and terahertz (THz) frequency range. As an alternative to commonly used heterodyne detection, an established measurement scheme compatible with the investigation of the most fundamental broadband states of electromagnetic radiation has been developed through electro-optic sampling (EOS) [10]. This measurement technique exploits the Pockels effect in a material with \(\chi^{(2)}\) nonlinearity to map the amplitude of the investigated electromagnetic field onto the polarization state of a phase-matched femtosecond pulse. The use of ultrashort laser pulses ensures the subcycle resolution of the electromagnetic radiation under study both in space and in time, while the nonlinear properties of the material determine the large detection bandwidth of the technique.
Experimental implementations of electro-optic detection have led to the first measurements of the statistical properties of the electromagnetic vacuum in the MIR frequency range [11; 12], through a parametric study of the detection system's shot noise. These results have led to further theoretical proposals in the field of quantum metrology for the exploration of higher order noise distributions of the electromagnetic vacuum [13; 14; 15; 16].
In previous works, we have presented a further development of the electro-optic field detection technique involving the use of two probing pulses, which enables the measurement of electromagnetic radiation in distinct space-time points. This opened up the possibility of investigating both second- [17] and first-order coherence on an electromagnetic quantum state of radiation, which provides access to its spectral content after a Fourier transformation. We have experimentally proven that the developed electro-optic field correlation measurement scheme allows access to the spatial and temporal coherence of the electromagnetic quantum vacuum in the THz frequency range [18]. In particular, our latest results [19] have provided the first experimental proof of a
fundamental hypothesis in quantum electrodynamics, which claims the quantum vacuum to be correlated outside the relativistic light cone [20].
In order to improve the accuracy of quantum metrology measurements based on electro-optic sampling, a strong confinement of the probing radiation in both space and time is needed. However, the presence of a strong local probing electric field will influence the measured electromagnetic state through quantum back-action [21] as well as lead to the appearance of competing higher-order nonlinear phenomena. In fact, in a recent theoretical work [22], the higher-order nonlinear mixing of the investigated electromagnetic radiation with the local probing laser field has been proposed, in combination with homodyne detection, as an alternative method for quantum noise distribution measurements with efficient background suppression.
In this work, we will present the experimental results of electro-optic electric field correlation measurement on a thermally populated electromagnetic state, obtained using highly confined probing laser beams with a strong spatial overlap. Our results present a significant deviation from the expected electro-optic field correlation induced by thermal radiation at THz frequencies both in time and frequency domain. We will prove that the observed results can be ascribed to the effect of higher-order nonlinear correlation of the probing pulses' electric field with the electromagnetic vacuum at their own near-infrared frequency. In order to validate our hypothesis, the dependence of the detected field correlation measurement on the experimental parameters will be investigated and the parameters range in which third-order nonlinear phenomena become predominant with respect to electro-optic detection will be defined.
The paper is organized as follows. In Section II, we investigate both theoretically and experimentally the effect of the third-order non-linearity onto the balanced detection scheme by modulating one beam and detecting its influence on the copropagating one. In the third section, we investigate how the third-order non-linearity influences the correlation of their fluctuations and show that the nonlinear correlation signal still arises from vacuum fluctuations - albeit at near-infrared frequencies. We also show that the signal arises only from the physical overlap of the two probing beams, a feature that is exploited in the non-local measurements of vacuum fluctuations.
## II Third order nonlinear balanced detection
In our experimental implementation of electro-optic field coherence detection, the nonlinear medium chosen is a \(\langle 110\rangle\)-cut zinc telluride (ZnTe) crystal. The working principle of electro-optic
sampling implemented with the ZnTe crystal is shown schematically in Fig. 1 (a). The interaction of an electromagnetic mode \(\vec{E}_{\mathrm{THz}}(t)\) with a phase-matched femtosecond probing pulse, described by the electric field \(\vec{E}_{\mathrm{p}}(t)\), leads to the creation of a second order nonlinear polarization \(\vec{P}^{(2)}(t)\). The resulting additional electric field component \(\vec{E}^{(2)}(t)\) at the frequency of the probe is oriented along the perpendicular direction with respect to the original pulse polarization. The subsequent change in polarization from linear to elliptical can be then measured via a homodyne detection system based on balanced ellipsometry [23]. The combination of a quarter-wave plate and a polarizing beam-splitter mixes the NIR \(z\)-component of the probe electric field \(\vec{E}_{\mathrm{p}}(t)\) (local oscillator) with NIR \(x\)-component \(\vec{E}^{(2)}(t)\) generated in the nonlinear process. The two beams separated by the polarizing beam splitter both contain the mixing of local oscillator and signal. As a consequence, the subtraction of the beams measured at the two photodiodes of the balanced detector removes the local oscillator's intensity, while a signal proportional to the amplitude of the NIR \(\vec{E}^{(2)}(t)\) field is measured.
In the frequency domain, the electro-optic detection process relies on sum and difference frequency generation [10]. The nonlinear interaction of a single mode \(\omega\) within the broadband femtosecond probe spectrum with the THz electromagnetic mode at a much lower frequency \(\Omega\) leads to the creation of modes \(\omega\pm\Omega\) still lying within the spectral bandwidth of the ultrashort pulse, therefore allowing their direct detection.
The crystallographic axes of ZnTe together with the laboratory reference frame are presented in Fig. 1 (b) in orange and black respectively. In order to achieve maximum sensitivity and avoid the creation of additional radiation at THz frequency via optical rectification [24], in all of the presented experimental work the polarization of the probing pulses has been oriented along the laboratory \(z\)-axis (as indicated also in Fig. 1 (a)). Due to the zinc blende symmetry of ZnTe and the chosen polarization of the probes, the electro-optic detection results in a measurement sensitive exclusively to THz radiation polarized along the \(\hat{x}\) axis in the laboratory reference frame.
The use of a balanced ellipsometry detection implicitly makes the nonlinear detection scheme susceptible to any higher-order nonlinear phenomenon which, like electro-optic sampling, leads to a probe polarization change with the same crystallographic symmetry and a resulting frequency within the probe pulse bandwidth.
Significant third-order non-linearities have been observed in ZnTe crystals using tightly focused pulsed near-infrared radiation[25]. Using a pump-probe detection scheme involving two mutually delayed ultrashort pulses, it has been experimentally shown that a four-wave mixing pro
cess between a high-intensity pump and a weaker probe electric field can lead to the generation of a signal detectable via balanced ellipsometry. For probing pulses with a temporal extent in the order of 100 fs, Gaussian beam waist smaller than 100 \(\upmu\)m and average optical powers in the order of tens of mW, third-order nonlinear interaction has been demonstrated to produce a probe polarization change comparable in magnitude with the one induced via electro-optic detection of THz radiation [26; 27]. The two contributions however present significantly different features in the temporal domain. In particular, the balanced signal caused by third-order nonlinearities has been theoretically demonstrated to be directly proportional to the intensity autocorrelation trace of the two pulses in the time domain.
### Semi-classical description of third order nonlinear balanced signal
In order to predict theoretically the influence that the third-order nonlinear interaction bears on balanced detection in our experimental implementation of electro-optic sampling, we follow the derivations presented in Ref. [26; 28]. We assume the use of two ultrashort probing pulses,
Figure 1: _Balanced ellipsometry detection_. a) Electro-optic detection in ZnTe is determined by the change in polarization (red dashed line) of a femtosecond probe pulse \(\vec{E}_{\mathrm{p}}(t)\) due to its second order nonlinear interaction with a phase matched THz electric field \(\vec{E}_{\mathrm{THz}}(t)\) (solid black line). b) Relative orientation of the laboratory axis (in black) and ZnTe crystallographic axis (in orange) with respect to the facet of the crystal.
mutually delayed by a time \(\tau\) and both polarized along the \(\hat{z}\) direction of the laboratory reference axis. We will furthermore assume the presence of a non-zero probe electric field component along the \(\hat{x}\) axis for the initial pulse (\(t\) pulse): \(\vec{E}_{t}(t)=(E_{t,x}(t),0,E_{t,z}(t)),\vec{E}_{\tau}(t+\tau)=(0,0,E_{\tau,z} (t+\tau))\). For simplicity, we will focus our analysis only on the dependence of the nonlinear signal detected by the time-delayed pulse (\(t+\tau\) pulse) due to its third-order nonlinear interaction with its non-delayed version (\(t\) pulse).
According to the crystallographic symmetry of ZnTe [26], the higher-order nonlinear interaction between the two ultrashort probes will result in the creation of the following third-order polarization terms (for further details see Sec. I of the Suppl. Material):
\[P_{x}^{(3)}(t+\tau)=2\epsilon_{0}\chi_{44}E_{t,z}(t)E_{t,x}(t)E_{\tau,z}(t+\tau), \tag{1a}\]
\[P_{z}^{(3)}(t+\tau)=\epsilon_{0}\chi_{11}E_{t,z}(t)E_{t,z}(t)E_{\tau,z}(t+\tau). \tag{1b}\]
Here, \(\epsilon_{0}\) represents the vacuum permittivity and the terms \(\chi_{11},\chi_{44}\) are the only non-vanishing components of the third-order nonlinear tensor in ZnTe. Their values correspond to \(\chi_{11}=3\times 10^{-19}\frac{\text{m}^{2}}{\text{V}^{2}}\) and \(\chi_{44}=1.5\times 10^{-19}\frac{\text{m}^{2}}{\text{V}^{2}}\), respectively [26]. Eq. (1a) clearly points to the equivalence of the crystallographic symmetry of third-order nonlinear interaction and electro-optic detection. It is important to underline as well the presence of an additional polarization component oriented along the \(\hat{z}\) laboratory axis, presented in Eq. (1b). Due to the balanced detection scheme, the latter will not influence the result of the final ellipsometry measurement but will only be responsible for an intensity modulation of the probe and will be therefore disregarded in the following analysis.
Following the derivations' steps presented in Ref. [28] in the case of classical fields, the nonlinear signal detected by the \(t+\tau\) probing pulse \(S^{(3)}(\tau)\) will assume the form:
\[S^{(3)}(\tau)=\frac{1}{2}cn\epsilon_{0}\int_{0}^{\infty}d\omega\frac{\eta( \omega)}{\hbar\omega}\int d^{2}\vec{r_{\perp}}|E_{\tau,z}(\zeta)|^{2}\left[i \,\frac{E_{\tau,x}^{(3)}(\zeta)}{E_{\tau,z}(\zeta)}+\text{h.c.}\right]. \tag{2}\]
In this expression, \(c\) indicates the speed of light, \(n\) the refractive index of ZnTe crystal at NIR frequencies, which are indicated as \(\omega\). The function \(\eta(\omega)\) represents the quantum responsivity of the balanced detector in the NIR frequency region. The coordinates \(\vec{r}_{\perp}=(x,0,z)\) indicate the spatial position on the transverse plane with respect to the propagation direction of the ultrashort probing pulses \(\hat{y}\) in the laboratory reference frame. The set of coordinates \(\zeta\) is defined as \(\zeta=\{(x,z),l,\omega\}\), where \(l\) is the length of the crystal, and represents the position of the two probes after propagation through the nonlinear medium. The quantity \(E_{\tau,x}^{(3)}(\zeta)\) represents the perpendicular
electric field component acquired by the time-delayed \(t+\tau\) probe after propagation in the detection crystal and generated by the nonlinear polarization term described in Eq. (1a). Assuming for the two interacting ultrashort probes a Gaussian distribution both in space and in the frequency domain (see Suppl. Material Sec. I for details), the expression in Eq. (2) can be simplified as follows:
\[S^{(3)}(\tau)=\frac{\chi_{44}\omega_{\text{p}}N_{\tau}E_{t,x}E_{t,z}}{ncw_{0}^{2 }}\int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty}d\omega^{\prime}\ d\omega^{ \prime\prime}\ \Gamma(\omega^{\prime},\omega^{\prime\prime})A_{\text{overlap}}. \tag{3}\]
Here, \(\omega_{\text{p}}\) represents the central frequency of the ultrashort probing pulse and \(w_{0}\) its Gaussian waist. The amplitude at focus of the electric fields' components of the non-time delayed probe along the \(\hat{x},\hat{z}\) direction is indicated as \(E_{t,x},E_{t,z}\) respectively. The quantity \(N_{\tau}\) represents the total number of photons in the temporal delayed pulse impinging on the balanced detector. The function \(A_{\text{overlap}}\) describes the spatial overlap of the two ultrashort probe pulses propagating along the nonlinear material (see Suppl. Material Sec.I Eq. (19)). The function \(\Gamma(\omega^{\prime},\omega^{\prime\prime})\) indicates the convolution of the spectral distributions of the electric fields involved in the nonlinear mixing and effectively represents the spectral intensity autocorrelation function of the two pulses.
As predicted in Ref. [26], Eq. (3) indicates the possible presence of a higher-order nonlinear balanced signal \(S^{(3)}(\tau)\) due to the interaction of two ultrashort probing pulses polarized along the \(\hat{z}\) axis in ZnTe. As is characteristic for third-order nonlinear phenomena, the amplitude of the generated signal is directly proportional to the product of the two electric field components \(E_{t,x}E_{t,z}\) (and therefore to the overall intensity) of the \(t\) femtosecond pulse. This dependence also clarifies the direct connection between the appearance of the higher-order nonlinear balanced signal \(S^{(3)}(\tau)\) and the presence of a non-vanishing component of the \(t\) probing pulse along the \(\hat{x}\) direction. Moreover, the mutual interaction of the two laser probes leads to the generation of a signal \(S^{(3)}(\tau)\) which, in the time domain, represents their intensity autocorrelation function, as derived in Eq. (16) of Sec. I in the Suppl. Material.
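The time-domain prediction can be illustrated numerically: for Gaussian pulses, the third-order balanced signal follows the intensity autocorrelation of the two probes, so its width directly encodes the pulse duration. Below is a minimal sketch with purely illustrative parameters (it is not a model of the full nonlinear propagation).

```python
import numpy as np

# purely illustrative parameters
tau_p = 200e-15                        # pulse FWHM duration [s]
t = np.linspace(-1e-12, 1e-12, 4001)   # time axis [s]
dt = t[1] - t[0]

def intensity(t0):
    """Gaussian intensity envelope of FWHM tau_p centred at t0."""
    return np.exp(-4 * np.log(2) * (t - t0) ** 2 / tau_p ** 2)

delays = np.linspace(-0.6e-12, 0.6e-12, 121)
# S^(3)(tau) is taken proportional to the intensity autocorrelation of the probes
s3 = np.array([np.sum(intensity(0.0) * intensity(d)) * dt for d in delays])

# the autocorrelation of a Gaussian with FWHM tau_p has FWHM sqrt(2)*tau_p,
# which is the origin of the deconvolution factor tau_p ~= 0.7*gamma used later
half = s3 >= s3.max() / 2
fwhm = delays[half][-1] - delays[half][0]
print(f"autocorrelation FWHM ~ {fwhm * 1e15:.0f} fs, 0.7*FWHM ~ {0.7 * fwhm * 1e15:.0f} fs")
```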
### Experimental study of third order nonlinear balanced signal
In order to verify the presence of the third order induced balanced detection signal \(S^{(3)}(\tau)\) predicted in Eq. (3) we have implemented the experimental setup sketched in Fig. 2 (a). The setup is similar to the one reported in Ref. [27]. The ultrashort pulsed radiation generated by a Ti:Sapphire laser at a wavelength of 800 nm is divided into two equal optical paths. A delay stage on one of these paths allows an adjustable temporal delay \(\tau\) between the two identical femtosecond
pulses. They are subsequently collected by a system of short focal-length lenses and focused with a Gaussian beam waist of \(w_{0}=10~{}\upmu\)m in the 1 mm long ZnTe crystal placed at the center of the lens system. The influence of the third-order nonlinear interaction between the femtosecond probes can be investigated by acquiring the balanced ellipsometry signal registered by the time delayed
Figure 2: _Third order balanced coherent detection._ a) Experimental setup used for coherent nonlinear signal detection via balanced ellipsometry. Both \(t\) and \(t+\tau\) probes are polarized along the \(<\) 001 \(>\) axis of the ZnTe crystal, which coincides with the \(z\)-axis of the laboratory reference frame. The acquisition of the signal has been performed with a lock-in at the optical chopping frequency of \(f=600\) Hz. BD = balanced detectors, WP = Wollaston prism, WVP = quarter waveplate. (b) Experimental third-order balanced signal \(S^{(3)}(\tau)\) recorded for different probing power of the interacting pulse \(P_{t}\) (faded line) and their respective Gaussian fitting functions (solid lines). The experimental timetraces have been recorded with a temporal resolution of 33 fs and an integration time of 2 s per point. (c) The peak-to-peak amplitude of the experimental signal \(S^{(3)}_{\text{pp}}(\tau)\) presents a linear dependence with the intensity of the copropagating pulse \(P_{t}\). The uncertainty on the experimental measurement is derived from the uncertainty of the fitting parameters. (d) The extracted temporal extent of the two interfering femtosecond pulses \(\tau_{\text{p}}\) is retrieved from the Gaussian fitting function \(g(\tau)\). The uncertainty on the reported value represents the 2\(\sigma\) confidence interval and it has been obtained from the uncertainty of the fitting parameters.
\(t+\tau\) probe \(S^{(3)}(\tau)\) via a lock-in acquisition system. No external source at THz frequencies is present and the generation of THz electromagnetic radiation via optical rectification is inhibited by the choice of the probes polarization direction along the \(\hat{z}\) laboratory reference axis. The two femtosecond probes present as well a residual electric field component along the \(\hat{x}\) laboratory axis, due to the non-perfect extinction ratio of the polarizers. The polarization components of the two probes present values of \(P_{t,z}=6.3\) mW, \(P_{\tau,z}=7.2\) mW and \(P_{t,x}=170\)\(\upmu\)W, \(P_{\tau,x}=120\)\(\upmu\)W at the detection crystal facet along the \(\hat{z}\), \(\hat{x}\) axis of the laboratory reference frame respectively.
The experimental balanced ellipsometry signal \(S^{(3)}(\tau)\), recorded as a function of the pulses' temporal delay \(\tau\), is presented for different powers of the copropagating \(t\) probing pulse \(P_{t}\) in Fig. 2 (b). All experimental results have been collected while keeping the optical power of the detection \(t+\tau\) pulse constant. As can be clearly observed in Fig. 2 (b), the amplitude of the nonlinear signal \(S^{(3)}(\tau)\) decreases significantly with decreasing \(t\) probe power \(P_{t}\). In order to obtain a more accurate estimation of the \(S^{(3)}(\tau)\) amplitude dependence, the experimental data in Fig. 2 (b) have been fitted with a Gaussian function of the form \(g(\tau)=c+a\tau+b\exp\left(-\frac{4\ln(2)(\tau-d)^{2}}{\gamma^{2}}\right)\). As clearly shown in Fig. 2 (c), the peak-to-peak amplitude of the fitted signal \(S^{(3)}_{\rm pp}(\tau)\) with respect to the baseline \(a\tau\) is directly proportional to the intensity of the interacting probing pulse \(P_{t}\), decreasing from a value of 56 mV for \(P_{t}=6.3\) mW to a value of 0.76 mV for \(P_{t}=0.1\) mW. This experimental result is in good agreement with the intensity dependence of a signal generated via third-order nonlinear interaction, as reported in Eq. (3).
As proposed in Ref. [26] and derived in Eq. (3), the experimental data reported in Fig. 2 (b) should also correspond in the temporal domain to the intensity autocorrelation of the two interacting pulses \(\langle I(t)I(t+\tau)\rangle\). From the fitting parameter \(\gamma\), the temporal extent of the two ultrashort NIR probes can be extracted through the relation \(\tau_{\rm p}=0.7\gamma\)[29]. The results obtained from the experimental data reported in Fig. 2 (b) are presented in Fig. 2 (d). For all the \(t\) pulse power values, the estimated temporal extent of the pulses is constant and equal to \(\tau_{\rm p}=200\) fs. This result is in good agreement with the estimated temporal extent of the femtosecond pulses employed in our experiment (for further information on the experimental value estimation see Note D of the Suppl. Material in Ref. [19]).
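A minimal sketch of this fitting procedure, assuming the timetraces are available as NumPy arrays, is given below; the function and variable names, as well as the initial guesses, are illustrative only.

```python
import numpy as np
from scipy.optimize import curve_fit

def g(tau, c, a, b, d, gamma):
    """Fit model g(tau) = c + a*tau + b*exp(-4 ln2 (tau - d)^2 / gamma^2)."""
    return c + a * tau + b * np.exp(-4 * np.log(2) * (tau - d) ** 2 / gamma ** 2)

def fit_trace(tau, s3):
    """Fit one S^(3)(tau) timetrace and return the peak amplitude b and the
    pulse duration tau_p = 0.7 * gamma (Gaussian deconvolution factor)."""
    # rough initial guesses; tau is assumed to be in seconds
    p0 = [np.median(s3), 0.0, np.ptp(s3),
          tau[np.argmax(np.abs(s3 - np.median(s3)))], 280e-15]
    popt, pcov = curve_fit(g, tau, s3, p0=p0)
    c, a, b, d, gamma = popt
    return b, 0.7 * abs(gamma), np.sqrt(np.diag(pcov))
```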
## III Nonlinear electric field correlation of quantum electromagnetic radiation
The classical description of electro-optic and third-order nonlinear balanced detection, presented in Sec. II, assumes implicitly the coherence of the electromagnetic radiation investigated. The nonlinear balanced detection of both a coherent THz electromagnetic single mode \(\vec{E}_{\rm THz}(t)\) and of the third order polarization \(P_{x}^{(3)}(t+\tau)\) described in Eq. (1a) is in fact based on the constant relation between the phase of the electromagnetic radiation and that of the sampling laser probe. As a consequence, measurements based on balanced ellipsometry implemented with an experimental setup as in Fig. 2 generally rely on the measurement of a large number of subsequent NIR ultrashort pulses and their averaging.
The same experimental implementation would be, however, not suited for the investigation of broadband incoherent quantum states of light, such as, for instance, thermally populated states of electromagnetic radiation and the quantum vacuum. Intuitively, the incompatibility can be attributed to the incoherent nature of the quantum electromagnetic radiation which does not exhibit a stable phase relation with the sampling laser pulse. As a result, the randomness of the measurements collected via balanced detection will lead to a null result upon direct averaging.
The same conclusion can also be obtained from a rigorous quantum mechanical description of nonlinear balanced detection, as reported in detail in Ref. [28]. According to Eq. (10) in Ref. [28], the quantum operator \(\hat{S}\) describing the electro-optic measurement performed by a single probing pulse is directly proportional to the amplitude of the investigated quantum electromagnetic field \(\hat{E}\). The latter, in the second quantization formalism, is directly proportional to a sum of creation (destruction) operators \(\hat{a}^{(\dagger)}\), \(\hat{E}\propto\hat{a}+\hat{a}^{\dagger}\). In the case of thermally populated electromagnetic radiation, described as a statistical mixture state, the expectation value of the measurement operator is identically zero.
Nevertheless, the technique of electro-optic sampling has been proven to possess sufficient sensitivity to investigate the statistical properties of the electromagnetic vacuum and of its higher-order noise distribution in the MIR frequency range [11; 12]. The measurement of higher order noise terms, described by the operators \(\hat{S}^{(2)}\), \(\hat{S}^{(3)}\), etc., would in fact provide a non-vanishing result due to the presence of single mode normally and not normally ordered creation and destruction operator terms \(\hat{a}^{\dagger}\hat{a}\), \(\hat{a}\hat{a}^{\dagger}\).
In the THz frequency domain, the technique of electro-optic sampling combined with the use of two probing laser pulses has been proven to possess sufficient sensitivity to resolve the first-order degree of coherence of a quantum thermal state of radiation. At room temperature, the latter contains, in the THz frequency range, an average number of photons of only a few units within the detection bandwidth. The tunable temporal and spatial distance between the two ultrashort probes allows investigating the quantum electromagnetic state at two distinct space-time points \((\vec{r},t),\ (\vec{r}+\delta\vec{r}_{\perp},t+\tau)\). The result obtained from each couple of mutually delayed pulses is then used for the direct computation of the electric field correlation function, defined in analogy with Ref. [18]:
\[G^{(1)}(\tau,\delta\vec{r}_{\perp})\propto\langle\{\hat{S}(t,\vec{r}),\hat{S} (t+\tau,\vec{r}+\delta\vec{r}_{\perp})\}\rangle. \tag{4}\]
The quantum mechanics operators \(\hat{S}(t,\vec{r})\), \(\hat{S}(t+\tau,\vec{r}+\delta\vec{r}_{\perp})\) describe the nonlinear measurement performed by the pair of femtosecond probes at the two distinct space-time points \((\vec{r},t),\ (\vec{r}+\delta\vec{r}_{\perp},t+\tau)\). The term \(\{,\ \}\) indicates the anticommutator of the operators and \(\langle,\ \rangle\) their expectation value.
The electric field correlation measurement technique has been shown to provide non-vanishing results even in the limit of the detectable mode's photon population tending to zero, resulting therefore as a valuable instrument for the investigation of broadband electromagnetic vacuum[18]. The use of strongly focused Gaussian probes combined with the subcycle resolution of the employed femtosecond sampling pulses has allowed the characterization of the temporal but most importantly spatial field correlation of electromagnetic radiation in its ground state[19].
The incompatibility between nonlinear balanced detection and the measurement of chaotic quantum radiation can be resolved by employing a technique defined as "RF-referencing". The latter consists in referencing the measurements of each of the sampling pulses to their respective temporal adjacent pulse, which are described by the operators \(\hat{S}(t+T_{\rm rep},\vec{r})\), \(\hat{S}(t+\tau+T_{\rm rep},\vec{r}+\delta\vec{r}_{\perp})\). Here the quantity \(T_{\rm rep}\) represents the time occurring between each couple of measurement pulses. As described in the Methods section of Ref. [18], the electro-optic field correlation function will be given by the expression:
\[G^{(1)}(\tau,\delta\vec{r}_{\perp})=\big{\langle}\big{(}\hat{S}(t,\vec{r})- \hat{S}(t+T_{\rm rep},\vec{r})\big{)}\big{(}\hat{S}(t+\tau,\vec{r}+\delta\vec {r}_{\perp})-\hat{S}(t+\tau+T_{\rm rep},\vec{r}+\delta\vec{r}_{\perp})\big{)} \big{\rangle}. \tag{5}\]
This specific measurement configuration allows the efficient suppression of systematic coherent noise affecting each individual pulse equally, such as the higher-order nonlinear balanced signal \(S^{(3)}(\tau)\) described in Sec. II.2, and of long-term drifts, such as \(1/f\)-noise. By design, it therefore renders the nonlinear field correlation measurement technique sensitive only to electromagnetic quantum states whose radiation has a coherence time smaller than \(T_{\rm rep}\).
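For concreteness, the following sketch shows how the RF-referenced correlation of Eq. (5) could be estimated from per-pulse balanced readings of the two detection lines, assumed to be stored as arrays sampled at the laser repetition rate; the array names and the use of overlapping pulse pairs are illustrative choices.

```python
import numpy as np

def rf_referenced_correlation(s_line1, s_line2):
    """Estimate G^(1) following Eq. (5) from per-pulse balanced signals.

    s_line1[i] and s_line2[i] are the ellipsometry readings produced by the
    i-th laser pulse on the two detection lines (one reading every T_rep).
    Referencing each pulse to the next one suppresses noise that is coherent
    from pulse to pulse (e.g. the deterministic S^(3) signal and 1/f drifts)."""
    d1 = np.asarray(s_line1[:-1]) - np.asarray(s_line1[1:])  # S(t) - S(t + T_rep)
    d2 = np.asarray(s_line2[:-1]) - np.asarray(s_line2[1:])  # S(t+tau) - S(t+tau+T_rep)
    return np.mean(d1 * d2)

# the full map G^(1)(tau, dr) is built by repeating this estimate for every
# setting of the delay stage (tau) and of the piezo mirrors (dr)
```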
Besides the quantum thermal radiation at THz frequencies, an additional incoherent electromagnetic light source in the experimental system is represented by the quantum vacuum at near-infrared frequencies \(\hat{E}_{x}^{\text{vac.}}\) polarized along the \(\hat{x}\) laboratory axis, which is responsible for the generation of shot noise of each sampling laser pulse upon balanced detection [28]. As the quantum vacuums characterizing the two pulses are uncorrelated, their presence does not give rise to any further contribution to the balanced correlation. However, they can influence the polarization state of the copropagating beam.
As experimentally and theoretically demonstrated classically in Sec. II, an ultrashort NIR probe with a polarization component along the \(\hat{x}\) axis of the laboratory reference frame can induce a polarization change in the copropagating laser pulse, leading to the generation of a nonlinear balanced signal, described in Eq. (3). In a similar fashion, the electromagnetic NIR vacuum \(\hat{E}_{x}^{\text{vac.}}\) inducing the shot noise of one sampling beam can influence the polarization of the copropagating one via four-wave mixing. The generated third-order nonlinear polarization term will induce upon balanced detection a signal correlated to the shot noise amplitude of the paired ultrashort pulse. The latter will therefore introduce a measurable nonlinear correlation term between the nonlinear balanced measurement in the two detectors.
In the following, we provide a detailed theoretical derivation of the higher-order nonlinear balanced correlation term arising from the third-order nonlinear interaction of each sampling laser pulse with the quantum vacuum characterizing the copropagating probe (Sec. III.1). The experimental characterization of this higher-order nonlinear correlation term and its dependence on the experimental parameters are presented in Sec. III.2 and Sec. III.3, respectively.
### Kerr-induced nonlinear correlation of electromagnetic radiation
In the following, we describe the Kerr nonlinearity caused by the electromagnetic vacuum at NIR frequencies, which is able to induce a non-vanishing correlation term in the balanced ellipsometry correlation detection scheme. It is represented schematically in Fig. 3. For simplicity, we will consider the influence of the electromagnetic vacuum \(\hat{E}_{x}^{\text{vac.},t}\) affecting the non-delayed \(t\) probe \(E_{t,z}^{\text{p}}(t)\) on the quantum balanced ellipsometry signal detected by the time-delayed \(t+\tau\) probing pulse \(E_{\tau,z}^{\text{p}}(t+\tau)\). Both probes are polarized along the \(\hat{z}\) axis of the laboratory reference frame, as indicated by the subscript notation. The symmetric process obtained by exchanging the roles of the time-delayed and non-delayed probes can be shown to be formally equivalent.
According to the second quantization formalism, the amplitude of the electromagnetic vacuum at NIR frequencies can be described as \(\hat{E}^{\text{vac}}=i\sum_{k}\sqrt{\frac{\hbar\omega_{k}}{2\epsilon_{0}n^{2}V}}\left[\hat{a}_{k}e^{-i(\omega_{k}t-\vec{k}\cdot\vec{r})}-\hat{a}_{k}^{\dagger}e^{i(\omega_{k}t-\vec{k}\cdot\vec{r})}\right]\). Here the quantum operator \(\hat{a}^{(\dagger)}\) describes the destruction (creation) of a NIR mode of frequency \(\omega_{k}\) propagating with wavevector \(\vec{k}\) inside the nonlinear material. In analogy with Eq. (1a), the third-order nonlinear polarization arising from the interaction of the quantum electromagnetic vacuum with the high-intensity classical electric field of the two probes can be written as:
\[\hat{P}_{x}^{(3)}(t+\tau)=2\epsilon_{0}\chi_{44}E_{t,z}^{\text{p}}(t)E_{\tau,z} ^{\text{p}}(t+\tau)\hat{E}_{x}^{\text{vac}.,t}. \tag{6}\]
The term \(\hat{P}_{x}^{(3)}(t+\tau)\) reported in Eq. (6) serves as a source term for the generation of the additional polarization component of the temporally delayed \(t+\tau\) laser pulse, \(\hat{E}^{(3)}(t+\tau)\), as indicated in Fig. 3. The latter induces the balanced nonlinear signal at the photodetector, described via the quantum mechanical operator \(\hat{S}^{(3)}(t+\tau,\vec{r}+\delta\vec{r}_{\perp})\). It can be expressed as (for a detailed derivation, see Sec. II A of the Suppl. Material):
\[\begin{split}&\hat{S}^{(3)}(t+\tau,\vec{r}+\delta\vec{r}_{\perp}) =\frac{1}{2}cn\epsilon_{0}\int_{0}^{+\infty}d\omega\frac{\eta(\omega)}{\hbar \omega}\int d^{2}\vec{r}_{\perp}|E^{\mathrm{p}}_{\tau,z}(\zeta)|^{2}\left[i \frac{\hat{E}^{(3)}}{E^{\mathrm{p}}_{\tau,z}(\zeta)}+\mathrm{h.c.}\right]\\ &=-\frac{2\chi_{44}E^{\mathrm{p}}_{t}N_{\tau}\omega_{c}}{3cn}\int d ^{2}\vec{r}_{\perp}g_{0}^{2}(\vec{r}-\frac{\delta\vec{r}_{\perp}}{2})g_{0}( \vec{r}+\frac{\delta\vec{r}_{\perp}}{2})\sum_{\vec{k}}\sqrt{\frac{\hbar\omega _{k}}{2\epsilon_{0}\epsilon_{r}V}}\left[\hat{a}_{\vec{k}}e^{-i\omega_{\vec{k}} t+\vec{\hat{k}}\cdot\delta\vec{r}_{\perp}}R(\tilde{k}_{y},\omega_{\vec{k}})- \mathrm{h.c.}\right]\end{split} \tag{7}\]
Here the functions \(g_{0}(\vec{r}\pm\frac{\delta\vec{r}_{\perp}}{2})\) represent the Gaussian spatial distribution of the two probing pulses centered symmetrically at \(\pm\frac{\delta\vec{r}_{\perp}}{2}\) with respect to the origin of the laboratory reference frame (at the crystal center). \(N_{\tau}\) indicates the number of photons of the \(t+\tau\) probe effectively measured at the photodetector. The function \(R(\tilde{k}_{y},\omega_{\vec{k}})\) represents the responsivity of the third-order nonlinear interaction, which is dependent on the phase matching condition of the near-infrared modes with component \(\tilde{k}_{y}\) along the pulses propagation direction and frequency \(\omega_{\vec{k}}\).
The quantum vacuum at near-infrared frequencies \(\hat{E}^{\mathrm{vac}.,t}_{x}\) is responsible for the generation of shot noise associated with its photodetection, as also appearing directly in Ref. [28]. The measurement of the latter is described through the quantum mechanics operator \(\hat{S}_{n,t}\):
\[\begin{split}\hat{S}_{n,t(\tau)}&=\frac{1}{2}cn \epsilon_{0}\int_{0}^{+\infty}d\omega\frac{\eta(\omega)}{\hbar\omega}\int d^ {2}\vec{r}_{\perp}\left[(E^{\mathrm{p}}_{t(\tau),z}(\zeta))^{*}i\hat{E}^{ \mathrm{vac}.,t(\tau)}_{x}+\mathrm{h.c.}\right]\\ &=-\frac{N_{t}\omega_{c}}{E^{\mathrm{p}}_{t,z}}\sum_{k^{\prime}} \sqrt{\frac{\hbar\omega_{k^{\prime}}}{2\epsilon_{0}\epsilon_{r}V}}\left[\hat{ a}_{\vec{k}}e^{-i\omega_{\vec{k}}t+i\tilde{k}^{\prime}_{x}O^{i}_{t}}-\mathrm{h.c.} \right]f(\omega_{k^{\prime}})\Gamma(\tilde{k}^{\prime}_{x},\tilde{k}^{\prime} _{z}).\end{split} \tag{8}\]
The functions \(f(\omega_{\vec{k}^{\prime}})\) and \(\Gamma(\tilde{k}^{\prime}_{x},\tilde{k}^{\prime}_{z})\) indicate the detection responsivity for a single electromagnetic quantum vacuum mode characterized by frequency \(\omega_{\vec{k}^{\prime}}\) and transverse wavevectors components \(\tilde{k}^{\prime}_{x},\tilde{k}^{\prime}_{z}\) respectively. An equivalent expression can be derived for the shot noise operator \(\hat{S}_{n,\tau}\) of the delayed \(t+\tau\) probe.
Following the derivations present in the literature, the quantum mechanical operator \(\hat{S}_{\mathrm{tot.}}\) describing the complete result of the balanced ellipsometry measurement performed individually by the two femtosecond pulses at the space-time points \((t,\vec{r}),(t+\tau,\vec{r}+\delta\vec{r}_{\perp})\) can be written as a sum of the following contributions:
\[\hat{S}_{\mathrm{tot.}}(t,\vec{r})=\hat{S}_{\mathrm{eo}}(t,\vec{r})+\hat{S}^{(3)}(t,\vec{r})+\hat{S}_{n,t}, \tag{9a}\]
\[\hat{S}_{\mathrm{tot.}}(t+\tau,\vec{r}+\delta\vec{r}_{\perp})=\hat{S}_{\mathrm{eo}}(t+\tau,\vec{r}+\delta\vec{r}_{\perp})+\hat{S}^{(3)}(t+\tau,\vec{r}+\delta\vec{r}_{\perp})+\hat{S}_{n,\tau}. \tag{9b}\]
Here the operators \(\hat{S}_{\rm eo}(t,\vec{r}),\hat{S}_{\rm eo}(t+\tau,\vec{r}+\delta\vec{r}_{\perp})\) are related to the electro-optic measurement of THz electromagnetic radiation, as defined in Ref. [18]. Rewriting Eq. (5) as a function of the quantum operators defined in Eq. (9a) and Eq. (9b), the final experimental result will be given by the expectation value of the quantum operator on a thermally populated radiation state. Taking into consideration only the lower nonlinear contribution terms, the result will read:
\[\begin{split}& G^{(1)}(\tau,\delta\vec{r}_{\perp})=\big{\langle}\big{\{}\hat{S}_{\rm tot.}(t,\vec{r}),\hat{S}_{\rm tot.}(t+\tau,\vec{r}+\delta\vec{r}_{\perp})\big{\}}\big{\rangle}\\ &=\big{\langle}\big{\{}\hat{S}_{\rm eo}(t,\vec{r}),\hat{S}_{\rm eo}(t+\tau,\vec{r}+\delta\vec{r}_{\perp})\big{\}}\big{\rangle}+\big{\langle}\big{\{}\hat{S}^{(3)}(t,\vec{r}),\hat{S}_{n,\tau}\big{\}}\big{\rangle}+\big{\langle}\big{\{}\hat{S}^{(3)}(t+\tau,\vec{r}+\delta\vec{r}_{\perp}),\hat{S}_{n,t}\big{\}}\big{\rangle}.\end{split} \tag{10}\]
Here, \(\big{\{},\big{\}}\) indicates the anti-commutator of the two operators and \(\big{\langle},\big{\rangle}\) the quantum mechanics expectation value on a quantum thermal state of radiation, which can be formally described as a statistical mixture state. According to quantum mechanics, the only non-vanishing terms upon computation of the expectation value of Eq. (10) will be terms presenting both the normally ordered \(\hat{a}^{\dagger}\hat{a}\) and not normally ordered \(\hat{a}\hat{a}^{\dagger}\) creation and destruction operators relative to the same electromagnetic mode. The first surviving term \(\big{\langle}\big{\{}\hat{S}_{\rm eo}(t,\vec{r}),\hat{S}_{\rm eo}(t+\tau,\vec {r}+\delta\vec{r}_{\perp})\big{\}}\big{\rangle}\) represents the electro-optic electric field correlation \(G_{\rm eo.}^{(1)}(\tau,\delta\vec{r}_{\perp})\) of THz electromagnetic radiation, as defined in Ref. [18; 19]. The additional terms \(\big{\langle}\big{\{}\hat{S}^{(3)}(t,\vec{r}),\hat{S}_{n,\tau}\big{\}}\big{\rangle}\) and \(\big{\langle}\big{\{}\hat{S}^{(3)}(t+\tau,\vec{r}+\delta\vec{r}_{\perp}),\hat {S}_{n,t}\big{\}}\big{\rangle}\) are induced from the correlation of the electromagnetic vacuum responsible for the shot noise characterizing one ultrashort probe with the third order nonlinear balanced signal induced by the same quantum vacuum on the copropagating pulse. Due to wave-vector conservation, the nonlinear balanced signal detected individually by each femtosecond pulse could not arise from a four-wave mixing process induced via its own quantum vacuum. It is also important to note that the same argument cannot be applied to the quantum vacuum fluctuations at THz frequencies, due to their very long wavelength that relaxes the phase-matching condition.
The shot noise in the two photodetectors is also uncorrelated, \(\langle\{\hat{S}_{n,t},\hat{S}_{n,\tau}\}\rangle=0\). This result derives directly from the lack of correlation between the electromagnetic quantum vacua characterizing the sampling probes, \(\hat{E}_{x}^{\rm vac.,\,t}\) and \(\hat{E}_{x}^{\rm vac.,\tau}\). Even though they stem from the same source, the presence of a beam splitter in the optical path, as illustrated in Fig. 4 (a), implies that different destruction and creation operators of the quantum electromagnetic vacuum affect the two probes [21], lifting their degeneracy.
To summarize, a simple quantum mechanical description of both the noise characterizing the two probes and of the incoherent nonlinear balanced ellipsometry measurement predicts, in addition to the term arising from the THz quantum thermal radiation, the existence of an additional correlation term \(G^{(1)}_{\text{Kerr}}(\tau,\delta\vec{r}_{\perp})=\left\langle\left\{\hat{S}^{(3)}(t,\vec{r}),\hat{S}_{n,\tau}\right\}\right\rangle+\left\langle\left\{\hat{S}^{(3)}(t+\tau,\vec{r}+\delta\vec{r}_{\perp}),\hat{S}_{n,t}\right\}\right\rangle\) arising from a third-order mixing between the quantum vacuum at NIR frequencies and the propagating pulse of the other line. In other words, vacuum fluctuations responsible for the shot noise on one line mix with the pulse on the other line and create a correlated noise affecting the latter after the ellipsometry measurement.
### Experimental nonlinear field correlation
The investigation of quantum field correlation measurement on the thermally populated electromagnetic state has been carried out using the experimental setup reported in Fig. 4 (a). The system is similar to the one presented in Fig. 2 (a), but in addition to the temporal distance between the two probing ultrashort pulses, also their spatial distance \(\delta\vec{r}_{\perp}\) in the transverse plane can be controlled via a symmetric couple of piezo mirrors. The change in the propagation angle \(\delta\theta\) with respect to the femtosecond pulses propagation direction \(\hat{y}\) will be translated by the lens system into a spatial separation \(\delta\vec{r}_{\perp}=f\delta\theta\). The two femtosecond probing pulses are focused by the lens onto the detection crystal, where they will sample the same electromagnetic state at two different space-time points. Afterward, they will be individually analyzed via balanced ellipsometry, where the acquisition is performed at the repetition rate of the laser \(f_{\text{rep.}}\). The recording of the results obtained from each couple of near-infrared pulses allows the real-time computation of the electric field correlation function as described in Eq. (5).
The experimental nonlinear field correlation measurements obtained using the setup presented in Fig. 4 (a), with overlapping sampling beams of Gaussian beam waist \(w_{0}=10\)\(\upmu\)m, temporal extent \(\tau_{\text{p}}=200\) fs and peak electric field amplitude \(E_{z}^{\text{p}}=10\)\(\text{MV}/\text{m}\), are reported in Fig. 4 (b) and (c) in the time and frequency domain, respectively. As shown in the figure, the results measured using this set of experimental parameters for perfectly overlapping sampling beams (\(\delta\vec{r}_{\perp}=0\)) differ significantly, both in amplitude and most importantly in spectral content, from the expected electro-optic THz field correlation \(G^{(1)}_{\text{eo.}}(\tau,\delta\vec{r}_{\perp}=0)\) generated via blackbody emission from the environment at 300 K, also shown for comparison. The latter has been numerically estimated using the expression reported in Eq. (2) of Ref. [18] and the experimental parameters of the femtosecond probing pulses employed in the experiment.
In order to compare the measured phase correlation between the sampling probes obtained via balanced ellipsometry with the one due to electro-optic field correlation \(G^{(1)}_{\text{eo.}}(\tau,\delta\vec{r}_{\perp})\) induced by
THz thermal radiation, all experimental measurements have been normalized by the constant \(C=\frac{r_{41}n^{3}l\,\omega_{\text{p}}I_{\text{p},t}I_{\text{p},\tau}}{c}\), provided in the literature [18; 19; 28]. Here, \(r_{41}\) represents the electro-optic coefficient of the nonlinear ZnTe crystal and \(I_{\text{p},t}\), \(I_{\text{p},\tau}\) the intensities of the femtosecond probes impinging on the balanced photodetectors. The experimental field correlations have moreover been filtered with a Kaiser windowing function in order to reduce the presence of noise-induced artefacts (for more information see Supplementary Note 3 of Ref. [19]).
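The post-processing described here amounts to a normalization followed by a windowed Fourier transform; a minimal sketch is given below, where the Kaiser shape parameter and the array names are assumptions made for illustration.

```python
import numpy as np

def process_correlation(g1_raw, tau, C, beta=6.0):
    """Normalize the raw correlation trace by the electro-optic constant C and
    apply a Kaiser window before Fourier transforming to the THz domain."""
    g1 = np.asarray(g1_raw) / C                    # -> units of V^2/m^2
    window = np.kaiser(len(g1), beta)              # suppresses noise-induced artefacts
    g1_filtered = g1 * window
    dt = tau[1] - tau[0]                           # delay step [s]
    spectrum = np.fft.rfft(g1_filtered) * dt
    freqs_thz = np.fft.rfftfreq(len(g1_filtered), dt) * 1e-12
    return g1_filtered, freqs_thz, spectrum
```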
As reported in Fig. 4 (b), the experimental signal presents a peak-to-peak amplitude of around \(G_{\text{pp}}^{(1)}(\tau,0~{}\upmu\text{m})=500~{}\text{V}^{2}/\text{m}^{2}\), which is approximately ten times larger than the expected correlation of THz photons, \(G_{\text{eo.,pp}}^{(1)}(\tau,0~{}\upmu\text{m})=55~{}\text{V}^{2}/\text{m}^{2}\). The two curves present
Figure 4: _Experimental quantum higher order nonlinear correlation._ (a) The experimental setup for nonlinear electric field correlation measurements allows the control of both the temporal and spatial distance between probes. The latter is controlled via a couple of piezo mirrors, which confer a relative propagation angle of \(2\delta\theta\) to the two probes. The acquisition of the experimental measurement points is performed at the repetition rate of the femtosecond laser \(f_{\text{rep}}=80\) MHz. BD = Balanced detectors, WP = Wollaston prism, WVP = Quarter waveplate. (b),(c) Experimental electric field correlation measurement results performed with a beams’ spatial separation of \(\delta\overline{r}=0~{}\upmu\text{m}\) in time (b) and frequency domain (c) (in faded purple). For better visualization, the data have been filtered with a Kaiser windowing function (in solid purple). For comparison, the numerically simulated result for the electro-optic field correlation \(G_{\text{e}\text{o}.}^{(1)}(\tau,\delta\overline{r}_{\perp}=0)\) of THz thermal radiation at 300 K is reported in dashed-dotted lines. In both figures, the experimental uncertainty indicates the \(2\sigma\) confidence interval.
significant differences in frequency content as well, as shown by their Fourier transforms in Fig. 4 (c). While the spectral content of the numerically simulated electric field correlation \(G_{\text{eo.}}^{(1)}(\tau,0~{}\upmu\text{m})\) depends on the phase matching properties of THz electromagnetic modes in ZnTe and therefore presents mostly contributions from higher frequency components around 2 THz, the experimental spectrum derived from Fig. 4 (b) presents its maximum contributions in the low-frequency region, with a maximum centered at zero frequency.
### Experimental nonlinear field correlation dependence on experimental parameters
The nature of the measured nonlinear correlation has been investigated by studying its dependence on experimental parameters such as temperature, crystal length, and probing pulses' wavelength, power, and spatial distance \(\delta\vec{r}_{\perp}\). In order to compare the nonlinear correlation measurement to the numerically predicted results induced by electro-optic detection of thermal THz radiation \(G_{\text{eo.}}^{(1)}(\tau,\delta\vec{r}_{\perp})\), the experimental measurements presented in this section have also been normalized by the electro-optic correlation constant \(C\) and filtered as defined in Sec. III.
As derived theoretically in Sec. III A and in more detail in Sec. II A of the Suppl. Material, the higher-order nonlinear correlation term \(G_{\text{Kerr}}^{(1)}(\tau,\delta\vec{r}_{\perp})=\left\langle\left\{\hat{S}^{(3)}(t+\tau,\vec{r}+\delta\vec{r}_{\perp}),\hat{S}_{n,t}\right\}\right\rangle\) and the THz blackbody-induced field correlation \(G_{\text{eo.}}^{(1)}(\tau,\delta\vec{r}_{\perp})\) present strongly different temporal coherence characteristics, which depend on the transverse separation \(\delta\vec{r}_{\perp}\) of the sampling beams. This dependence has been investigated experimentally. The dependence of the measured nonlinear correlation amplitude \(G_{\text{tot.,pp}}^{(1)}(\tau,\delta\vec{r}_{\perp})\) on the relative distance between the sampling probes \(\delta\vec{r}_{\perp}\) is reported both in the time and frequency domain in Fig. 5.
In Fig. 5 (a) the total peak-to-peak amplitude of the measured nonlinear correlation \(G_{\text{tot.,pp.}}^{(1)}(\tau,\delta\vec{r}_{\perp})\) is reported as a function of the relative transverse distance between the sampling probes \(\delta\vec{r}_{\perp}\). As the latter increases, the nonlinear correlation decreases significantly in amplitude, from a value of \(G_{\text{tot.,pp.}}^{(1)}(\tau,\delta\vec{r}_{\perp})=500~{}\text{V}^{2}/\text{m}^{2}\) in the case of perfectly overlapping sampling beams (\(\delta\vec{r}_{\perp}=0~{}\upmu\text{m}\)) to a value of \(G_{\text{tot.,pp.}}^{(1)}(\tau,\delta\vec{r}_{\perp})=8.2~{}\text{V}^{2}/\text{m}^{2}\) for a transverse beam separation of \(\delta\vec{r}_{\perp}=175~{}\upmu\text{m}\). As shown in the figure by comparison with the numerically simulated result, for spatially overlapping sampling laser probes the measured correlation is dominated by the higher-order nonlinear term \(G_{\text{Kerr}}^{(1)}(\tau,\delta\vec{r}_{\perp})\), while for sampling beams with a transverse distance equal to several times their diameter the experimental results are well predicted by the electro-optic field correlation of THz thermal radiation \(G_{\text{eo.}}^{(1)}(\tau,\delta\vec{r}_{\perp})\). A similar behavior can also be observed when employing different
nonlinear ZnTe detection crystals of various lengths, as shown in Sec. III in the supplementary material.
As expected, the experimental correlation measurements performed for different probing beam distances present significant differences not only in amplitude but also in the extent of their temporal coherence, as reported in Fig. 5 (b). The experimental result measured in the case of \(\delta\vec{r}_{\perp}=0\)\(\upmu\)m preserves its temporal coherence for a time of 0.5 ps. An increase in the spatial distance between the sampling probes \(\delta\vec{r}_{\perp}\), on the other hand, correlates with an increased temporal extent of the nonlinear correlation, which is preserved for over 2 ps in all the measurements. This difference is reflected in the spectral content of the sampled broadband radiation for overlapping and non-overlapping sampling pulses, reported in Fig. 5 (c). As can be seen in the figure, the spectral content of the nonlinear correlation measured with \(\delta\vec{r}_{\perp}=0\)\(\upmu\)m presents a large bandwidth of 3.5 THz with the majority of the contributions given by lower frequency modes around 0 THz, in good agreement with a major contribution stemming from
Figure 5: _Quantum higher order nonlinear correlations as a function of beam separation_. (a) Dependence of the peak-to-peak amplitude of higher order nonlinear correlation term \(G_{\text{pp}}^{(1)}(\tau,\delta\vec{r}_{\perp})\) (in log scale) as a function of the transverse beam separation \(\delta\vec{r}_{\perp}\). For comparison, also the peak-to-peak amplitude of the electro-optic field correlation of THz modes \(G_{\text{co},\text{pp}}^{(1)}(\tau,\delta\vec{r}_{\perp})\) is reported (dash-dotted line). (b,c) Experimental nonlinear correlation measurements in time (b) and frequency domain (c) for increasing \(\delta\vec{r}_{\perp}\). All errorbars represent the 2\(\sigma\) confidence interval. (d) Numerically computed electro-optic electric field correlation \(G_{\text{co}}^{(1)}(\tau,\delta\vec{r}_{\perp})\) at T = 300 K for spatial beams separation \(\delta\vec{r}_{\perp}\) reported in (b) and (c) for non-overlapping beams.
the higher-order nonlinear correlation \(G_{\text{Kerr}}^{(1)}(\tau,\delta\vec{r}_{\perp})\). On the other hand, the experimental results obtained with the sampling beams not overlapping inside the nonlinear material (\(\delta\vec{r}_{\perp}>50\)\(\upmu\)m) show low spectral contributions from the low-frequency region. Moreover, their spectral contents present a varying bandwidth and peak contribution frequency, which decrease from values of 3 THz and 1.5 THz, respectively, for the measurement performed with \(\delta\vec{r}_{\perp}=50\)\(\upmu\)m to values of 1.75 THz and 0.75 THz for the nonlinear correlation measured with \(\delta\vec{r}_{\perp}=175\)\(\upmu\)m. The decrease in bandwidth as well as the redshift of the peak contribution frequency for the nonlinear correlation measured with non-spatially-overlapping probes (\(\delta\vec{r}_{\perp}>50\)\(\upmu\)m) is in good agreement with the numerical results for the electro-optic field correlation of thermal THz radiation, reported in Fig. 5 (d). As shown in the figure, the numerical THz electro-optic correlations estimated for different transverse beam separations \(\delta\vec{r}_{\perp}\) present almost identical bandwidths to the experimental results and a similar redshift in the peak contribution frequency, which decrease from values of 3.2 THz and 2 THz, respectively, for the measurement performed with \(\delta\vec{r}_{\perp}=50\)\(\upmu\)m to values of 1.4 THz and 0.6 THz for the nonlinear correlation measured with \(\delta\vec{r}_{\perp}=175\)\(\upmu\)m. The redshift of the peak contribution frequency of the THz electro-optic correlation can be intuitively explained in the spatial domain: electromagnetic modes with a wavelength equal to or smaller than twice the transverse distance between the sampling pulses provide negative or vanishing contributions to the measured field correlation.
In order to corroborate the hypothesis that the higher-order nonlinear correlation originates from vacuum-assisted four-wave mixing of the overlapping probing pulses, a further analysis of its dependence on the experimental parameters has been performed. The experimental results obtained for different crystals, environmental temperatures, and probe powers and wavelengths are reported in Fig. 6. All the measurements shown have been performed with perfectly overlapping sampling beams \(\delta\vec{r}_{\perp}=0\)\(\upmu\)m.
The higher-order nonlinear correlation measurements obtained employing \(\langle 110\rangle\)-cut ZnTe detection crystals of different lengths are reported in Fig. 6 (a). As can be seen, the normalized nonlinear correlation measurements performed using two distinct 1 mm long crystals and a 2 mm long ZnTe detection crystal are all characterized by the presence of a higher-order correlation term, whose amplitude decreases from a peak value of \(G_{\text{tot.}}^{(1)}(\tau,0\)\(\upmu\)m\()=230\) V\({}^{2}/\)m\({}^{2}\) for the 1 mm long crystal to a value of \(G_{\text{tot.}}^{(1)}(\tau,0\)\(\upmu\)m\()=75\) V\({}^{2}/\)m\({}^{2}\) for the 2 mm long crystal. All the measurements, however, present a similar temporal coherence, which is preserved for a period of 0.75 ps. The common origin of the signal in both crystals is further highlighted by its comparison in the frequency domain,
presented in Fig. 6 (b). All experimental measurements present exactly the same spectral content, with a bandwidth of around 2 THz and the majority of the contributions originating from the lower frequency region around 0 THz. The latter is in good agreement with a nonlinear correlation signal generated via third-order nonlinear mixing of the probes mediated via the electromagnetic vacuum, as described in Eq. (33) of Supp. Material in Sec. II A. In contrast, a signal generated from the
Figure 6: _Experimental parameters dependence_. (a,b) Higher-order nonlinear correlation measurement in the time (a) and frequency domain (b) employing two different 1 mm long and a 2 mm long \(\langle 110\rangle\)-cut ZnTe detection crystals. The experimental parameters used are: \(P_{t}=P_{\tau}=0.8\) mW, \(\tau_{\text{p}}=200\) fs, \(\lambda=780\) nm. (c) Third-order nonlinear correlation measurement for different sampling laser pulse powers. Both measurements have been performed using a 1 mm long ZnTe crystal, \(\tau_{\text{p}}=200\) fs and \(\lambda=780\) nm. (d) Experimental nonlinear correlation measured employing radiation of \(\lambda=800\) nm and 780 nm. The measurements have been performed using the same \(\langle 110\rangle\)-cut ZnTe crystal. (e) Effect of the temperature of the detected blackbody radiation on the higher-order nonlinear correlation \(G_{\text{tot.}}^{(1)}(\tau,0\ \upmu\text{m})\). (f) Average number of photons per mode \(\langle n\rangle\) according to Planck’s radiation law at temperatures of 300 K (orange) and 4 K (yellow). The central frequency of the ultrashort probing pulse employed in the experiment, \(\omega_{c}=375\) THz, is reported as a blue dashed line. The error bars in all the experimental measurements in the figure represent the \(2\sigma\) confidence interval.
electro-optic correlation of the THz thermal background field would have significantly depended on the phase-matching condition and therefore on the length of the crystal.
A further indication of the third-order nonlinear origin of the measured signal is shown in Fig. 6 (c), where the measurements performed with sampling pulse powers of \(P_{t}=P_{\tau}=1.6\) mW and \(P_{t}=P_{\tau}=0.8\) mW, respectively, are reported. Given the power normalization introduced by the use of the constant C, the experimental higher-order nonlinear correlation measurements appear to be directly proportional to the product of the powers of the sampling probes \(P_{t}\), \(P_{\tau}\). This result is once more in good agreement with the expected power dependency of a third-order induced nonlinear correlation, as shown in Eq. (33) of the Suppl. Material in Sec. II A.
The dependence of the balanced ellipsometry correlation amplitude on the wavelength of the sampling radiation \(\lambda\) is reported in Fig. 6 (d). As can be seen in the figure, the use of a longer near-infrared wavelength \(\lambda=800\) nm leads to a higher-order nonlinear correlation with a peak-to-peak amplitude of around \(G_{\text{tot.}}^{(1)}(\tau,0\ \upmu\text{m})=500\ \text{V}^{2}/\text{m}^{2}\), twice as large as the result obtained using the shorter-wavelength sampling probes centered at \(\lambda=780\) nm.
The influence of the temperature \(T\) of the thermal radiation to which the ZnTe detection crystal is exposed and, as a consequence, of the average number of photons per mode \(\langle n\rangle\) populating the incoherent quantum radiation responsible for the nonlinear correlation is reported in Fig. 6 (e). As shown in the figure, the change in temperature \(T\) of the measured blackbody radiation does not significantly affect the amplitude of the higher-order nonlinear correlation, which increases only from a peak-to-peak value of \(G_{\text{tot.}}^{(1)}(\tau,0\ \upmu\text{m})=500\ \text{V}^{2}/\text{m}^{2}\) at \(T=300\) K to a value of \(G_{\text{tot.}}^{(1)}(\tau,0\ \upmu\text{m})=580\ \text{V}^{2}/\text{m}^{2}\) at \(T=4\) K. This experimental result is in good agreement with a higher-order correlation term generated via the electromagnetic vacuum at NIR frequencies. As can be seen in Fig. 6 (f), the average number of photons per mode \(\langle n\rangle\) at NIR frequencies, given by Planck's radiation law, remains below \(\langle n\rangle<10^{-14}\) for both temperatures in the frequency range around the central frequency of the infrared probing pulse \(\omega_{c}=375\) THz.
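The order of magnitude quoted above can be reproduced with a short numerical sketch (added here purely for illustration; it is not part of the experimental analysis). Assuming that the quoted \(\omega_{c}=375\) THz denotes the optical frequency of the \(\sim\)800 nm probe, Planck's occupation number \(\langle n\rangle=1/(e^{h\nu/k_{B}T}-1)\) can be evaluated at both temperatures:

```python
import numpy as np

h = 6.62607015e-34    # Planck constant [J s]
kB = 1.380649e-23     # Boltzmann constant [J/K]

def planck_occupation(nu_hz: float, temperature_k: float) -> float:
    """Average photon number per mode, <n> = 1 / (exp(h*nu / (kB*T)) - 1)."""
    x = h * nu_hz / (kB * temperature_k)
    if x > 50.0:
        return float(np.exp(-x))      # asymptotic form, avoids overflow of exp(x)
    return float(1.0 / np.expm1(x))

nu_c = 375e12  # assumed probe frequency [Hz], i.e. ~800 nm radiation
for T in (300.0, 4.0):
    print(f"T = {T:5.1f} K  ->  <n> = {planck_occupation(nu_c, T):.2e}")
# Both values lie many orders of magnitude below 1e-14: the NIR modes are
# essentially in their vacuum state at either temperature.
```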
## IV Conclusion
In this work, we have shown how third-order nonlinearity at NIR frequencies can affect balanced ellipsometry measurements performed with ZnTe, both in the signal measured by the individual pulses and, most importantly, in their correlation.
In Sec. II.1 we have derived theoretically the properties in time and in frequency domain of the
third-order coherent signal induced on each of the sampling pulses due to their mutual interaction in the nonlinear material. The predictions have been corroborated by the experimental results reported in Sec. II.2.
In a balanced field correlation measurement, third-order nonlinearities have also been shown to play a significant role in the generation of correlation between the incoherent signals measured by the two sampling pulses. As proven theoretically in Sec. III.1, the electromagnetic vacuum at NIR frequencies characterizing the noise of one pulse can interact with the copropagating one inside the nonlinear medium, leading to a change of its polarization. Upon measurement, the vacuum-induced change in polarization of one femtosecond pulse will therefore be correlated with the noise of the copropagating pulse, which is induced by the same quantum vacuum.
The experimental characterization of the vacuum-assisted third-order nonlinear correlation has been reported in Sec. III.3. Here, the analysis of the spatial nonlinear correlation of incoherent radiation clearly indicates the presence of two different regimes. Whilst for spatially overlapping sampling beams the nonlinear correlation is dominated by the four-wave mixing due to the quantum vacuum at NIR frequencies, for increasing distance between the sampling pulses the experimental results return to a good agreement with the electro-optic field correlation of THz thermal radiation. A detailed analysis of the dependence of the third-order nonlinear correlation on the experimental parameters further verifies its connection to the electromagnetic vacuum at NIR frequencies, given its independence of temperature, its power dependence, and its presence with similar features in different crystals. All our experimental findings are in good agreement with the predicted dependence of the responsivity of the higher-order nonlinear correlation on the measurement parameters, derived in Sec. II A and B of the supplementary material. The qualitative analysis of the latter, moreover, clearly implies the absence of significant contributions from vacuum-induced third-order nonlinear correlations due to the quantum vacuum at NIR frequencies in the electro-optic field correlation measurements of THz thermal radiation presented in our previous work [18; 19].
Because two-beam correlation experiments, in contrast to single-beam studies [11; 12], allow the identification of the frequency components contributing to the correlation signal, they have also allowed us to isolate the contribution from the higher-order nonlinear signal that is inevitably mixed into the electro-optic sampling of THz radiation. Furthermore, this work shows that measuring two nearby spatial locations with beams that do not physically overlap is a very efficient strategy to remove such a contribution while increasing the detection sensitivity and enabling further studies of the quantum vacuum [19].
###### Acknowledgements.
The experimental work was funded by the Swiss National Science Foundation (grant 200020 192330/1) and the National Centre of Competence in Research Quantum Science and Technology (NCCR QSIT) (grant 51NF40-185902) (F.F.S, A.H.). We acknowledge the mechanical workshop at ETHZ. We acknowledge Dr. Cristina Ileana Benea-Chelmus for their contributions to previous work and Dr. E. Mavrona for the extraction of the refractive index of ZnTe. We thank Prof. Denis Seletskiy for fruitful discussions.
## Authors Declaration
### Conflict of interest
The authors declare no conflict of interest.
## Authors contribution
J.F. and F.F.S. conceived the idea for the experiment and its theoretical interpretation. F.F.S and A.H. conducted the measurements. The data analysis was primarily performed by F.F.S and their results were interpreted by F.F.S, A.H and J.F. The theoretical framework was developed by F.F.S. J.F. was the scientific supervisor of this work. The manuscript was written through contributions from all authors. All authors have given approval to the final version of the manuscript.
## Data availability statement
The experimental data supporting the findings of this study are available from the corresponding author upon reasonable request.
|
2309.13482 | A Unified Scheme of ResNet and Softmax | Large language models (LLMs) have brought significant changes to human
society. Softmax regression and residual neural networks (ResNet) are two
important techniques in deep learning: they not only serve as significant
theoretical components supporting the functionality of LLMs but also are
related to many other machine learning and theoretical computer science fields,
including but not limited to image classification, object detection, semantic
segmentation, and tensors.
Previous research works studied these two concepts separately. In this paper,
we provide a theoretical analysis of the regression problem: $\| \langle
\exp(Ax) + A x , {\bf 1}_n \rangle^{-1} ( \exp(Ax) + Ax ) - b \|_2^2$, where
$A$ is a matrix in $\mathbb{R}^{n \times d}$, $b$ is a vector in
$\mathbb{R}^n$, and ${\bf 1}_n$ is the $n$-dimensional vector whose entries are
all $1$. This regression problem is a unified scheme that combines softmax
regression and ResNet, which has never been done before. We derive the
gradient, Hessian, and Lipschitz properties of the loss function. The Hessian
is shown to be positive semidefinite, and its structure is characterized as the
sum of a low-rank matrix and a diagonal matrix. This enables an efficient
approximate Newton method.
As a result, this unified scheme helps to connect two previously thought
unrelated fields and provides novel insight into loss landscape and
optimization for emerging over-parameterized neural networks, which is
meaningful for future research in deep learning models. | Zhao Song, Weixin Wang, Junze Yin | 2023-09-23T21:41:01Z | http://arxiv.org/abs/2309.13482v1 | # A Unified Scheme of ResNet and Softmax
###### Abstract
Large language models (LLMs) have brought significant changes to human society. Softmax regression and residual neural networks (ResNet) are two important techniques in deep learning: they not only serve as significant theoretical components supporting the functionality of LLMs but also are related to many other machine learning and theoretical computer science fields, including but not limited to image classification, object detection, semantic segmentation, and tensors.
Previous research works studied these two concepts separately. In this paper, we provide a theoretical analysis of the regression problem:
\[\|\langle\exp(Ax)+Ax,\mathbf{1}_{n}\rangle^{-1}(\exp(Ax)+Ax)-b\|_{2}^{2},\]
where \(A\) is a matrix in \(\mathbb{R}^{n\times d}\), \(b\) is a vector in \(\mathbb{R}^{n}\), and \(\mathbf{1}_{n}\) is the \(n\)-dimensional vector whose entries are all 1. This regression problem is a unified scheme that combines softmax regression and ResNet, which has never been done before. We derive the gradient, Hessian, and Lipschitz properties of the loss function. The Hessian is shown to be positive semidefinite, and its structure is characterized as the sum of a low-rank matrix and a diagonal matrix. This enables an efficient approximate Newton method.
As a result, this unified scheme helps to connect two previously thought unrelated fields and provides novel insight into loss landscape and optimization for emerging over-parameterized neural networks, which is meaningful for future research in deep learning models.
###### Contents
* 1 Introduction
* 2 Related Work
* 3 Preliminary
* 3.1 Basic Definitions
* 3.2 Basic Facts
* 4 Gradient
* 5 Hessian
* 5.1 Basic Definition
* 5.2 Computation of Hessian
* 5.3 Helpful Lemma
* 5.4 Decomposing \(B_{1}(x),B_{2}(x)\) and \(B_{3}(x)\) into low rank plus diagonal
* 6 Rewrite Hessian
* 7 Hessian is PSD
* 7.1 PSD Lower Bound
* 8 Hessian is Lipschitz
* 8.1 Main results
* 8.2 A core Tool: Upper Bound for Several Basic Functions
* 8.3 A core Tool: Lipschitz Property for Several Basic Functions
* 8.4 Summary of Four Steps
* 8.5 Calculation: Step 1 Lipschitz for Matrix Function \(\alpha(x)^{-2}\cdot\operatorname{diag}(z(x))^{\top}\cdot K(x)^{\top}K(x)\cdot \operatorname{diag}(z(x))\)
* 8.6 Calculation: Step 2 Lipschitz for Matrix Function \(\alpha(x)^{-2}\cdot z(x)\cdot\widetilde{c}(x)^{\top}\cdot\operatorname{ diag}(z(x))\)
* 8.7 Calculation: Step 3 Lipschitz for Matrix Function \(\alpha(x)^{-2}\cdot\operatorname{diag}(z(x))\cdot\widetilde{c}(x)\cdot z(x)^ {\top}\)
* 8.8 Calculation: Step 4 Lipschitz for Matrix Function \(\alpha(x)^{-1}\cdot\operatorname{diag}(\widetilde{c}(x)\circ u_{2}(x))\)
* 9 Main Result
* 10 Conclusion
* A Approximate Newton Method
* A.1 Definition and Update Rule
* A.2 Approximate of Hessian and Update Rule
## 1 Introduction
Softmax regression and residual neural networks (ResNet) are two emerging techniques in deep learning that have driven advances in computer vision and natural language processing tasks. In previous research, these two methods were studied separately.
**Definition 1.1** (Softmax regression, [16]).: _Given a matrix \(A\in\mathbb{R}^{n\times d}\) and a vector \(b\in\mathbb{R}^{n}\), the goal of the softmax regression is to compute the following problem:_
\[\min_{x\in\mathbb{R}^{d}}\|\langle\exp(Ax),\mathbf{1}_{n}\rangle^{-1}\exp(Ax) -b\|_{2}^{2},\]
_where \(\mathbf{1}_{n}\) denotes the \(n\)-dimensional vector whose entries are all \(1\)._
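For concreteness, a minimal NumPy transcription of this objective is sketched below; it is illustrative only (not code from the cited work), and the inputs in the toy example are random placeholders.

```python
import numpy as np

def softmax_regression_loss(A: np.ndarray, x: np.ndarray, b: np.ndarray) -> float:
    """|| <exp(Ax), 1_n>^{-1} exp(Ax) - b ||_2^2 from Definition 1.1."""
    u = np.exp(A @ x)          # exp(Ax), entrywise exponential
    f = u / u.sum()            # normalization by <exp(Ax), 1_n>
    return float(np.linalg.norm(f - b) ** 2)

rng = np.random.default_rng(0)
n, d = 5, 3
A, x, b = rng.standard_normal((n, d)), rng.standard_normal(d), rng.standard_normal(n)
print(softmax_regression_loss(A, x, b))
```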
Because of the explosive development of large language models (LLMs), there is an increasing amount of work focusing on the theoretical aspects of LLMs, aiming to improve their abilities in different areas, including sentiment analysis [20], natural language translation [14], creative writing [15, 16], and language modeling [17]. One of the most important components of an LLM is its ability to identify and focus on the relevant information from the input text. Theoretical works [13, 14, 15, 16, 17, 18, 19, 20, 21, 22] analyze the attention computation to support this ability.
**Definition 1.2** (Attention computation).: _Let \(Q\), \(K\), and \(V\) be \(n\times d\) matrices whose entries are all real numbers._
_Let \(A=\exp(QK^{\top})\) and \(D=\operatorname{diag}(A\mathbf{1}_{n})\) be \(n\)-dimensional square matrices, where \(\operatorname{diag}(A\mathbf{1}_{n})\) is a diagonal matrix whose entries on the \(i\)-th row and \(i\)-th column is the same as the \(i\)-th entry of the vector \(A\mathbf{1}_{n}\)._
_The static attention computation is defined as_
\[\mathsf{Att}(Q,K,V):=D^{-1}AV.\]
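The static attention computation can be transcribed line by line into NumPy as follows (an illustrative sketch added here; \(Q\), \(K\), and \(V\) below are random placeholders).

```python
import numpy as np

def static_attention(Q: np.ndarray, K: np.ndarray, V: np.ndarray) -> np.ndarray:
    """Att(Q, K, V) = D^{-1} A V with A = exp(Q K^T) and D = diag(A 1_n)."""
    A = np.exp(Q @ K.T)                 # n x n matrix of exponentiated scores
    D_inv = 1.0 / A.sum(axis=1)         # diagonal entries of D^{-1}
    return (D_inv[:, None] * A) @ V     # row-normalized attention applied to V

rng = np.random.default_rng(1)
n, d = 4, 2
Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))
print(static_attention(Q, K, V).shape)  # (4, 2)
```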
In attention computation, the matrix \(Q\) represents the query tokens, which are derived from the previous hidden state of the decoder. \(K\) and \(V\) represent the key tokens and values. When computing \(A\), the softmax function is applied to obtain the attention weights, namely \(A_{i,j}\). Inspired
Figure 1: The visualization of the softmax regression (see Definition 1.1). \(A\) is a matrix in \(\mathbb{R}^{n\times d}\). \(b,\mathbf{1}_{n}\in\mathbb{R}^{n}\) and \(x\in\mathbb{R}^{d}\) are vectors. First, we compute \(\exp(Ax)\) by multiplying \(A\) with \(x\) and then calculating the exponential of each entry in their product. Second, we compute the inner product of \(\exp(Ax)\) and \(\mathbf{1}_{n}\) and then find the multiplicative inverse of it. Third, we multiply the results of the first and the second step and subtract \(b\) from it. Finally, we compute the minimum of the square of the \(\ell_{2}\) norm of result of the third step. The blue rectangles represent the \(n\times d\) matrices. The pink rectangles represent the \(n\)-dimensional vectors. The green rectangles represent the \(d\)-dimensional vectors.
by the role of the exponential functions in attention computation, prior research [11, 12] has built a theoretical framework of hyperbolic function regression, which includes the functions \(f(x)=\exp(Ax),\cosh(Ax)\), and \(\sinh(Ax)\).
**Definition 1.3** (Hyperbolic regression, [12]).: _Given a matrix \(A\in\mathbb{R}^{n\times d}\) and a vector \(b\in\mathbb{R}^{n}\), the goal of the hyperbolic regression problem is to compute the following regression problem:_
\[\min_{x\in\mathbb{R}^{d}}\|f(x)-b\|_{2}.\]
The approach developed by [12] for analyzing the hyperbolic regression is to consider the normalization factor, namely \(\langle f(x),\mathbf{1}_{n}\rangle^{-1}=\langle\exp(Ax),\mathbf{1}_{n}\rangle^{-1}\). By focusing on the exp case, [12] transforms the hyperbolic regression problem (see Definition 1.3) into the softmax regression problem (see Definition 1.1). Later on, [12] studies in-context learning based on a softmax regression formulation of the attention mechanism in the Transformer, which is an essential component within LLMs since it allows the model to focus on particular input elements. Moreover, [11] utilizes a tensor trick from [12, 13, 14, 15, 16] to simplify the multiple softmax regression into a single softmax regression.
ResNet is a certain type of deep learning model in which the weight layers learn residual functions [10]. It is characterized by skip connections, which may perform identity mappings by adding the layer's output to the initial input. This mechanism is similar to the Highway Network in [11], where the gates are opened through highly positive bias weights. This innovation facilitates the training of deep learning models with a substantial number of layers, allowing them to achieve better accuracy as they become deeper. These identity skip connections, commonly known as "residual connections", are also employed in various other systems, including the Transformer [16], BERT [17], and ChatGPT [18]. Moreover, ResNets have achieved state-of-the-art performance across many computer vision tasks, including image classification [19, 10], object detection [1, 15, 16, 17], and semantic segmentation [14, 15, 16, 17, 18]. Mathematically, it is defined as
\[Y_{j+1}=Y_{j}+F(Y_{j},\theta_{j}) \tag{1}\]
where \(Y_{j+1},Y_{j},F(Y_{j},\theta_{j})\in\mathbb{R}^{d}\): \(Y_{j}\) represents the feature values at the \(j\)-th layer, while \(\theta_{j}\) denotes the network parameters specific to that layer. The objective of the training process is to learn the network parameters \(\theta\).
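In code, the update in Eq. (1) simply adds the output of a parametric map back onto its input. The sketch below is a schematic illustration only; the particular choice \(F(Y,\theta)=\tanh(\theta Y)\) is an assumption made for the example and is not taken from the cited works.

```python
import numpy as np

def residual_block(Y: np.ndarray, theta: np.ndarray) -> np.ndarray:
    """One ResNet update Y_{j+1} = Y_j + F(Y_j, theta_j), with F(Y, theta) = tanh(theta @ Y)."""
    return Y + np.tanh(theta @ Y)

rng = np.random.default_rng(2)
d, depth = 4, 3
Y = rng.standard_normal(d)
for _ in range(depth):                  # stack a few residual blocks
    theta = rng.standard_normal((d, d)) * 0.1
    Y = residual_block(Y, theta)
print(Y)
```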
In this paper, we combine the softmax regression (see Definition 1.1) with ResNet and give a theoretical analysis of this problem. We formally define it as follows:
**Definition 1.4** (Soft-Residual Regression).: _Given a matrix \(A\in\mathbb{R}^{n\times d}\) and a vector \(b\in\mathbb{R}^{n}\), the goal is to compute the following regression problem:_
\[\|\langle\exp(Ax)+Ax,\mathbf{1}_{n}\rangle^{-1}(\exp(Ax)+Ax)-b\|_{2}^{2}\]
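A direct NumPy transcription of this objective is given below (illustrative only); relative to the softmax regression sketch above, the only change is the additional residual term \(Ax\) inside the normalization.

```python
import numpy as np

def soft_residual_loss(A: np.ndarray, x: np.ndarray, b: np.ndarray) -> float:
    """|| <exp(Ax)+Ax, 1_n>^{-1} (exp(Ax)+Ax) - b ||_2^2 from Definition 1.4."""
    u = np.exp(A @ x) + A @ x   # exp(Ax) + Ax
    f = u / u.sum()             # normalize by <u, 1_n>
    return float(np.linalg.norm(f - b) ** 2)

rng = np.random.default_rng(3)
A, x, b = rng.standard_normal((5, 3)), rng.standard_normal(3), rng.standard_normal(5)
print(soft_residual_loss(A, x, b))
```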
We are motivated by the fact that the softmax regression and ResNets have mostly been studied separately in prior works. We would like to provide a theoretical analysis for combining them together. The unified perspective and analysis of the loss landscape could provide insights into optimization and generalization for emerging overparametrized models. We firmly believe that this lays the groundwork for further research at the intersection of softmax classification and residual architectures.
**Roadmap.** In Section 2, we introduce the related work. In Section 3, we introduce the basic notations we use and present basic mathematical facts that may support the mathematical properties developed in this paper. In Section 4, we compute the gradient of the functions we defined earlier. In Section 5, we compute the Hessian of these functions based on their gradient. In Section 6, we formally define the key functions that appear in the Hessian and rewrite the Hessian functions in a more formal way. In Section 7, we show that the Hessian is positive semidefinite (PSD). In Section 8, we show that the Hessian is Lipschitz. In Section 9, we summarize the mathematical properties we developed in the previous sections and explain how they may support the main result of this paper. In Section 10, we conclude this paper, present the meaningfulness of our work, and discuss future research directions.
## 2 Related Work
In this section, we introduce the previous related research works.
**Residual Neural Networks.** ResNets were introduced by [1] for the purpose of simplifying the training of networks that are significantly deeper than those that were used previously. Its design is inspired by residual learning. [1] demonstrated state-of-the-art performance on image recognition benchmarks using extremely deep ResNets with over 100 layers. By adding shortcut connections, ResNets were able to successfully train far deeper networks than previous architectures.
After being introduced, ResNets have become a prevalent research area in computer vision and its applications. Many subsequent works were built on the original ResNet model. In [17], ResNeXt is proposed: it splits each layer into smaller groups to increase the cardinality, which is defined as the size of the set of transformations. It is shown that the increase in cardinality leads to higher classification accuracy and is more effective than going deeper or wider when the capacity is increased. Moreover, [16] proposes a novel architecture called wide residual networks (WRNs), and based on their experimental study, it is shown that residual networks with fewer layers and an increased width are far superior to their thin and deep counterparts. In addition, ResNets are also studied in the context of efficiency [1, 18, 19], video analysis [20, 21, 22, 23, 24], and breast cancer [20, 21, 23, 25, 26].
Moreover, a connection has been found between ResNets and ordinary differential equations (ODEs).
Figure 2: The visualization of the soft-residual regression (see Definition 1.4). \(A\) is a matrix in \(\mathbb{R}^{n\times d}\). \(b,\mathbf{1}_{n}\in\mathbb{R}^{n}\) and \(x\in\mathbb{R}^{d}\) are vectors. First, we compute \(\exp(Ax)\) by multiplying \(A\) with \(x\) and then calculating the exponential of each entry in their product. Second, we add \(\exp(Ax)\) with \(Ax\). Third, we compute the inner product of \(\exp(Ax)+Ax\) and \(\mathbf{1}_{n}\) and then find the multiplicative inverse of it. Fourth, we multiply the results of the third step with \(\exp(Ax)+Ax\) and subtract \(b\) from it. Finally, we compute the minimum of the square of the \(\ell_{2}\) norm of the result of the fourth step. The blue rectangles represent the \(n\times d\) matrices. The pink rectangles represent the \(n\)-dimensional vectors. The green rectangles represent the \(d\)-dimensional vectors.
A ResNet, as shown in Eq. (1), is a difference equation (or a discrete dynamical system). ODEs describe the continuous change of a dynamical system with respect to time. Therefore, a small parameter \(h>0\) is introduced in Eq. (1), which allows the equation to be viewed as a discretization of a continuous one:
\[\frac{Y_{j+1}-Y_{j}}{h}=F(Y_{j},\theta_{j}),\]
which implies
\[\frac{\mathrm{d}Y(t)}{\mathrm{d}t}\approx F(Y(t),\theta(t)),\ Y(0)=Y_{0}.\]
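This correspondence can be made explicit in a few lines: a forward Euler step of the ODE with step size \(h\) reduces to the ResNet update of Eq. (1) when \(h=1\). The sketch below is schematic, with an arbitrary placeholder for \(F\).

```python
import numpy as np

def F(Y: np.ndarray, theta: np.ndarray) -> np.ndarray:
    """Placeholder layer map F(Y, theta), used only for illustration."""
    return np.tanh(theta @ Y)

def forward_euler_step(Y: np.ndarray, theta: np.ndarray, h: float) -> np.ndarray:
    """One explicit Euler step of dY/dt = F(Y, theta)."""
    return Y + h * F(Y, theta)

rng = np.random.default_rng(4)
d = 3
Y0 = rng.standard_normal(d)
theta = rng.standard_normal((d, d)) * 0.1
# With h = 1, the Euler step coincides with the ResNet update Y + F(Y, theta).
print(np.allclose(forward_euler_step(Y0, theta, 1.0), Y0 + F(Y0, theta)))  # True
```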
Due to the lack of general guidance on network architecture design, [10] connects the concept of ResNets with numerical differential equations, showing that a ResNet can be interpreted as a forward Euler discretization of a differential equation. Follow-up works expand this connection: [14] improves the accuracy by introducing more stable and adaptive ODE solvers for use in ResNets, [11] establishes a new architecture based on ODEs to resolve the challenge of the vanishing gradient, and [12] constructs a theoretical framework studying the stability and reversibility of deep neural networks, in which three reversible neural network architectures are developed that can theoretically be arbitrarily deep.
**Attention.** The attention matrix is a square matrix that contains the associations between words or tokens in natural language text. Each row and column of an attention matrix corresponds to a token, and the values within it signify the level of connection between these tokens. When generating the output, the attention matrix has a huge influence on determining the significance of individual input tokens within a sequence. Under this attention mechanism, each input token is assigned a weight or score that reflects its relevance to the current output generation.
Various methods have been developed to approximate the prominent entries of the attention matrix: methods like k-means clustering [15] and Locality Sensitive Hashing (LSH) [17, 18, 19] restrict the attention to nearby tokens, and other methods like [19] approximate the attention matrix by using random feature maps based on Gaussian or exponential kernels. Furthermore, [18] showed that combining LSH-based and random feature-based methods is a more effective technique for estimating the attention matrix.
As presented in the recent works [16, 15, 14, 17, 18, 19, 20, 21, 22, 23], LLMs are able to predict the
following words or phrases when presented with a sequence of input words. Additionally, the softmax unit may allow the models to adapt their neural network's weights and biases based on the available data. Under convex optimization, the softmax function is utilized for managing the progress and stability of potential functions, as shown in [1, 15]. Drawing inspiration from the concept of the softmax unit, [10] introduces a problem known as softmax regression. These studies examine several particular formulations: exponential regression [11, 12], softmax regression [13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], rescaled softmax regression [12], and multiple softmax regression [11].
**Convergence and Optimization.** There are numerous studies analyzing optimization and convergence to enhance training methods. [10] reveals that stochastic gradient descent may efficiently optimize over-parameterized neural networks for structured data. Similarly, [13] shows that gradient descent can also optimize over-parameterized neural networks. After that, [1] introduces a convergence theory for over-parameterized deep neural networks using gradient descent. Meanwhile, [1] investigates the rate at which the training of recurrent neural networks converges.
[1] gives an in-depth analysis of the optimization and generalization of over-parameterized two-layer neural networks. Moreover, [1] analyzes the exact computation using infinitely wide neural networks. On the other hand, [18] proposes the Gram-Gauss-Newton method, which is used to optimize over-parameterized neural networks.
In [13], the global convergence of stochastic gradient descent during the training of deep neural networks is analyzed, requiring less over-parameterization compared to previous research. Furthermore, works like [13, 12, 20] focus on the optimization and generalization aspects, whereas [14, 11] emphasize the convergence rate and stability.
Moreover, there are works such as [15, 16, 17, 18, 19, 20] that concentrate on specialized optimization algorithms and techniques for training neural networks. Finally, [10, 14] center their efforts on harnessing the structural aspects of neural networks for specific purposes.
In addition, there is a significant amount of work [20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39] that analyzes sketching: a technique to speed up machine learning algorithms and optimization.
## 3 Preliminary
In this section, we first introduce the basic notations we use. Then, in section 3.1, we introduce the definition of the functions we analyze in the later sections. In Section 3.2, we present the basic mathematical properties of the derivative, vectors, norms, and matrices.
**Notations.** Now, we define the notations used in this paper.
First, we define the notations related to sets. Let \(\mathbb{Z}_{+}\) be the set containing all the positive integers, namely \(\{1,2,3,\dots\}\). Let \(n,d\) be arbitrary elements in \(\mathbb{Z}_{+}\). We define \([n]:=\{1,2,\dots,n\}\). We define \(\mathbb{R},\mathbb{R}^{n},\mathbb{R}^{n\times d}\) to be the set containing all real numbers, the set containing all \(n\)-dimensional vectors whose entries are all real numbers, and the set containing all \(n\times d\) matrices whose entries are all real numbers, respectively.
Then, we define the notations related to vectors. Let \(x,y\) be arbitrary elements in \(\mathbb{R}^{n}\). We use \(x_{i}\) to denote the \(i\)-th entry of \(x\), for all \(i\in[n]\). \(\|x\|_{2}\in\mathbb{R}\) denotes the \(\ell_{2}\) norm of the vector \(x\), which
is defined as \(\|x\|_{2}:=(\sum_{i=1}^{n}x_{i}^{2})^{1/2}\). \(\langle x,y\rangle\in\mathbb{R}\) represents the inner product of \(x\) and \(y\), which is defined as \(\langle x,y\rangle:=\sum_{i=1}^{n}x_{i}y_{i}\). We use \(\circ\) to denote a binary operation between \(x\) and \(y\), called the Hadamard product. \(x\circ y\in\mathbb{R}^{n}\) is defined as \((x\circ y)_{i}:=x_{i}\cdot y_{i}\), for all \(i\in[n]\). \(\mathbf{1}_{n}\in\mathbb{R}^{n}\) denotes a vector, where \((\mathbf{1}_{n})_{i}:=1\) for all \(i\in[n]\), and \(\mathbf{0}_{n}\in\mathbb{R}^{n}\) denotes a vector, where \((\mathbf{0}_{n})_{i}:=0\) for all \(i\in[n]\).
After that, we introduce the notations related to matrices. Let \(A\) be an arbitrary element in \(\mathbb{R}^{n\times d}\). We use \(A_{i,j}\) to denote the entry of \(A\) which is at the \(i\)-th row and \(j\)-th column, for all \(i\in[n]\) and \(j\in[d]\). We define \(A_{*,i}\in\mathbb{R}^{n}\) as \((A_{*,i})_{j}:=A_{j,i}\), for all \(j\in[n]\) and \(i\in[d]\). We use \(\|A\|\) to denote the spectral norm of \(A\), i.e., \(\|A\|:=\max_{x\in\mathbb{R}^{d}}\|Ax\|_{2}/\|x\|_{2}\). This also implies that for any \(x\in\mathbb{R}^{d}\), \(\|Ax\|_{2}\leq\|A\|\cdot\|x\|_{2}\). For any \(x\in\mathbb{R}^{d}\), we define \(\operatorname{diag}(x)\in\mathbb{R}^{d\times d}\) as \((\operatorname{diag}(x))_{i,j}:=x_{i}\) for all \(i=j\) and \((\operatorname{diag}(x))_{i,j}:=0\) for all \(i\neq j\), where \(i,j\in[d]\). We use \(A^{\top}\in\mathbb{R}^{d\times n}\) to denote the transpose of \(A\), namely \((A^{\top})_{i,j}:=A_{j,i}\), for all \(i\in[d]\) and \(j\in[n]\). We use \(I_{n}\) to denote the \(n\)-dimensional identity matrix. Let \(B\) and \(C\) be arbitrary symmetric matrices. We say \(B\preceq C\) if, for all vector \(x\), we have \(x^{\top}Bx\leq x^{\top}Cx\). We say \(B\) is positive semidefinite (or \(B\) is a PSD matrix), denoted as \(B\succeq 0\), if, for all vectors \(x\), we have \(x^{\top}Bx\geq 0\).
Finally, we define the notations related to functions. We define \(\phi:\mathbb{R}\to\mathbb{R}\) as \(\phi(z):=\max\{z,0\}\). For a differentiable function \(f\), we use \(\frac{\mathrm{d}f}{\mathrm{d}x}\) to denote the derivative of \(f\).
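The notation above maps directly onto standard NumPy operations; the short sketch below (added for illustration, with arbitrary example values) records this correspondence.

```python
import numpy as np

u = np.array([1.0, -2.0, 3.0])
v = np.array([0.5, 4.0, -1.0])
A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])

hadamard = u * v                        # u o v, the Hadamard (entrywise) product
inner = u @ v                           # <u, v>
diag_u = np.diag(u)                     # diag(u)
column = A[:, 0]                        # A_{*,1}, the first column of A
spectral_norm = np.linalg.norm(A, 2)    # ||A|| = max_x ||Ax||_2 / ||x||_2
phi = np.maximum(u, 0.0)                # phi(z) = max{z, 0}, applied entrywise
print(hadamard, inner, spectral_norm, phi)
```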
### Basic Definitions
In this section, we define the basic functions which are analyzed in the later sections.
**Definition 3.1** (Basic functions).: _Let \(A\in\mathbb{R}^{n\times d}\) be an arbitrary matrix. Let \(x\in\mathbb{R}^{d}\) be an arbitrary vector. Let \(b\in\mathbb{R}^{n}\) be a given vector. Let \(i\in[d]\) be an arbitrary positive integer. We define the functions \(u_{1},u_{2},u,f,c,z,v_{i}:\mathbb{R}^{d}\to\mathbb{R}^{n}\) and \(\alpha,L,\beta_{i}:\mathbb{R}^{d}\to\mathbb{R}\) as_
\[u_{1}(x) :=Ax u_{2}(x) :=\exp(Ax)\] \[u(x) :=u_{1}(x)+u_{2}(x) \alpha(x) :=\langle u(x),\mathbf{1}_{n}\rangle\] \[f(x) :=\alpha(x)^{-1}u(x) c(x) :=f(x)-b\] \[L(x) :=0.5\|c(x)\|_{2}^{2} z(x) :=u_{2}(x)+\mathbf{1}_{n}\] \[v_{i}(x) :=(u_{2}(x)+\mathbf{1}_{n})\circ A_{*,i} \beta_{i}(x) :=\langle v_{i}(x),\mathbf{1}_{n}\rangle.\]
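To keep the subsequent derivations easy to check numerically, the quantities of Definition 3.1 can be transcribed directly into NumPy; the sketch below is an illustration added here (not code from the original work), with random placeholder inputs.

```python
import numpy as np

def basic_quantities(A: np.ndarray, x: np.ndarray, b: np.ndarray) -> dict:
    """Numerical transcription of the functions in Definition 3.1."""
    n = A.shape[0]
    u1 = A @ x                          # u_1(x) = Ax
    u2 = np.exp(A @ x)                  # u_2(x) = exp(Ax)
    u = u1 + u2                         # u(x)
    alpha = u.sum()                     # alpha(x) = <u(x), 1_n>
    f = u / alpha                       # f(x)
    c = f - b                           # c(x)
    L = 0.5 * np.linalg.norm(c) ** 2    # L(x)
    z = u2 + np.ones(n)                 # z(x)
    V = z[:, None] * A                  # column i is v_i(x) = (u_2(x) + 1_n) o A_{*,i}
    beta = V.sum(axis=0)                # beta_i(x) = <v_i(x), 1_n>
    return dict(u1=u1, u2=u2, u=u, alpha=alpha, f=f, c=c, L=L, z=z, V=V, beta=beta)

rng = np.random.default_rng(5)
A, x, b = rng.standard_normal((6, 4)) * 0.5, rng.standard_normal(4) * 0.5, rng.standard_normal(6)
q = basic_quantities(A, x, b)
print(q["L"], q["beta"].shape)
```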
### Basic Facts
In this section, we present the basic mathematical properties which are used to support our analysis in later sections.
**Fact 3.2**.: _Let \(f\) be a differentiable function. Then, we have_
* _Part 1._ \(\frac{\mathrm{d}}{\mathrm{d}x}\exp(x)=\exp(x)\)__
* _Part 2. For any_ \(j\neq i\)_,_ \(\frac{\mathrm{d}}{\mathrm{d}x_{i}}f(x_{j})=0\)__
**Fact 3.3**.: _For all vectors \(u,v,w\in\mathbb{R}^{n}\), we have_
* \(\langle u,v\rangle=\langle u\circ v,\mathbf{1}_{n}\rangle=u^{\top}\mathrm{ diag}(v)\mathbf{1}_{n}\)__
* \(\langle u\circ v,w\rangle=\langle u\circ w,v\rangle\)__
* \(\langle u\circ v,w\rangle=\langle u\circ v\circ w,\mathbf{1}_{n}\rangle=u^{ \top}\,\mathrm{diag}(v)w\)__
* \(\langle u\circ v\circ w\circ z,\mathbf{1}_{n}\rangle=u^{\top}\operatorname{diag}(v \circ w)z\)
* \(u\circ v=v\circ u=\operatorname{diag}(u)\cdot v=\operatorname{diag}(v)\cdot u\)
* \(u^{\top}(v\circ w)=v^{\top}(u\circ w)=w^{\top}(u\circ v)=u^{\top}\operatorname {diag}(v)w=v^{\top}\operatorname{diag}(u)w=w^{\top}\operatorname{diag}(u)v\)
* \(\operatorname{diag}(u)\cdot\operatorname{diag}(v)\cdot\mathbf{1}_{n}= \operatorname{diag}(u)v\)
* \(\operatorname{diag}(u\circ v)=\operatorname{diag}(u)\operatorname{diag}(v)\)
* \(\operatorname{diag}(u)+\operatorname{diag}(v)=\operatorname{diag}(u+v)\)
* \(\langle u,v\rangle=\langle v,u\rangle\)
* \(\langle u,v\rangle=u^{\top}v=v^{\top}u\)
* \(u+vw^{\top}u=u+vu^{\top}w=(I_{n}+vw^{\top})u\)
* \(u+v^{\top}wu=(1+v^{\top}w)u\)
**Fact 3.4**.: _Let \(f:\mathbb{R}^{d}\to\mathbb{R}^{n}\). Let \(q:\mathbb{R}^{d}\to\mathbb{R}\). Let \(g:\mathbb{R}^{d}\to\mathbb{R}^{n}\). Therefore, we have for any arbitrary \(x\in\mathbb{R}^{d}\), \(q(x)\in\mathbb{R}\), \(f(x)\in\mathbb{R}^{n}\), and \(g(x)\in\mathbb{R}^{n}\). Let \(a\in\mathbb{R}\) be an arbitrary constant._
_Then, we have_
* \(\frac{\operatorname{d}q(x)^{a}}{\operatorname{d}x}=a\cdot q(x)^{a-1}\cdot \frac{\operatorname{d}q(x)}{\operatorname{d}x}\)
* \(\frac{\operatorname{d}\|f(x)\|_{2}^{2}}{\operatorname{d}t}=2\langle f(x), \frac{\operatorname{d}f(x)}{\operatorname{d}t}\rangle\)
* \(\frac{\operatorname{d}\langle f(x),g(x)\rangle}{\operatorname{d}t}=\langle \frac{\operatorname{d}f(x)}{\operatorname{d}t},g(x)\rangle+\langle f(x), \frac{\operatorname{d}g(x)}{\operatorname{d}t}\rangle\)
* \(\frac{\operatorname{d}(g(x)\circ f(x))}{\operatorname{d}t}=\frac{ \operatorname{d}g(x)}{\operatorname{d}t}\circ f(x)+g(x)\circ\frac{ \operatorname{d}f(x)}{\operatorname{d}t}\) _(product rule for Hadamard product)_
**Fact 3.5** (Basic Vector Norm Bounds).: _For vectors \(u,v,w\in\mathbb{R}^{n}\), we have_
* _Part 1._ \(\langle u,v\rangle\leq\|u\|_{2}\cdot\|v\|_{2}\) _(Cauchy-Schwarz inequality)_
* _Part 2._ \(\|\operatorname{diag}(u)\|\leq\|u\|_{\infty}\)__
* _Part 3._ \(\|u\circ v\|_{2}\leq\|u\|_{\infty}\cdot\|v\|_{2}\)__
* _Part 4._ \(\|u\|_{\infty}\leq\|u\|_{2}\leq\sqrt{n}\|u\|_{\infty}\)__
* _Part 5._ \(\|u\|_{2}\leq\|u\|_{1}\leq\sqrt{n}\|u\|_{2}\)__
* _Part 6._ \(\|\exp(u)\|_{\infty}\leq\exp(\|u\|_{\infty})\leq\exp(\|u\|_{2})\)__
* _Part 7._ _Let_ \(\alpha\) _be a scalar, then_ \(\|\alpha\cdot u\|_{2}=|\alpha|\cdot\|u\|_{2}\)__
* _Part 8._ \(\|u+v\|_{2}\leq\|u\|_{2}+\|v\|_{2}\)__
* _Part 9._ \(\|uv^{\top}\|\leq\|u\|_{2}\|v\|_{2}\)__
* _Part 10. if_ \(\|u\|_{2},\|v\|_{2}\leq R\)_, then_ \(\|\exp(u)-\exp(v)\|_{2}\leq\exp(R)\|u-v\|_{2}\)__
**Fact 3.6** (Matrices Norm Basics).: _For any matrices \(U,V\in\mathbb{R}^{n\times n}\), given a scalar \(\alpha\in\mathbb{R}\) and a vector \(v\in\mathbb{R}^{n}\), we have_
* _Part 1._ \(\|U^{\top}\|=\|U\|\)__
* _Part 2._ \(\|U\|\geq\|V\|-\|U-V\|\)__
* _Part 3._ \(\|U+V\|\leq\|U\|+\|V\|\)__
* _Part 4._ \(\|U\cdot V\|\leq\|U\|\cdot\|V\|\)__
* _Part 5. If_ \(U\preceq\alpha\cdot V\)_, then_ \(\|U\|\leq\alpha\cdot\|V\|\)__
* _Part 6._ \(\|\alpha\cdot U\|\leq|\alpha|\|U\|\)__
* _Part 7._ \(\|Uv\|_{2}\leq\|U\|\cdot\|v\|_{2}\)__
* _Part 8._ \(\|UU^{\top}\|\leq\|U\|^{2}\)__
**Fact 3.7** (Basic algebraic properties).: _Let \(x\) be an arbitrary element in \(\mathbb{R}\)._
_Then, we have_
* _Part 1._ \(\exp(x^{2})\geq 1\)_._
* _Part 2._ \(\exp(x^{2})\geq x\)_._
Proof.: **Proof of Part 1.**
Consider
\[\frac{\mathrm{d}\exp(x^{2})}{\mathrm{d}x}=2x\exp(x^{2})=0.\]
This implies that
\[x=0\]
since
\[\exp(x^{2})\neq 0,\forall x\in\mathbb{R}.\]
Furthermore, since
\[\frac{\mathrm{d}\exp(x^{2})}{\mathrm{d}x}<0,\text{ when }x<0\]
and
\[\frac{\mathrm{d}\exp(x^{2})}{\mathrm{d}x}>0,\text{ when }x>0,\]
we have that
\[(0,\exp(0))\]
is the local minimum of \(\exp(x^{2})\).
Since \(x=0\) is the only critical point of \(\exp(x^{2})\) and \(\exp(x^{2})\) is differentiable for all \(x\in\mathbb{R}\), we have
\[\exp(x^{2})\geq\exp(0^{2})=1,\]
which completes the proof of the first part.
**Proof of Part 2.**
The strategy for proving this part is the same as for the first part: consider the derivative of \(\exp(x^{2})-x\) and show that its local minimum is greater than \(0\). We therefore omit the details here.
**Fact 3.8**.: _For any vectors \(u,v\in\mathbb{R}^{n}\), we have_
* _Part 1._ \(uu^{\top}\preceq\|u\|_{2}^{2}\cdot I_{n}\)__
* _Part 2._ \(\operatorname{diag}(u)\preceq\|u\|_{2}\cdot I_{n}\)__
* _Part 3._ \(\operatorname{diag}(u\circ u)\preceq\|u\|_{2}^{2}\cdot I_{n}\)__
* _Part 4._ \(uv^{\top}+vu^{\top}\preceq uu^{\top}+vv^{\top}\)__
* _Part 5._ \(uv^{\top}+vu^{\top}\succeq-(uu^{\top}+vv^{\top})\)__
* _Part 6._ \((v\circ u)(v\circ u)^{\top}\preceq\|v\|_{\infty}^{2}uu^{\top}\)__
* _Part 7._ \(\operatorname{diag}(u\circ v)\preceq\|u\|_{2}\|v\|_{2}\cdot I_{n}\)__
## 4 Gradient
In this section, we compute the first-order derivatives of the functions defined earlier.
**Lemma 4.1**.: _Let \(x\in\mathbb{R}^{d}\) be an arbitrary vector. Let \(u_{1}(x),u_{2}(x),u(x),f(x),c(x),z(x),v_{i}(x)\in\mathbb{R}^{n}\) be defined as in Definition 3.1. Let \(\alpha(x),L(x),\beta_{i}(x)\in\mathbb{R}\) be defined as in Definition 3.1._
_Then for each \(i\in[d]\), we have_
* _Part 1._ \(\frac{\mathrm{d}u_{1}(x)}{\mathrm{d}x_{i}}=A_{*,i}\)__
* _Part 2._ \(\frac{\mathrm{d}u_{2}(x)}{\mathrm{d}x_{i}}=u_{2}(x)\circ A_{*,i}\)__
* _Part 3._ \(\frac{\mathrm{d}u(x)}{\mathrm{d}x_{i}}=v_{i}(x)\)__
* _Part 4._ \(\frac{\mathrm{d}\alpha(x)}{\mathrm{d}x_{i}}=\beta_{i}(x)\)__
* _Part 5._ \(\frac{\mathrm{d}\alpha(x)^{-1}}{\mathrm{d}x_{i}}=-\alpha(x)^{-2}\cdot\beta_{i}(x)\)__
* _Part 6._ \(\frac{\mathrm{d}f(x)}{\mathrm{d}x_{i}}=\alpha(x)^{-1}(I_{n}-f(x)\cdot\mathbf{ 1}_{n}^{\top})\cdot v_{i}(x)\)__
* _Part 7._ \(\frac{\mathrm{d}c(x)}{\mathrm{d}x_{i}}=\frac{\mathrm{d}f(x)}{\mathrm{d}x_{i}}= \alpha(x)^{-1}(I_{n}-f(x)\cdot\mathbf{1}_{n}^{\top})\cdot v_{i}(x)\)__
* _Part 8._ \(\frac{\mathrm{d}L(x)}{\mathrm{d}x_{i}}=\alpha(x)^{-1}c(x)^{\top}\cdot(I_{n}-f (x)\cdot\mathbf{1}_{n}^{\top})\cdot v_{i}(x)\)__
* _Part 9._ \(\frac{\mathrm{d}\beta_{i}(x)}{\mathrm{d}x_{i}}=\langle u_{2}(x),A_{*,i}\circ A _{*,i}\rangle\)__
* _Part 10. For_ \(j\in[d]\backslash\{i\}\)_,_ \(\frac{\mathrm{d}\beta_{i}(x)}{\mathrm{d}x_{j}}=\langle u_{2}(x),A_{*,i}\circ A _{*,j}\rangle\)__
* _Part 11._ \(\frac{\mathrm{d}v_{i}(x)}{\mathrm{d}x_{i}}=u_{2}(x)\circ A_{*,i}\circ A_{*,i}\)__
* _Part 12. For_ \(j\in[d]\backslash\{i\}\)_,_ \(\frac{\mathrm{d}v_{i}(x)}{\mathrm{d}x_{j}}=u_{2}(x)\circ A_{*,j}\circ A_{*,i}\)__
Proof.: **Proof of Part 1.** For each \(i\in[d]\), we have
\[\frac{\mathrm{d}Ax}{\mathrm{d}x_{i}} = \frac{A\mathrm{d}x}{\mathrm{d}x_{i}}\] \[= A_{*,i}\]
where the first step follows from simple algebra and the last step follows from the fact that only the \(i\)-th entry of \(\frac{\mathrm{d}x}{\mathrm{d}x_{i}}\) is \(1\) and other entries of it are \(0\).
Note that by definition 3.1,
\[u_{1}(x)=Ax.\]
Therefore, we have
\[\frac{\mathrm{d}u_{1}(x)}{\mathrm{d}x_{i}}=A_{*,i}.\]
**Proof of Part 2.** For each \(i\in[d]\), we have
\[\frac{\mathrm{d}(u_{2}(x))_{i}}{\mathrm{d}x_{i}} = \frac{\mathrm{d}(\exp(Ax))_{i}}{\mathrm{d}x_{i}}\] \[= \exp(Ax)_{i}\cdot\frac{\mathrm{d}(Ax)_{i}}{\mathrm{d}x_{i}}\] \[= \exp(Ax)_{i}\cdot A_{*,i}\] \[= (u_{2}(x))_{i}\cdot A_{*,i},\]
where the first step follows from the definition of \((u_{2}(x))_{i}\) (see Definition 3.1), the second step follows from Fact 3.2, the third step follows from **Part 1**, and the last step follows from the definition of \((u_{2}(x))_{i}\) (see Definition 3.1).
Thus, we have
\[\frac{\mathrm{d}u_{2}(x)}{\mathrm{d}x_{i}}=u_{2}(x)\circ A_{*,i}\]
**Proof of Part 3.**
Figure 3: The visualization of Part 8 of Lemma 4.1. We have \(\alpha(x)\in\mathbb{R}\), \(c(x),f(x),\mathbf{1}_{n},v_{i}(x)\in\mathbb{R}^{n}\), and \(I_{n}\in\mathbb{R}^{n\times n}\). First, we subtract the product of \(f(x)\) and \(\mathbf{1}_{n}^{\top}\) from \(I_{n}\). Then, we multiply the multiplicative inverse of \(\alpha(x)\), \(c(x)^{\top}\), the result from the first step, and \(v_{i}(x)\), which gives us a scalar. The blue rectangles represent the \(n\)-dimensional vectors. The pink rectangles represent the transposes of \(n\)-dimensional vectors. The red squares represent the scalar. The green squares represent the identity matrix \(I_{n}\).
We have
\[\frac{\mathrm{d}u(x)}{\mathrm{d}x_{i}} = \frac{\mathrm{d}(u_{1}(x)+u_{2}(x))}{\mathrm{d}x_{i}}\] \[= \frac{\mathrm{d}(u_{1}(x))}{\mathrm{d}x_{i}}+\frac{\mathrm{d}(u_{2 }(x))}{\mathrm{d}x_{i}}\] \[= A_{*,i}+u_{2}(x)\circ A_{*,i}\] \[= (u_{2}(x)+\mathbf{1}_{n})\circ A_{*,i}\] \[= v_{i}(x),\]
where the first step follows from the definition of \(u(x)\) (see Definition 3.1), the second step follows from the basic derivative rule, the third step follows from results from **Part 1** and **Part 2**, the fourth step follows from the basic properties of Hadamard product, and the last step follows from the definition of \(v_{i}(x)\) (see Definition 3.1).
**Proof of Part 4.**
\[\frac{\mathrm{d}\alpha(x)}{\mathrm{d}x_{i}} = \frac{\mathrm{d}(\langle u(x),\mathbf{1}_{n}\rangle)}{\mathrm{d}x _{i}}\] \[= \langle\frac{\mathrm{d}u(x)}{\mathrm{d}x_{i}},\mathbf{1}_{n}\rangle\] \[= \langle v_{i}(x),\mathbf{1}_{n}\rangle\] \[= \beta_{i}(x)\]
where the first step follows from the definition of \(\alpha(x)\) (see Definition 3.1), the second step follows from Fact 3.4, the third step follows from **Part 3**, and the fourth step follows from the definition of \(\beta_{i}(x)\) (see Definition 3.1).
**Proof of Part 5.**
\[\frac{\mathrm{d}\alpha(x)^{-1}}{\mathrm{d}x_{i}} = -1\cdot\alpha(x)^{-2}\cdot\frac{\mathrm{d}\alpha(x)}{\mathrm{d}x _{i}}\] \[= -\alpha(x)^{-2}\cdot\beta_{i}(x)\]
where the first step follows from Fact 3.4 and the second step follows from the result of **Part 4**.
**Proof of Part 6.**
\[\frac{\mathrm{d}f(x)}{\mathrm{d}x_{i}} = \frac{\mathrm{d}\alpha(x)^{-1}}{\mathrm{d}x_{i}}u(x)+\alpha(x)^{- 1}\cdot\frac{\mathrm{d}u(x)}{\mathrm{d}x_{i}}\] \[= -\alpha(x)^{-2}\cdot\beta_{i}(x)\cdot u(x)+\alpha(x)^{-1}\cdot v_ {i}(x)\] \[= -\alpha(x)^{-1}f(x)\cdot\beta_{i}(x)+\alpha(x)^{-1}\cdot v_{i}(x)\] \[= \alpha(x)^{-1}\cdot(v_{i}(x)-f(x)\cdot\beta_{i}(x))\] \[= \alpha(x)^{-1}\cdot(v_{i}(x)-f(x)\cdot\langle v_{i}(x),\mathbf{1 }_{n}\rangle)\] \[= \alpha(x)^{-1}\cdot(v_{i}(x)-f(x)\cdot\mathbf{1}_{n}^{\top}v_{i} (x))\] \[= \alpha(x)^{-1}\cdot(I_{n}-f(x)\cdot\mathbf{1}_{n}^{\top})\cdot v _{i}(x)\]
where the first step follows from the product rule and the definition of \(f(x)\) (see Definition 3.1), the second step follows from results of **Part 3, 5**, the third step follows from the definition of \(f(x)\)
(see Definition 3.1), the fourth step follows from simple algebra, the fifth step follows from the definition of \(\beta_{i}\) (see Definition 3.1), the sixth step follows from Fact 3.3, and the last step follows from simple algebra.
**Proof of Part 7.**
\[\frac{\mathrm{d}c(x)}{\mathrm{d}x_{i}} = \frac{\mathrm{d}(f(x)-b)}{\mathrm{d}x_{i}}\] \[= \frac{\mathrm{d}f(x)}{\mathrm{d}x_{i}}\]
where the first step follows from the definition of \(c(x)\) (see Definition 3.1), the second step follows from derivative rules.
**Proof of Part 8.**
\[\frac{\mathrm{d}L(x)}{\mathrm{d}x_{i}} = \frac{\mathrm{d}0.5\|c(x)\|_{2}^{2}}{\mathrm{d}x_{i}}\] \[= c(x)^{\top}\cdot\frac{\mathrm{d}c(x)}{\mathrm{d}x_{i}}\] \[= \alpha(x)^{-1}\cdot c(x)^{\top}\cdot(I_{n}-f(x)\cdot\mathbf{1}_ {n}^{\top})\cdot v_{i}(x)\]
where the first step follows from the definition of \(L(x)\) (see Definition 3.1), the second step follows from Fact 3.4, and the last step follows from the results from **Part 6 and 7**.
**Proof of Part 9.**
\[\frac{\mathrm{d}\beta_{i}(x)}{\mathrm{d}x_{i}} = \frac{\mathrm{d}(\langle v_{i}(x),\mathbf{1}_{n}\rangle)}{\mathrm{ d}x_{i}}\] \[= \frac{\mathrm{d}(\langle(u_{2}(x)+\mathbf{1}_{n})\circ A_{*,i}, \mathbf{1}_{n}\rangle)}{\mathrm{d}x_{i}}\] \[= \frac{\mathrm{d}\langle u_{2}(x)+\mathbf{1}_{n},A_{*,i}\rangle}{ \mathrm{d}x_{i}}\] \[= \langle\frac{\mathrm{d}(u_{2}(x)+\mathbf{1}_{n})}{\mathrm{d}x_{i }},A_{*,i}\rangle\] \[= \langle u_{2}(x)\circ A_{*,i},A_{*,i}\rangle\] \[= \langle u_{2}(x),A_{*,i}\circ A_{*,i}\rangle\]
where the first step follows from the definition of \(\beta_{i}(x)\) (see Definition 3.1), the second step follows from the definition of \(v_{i}(x)\) (see Definition 3.1), the third step follows from Fact 3.3, the fourth step follows from Fact 3.4, the fifth step follows from **Part 2**, and the last step follows from Fact 3.3.
**Proof of Part 10.**
\[\frac{\mathrm{d}\beta_{i}(x)}{\mathrm{d}x_{j}} = \frac{\mathrm{d}(\langle v_{i}(x),\mathbf{1}_{n}\rangle)}{\mathrm{ d}x_{j}}\] \[= \frac{\mathrm{d}(\langle(u_{2}(x)+\mathbf{1}_{n})\circ A_{*,i}, \mathbf{1}_{n}\rangle)}{\mathrm{d}x_{j}}\] \[= \frac{\mathrm{d}\langle u_{2}(x)+\mathbf{1}_{n},A_{*,i}\rangle}{ \mathrm{d}x_{j}}\]
\[=\langle\frac{\mathrm{d}(u_{2}(x)+\mathbf{1}_{n})}{\mathrm{d}x_{j}},A_{ *,i}\rangle\] \[=\langle u_{2}(x)\circ A_{*,j},A_{*,i}\rangle\] \[=\langle u_{2}(x),A_{*,j}\circ A_{*,i}\rangle\]
where the first step follows from the definition of \(\beta_{i}(x)\) (see Definition 3.1), the second step follows from the definition of \(v_{i}(x)\) (see Definition 3.1), the third step follows from Fact 3.3, the fourth step follows from Fact 3.4, the fifth step follows from **Part 2**, and the last step follows from Fact 3.3.
**Proof of Part 11.**
\[\frac{\mathrm{d}v_{i}(x)}{\mathrm{d}x_{i}} = \frac{\mathrm{d}(u_{2}(x)+\mathbf{1}_{n})\circ A_{*,i}}{\mathrm{d} x_{i}}\] \[= \frac{\mathrm{d}(u_{2}(x)+\mathbf{1}_{n})}{\mathrm{d}x_{i}} \circ A_{*,i}\] \[= u_{2}(x)\circ A_{*,i}\circ A_{*,i}\]
where the first step follows from the definition of \(v_{i}(x)\) (see Definition 3.1), the second step follows from Fact 3.4 as \(\frac{\mathrm{d}A_{*,i}}{\mathrm{d}x_{i}}=0\), and the last step follows from the results of **Part 2**.
**Proof of Part 12.**
\[\frac{\mathrm{d}v_{i}(x)}{\mathrm{d}x_{j}} = \frac{\mathrm{d}(u_{2}(x)+\mathbf{1}_{n})\circ A_{*,i}}{\mathrm{ d}x_{j}}\] \[= \frac{\mathrm{d}(u_{2}(x)+\mathbf{1}_{n})}{\mathrm{d}x_{j}} \circ A_{*,i}\] \[= u_{2}(x)\circ A_{*,j}\circ A_{*,i}\]
where the first step follows from the definition of \(v_{i}(x)\) (see Definition 3.1), the second step follows from Fact 3.4 as \(\frac{\mathrm{d}A_{*,i}}{\mathrm{d}x_{j}}=0\), and the last step follows from the results of **Part 2**.
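The closed form in Part 8 can be checked against central finite differences of \(L(x)\); the sketch below (added for illustration, with random placeholder data) does exactly that.

```python
import numpy as np

def loss(A, x, b):
    u = np.exp(A @ x) + A @ x
    f = u / u.sum()
    return 0.5 * np.linalg.norm(f - b) ** 2

def grad_closed_form(A, x, b):
    """dL/dx_i = alpha^{-1} c^T (I_n - f 1_n^T) v_i  (Lemma 4.1, Part 8)."""
    n, d = A.shape
    u2 = np.exp(A @ x)
    u = u2 + A @ x
    alpha = u.sum()
    f = u / alpha
    c = f - b
    K = np.eye(n) - np.outer(f, np.ones(n))   # I_n - f 1_n^T
    V = (u2 + 1.0)[:, None] * A               # columns v_i(x)
    return (c @ K @ V) / alpha

rng = np.random.default_rng(6)
A = rng.standard_normal((5, 3)) * 0.3
x = rng.standard_normal(3) * 0.3
b = rng.standard_normal(5)

eps = 1e-6
numerical = np.array([(loss(A, x + eps * e, b) - loss(A, x - eps * e, b)) / (2 * eps)
                      for e in np.eye(3)])
print(np.max(np.abs(numerical - grad_closed_form(A, x, b))))  # close to zero
```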
## 5 Hessian
In Section 5.1, we introduce the basic definitions of the matrices \(B_{1}(x),B_{2}(x),B_{3}(x)\in\mathbb{R}^{n\times n}\), which are used to simplify the expression of the Hessian. In Section 5.2, we compute the second-order derivatives of the functions defined earlier. In Section 5.3, we present a helpful lemma. In Section 5.4, we decompose the matrices \(B_{1}(x),B_{2}(x),B_{3}(x)\in\mathbb{R}^{n\times n}\) into low-rank matrices and diagonal matrices.
### Basic Definition
In this section, we give the definitions of the matrices \(B_{1}(x),B_{2}(x),B_{3}(x)\in\mathbb{R}^{n\times n}\).
**Definition 5.1**.: _Let \(x\in\mathbb{R}^{d}\) be an arbitrary vector. Let \(u_{1}(x),u_{2}(x),u(x),f(x),c(x),z(x),v_{i}(x)\in\mathbb{R}^{n}\) and \(\alpha(x),L(x),\beta_{i}(x)\in\mathbb{R}\) be defined as in Definition 3.1. Let \(K(x)=(I_{n}-f(x)\cdot\mathbf{1}_{n}^{\top})\in\mathbb{R}^{n\times n}\). Let \(\widetilde{c}(x)=K(x)^{\top}c(x)\in\mathbb{R}^{n}\). We define_
* \(B_{1}(x)\in\mathbb{R}^{n\times n}\) _as_ \[A_{*,i}^{\top}B_{1}(x)A_{*,j}:=\underbrace{\alpha(x)^{-2}}_{\text{scalar}}\underbrace{v_{i}(x)^{\top}}_{1\times n}\underbrace{K(x)^{\top}}_{n\times n}\underbrace{K(x)}_{n\times n}\underbrace{v_{j}(x)}_{n\times 1}\]
* \(B_{2}(x)\in\mathbb{R}^{n\times n}\) _as_ \[A_{*,i}^{\top}B_{2}(x)A_{*,j}:=-\underbrace{\alpha(x)^{-2}}_{\text{scalar}}\cdot\underbrace{\widetilde{c}(x)^{\top}}_{1\times n}\cdot(\underbrace{v_{j}(x)}_{n\times 1}\cdot\underbrace{\beta_{i}(x)}_{\text{scalar}}+\underbrace{v_{i}(x)}_{n\times 1}\cdot\underbrace{\beta_{j}(x)}_{\text{scalar}})\]
* \(B_{3}(x)\in\mathbb{R}^{n\times n}\) _as_ \[A_{*,i}^{\top}B_{3}(x)A_{*,j}:=\underbrace{\alpha(x)^{-1}}_{\text{scalar}}\cdot\underbrace{A_{*,i}^{\top}}_{1\times n}\operatorname{diag}(\underbrace{\widetilde{c}(x)\circ u_{2}(x)}_{n\times 1})\underbrace{A_{*,j}}_{n\times 1}\]
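The objects introduced in Definition 5.1 can be transcribed numerically as follows; the sketch below (an illustration added here, not code from the original work) assembles the \(d\times d\) matrices whose \((i,j)\) entries are the stated quadratic forms.

```python
import numpy as np

def hessian_building_blocks(A: np.ndarray, x: np.ndarray, b: np.ndarray):
    """K(x), c~(x), and the quadratic forms of Definition 5.1 for all pairs (i, j)."""
    n, d = A.shape
    u2 = np.exp(A @ x)
    u = A @ x + u2
    alpha = u.sum()
    f = u / alpha
    c = f - b
    K = np.eye(n) - np.outer(f, np.ones(n))     # K(x) = I_n - f(x) 1_n^T
    c_tilde = K.T @ c                            # c~(x) = K(x)^T c(x)
    V = (u2 + 1.0)[:, None] * A                  # columns v_i(x)
    beta = V.sum(axis=0)                         # beta_i(x)
    w = V.T @ c_tilde                            # entries c~(x)^T v_i(x)
    # Each of the following is the d x d matrix whose (i, j) entry equals the
    # quadratic form A_{*,i}^T B_k(x) A_{*,j} as written in Definition 5.1.
    Q1 = alpha ** (-2) * (V.T @ K.T @ K @ V)
    Q2 = -alpha ** (-2) * (np.outer(w, beta) + np.outer(beta, w))
    Q3 = alpha ** (-1) * (A.T @ np.diag(c_tilde * u2) @ A)
    return K, c_tilde, Q1, Q2, Q3

rng = np.random.default_rng(7)
A, x, b = rng.standard_normal((5, 3)) * 0.3, rng.standard_normal(3) * 0.3, rng.standard_normal(5)
K, c_tilde, Q1, Q2, Q3 = hessian_building_blocks(A, x, b)
print(Q1.shape, np.allclose(Q1, Q1.T))  # (3, 3) and symmetric
```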
### Computation of Hessian
In this section, we present the computation of Hessian.
**Lemma 5.2**.: _Let \(x\in\mathbb{R}^{d}\) be an arbitrary vector. Let \(u_{1}(x),u_{2}(x),u(x),f(x),c(x),z(x),v_{i}(x)\in\mathbb{R}^{n}\) and \(\alpha(x),L(x),\beta_{i}(x)\in\mathbb{R}\) be defined as in Definition 3.1. Let \(K(x)\in\mathbb{R}^{n\times n}\) and \(\widetilde{c}(x)\in\mathbb{R}^{n}\) be defined as in Definition 5.1._
_Then for each \(i,j\in[d]\), and \(j\neq i\), we have_
* _Part 1._ \[\frac{\mathrm{d}^{2}u_{1}(x)}{\mathrm{d}x_{i}^{2}}=\mathbf{0}_{n}\]
* _Part 2._ \[\frac{\mathrm{d}^{2}u_{1}(x)}{\mathrm{d}x_{i}\mathrm{d}x_{j}}=\mathbf{0}_{n}\]
* _Part 3._ \[\frac{\mathrm{d}^{2}u_{2}(x)}{\mathrm{d}x_{i}^{2}}=A_{*,i}\circ u_{2}(x)\circ A _{*,i}\]
* _Part 4._ \[\frac{\mathrm{d}^{2}u_{2}(x)}{\mathrm{d}x_{i}\mathrm{d}x_{j}}=A_{*,i}\circ u_{ 2}(x)\circ A_{*,j}\]
* _Part 5._ \[\frac{\mathrm{d}^{2}u(x)}{\mathrm{d}x_{i}^{2}}=A_{*,i}\circ u_{2}(x)\circ A_{*,i}\]
* _Part 6._ \[\frac{\mathrm{d}^{2}u(x)}{\mathrm{d}x_{i}\mathrm{d}x_{j}}=A_{*,i}\circ u_{2}(x) \circ A_{*,j}\]
* _Part 7._ \[\frac{\mathrm{d}^{2}\alpha(x)}{\mathrm{d}x_{i}^{2}}=\langle u_{2}(x),A_{*,i}\circ A _{*,i}\rangle\]
* _Part 8._ \[\frac{\mathrm{d}^{2}\alpha(x)}{\mathrm{d}x_{i}\mathrm{d}x_{j}}=\langle u_{2}(x), A_{*,j}\circ A_{*,i}\rangle\]
* _Part 9._ \[\frac{\mathrm{d}^{2}\alpha(x)^{-1}}{\mathrm{d}x_{i}^{2}}=-\underbrace{\alpha(x)^{-2}}_{\mathrm{scalar}}\cdot(\underbrace{\langle u_{2}(x),A_{*,i}\circ A_{*,i}\rangle}_{\mathrm{scalar}}-2\underbrace{\alpha(x)^{-1}\cdot\beta_{i}(x)^{2}}_{\mathrm{scalar}})\]
* _Part 10._ \[\frac{\mathrm{d}^{2}\alpha(x)^{-1}}{\mathrm{d}x_{i}\mathrm{d}x_{j}}=-\underbrace{\alpha(x)^{-2}}_{\mathrm{scalar}}\cdot(\underbrace{\langle u_{2}(x),A_{*,i}\circ A_{*,j}\rangle}_{\mathrm{scalar}}-2\underbrace{\alpha(x)^{-1}\cdot\beta_{i}(x)\beta_{j}(x)}_{\mathrm{scalar}})\]
* _Part 11._ \[\frac{\mathrm{d}^{2}f(x)}{\mathrm{d}x_{i}^{2}}= -2\underbrace{\alpha(x)^{-2}}_{\mathrm{scalar}}\cdot\underbrace{ \beta_{i}(x)}_{\mathrm{scalar}}\cdot\underbrace{(I_{n}-f(x)\cdot\mathbf{1}_{n }^{\top})}_{n\times n}\cdot\underbrace{v_{i}(x)}_{n\times 1}+\underbrace{ \alpha(x)^{-1}}_{\mathrm{scalar}}\cdot\underbrace{(I_{n}-f(x)\cdot\mathbf{1}_ {n}^{\top})}_{n\times n}\cdot\underbrace{(u_{2}(x)\circ A_{*,i}\circ A_{*,i}) }_{n\times 1}\]
* _Part 12._ \[\frac{\mathrm{d}^{2}f(x)}{\mathrm{d}x_{i}\mathrm{d}x_{j}}=-\underbrace{\alpha(x)^{-2}}_{\mathrm{scalar}}\cdot\underbrace{(I_{n}-f(x)\cdot\mathbf{1}_{n}^{\top})}_{n\times n}\cdot(\underbrace{v_{j}(x)}_{n\times 1}\cdot\underbrace{\beta_{i}(x)}_{\mathrm{scalar}}+\underbrace{v_{i}(x)}_{n\times 1}\cdot\underbrace{\beta_{j}(x)}_{\mathrm{scalar}})+\underbrace{\alpha(x)^{-1}}_{\mathrm{scalar}}\cdot\underbrace{(I_{n}-f(x)\cdot\mathbf{1}_{n}^{\top})}_{n\times n}\cdot\underbrace{(u_{2}(x)\circ A_{*,j}\circ A_{*,i})}_{n\times 1}\]
* _Part 13._ \[\frac{\mathrm{d}^{2}L(x)}{\mathrm{d}x_{i}^{2}}=\underbrace{A_{*,i}^{\top}}_{1\times n}\underbrace{B_{1}(x)}_{n\times n}\underbrace{A_{*,i}}_{n\times 1}+\underbrace{A_{*,i}^{\top}}_{1\times n}\underbrace{B_{2}(x)}_{n\times n}\underbrace{A_{*,i}}_{n\times 1}+\underbrace{A_{*,i}^{\top}}_{1\times n}\underbrace{B_{3}(x)}_{n\times n}\underbrace{A_{*,i}}_{n\times 1}\]
* _Part 14._ \[\frac{\mathrm{d}^{2}L(x)}{\mathrm{d}x_{i}\mathrm{d}x_{j}}=\underbrace{A_{*,i}^{\top}}_{1\times n}\underbrace{B_{1}(x)}_{n\times n}\underbrace{A_{*,j}}_{n\times 1}+\underbrace{A_{*,i}^{\top}}_{1\times n}\underbrace{B_{2}(x)}_{n\times n}\underbrace{A_{*,j}}_{n\times 1}+\underbrace{A_{*,i}^{\top}}_{1\times n}\underbrace{B_{3}(x)}_{n\times n}\underbrace{A_{*,j}}_{n\times 1}\]
Proof.: **Proof of Part 1**
\[\frac{\mathrm{d}^{2}u_{1}(x)}{\mathrm{d}x_{i}^{2}}=\frac{\mathrm{d}}{\mathrm{d }x_{i}}(\frac{\mathrm{d}u_{1}(x)}{\mathrm{d}x_{i}})\]
\[= \frac{\mathrm{d}A_{*,i}}{\mathrm{d}x_{i}}\] \[= \mathbf{0}_{n}\]
where the first step follows from the expansion of the Hessian, the second step follows from **Part 1** of Lemma 4.1, and the last step follows from derivative rules.
**Proof of Part 3**
\[\frac{\mathrm{d}^{2}u_{2}(x)}{\mathrm{d}x_{i}^{2}} = \frac{\mathrm{d}}{\mathrm{d}x_{i}}(\frac{\mathrm{d}u_{2}(x)}{ \mathrm{d}x_{i}})\] \[= \frac{\mathrm{d}(u_{2}(x)\circ A_{*,i})}{\mathrm{d}x_{i}}\] \[= A_{*,i}\circ\frac{\mathrm{d}u_{2}(x)}{\mathrm{d}x_{i}}\] \[= A_{*,i}\circ u_{2}(x)\circ A_{*,i}\]
where the first step follows from the expansion of Hessian, the second step follows from **Part 2** of Lemma 4.1, the third step follows from Fact 3.4, and the last step follows from **Part 2** of Lemma 4.1.
**Proof of Part 4**
\[\frac{\mathrm{d}^{2}u_{2}(x)}{\mathrm{d}x_{i}x_{j}} = \frac{\mathrm{d}}{\mathrm{d}x_{j}}(\frac{\mathrm{d}u_{2}(x)}{ \mathrm{d}x_{i}})\] \[= \frac{\mathrm{d}(u_{2}(x)\circ A_{*,i})}{\mathrm{d}x_{j}}\]
\[=A_{*,i}\circ\frac{\mathrm{d}u_{2}(x)}{\mathrm{d}x_{j}}\] \[=A_{*,i}\circ u_{2}(x)\circ A_{*,j}\]
where the first step follows from the expansion of Hessian, the second step follows from **Part 2** of Lemma 4.1, the third step follows from Fact 3.4, and the last step follows from **Part 2** of Lemma 4.1.
**Proof of Part 5**
\[\frac{\mathrm{d}^{2}u(x)}{\mathrm{d}x_{i}^{2}} =\frac{\mathrm{d}}{\mathrm{d}x_{i}}(\frac{\mathrm{d}u_{1}(x)+u_{2 }(x)}{\mathrm{d}x_{i}})\] \[=\frac{\mathrm{d}}{\mathrm{d}x_{i}}\frac{\mathrm{d}u_{1}(x)}{ \mathrm{d}x_{i}}+\frac{\mathrm{d}}{\mathrm{d}x_{i}}\frac{\mathrm{d}u_{2}(x)}{ \mathrm{d}x_{i}}\] \[=A_{*,i}\circ u_{2}(x)\circ A_{*,i}\]
where the first step follows from the expansion of Hessian and Definition 3.1, the second step follows from the expansion of derivative, the third step follows from **Part 1** and **Part 3** of this Lemma.
**Proof of Part 6**
\[\frac{\mathrm{d}^{2}u(x)}{\mathrm{d}x_{i}x_{j}} =\frac{\mathrm{d}}{\mathrm{d}x_{j}}(\frac{\mathrm{d}u_{1}(x)+u_{2 }(x)}{\mathrm{d}x_{i}})\] \[=\frac{\mathrm{d}}{\mathrm{d}x_{j}}\frac{\mathrm{d}u_{1}(x)}{ \mathrm{d}x_{i}}+\frac{\mathrm{d}}{\mathrm{d}x_{j}}\frac{\mathrm{d}u_{2}(x)}{ \mathrm{d}x_{i}}\] \[=A_{*,i}\circ u_{2}(x)\circ A_{*,j}\]
where the first step follows from the expansion of Hessian and Definition 3.1, the second step follows from the expansion of derivative, the third step follows from **Part 2** and **Part 4** of this Lemma.
**Proof of Part 7**
\[\frac{\mathrm{d}^{2}\alpha(x)}{\mathrm{d}x_{i}^{2}} =\frac{\mathrm{d}}{\mathrm{d}x_{i}}(\frac{\mathrm{d}\alpha(x)}{ \mathrm{d}x_{i}})\] \[=\frac{\mathrm{d}\beta_{i}(x)}{\mathrm{d}x_{i}}\] \[=\langle u_{2}(x),A_{*,i}\circ A_{*,i}\rangle\]
where the first step follows from the expansion of Hessian, the second step follows from **Part 4** of Lemma 4.1, and the last step follows from **Part 9** of Lemma 4.1.
**Proof of Part 8**
\[\frac{\mathrm{d}^{2}\alpha(x)}{\mathrm{d}x_{i}x_{j}} =\frac{\mathrm{d}}{\mathrm{d}x_{j}}(\frac{\mathrm{d}\alpha(x)}{ \mathrm{d}x_{i}})\] \[=\frac{\mathrm{d}\beta_{i}(x)}{\mathrm{d}x_{j}}\] \[=\langle u_{2}(x),A_{*,j}\circ A_{*,i}\rangle\]
where the first step follows from the expansion of Hessian, the second step follows from **Part 4** of Lemma 4.1, and the last step follows from **Part 10** of Lemma 4.1.
**Proof of Part 9**
\[\frac{\mathrm{d}^{2}\alpha(x)^{-1}}{\mathrm{d}x_{i}^{2}}=\frac{\mathrm{d}}{ \mathrm{d}x_{i}}(\frac{\mathrm{d}\alpha(x)^{-1}}{\mathrm{d}x_{i}})\]
\[= \frac{\mathrm{d}(-\alpha(x)^{-2}\cdot\beta_{i}(x))}{\mathrm{d}x_{i}}\] \[= -\frac{\mathrm{d}\alpha(x)^{-2}}{\mathrm{d}x_{i}}\cdot\beta_{i}(x)-\alpha(x)^{-2}\cdot\frac{\mathrm{d}\beta_{i}(x)}{\mathrm{d}x_{i}}\] \[= 2\alpha(x)^{-3}\cdot\frac{\mathrm{d}\alpha(x)}{\mathrm{d}x_{i}}\cdot\beta_{i}(x)-\alpha(x)^{-2}\cdot\langle u_{2}(x),A_{*,i}\circ A_{*,i}\rangle\] \[= 2\alpha(x)^{-3}\cdot\beta_{i}(x)^{2}-\alpha(x)^{-2}\cdot\langle u_{2}(x),A_{*,i}\circ A_{*,i}\rangle\] \[= -\alpha(x)^{-2}\cdot(\langle u_{2}(x),A_{*,i}\circ A_{*,i}\rangle-2\alpha(x)^{-1}\cdot\beta_{i}(x)^{2})\]
where the first step follows from the expansion of Hessian, the second step follows from **Part 5** of Lemma 4.1, the third step follows from the product rule of derivative, the fourth step follows from Fact 3.4 and **Part 9** of Lemma 4.1, the fifth step follows from **Part 4** of Lemma 4.1, and the last step follows from simple algebra.
**Proof of Part 10**
\[\frac{\mathrm{d}^{2}\alpha(x)^{-1}}{\mathrm{d}x_{i}\mathrm{d}x_{j}} = \frac{\mathrm{d}}{\mathrm{d}x_{j}}(\frac{\mathrm{d}\alpha(x)^{-1}}{\mathrm{d}x_{i}})\] \[= \frac{\mathrm{d}(-\alpha(x)^{-2}\cdot\beta_{i}(x))}{\mathrm{d}x_{j}}\] \[= -\frac{\mathrm{d}\alpha(x)^{-2}}{\mathrm{d}x_{j}}\cdot\beta_{i}(x)-\alpha(x)^{-2}\cdot\frac{\mathrm{d}\beta_{i}(x)}{\mathrm{d}x_{j}}\] \[= 2\alpha(x)^{-3}\cdot\frac{\mathrm{d}\alpha(x)}{\mathrm{d}x_{j}}\cdot\beta_{i}(x)-\alpha(x)^{-2}\cdot\langle u_{2}(x),A_{*,j}\circ A_{*,i}\rangle\] \[= 2\alpha(x)^{-3}\cdot\beta_{i}(x)\cdot\beta_{j}(x)-\alpha(x)^{-2}\cdot\langle u_{2}(x),A_{*,j}\circ A_{*,i}\rangle\] \[= -\alpha(x)^{-2}\cdot(\langle u_{2}(x),A_{*,j}\circ A_{*,i}\rangle-2\alpha(x)^{-1}\cdot\beta_{i}(x)\cdot\beta_{j}(x))\]
where the first step follows from the expansion of Hessian, the second step follows from **Part 5** of Lemma 4.1, the third step follows from the product rule of derivative, the fourth step follows from Fact 3.4 and **Part 10** of Lemma 4.1, the fifth step follows from **Part 4** of Lemma 4.1, and the last step follows from simple algebra.
**Proof of Part 11**
We first analyze the following equation:
\[\alpha(x)^{-1}\cdot(\frac{\mathrm{d}(I_{n}-f(x)\cdot\mathbf{1}_{ n}^{\top})}{\mathrm{d}x_{i}}\cdot v_{i}(x)) = \alpha(x)^{-1}\cdot(\frac{\mathrm{d}I_{n}}{\mathrm{d}x_{i}}-\frac {\mathrm{d}f(x)\cdot\mathbf{1}_{n}^{\top}}{\mathrm{d}x_{i}}\cdot v_{i}(x)) \tag{2}\] \[= \alpha(x)^{-1}\cdot(-\frac{\mathrm{d}f(x)}{\mathrm{d}x_{i}}\cdot \mathbf{1}_{n}^{\top}\cdot v_{i}(x))\] \[= \alpha(x)^{-1}\cdot(-\alpha(x)^{-1}(I_{n}-f(x)\cdot\mathbf{1}_{n}^ {\top})\cdot v_{i}(x)\cdot\mathbf{1}_{n}^{\top}\cdot v_{i}(x))\] \[= -\alpha(x)^{-2}\cdot(I_{n}-f(x)\cdot\mathbf{1}_{n}^{\top})\cdot \beta_{i}(x)\cdot v_{i}(x),\]
where the first step follows from the basic derivative rule, the second step follows from the product rule, the third step follows from **Part 6** of Lemma 4.1, and the last step follows from simple algebra and the definition of \(\beta_{i}(x)\) (see Definition 3.1).
Then, we have
\[\frac{\mathrm{d}^{2}f(x)}{\mathrm{d}x_{i}^{2}} = \frac{\mathrm{d}}{\mathrm{d}x_{i}}(\frac{\mathrm{d}f(x)}{\mathrm{d}x_{i}})\] \[= \frac{\mathrm{d}}{\mathrm{d}x_{i}}\big(\alpha(x)^{-1}\cdot(I_{n}-f(x)\cdot\mathbf{1}_{n}^{\top})\cdot v_{i}(x)\big)\] \[= \frac{\mathrm{d}\alpha(x)^{-1}}{\mathrm{d}x_{i}}\cdot(I_{n}-f(x)\cdot\mathbf{1}_{n}^{\top})\cdot v_{i}(x)+\alpha(x)^{-1}\cdot\frac{\mathrm{d}(I_{n}-f(x)\cdot\mathbf{1}_{n}^{\top})}{\mathrm{d}x_{i}}\cdot v_{i}(x)+\alpha(x)^{-1}\cdot(I_{n}-f(x)\cdot\mathbf{1}_{n}^{\top})\cdot\frac{\mathrm{d}v_{i}(x)}{\mathrm{d}x_{i}}\] \[= -\alpha(x)^{-2}\cdot\beta_{i}(x)\cdot(I_{n}-f(x)\cdot\mathbf{1}_{n}^{\top})\cdot v_{i}(x)-\alpha(x)^{-2}\cdot\beta_{i}(x)\cdot(I_{n}-f(x)\cdot\mathbf{1}_{n}^{\top})\cdot v_{i}(x)+\alpha(x)^{-1}\cdot(I_{n}-f(x)\cdot\mathbf{1}_{n}^{\top})\cdot(u_{2}(x)\circ A_{*,i}\circ A_{*,i})\] \[= -2\alpha(x)^{-2}\cdot\beta_{i}(x)\cdot(I_{n}-f(x)\cdot\mathbf{1}_{n}^{\top})\cdot v_{i}(x)+\alpha(x)^{-1}\cdot(I_{n}-f(x)\cdot\mathbf{1}_{n}^{\top})\cdot(u_{2}(x)\circ A_{*,i}\circ A_{*,i})\]

where the first step follows from the expansion of Hessian, the second step follows from **Part 6** of Lemma 4.1, the third step follows from the product rule of derivative, the fourth step follows from **Part 5** of Lemma 4.1, Eq. (2), and **Part 11** of Lemma 4.1, and the last step follows from simple algebra.

**Proof of Part 12**

Analogously to Eq. (2), we have

\[\alpha(x)^{-1}\cdot(\frac{\mathrm{d}(I_{n}-f(x)\cdot\mathbf{1}_{n}^{\top})}{\mathrm{d}x_{j}}\cdot v_{i}(x))=-\alpha(x)^{-2}\cdot(I_{n}-f(x)\cdot\mathbf{1}_{n}^{\top})\cdot\beta_{i}(x)\cdot v_{j}(x). \tag{3}\]

Then, we have

\[\frac{\mathrm{d}^{2}f(x)}{\mathrm{d}x_{i}\mathrm{d}x_{j}} = \frac{\mathrm{d}}{\mathrm{d}x_{j}}(\frac{\mathrm{d}f(x)}{\mathrm{d}x_{i}})\] \[= \frac{\mathrm{d}}{\mathrm{d}x_{j}}\big(\alpha(x)^{-1}\cdot(I_{n}-f(x)\cdot\mathbf{1}_{n}^{\top})\cdot v_{i}(x)\big)\] \[= \frac{\mathrm{d}\alpha(x)^{-1}}{\mathrm{d}x_{j}}\cdot(I_{n}-f(x)\cdot\mathbf{1}_{n}^{\top})\cdot v_{i}(x)+\alpha(x)^{-1}\cdot\frac{\mathrm{d}(I_{n}-f(x)\cdot\mathbf{1}_{n}^{\top})}{\mathrm{d}x_{j}}\cdot v_{i}(x)+\alpha(x)^{-1}\cdot(I_{n}-f(x)\cdot\mathbf{1}_{n}^{\top})\cdot\frac{\mathrm{d}v_{i}(x)}{\mathrm{d}x_{j}}\] \[= -\alpha(x)^{-2}\cdot\beta_{j}(x)\cdot(I_{n}-f(x)\cdot\mathbf{1}_{n}^{\top})\cdot v_{i}(x)-\alpha(x)^{-2}\cdot\beta_{i}(x)\cdot(I_{n}-f(x)\cdot\mathbf{1}_{n}^{\top})\cdot v_{j}(x)+\alpha(x)^{-1}\cdot(I_{n}-f(x)\cdot\mathbf{1}_{n}^{\top})\cdot(u_{2}(x)\circ A_{*,j}\circ A_{*,i})\] \[= -\alpha(x)^{-2}\cdot(I_{n}-f(x)\cdot\mathbf{1}_{n}^{\top})\cdot(v_{j}(x)\cdot\beta_{i}(x)+v_{i}(x)\cdot\beta_{j}(x))+\alpha(x)^{-1}\cdot(I_{n}-f(x)\cdot\mathbf{1}_{n}^{\top})\cdot(u_{2}(x)\circ A_{*,j}\circ A_{*,i})\]

where the first step follows from the expansion of Hessian, the second step follows from **Part 6** of Lemma 4.1, the third step follows from the product rule of derivative, the fourth step follows from **Part 5** of Lemma 4.1, Eq. (3), and **Part 12** of Lemma 4.1, and the last step follows from simple algebra.
**Proof of Part 13**
\[\frac{\mathrm{d}^{2}L(x)}{\mathrm{d}x_{i}^{2}} = \frac{\mathrm{d}}{\mathrm{d}x_{i}}(\frac{\mathrm{d}L(x)}{\mathrm{ d}x_{i}})\] \[= \frac{\mathrm{d}}{\mathrm{d}x_{i}}\langle c(x),\frac{\mathrm{d}c (x)}{\mathrm{d}x_{i}}\rangle\] \[= \frac{\mathrm{d}}{\mathrm{d}x_{i}}\langle c(x),\frac{\mathrm{d}f (x)}{\mathrm{d}x_{i}}\rangle\] \[= \langle\frac{\mathrm{d}c(x)}{\mathrm{d}x_{i}},\frac{\mathrm{d}f( x)}{\mathrm{d}x_{i}}\rangle+c(x)^{\top}\cdot\frac{\mathrm{d}^{2}f(x)}{\mathrm{d}^{2}x_ {i}}\] \[= (\alpha(x)^{-1}\cdot(I_{n}-f(x)\cdot\mathbf{1}_{n}^{\top})\cdot v _{i}(x))^{\top}\cdot\alpha(x)^{-1}\cdot(I_{n}-f(x)\cdot\mathbf{1}_{n}^{\top}) \cdot v_{i}(x)\] \[+ c(x)^{\top}\cdot-2\alpha(x)^{-2}\cdot\beta_{i}(x)\cdot(I_{n}-f (x)\cdot\mathbf{1}_{n}^{\top})\cdot v_{i}(x)\] \[+ \alpha(x)^{-1}\cdot c(x)^{\top}\cdot(I_{n}-f(x)\cdot\mathbf{1}_ {n}^{\top})\cdot(u_{2}(x)\circ A_{*,i}\circ A_{*,i})\] \[= \alpha(x)^{-2}v_{i}(x)^{\top}K(x)^{\top}K(x)v_{i}(x)\] \[+ -2\alpha(x)^{-2}\cdot\widetilde{c}(x)^{\top}\cdot v_{i}(x) \cdot\beta_{i}(x)\] \[+ \alpha(x)^{-1}\cdot A_{*,i}^{\top}\mathrm{diag}(\widetilde{c}(x) \circ u_{2}(x))A_{*,i}\] \[= A_{*,i}^{\top}B_{1}(x)A_{*,i}+A_{*,i}^{\top}B_{2}(x)A_{*,i}+A_{*,i}^{\top}B_{3}(x)A_{*,i}\]
where the first step follows from the expansion of Hessian, the second step follows from Fact 3.4, the third step follows from **Part 7** of Lemma 4.1, the fourth step follows from Fact 3.4, the fifth step follows from **Part 7** of Lemma 4.1 and **Part 11** of Lemma 5.2, the sixth step follows from the definitions of \(\widetilde{c}(x)\) and \(K(x)\) (see Definition 5.1), and the last step follows from the definitions of \(B_{1}(x),B_{2}(x),B_{3}(x)\) (see Definition 5.1).
**Proof of Part 14**
\[\frac{\mathrm{d}^{2}L(x)}{\mathrm{d}x_{i}x_{j}} = \frac{\mathrm{d}}{\mathrm{d}x_{j}}(\frac{\mathrm{d}L(x)}{\mathrm{ d}x_{i}})\] \[= \frac{\mathrm{d}}{\mathrm{d}x_{j}}\langle c(x),\frac{\mathrm{d}c (x)}{\mathrm{d}x_{i}}\rangle\] \[= \frac{\mathrm{d}}{\mathrm{d}x_{j}}\langle c(x),\frac{\mathrm{d}f (x)}{\mathrm{d}x_{i}}\rangle\] \[= \frac{\mathrm{d}c(x)^{\top}}{\mathrm{d}x_{j}}\cdot\frac{\mathrm{ d}f(x)}{\mathrm{d}x_{i}}+c(x)^{\top}\cdot\frac{\mathrm{d}^{2}f(x)}{\mathrm{d}x_{i}x_{j}}\] \[= (\alpha(x)^{-1}\cdot(I_{n}-f(x)\cdot\mathbf{1}_{n}^{\top})\cdot v _{j}(x))^{\top}\cdot\alpha(x)^{-1}\cdot(I_{n}-f(x)\cdot\mathbf{1}_{n}^{\top}) \cdot v_{i}(x)\] \[+ c(x)^{\top}\cdot(-\alpha(x)^{-2}\cdot(I_{n}-f(x)\cdot\mathbf{1} _{n}^{\top})\cdot(v_{j}(x)\cdot\beta_{i}(x)+v_{i}(x)\cdot\beta_{j}(x))\] \[+ \alpha(x)^{-1}(I_{n}-f(x)\cdot\mathbf{1}_{n}^{\top})\cdot(u_{2}(x) \circ A_{*,j}\circ A_{*,i}))\] \[= \alpha(x)^{-2}v_{i}(x)^{\top}K(x)^{\top}K(x)v_{j}(x)\] \[+ -\alpha(x)^{-2}\cdot\widetilde{c}(x)^{\top}\cdot(v_{j}(x)\cdot \beta_{i}(x)+v_{i}(x)\cdot\beta_{j}(x))\] \[+ \alpha(x)^{-1}\cdot A_{*,i}^{\top}\mathrm{diag}(\widetilde{c}(x) \circ u_{2}(x))A_{*,j}\]
\[= A_{*,i}^{\top}B_{1}(x)A_{*,j}+A_{*,i}^{\top}B_{2}(x)A_{*,j}+A_{*,i}^{ \top}B_{3}(x)A_{*,j}\]
where the first step follows from the expansion of Hessian, the second step follows from Fact 3.4, the third step follows from **Part 7** of Lemma 4.1, the fourth step follows from Fact 3.4, the fifth step follows from **Part 7** of Lemma 4.1 and **Part 12** of Lemma 5.2, the sixth step follows from the definitions of \(\widetilde{c}(x)\) and \(K(x)\) (see Definition 5.1), and the last step follows from the definitions of \(B_{1}(x),B_{2}(x),B_{3}(x)\) (see Definition 5.1).
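The closed form in Parts 13 and 14 can be sanity-checked numerically. The sketch below is illustrative only: it assumes \(L(x)=\frac{1}{2}\|f(x)-b\|_{2}^{2}\) (which is consistent with the identity \(\frac{\mathrm{d}L(x)}{\mathrm{d}x_{i}}=\langle c(x),\frac{\mathrm{d}c(x)}{\mathrm{d}x_{i}}\rangle\) used in the proof), and compares the scalar expression \(\alpha(x)^{-2}v_{i}(x)^{\top}K(x)^{\top}K(x)v_{j}(x)-\alpha(x)^{-2}\widetilde{c}(x)^{\top}(v_{j}(x)\beta_{i}(x)+v_{i}(x)\beta_{j}(x))+\alpha(x)^{-1}A_{*,i}^{\top}\operatorname{diag}(\widetilde{c}(x)\circ u_{2}(x))A_{*,j}\) with a finite-difference Hessian of \(L\). The dimensions, random seed, and scaling are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 6, 4
A = 0.3 * rng.normal(size=(n, d))
x = 0.3 * rng.normal(size=d)
b = rng.dirichlet(np.ones(n))          # any b with ||b||_1 <= 1

def L(x):
    u = np.exp(A @ x) + A @ x          # u(x) = exp(Ax) + Ax
    f = u / u.sum()                    # f(x) = alpha(x)^{-1} u(x)
    return 0.5 * np.sum((f - b) ** 2)  # assumed loss L(x) = 0.5 ||f(x) - b||_2^2

u2 = np.exp(A @ x)                     # u_2(x)
alpha = (u2 + A @ x).sum()             # alpha(x)
f = (u2 + A @ x) / alpha               # f(x)
z = u2 + 1.0                           # z(x) = u_2(x) + 1_n
K = np.eye(n) - np.outer(f, np.ones(n))
ctil = K.T @ (f - b)                   # \tilde c(x) = K(x)^T c(x)

eps = 1e-4
for i in range(d):
    for j in range(d):
        v_i, v_j = z * A[:, i], z * A[:, j]        # v_i(x), v_j(x)
        beta_i, beta_j = z @ A[:, i], z @ A[:, j]  # beta_i(x), beta_j(x)
        closed = (alpha ** -2 * (v_i @ K.T @ K @ v_j)
                  - alpha ** -2 * (ctil @ (v_j * beta_i + v_i * beta_j))
                  + alpha ** -1 * (A[:, i] @ (ctil * u2 * A[:, j])))
        ei, ej = np.zeros(d), np.zeros(d)
        ei[i], ej[j] = eps, eps
        fd = (L(x + ei + ej) - L(x + ei - ej)
              - L(x - ei + ej) + L(x - ei - ej)) / (4 * eps ** 2)
        assert abs(closed - fd) < 1e-6 + 1e-4 * abs(closed)
```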
### Helpful Lemma
In this section, we present a helpful lemma that is used for further analysis of Hessian.
**Lemma 5.3**.: _Let \(x\in\mathbb{R}^{d}\) be an arbitrary vector. Let \(u_{1}(x),u_{2}(x),u(x),f(x),c(x),z(x),v_{i}(x)\in\mathbb{R}^{n}\) and \(\alpha(x),L(x),\beta_{i}(x)\in\mathbb{R}\) be defined as in Definition 3.1. Let \(K(x)\in\mathbb{R}^{n\times n}\) and \(\widetilde{c}(x)\in\mathbb{R}^{n}\) be defined as in Definition 5.1._
_Then, for each \(i,j\in[d]\),_
* _Part 1._ \[\alpha(x)^{-2}\cdot v_{i}(x)^{\top}K(x)^{\top}K(x)v_{j}(x)=\underbrace{A_{*,i}^{\top}}_{1\times n}\cdot\underbrace{\alpha(x)^{-2}}_{\text{scalar}}\cdot\underbrace{\operatorname{diag}(z(x))}_{n\times n}\cdot\underbrace{K(x)^{\top}}_{n\times n}\cdot\underbrace{K(x)}_{n\times n}\cdot\underbrace{\operatorname{diag}(z(x))}_{n\times n}\cdot\underbrace{A_{*,j}}_{n\times 1}\]
* _Part 2._ \[\alpha(x)^{-2}\cdot\widetilde{c}(x)^{\top}\cdot(v_{j}(x)\cdot\beta_{i}(x)+v_{i}(x)\cdot\beta_{j}(x))=\underbrace{A_{*,i}^{\top}}_{1\times n}\cdot\underbrace{\alpha(x)^{-2}}_{\text{scalar}}\cdot\underbrace{z(x)}_{n\times 1}\cdot\underbrace{\widetilde{c}(x)^{\top}}_{1\times n}\cdot\underbrace{\operatorname{diag}(z(x))}_{n\times n}\cdot\underbrace{A_{*,j}}_{n\times 1}+\underbrace{A_{*,i}^{\top}}_{1\times n}\cdot\underbrace{\alpha(x)^{-2}}_{\text{scalar}}\cdot\underbrace{\operatorname{diag}(z(x))}_{n\times n}\cdot\underbrace{\widetilde{c}(x)}_{n\times 1}\cdot\underbrace{z(x)^{\top}}_{1\times n}\cdot\underbrace{A_{*,j}}_{n\times 1}\]
* _Part 3._ \[\alpha(x)^{-1}\cdot A_{*,i}^{\top}\operatorname{diag}(\widetilde{c}(x)\circ u_{2}(x))A_{*,j}=\underbrace{A_{*,i}^{\top}}_{1\times n}\cdot\underbrace{\alpha(x)^{-1}}_{\text{scalar}}\cdot\underbrace{\operatorname{diag}(\widetilde{c}(x)\circ u_{2}(x))}_{n\times n}\cdot\underbrace{A_{*,j}}_{n\times 1}\]
Proof.: **Proof of Part 1.**
\[\alpha(x)^{-2}\cdot v_{i}(x)^{\top}K(x)^{\top}K(x)v_{j}(x)\] \[= \alpha(x)^{-2}\cdot\left(\left(z(x)\circ A_{*,i}\right)^{\top} \cdot K(x)^{\top}K(x)\cdot\left(z(x)\circ A_{*,j}\right)\right)\] \[= \alpha(x)^{-2}\cdot\left(\operatorname{diag}(z(x))\cdot A_{*,i} \right)^{\top}\cdot K(x)^{\top}K(x)\cdot\operatorname{diag}(z(x))\cdot A_{*,j}\] \[= A_{*,i}^{\top}\cdot\alpha(x)^{-2}\cdot\operatorname{diag}(z(x)) \cdot K(x)^{\top}K(x)\cdot\operatorname{diag}(z(x))\cdot A_{*,j}\]
where the first step follows from the definition of \(v_{i}(x)\) (see Definition 3.1), the second step follows from Fact 3.3, and the last step follows from simple algebra and the definition of \(z(x)\) (see Definition 3.1).
**Proof of Part 2.**
\[\alpha(x)^{-2}\cdot\widetilde{c}(x)^{\top}\cdot(v_{j}(x)\cdot\beta _{i}(x)+v_{i}(x)\cdot\beta_{j}(x))\] \[= \ \alpha(x)^{-2}\cdot\widetilde{c}(x)^{\top}\cdot z(x)\circ A_{ *,j}\cdot\langle z(x),A_{*,i}\rangle\] \[+ \ \alpha(x)^{-2}\cdot\widetilde{c}(x)^{\top}\cdot z(x)\circ A_{ *,i}\cdot\langle z(x),A_{*,j}\rangle\] \[= \ \alpha(x)^{-2}\cdot\widetilde{c}(x)^{\top}\operatorname{diag} (z(x))\cdot A_{*,j}\cdot z(x)^{\top}\cdot A_{*,i}\] \[+ \ \alpha(x)^{-2}\cdot\widetilde{c}(x)^{\top}\operatorname{diag} (z(x))\cdot A_{*,i}\cdot z(x)^{\top}\cdot A_{*,j}\] \[= \ (A_{*,j}^{\top}\cdot\alpha(x)^{-2}\cdot\operatorname{diag} (z(x))\cdot\widetilde{c}(x)\cdot z(x)^{\top}\cdot A_{*,i})^{\top}\] \[+ \ A_{*,i}^{\top}\cdot\alpha(x)^{-2}\cdot\operatorname{diag}(z(x) )\cdot\widetilde{c}(x)\cdot(z(x))^{\top}\cdot A_{*,j}\] \[= \ A_{*,i}^{\top}\cdot\alpha(x)^{-2}\cdot z(x)\cdot\widetilde{c}(x )^{\top}\cdot\operatorname{diag}(z(x))\cdot A_{*,j}\] \[+ \ A_{*,i}^{\top}\cdot\alpha(x)^{-2}\cdot\operatorname{diag}(z(x) )\cdot\widetilde{c}(x)\cdot z(x)^{\top}\cdot A_{*,j}\]
where the first step follows from the definition of \(\beta_{i}(x)\) and \(v_{i}(x)\) (see Definition 3.1), the second step follows from Fact 3.3, the third step follows from Fact 3.3, and the last step follows from simple algebra and the definition of \(z(x)\) (see Definition 3.1).
**Proof of Part 3.**
\[\alpha(x)^{-1}\cdot A_{*,i}^{\top}\operatorname{diag}(\widetilde{c}(x)\circ u_{2}(x))A_{*,j}\] \[= \ A_{*,i}^{\top}\cdot\alpha(x)^{-1}\cdot\operatorname{diag}(\widetilde{c}(x)\circ u_{2}(x))A_{*,j}\]
where the first step follows from the simple algebra.
### Decomposing \(B_{1}(x),B_{2}(x)\) and \(B_{3}(x)\) into low rank plus diagonal
In this section, we decompose the matrices \(B_{1}(x)\), \(B_{2}(x)\), and \(B_{3}(x)\) into low rank plus diagonal.
**Lemma 5.4**.: _Let \(x\in\mathbb{R}^{d}\) be an arbitrary vector. Let \(u_{1}(x),u_{2}(x),u(x),f(x),c(x),z(x),v_{i}(x)\in\mathbb{R}^{n}\) and \(\alpha(x),L(x),\beta_{i}(x)\in\mathbb{R}\) be defined as in Definition 3.1. Let \(K(x),B_{1}(x),B_{2}(x),B_{3}(x)\in\mathbb{R}^{n\times n}\) and \(\widetilde{c}(x)\in\mathbb{R}^{n}\) be defined as in Definition 5.1 and \(B(x)=B_{1}(x)+B_{2}(x)+B_{3}(x)\in\mathbb{R}^{n\times n}\)._
_Then, we show that_
* _Part 1. For_ \(B_{1}(x)\in\mathbb{R}^{n\times n}\)_, we have_ \[B_{1}(x)=\underbrace{\alpha(x)^{-2}}_{\text{\rm scalar}}\cdot\underbrace{\operatorname{diag}(z(x))}_{n\times n}\cdot\underbrace{K(x)^{\top}}_{n\times n}\cdot\underbrace{K(x)}_{n\times n}\cdot\underbrace{\operatorname{diag}(z(x))}_{n\times n}\]
* _Part 2. For_ \(B_{2}(x)\in\mathbb{R}^{n\times n}\)_, we have_ \[B_{2}(x)= \ -\underbrace{\alpha(x)^{-2}}_{\text{\rm scalar}}\cdot\underbrace{z(x)}_{n\times 1}\cdot\underbrace{\widetilde{c}(x)^{\top}}_{1\times n}\cdot\underbrace{\operatorname{diag}(z(x))}_{n\times n}\] \[- \underbrace{\alpha(x)^{-2}}_{\text{\rm scalar}}\cdot\underbrace{\operatorname{diag}(z(x))}_{n\times n}\cdot\underbrace{\widetilde{c}(x)}_{n\times 1}\cdot\underbrace{z(x)^{\top}}_{1\times n}\]
* _Part 3. For_ \(B_{3}(x)\in\mathbb{R}^{n\times n}\)_, we have_ \[B_{3}(x)=\underbrace{\alpha(x)^{-1}}_{\text{scalar}}\cdot\operatorname{diag}( \underbrace{\widetilde{c}(x)}_{n\times 1}\circ\underbrace{u_{2}(x)}_{n\times 1})\]
* _Part 4. For_ \(B(x)\in\mathbb{R}^{n\times n}\)_, we have_ \[B(x) = \underbrace{\alpha(x)^{-2}}_{\text{scalar}}\cdot\underbrace{\operatorname{diag}(z(x))}_{n\times n}\cdot\underbrace{K(x)^{\top}}_{n\times n}\cdot\underbrace{K(x)}_{n\times n}\cdot\underbrace{\operatorname{diag}(z(x))}_{n\times n}\] \[- \underbrace{\alpha(x)^{-2}}_{\text{scalar}}\cdot\underbrace{z(x)}_{n\times 1}\cdot\underbrace{\widetilde{c}(x)^{\top}}_{1\times n}\cdot\underbrace{\operatorname{diag}(z(x))}_{n\times n}\] \[- \underbrace{\alpha(x)^{-2}}_{\text{scalar}}\cdot\underbrace{\operatorname{diag}(z(x))}_{n\times n}\cdot\underbrace{\widetilde{c}(x)}_{n\times 1}\cdot\underbrace{z(x)^{\top}}_{1\times n}\] \[+ \underbrace{\alpha(x)^{-1}}_{\text{scalar}}\cdot\operatorname{diag}(\underbrace{\widetilde{c}(x)\circ u_{2}(x)}_{n\times 1})\]
Proof.: **Proof of Part 1**
\[A_{*,i}^{\top}B_{1}(x)A_{*,j} = \underbrace{\alpha(x)^{-2}}_{\text{scalar}}\cdot\underbrace{v_{i}(x)^{\top}}_{1\times n}\underbrace{K(x)^{\top}}_{n\times n}\underbrace{K(x)}_{n\times n}\underbrace{v_{j}(x)}_{n\times 1}\] \[= A_{*,i}^{\top}\cdot\alpha(x)^{-2}\cdot\operatorname{diag}(z(x))^{\top}\cdot K(x)^{\top}K(x)\cdot\operatorname{diag}(z(x))\cdot A_{*,j}\]
where the first step follows from Definition 5.1, and the last step follows from Lemma 5.3.
Thus, by extracting \(A_{*,i}^{\top}\) and \(A_{*,j}\), we get:
\[B_{1}(x)=\alpha(x)^{-2}\cdot\operatorname{diag}(z(x))^{\top}\cdot K(x)^{\top} K(x)\cdot\operatorname{diag}(z(x))\]
**Proof of Part 2.**
\[A_{*,i}^{\top}B_{2}(x)A_{*,j}\] \[= -\alpha(x)^{-2}\cdot\widetilde{c}(x)^{\top}\cdot(v_{j}(x)\cdot \beta_{i}(x)+v_{i}(x)\cdot\beta_{j}(x))\] \[= -(A_{*,i}^{\top}\cdot\alpha(x)^{-2}\cdot z(x)\cdot\widetilde{c}( x)^{\top}\cdot\operatorname{diag}(z(x))\cdot A_{*,j}\] \[+ A_{*,i}^{\top}\cdot\alpha(x)^{-2}\cdot\operatorname{diag}(z(x) )\cdot\widetilde{c}(x)\cdot z(x)^{\top}\cdot A_{*,j})\]
where the first step follows from the Definition of \(A_{*,i}^{\top}B_{2}(x)A_{*,j}\) (see Definition 5.1), and the last step follows from Lemma 5.3.
Thus, by extracting \(A_{*,i}^{\top}\) and \(A_{*,j}\), we get:
\[B_{2}(x) = -(\alpha(x)^{-2}\cdot z(x)\cdot\widetilde{c}(x)^{\top}\cdot \operatorname{diag}(z(x))\] \[+ \alpha(x)^{-2}\cdot\operatorname{diag}(z(x))\cdot\widetilde{c}( x)\cdot z(x)^{\top})\]
**Proof of Part 3.**
\[A_{*,i}^{\top}B_{3}(x)A_{*,j}=A_{*,i}^{\top}\cdot\alpha(x)^{-1}\cdot \operatorname{diag}(\widetilde{c}(x)\circ u_{2}(x))A_{*,j}\]
where the first step follows from Lemma 5.3.
Thus, by extracting \(A_{*,i}^{\top}\) and \(A_{*,j}\), we get:
\[B_{3}(x)=\alpha(x)^{-1}\cdot\operatorname{diag}(\widetilde{c}(x)\circ u_{2}(x))\]
**Proof of Part 4.**
Since \(B(x)=B_{1}(x)+B_{2}(x)+B_{3}(x)\), combining the first three parts yields the claimed expression for \(B(x)\).
## 6 Rewrite Hessian
In this section, we rewrite the Hessian. For convenience of the analysis, we formally give a definition block for \(B(x)\).
**Definition 6.1**.: _Let \(x\in\mathbb{R}^{d}\) be an arbitrary vector. Let \(u_{1}(x),u_{2}(x),u(x),f(x),c(x),z(x),v_{i}(x)\in\mathbb{R}^{n}\) and \(\alpha(x),L(x),\beta_{i}(x)\in\mathbb{R}\) be defined as in Definition 3.1. Let \(K(x)\in\mathbb{R}^{n\times n}\) and \(\widetilde{c}(x)\in\mathbb{R}^{n}\) be defined as in Definition 5.1._
_Then, we define \(B(x)\in\mathbb{R}^{n\times n}\) as follows:_
\[B(x):= \ \alpha(x)^{-2}\cdot\mathrm{diag}(z(x))\cdot K(x)^{\top}K(x) \cdot\mathrm{diag}(z(x))\] \[- \ \alpha(x)^{-2}\cdot z(x)\cdot\widetilde{c}(x)^{\top}\cdot \mathrm{diag}(z(x))-\alpha(x)^{-2}\] \[\cdot \ \mathrm{diag}(z(x))\cdot\widetilde{c}(x)\cdot z(x)^{\top}\] \[+ \ \alpha(x)^{-1}\cdot\mathrm{diag}(\widetilde{c}(x)\circ u_{2}(x )).\]
_Furthermore, we defined \(B_{\mathrm{mat}}(x),B_{\mathrm{rank}}(x),B_{\mathrm{diag}}(x)\in\mathbb{R}^{n \times n}\) as follows:_
\[B_{\mathrm{mat}}(x):= \ \alpha(x)^{-2}\cdot\mathrm{diag}(z(x))\cdot K(x)^{\top}K(x) \cdot\mathrm{diag}(z(x))\] \[B_{\mathrm{rank}}(x):= \ \alpha(x)^{-2}\cdot(z(x)\cdot\widetilde{c}(x)^{\top}\cdot \mathrm{diag}(z(x))\] \[+ \ \mathrm{diag}(z(x))\cdot\widetilde{c}(x)\cdot z(x)^{\top})\] \[B_{\mathrm{diag}}(x):= \ \alpha(x)^{-1}\cdot\mathrm{diag}(\widetilde{c}(x)\circ u_{2}(x )),\]
_so that_
\[B(x)=B_{\mathrm{mat}}(x)-B_{\mathrm{rank}}(x)+B_{\mathrm{diag}}(x).\]
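The rewriting of this section states, in matrix form, that \(\nabla^{2}L(x)=A^{\top}B(x)A\) with \(B(x)=B_{\mathrm{mat}}(x)-B_{\mathrm{rank}}(x)+B_{\mathrm{diag}}(x)\). A minimal NumPy sketch of this identity is given below (illustrative only; it again assumes \(L(x)=\frac{1}{2}\|f(x)-b\|_{2}^{2}\), and the instance is random and small).

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 6, 4
A = 0.3 * rng.normal(size=(n, d))
x = 0.3 * rng.normal(size=d)
b = rng.dirichlet(np.ones(n))            # ||b||_1 = 1

def L(x):
    u = np.exp(A @ x) + A @ x            # u(x)
    f = u / u.sum()                      # f(x)
    return 0.5 * np.sum((f - b) ** 2)    # assumed loss

u2 = np.exp(A @ x)
u = u2 + A @ x
alpha = u.sum()
f = u / alpha
z = u2 + 1.0
K = np.eye(n) - np.outer(f, np.ones(n))
ctil = K.T @ (f - b)

B_mat = alpha ** -2 * np.diag(z) @ K.T @ K @ np.diag(z)
B_rank = alpha ** -2 * (np.outer(z, ctil) @ np.diag(z) + np.diag(z) @ np.outer(ctil, z))
B_diag = alpha ** -1 * np.diag(ctil * u2)
H_closed = A.T @ (B_mat - B_rank + B_diag) @ A

# central finite-difference Hessian of L
eps = 1e-4
H_fd = np.zeros((d, d))
for i in range(d):
    for j in range(d):
        ei, ej = np.zeros(d), np.zeros(d)
        ei[i], ej[j] = eps, eps
        H_fd[i, j] = (L(x + ei + ej) - L(x + ei - ej)
                      - L(x - ei + ej) + L(x - ei - ej)) / (4 * eps ** 2)

assert np.allclose(H_closed, H_fd, atol=1e-6)
```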
## 7 Hessian is PSD
In this section, we mainly prove Lemma 7.1.
### PSD Lower Bound
**Lemma 7.1**.: _Let \(x\in\mathbb{R}^{d}\) be an arbitrary vector. Let \(u_{1}(x),u_{2}(x),u(x),f(x),c(x),z(x),v_{i}(x)\in\mathbb{R}^{n}\) and \(\alpha(x),L(x),\beta_{i}(x)\in\mathbb{R}\) be defined as in Definition 3.1. Let \(K(x),B(x),B_{\mathrm{mat}}(x),B_{\mathrm{rank}}(x),B_{\mathrm{diag}}(x)\in \mathbb{R}^{n\times n}\) and \(\widetilde{c}(x)\in\mathbb{R}^{n}\) be defined as in Definition 6.1. Let \(\beta\in(0,0.1)\) and \(\beta<\alpha(x)\)._
_Then, we have_
* _Part 1._ \[0\preceq B_{\mathrm{mat}}(x)\preceq\beta^{-2}\cdot 16n^{2}\exp(2R^{2})\cdot I _{n}\]
* _Part 2._ \[-10\beta^{-2}n\exp(R^{2})\cdot I_{n}\preceq-B_{\mathrm{rank}}(x) \preceq 10\beta^{-2}n\exp(R^{2})\cdot I_{n}\]
* _Part 3._ \[-4\beta^{-1}n\exp(R^{2})\cdot I_{n}\preceq B_{\mathrm{diag}}(x) \preceq 4\beta^{-1}n\exp(R^{2})\cdot I_{n}\]
* _Part 4._ \[-14\beta^{-2}n\exp(R^{2})\cdot I_{n}\preceq B(x)\preceq 30\beta^{-2}n^{2}\exp(2R^{2} )\cdot I_{n}\]
Proof.: **Proof of Part 1.**
On the one hand,
\[B_{\rm mat} = \alpha(x)^{-2}\cdot{\rm diag}(z(x))\cdot K(x)^{\top}K(x)\cdot{\rm diag }(z(x))\] \[\preceq \alpha(x)^{-2}\|\operatorname{diag}(z(x))K(x)^{\top}\|^{2}\cdot I _{n}\] \[\preceq \alpha(x)^{-2}\|\operatorname{diag}(z(x))\|^{2}\|K(x)^{\top}\|^{2} \cdot I_{n}\] \[\preceq \alpha(x)^{-2}\|z(x)\|^{2}_{2}\cdot 4n\cdot I_{n}\] \[\preceq \beta^{-2}\cdot 16n^{2}\exp(2R^{2})\cdot I_{n}\]
where the first step follows from definition of \(B_{\rm mat}\), the second step follows from **Part 1** of Fact 3.8, the third step follows from **Part 4** of Fact 3.6, the fourth step follows from **Part 2,4** of Fact 3.5 and **Part 7** of Lemma 8.2, and the final step follows from **Part 8** of Lemma 8.2 and \(\alpha(x)>\beta\).
On the other hand, \(B_{\rm mat}(x)=\alpha(x)^{-2}\cdot(K(x)\operatorname{diag}(z(x)))^{\top}(K(x)\operatorname{diag}(z(x)))\) is positive semi-definite, hence \(B_{\rm mat}(x)\succeq 0\).
**Proof of Part 2**
On the one hand
\[B_{\rm rank}(x) = \alpha(x)^{-2}\cdot z(x)\cdot\widetilde{c}(x)^{\top}\cdot \operatorname{diag}(z(x))\] \[+ \alpha(x)^{-2}\cdot\operatorname{diag}(z(x))\cdot\widetilde{c}(x )\cdot z(x)^{\top}\] \[\preceq \alpha(x)^{-2}\cdot(z(x)z(x)^{\top}\] \[+ \widetilde{c}(x)^{\top}\cdot\operatorname{diag}(z(x))\cdot( \widetilde{c}(x)^{\top}\cdot\operatorname{diag}(z(x)))^{\top})\] \[\preceq \alpha(x)^{-2}(\|z(x)\|^{2}_{2}+\|\widetilde{c}(x)^{\top} \operatorname{diag}(z(x))\|^{2}_{2})\cdot I_{n}\] \[\preceq \alpha(x)^{-2}(2\sqrt{n}\exp(R^{2})+\|\widetilde{c}(x)\|^{2}_{2} \|z(x)\|^{2}_{2})\cdot I_{n}\] \[\preceq \alpha(x)^{-2}(2\sqrt{n}\exp(R^{2})+8n\exp(R^{2}))\cdot I_{n}\] \[\preceq 10\beta^{-2}n\exp(R^{2})\cdot I_{n}\]
where the first step follows from the definition of \(B_{\rm rank}(x)\), the second step follows from **Part 4** of Fact 3.8, the third step follows from **Part 1** of Fact 3.8, the fourth step follows from **Part 8** of Lemma 8.2 and **Part 9** of Fact 3.5, the fifth step follows from **Part 8, 10** of Lemma 8.2, and the last step follows from \(n>1\) and \(\alpha(x)>\beta\).
Then, multiplying both sides by \(-1\), we get
\[-B_{\rm rank}(x)\succeq-10\beta^{-2}n\exp(R^{2})\cdot I_{n}\]
On the other hand, the proof of the lower bound is similar to the previous one, so we omit it here.
**Proof of Part 3**
On the one hand
\[B_{\rm diag}(x) = \alpha(x)^{-1}\cdot\operatorname{diag}(\widetilde{c}(x)\circ u_{ 2}(x))\] \[\preceq \alpha(x)^{-1}\|\widetilde{c}(x)\|_{2}\|u_{2}(x)\|_{2}\cdot I_{n}\] \[\preceq 4\beta^{-1}n\exp(R^{2})\cdot I_{n}\]
where the first step follows from the definition of \(B_{\mathrm{diag}}(x)\), the second step follows from **Part 7** of Fact 3.8, and the last step follows from **Part 1, 10** of Lemma 8.2 and \(\alpha(x)>\beta\).
On the other hand, the proof of the lower bound is similar to the previous one, so we omit it here.
**Proof of Part 4**
On the one hand
\[B(x) = B_{\mathrm{mat}}(x)-B_{\mathrm{rank}}(x)+B_{\mathrm{diag}}(x)\] \[\preceq \beta^{-2}\cdot 16n^{2}\exp(2R^{2})\cdot I_{n}+10\beta^{-2}n\exp(R ^{2})\cdot I_{n}\] \[+ 4\beta^{-1}n\exp(R^{2})\cdot I_{n}\] \[\preceq 30\beta^{-2}n^{2}\exp(2R^{2})\cdot I_{n}\]
where the first step follows from Definition 6.1, the second step follows from **Parts 1, 2, and 3**, and the last step follows from \(\beta^{-1}>1\), \(n>1\), and \(\exp(2R^{2})>\exp(R^{2})\).
On the other hand, we have
\[B(x) = B_{\mathrm{mat}}(x)-B_{\mathrm{rank}}(x)+B_{\mathrm{diag}}(x)\] \[\succeq -10\beta^{-2}n\exp(R^{2})\cdot I_{n}-4\beta^{-1}n\exp(R^{2})\cdot I _{n})\] \[\succeq -14\beta^{-2}n\exp(R^{2})\cdot I_{n}\]
where the first step follows from Definition 6.1, the second step follows from **Parts 1, 2, and 3**, and the last step follows from \(\beta^{-1}>1\).
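The eigenvalue bounds of Lemma 7.1 are very loose and can be checked empirically on random instances that satisfy the assumptions (\(\|A\|\leq R\), \(\|x\|_{2}\leq R\), \(\|b\|_{1}\leq 1\), \(R\geq 4\), \(\beta\in(0,0.1)\) with \(\beta<\alpha(x)\)). The following NumPy sketch is illustrative only; the choice \(\beta=0.09\), the scaling of the random data, and the number of trials are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)
n, d, R, beta = 6, 4, 4.0, 0.09        # R >= 4 and beta in (0, 0.1), as in the lemma
hi = 30 * beta ** -2 * n ** 2 * np.exp(2 * R ** 2)
lo = -14 * beta ** -2 * n * np.exp(R ** 2)

for _ in range(10):
    A = 0.3 * rng.normal(size=(n, d))
    x = 0.3 * rng.normal(size=d)
    b = rng.dirichlet(np.ones(n))      # ||b||_1 = 1
    assert np.linalg.norm(A, 2) <= R and np.linalg.norm(x) <= R

    u2 = np.exp(A @ x)
    alpha = (u2 + A @ x).sum()
    assert alpha > beta                # assumption beta < alpha(x)
    f = (u2 + A @ x) / alpha
    z = u2 + 1.0
    K = np.eye(n) - np.outer(f, np.ones(n))
    ctil = K.T @ (f - b)

    B = (alpha ** -2 * np.diag(z) @ K.T @ K @ np.diag(z)                 # B_mat(x)
         - alpha ** -2 * (np.outer(z, ctil) @ np.diag(z)
                          + np.diag(z) @ np.outer(ctil, z))              # -B_rank(x)
         + alpha ** -1 * np.diag(ctil * u2))                             # B_diag(x)
    eigs = np.linalg.eigvalsh((B + B.T) / 2)   # symmetrize against rounding noise
    assert lo <= eigs.min() <= eigs.max() <= hi
```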
## 8 Hessian is Lipschitz
In this section, we find the upper bound of \(\|\nabla^{2}L(x)-\nabla^{2}L(y)\|\) and thus prove that \(\nabla^{2}L\) is Lipschitz. More specifically, in Section 8.1, we give a summary of the main properties developed in this whole section. In Section 8.2, we present the upper bounds of the norms of the functions we analyzed before. In Section 8.3, we present the Lipschitz properties of the functions we analyzed before. In Section 8.4, we summarize the four steps of the Lipschitz analysis for matrix functions. In Section 8.5, we analyze the first step, the Lipschitz bound for the matrix function \(\alpha(x)^{-2}\cdot\mathrm{diag}(z(x))^{\top}\cdot K(x)^{\top}K(x)\cdot\mathrm{diag}(z(x))\). In Section 8.6, we analyze the second step, the Lipschitz bound for the matrix function \(\alpha(x)^{-2}\cdot z(x)\cdot\widetilde{c}(x)^{\top}\cdot\mathrm{diag}(z(x))\). In Section 8.7, we analyze the third step, the Lipschitz bound for the matrix function \(\alpha(x)^{-2}\cdot\mathrm{diag}(z(x))\cdot\widetilde{c}(x)\cdot z(x)^{\top}\). In Section 8.8, we analyze the fourth step, the Lipschitz bound for the matrix function \(\alpha(x)^{-1}\cdot\mathrm{diag}(\widetilde{c}(x)\circ u_{2}(x))\).
### Main results
In this section, we present the main lemma which is the summary of the properties developed in this whole section.
**Lemma 8.1**.: _Let \(H(x)=\frac{\mathrm{d}^{2}L}{\mathrm{d}x^{2}}\)._
_Then we have_
\[\|H(x)-H(y)\|\leq 20\beta^{-5}n^{3.5}\exp(8R^{2})\|x-y\|_{2}\]
Proof.: The definition of \(G_{i}\) is as follows
* \(G_{1}(x)=\alpha(x)^{-2}\cdot\mathrm{diag}(z(x))\cdot K(x)^{\top}K(x)\cdot \mathrm{diag}(z(x))\)
* \(G_{2}(x)=-\alpha(x)^{-2}\cdot z(x)\cdot\widetilde{c}(x)^{\top}\cdot\mathrm{diag}(z(x))\)
* \(G_{3}(x)=-\alpha(x)^{-2}\cdot\mathrm{diag}(z(x))\cdot\widetilde{c}(x)\cdot z(x)^ {\top}\)
* \(G_{4}(x)=\alpha(x)^{-1}\cdot\mathrm{diag}(\widetilde{c}(x)\circ u_{2}(x))\),
which we define and analyze in Lemma 8.5, 8.6, 8.7, 8.8, respectively.
Then, we have
\[\|H(x)-H(y)\| \leq\|A\|\cdot\|\sum_{i=1}^{4}G_{i}(x)-G_{i}(y)\|\cdot\|A\|\] \[\leq R^{2}\cdot\|\sum_{i=1}^{4}G_{i}(x)-G_{i}(y)\|\] \[\leq R^{2}\cdot 20\beta^{-5}n^{3.5}\exp(7R^{2})\|x-y\|_{2}\] \[\leq 20\beta^{-5}n^{3.5}\exp(8R^{2})\|x-y\|_{2}\]
where the first step follows from \(H(x)=A^{\top}(\sum_{i=1}^{4}G_{i}(x))A\), the definition of \(G_{i}\) (see Lemma 8.4), and the sub-multiplicativity of the matrix spectral norm, the second step follows from \(\|A\|\leq R\), the third step follows from Lemma 8.4, and the last step follows from \(R^{2}\leq\exp(R^{2})\).
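As an empirical illustration of Lemma 8.1, the sketch below builds \(H(x)=A^{\top}(\sum_{i=1}^{4}G_{i}(x))A\) from the matrices \(G_{i}\) of Lemma 8.4 and checks that the ratio \(\|H(x)-H(y)\|/\|x-y\|_{2}\) stays below the stated constant \(20\beta^{-5}n^{3.5}\exp(8R^{2})\) on random nearby pairs. The data is drawn so that the assumptions hold, and all concrete choices (seed, dimensions, \(\beta=0.09\), \(R=4\)) are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(4)
n, d, R, beta = 6, 4, 4.0, 0.09
bound = 20 * beta ** -5 * n ** 3.5 * np.exp(8 * R ** 2)   # stated Lipschitz constant

A = 0.3 * rng.normal(size=(n, d))
b = rng.dirichlet(np.ones(n))

def hessian(x):
    # H(x) = A^T (G_1 + G_2 + G_3 + G_4)(x) A, with the G_i of Lemma 8.4
    u2 = np.exp(A @ x)
    alpha = (u2 + A @ x).sum()
    f = (u2 + A @ x) / alpha
    z = u2 + 1.0
    K = np.eye(n) - np.outer(f, np.ones(n))
    ctil = K.T @ (f - b)
    G1 = alpha ** -2 * np.diag(z) @ K.T @ K @ np.diag(z)
    G2 = -alpha ** -2 * np.outer(z, ctil) @ np.diag(z)
    G3 = -alpha ** -2 * np.diag(z) @ np.outer(ctil, z)
    G4 = alpha ** -1 * np.diag(ctil * u2)
    return A.T @ (G1 + G2 + G3 + G4) @ A

for _ in range(10):
    x = 0.3 * rng.normal(size=d)
    y = x + 0.01 * rng.normal(size=d)
    assert np.linalg.norm(x) <= R and np.linalg.norm(y) <= R
    ratio = np.linalg.norm(hessian(x) - hessian(y), 2) / np.linalg.norm(x - y)
    assert ratio <= bound
```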
### A core Tool: Upper Bound for Several Basic Functions
In this section, we find the upper bound for the norms of the functions we analyze.
**Lemma 8.2**.: _Let \(R\geq 4\). Let \(A\in\mathbb{R}^{n\times d}\) and \(x\in\mathbb{R}^{d}\) satisfy \(\|A\|\leq R\) and \(\|x\|_{2}\leq R\). Let \(b\in\mathbb{R}^{n}\) satisfy \(\|b\|_{1}\leq 1\). Let \(u_{1}(x),u_{2}(x),u(x),f(x),c(x),z(x),v_{i}(x)\in\mathbb{R}^{n}\) and \(\alpha(x),L(x),\beta_{i}(x)\in\mathbb{R}\) be defined as in Definition 3.1. Let \(K(x),B(x),B_{\mathrm{mat}}(x),B_{\mathrm{rank}}(x),B_{\mathrm{diag}}(x)\in \mathbb{R}^{n\times n}\) and \(\widetilde{c}(x)\in\mathbb{R}^{n}\) be defined as in Definition 6.1. Let \(\beta\in(0,0.1)\), and \(\langle\exp(Ax),\mathbf{1}_{n}\rangle\), \(\langle\exp(Ay),\mathbf{1}_{n}\rangle\), \(\langle\exp(Ax)+Ax,\mathbf{1}_{n}\rangle\), and \(\langle\exp(Ay)+Ay,\mathbf{1}_{n}\rangle\) be greater than or equal to \(\beta\), respectively. Let \(R_{f}=2\beta^{-1}\cdot(R\exp(R^{2})+R)\cdot(n\cdot\exp(R^{2})+\sqrt{n}\cdot R ^{2})\). We define \(R_{f,2}\in\mathbb{R}\) as \(R_{f,2}:=2\sqrt{n}\beta^{-1}\exp(R^{2})\)._
_Then, we have_
* _Part 1._ \(\|\exp(Ax)\|_{2}\leq\sqrt{n}\exp(R^{2})\)__
* _Part 2._ \(\|\exp(Ax)+Ax\|_{2}\leq 2\sqrt{n}\exp(R^{2})\)__
* _Part 3._ \(|\alpha(x)|\geq\beta\)__
* _Part 4._ \(|\alpha(x)^{-1}|\leq\beta^{-1}\)__
* _Part 5._ \(\|f(x)\|_{2}\leq R_{f,2}\)__
* _Part 6._ \(\|c(x)\|_{2}\leq 2R_{f,2}\)__
* _Part 7._ \(\|K(x)\|\leq 3\sqrt{n}\cdot R_{f,2}\)__
* _Part 8_ \(\|z(x)\|_{2}\leq 2\sqrt{n}\cdot\exp(R^{2})\)__
* _Part 9_ \(|\alpha(x)^{-2}|\leq\beta^{-2}\)__
* _Part 10_ \(\|\widetilde{c}(x)\|_{2}\leq 10\sqrt{n}R_{f,2}^{2}\)__
Proof.: **Proof of Part 1**
\[\|\exp(Ax)\|_{2} \leq\sqrt{n}\cdot\|\exp(Ax)\|_{\infty}\] \[\leq\sqrt{n}\cdot\exp(\|(Ax)\|_{\infty})\] \[\leq\sqrt{n}\cdot\exp(\|(Ax)\|_{2})\] \[\leq\sqrt{n}\cdot\exp(R^{2})\]
where the first step follows from Fact 3.5, the second step follows from **Part 6** of Fact 3.5, the third step follows from **Part 6** of Fact 3.5, and the last step follows from \(\|A\|\leq R\) and \(\|x\|_{2}\leq R\).
**Proof of Part 2**
\[\|\exp(Ax)+Ax\|_{2} \leq\|\exp(Ax)\|_{2}+\|Ax\|_{2}\] \[\leq\sqrt{n}\cdot\exp(R^{2})+R^{2}\] \[\leq 2\sqrt{n}\exp(R^{2})\]
where the first step follows from **Part 8** of Fact 3.5, the second step follows from **Part 1** and \(\|A\|\leq R,\|x\|_{2}\leq R\), and the last step follows from \(n>1,\exp(R^{2})\geq R^{2}\).
**Proof of Part 3**
\[|\alpha(x)| =|\langle u(x),\mathbf{1}_{n}\rangle|\] \[\geq|\langle\exp(Ax)+Ax,\mathbf{1}_{n}\rangle|\] \[\geq\beta\]
where the first step follows from the definition of \(\alpha(x)\) (see Definition 3.1), the second step follows the definition of \(u(x)\) (see Definition 3.1), and the last step follows from the assumption \(\langle\exp(Ax)+Ax,\mathbf{1}_{n}\rangle\geq\beta\).
**Proof of Part 4**
We have
\[|\alpha(x)^{-1}| \leq|\beta^{-1}|\] \[\leq\beta^{-1}\]
where the first step follows from **Part 3** of Lemma 8.2, the second step follows from \(\beta^{-1}>0\).
**Proof of Part 5**
First, we analyze the following equation:
\[|\langle\exp(Ax)+Ax,\mathbf{1}_{n}\rangle^{-1}| =|\langle\exp(Ax)+Ax,\mathbf{1}_{n}\rangle|^{-1}\] \[\leq\beta^{-1}, \tag{4}\]
where the first step follows from simple algebra and the second step follows from the assumption in the Lemma statement.
Then, we have
\[\|f(x)\|_{2} =\|\alpha(x)^{-1}u(x)\|_{2}\] \[=\|\langle\exp(Ax)+Ax,\mathbf{1}_{n}\rangle^{-1}(\exp(Ax)+Ax)\|_{2}\] \[=|\langle\exp(Ax)+Ax,\mathbf{1}_{n}\rangle^{-1}|\cdot\|(\exp(Ax) +Ax)\|_{2}\] \[\leq\beta^{-1}\cdot\|(\exp(Ax)+Ax)\|_{2}\]
\[\leq 2\sqrt{n}\beta^{-1}\exp(R^{2})\] \[= R_{f,2},\]
where the first step follows from the definition of \(f(x)\) (see Definition 3.1), the second step follows from the definition of \(\alpha(x)\) and \(u(x)\) (see Definition 3.1), the third step follows from Fact 3.5, the fourth step follows from Eq. (4), and the fifth step follows from **Part 2**, and the last step follows from the definition of \(R_{f,2}\).
**Proof of Part 6**
\[\|c(x)\|_{2} = \|f(x)-b\|_{2}\] \[\leq \|f(x)\|_{2}+\|b\|_{2}\] \[\leq R_{f,2}+1\] \[\leq 2R_{f,2},\]
where the first step follows from the definition of \(c(x)\) (see Definition 3.1), the second step follows from **Part 8** of Fact 3.5, the third step follows from **Part 5** of Lemma 8.2 and \(\|b\|_{2}\leq\|b\|_{1}\leq 1\), and the last step follows from \(R_{f,2}\geq 1\).
**Proof of Part 7**
\[\|K(x)\| = \|(I_{n}-f(x)\cdot\mathbf{1}_{n}^{\top})\|\] \[\leq \|I_{n}\|+\|f(x)\cdot\mathbf{1}_{n}^{\top}\|\] \[\leq 1+\|f(x)\|_{2}\cdot\|\mathbf{1}_{n}^{\top}\|_{2}\] \[\leq 1+2\sqrt{n}\beta^{-1}\exp(R^{2})\cdot\sqrt{n}\] \[\leq 3\sqrt{n}R_{f,2},\]
where the first step follows from the definition of \(K(x)\), the second step follows from the **Part 3** of Fact 3.6, the third step follows from \(\|I_{n}\|=1\) and **Part 9** of Fact 3.5, and the fourth step follows from **Part 5** of Lemma 8.2, and the last step follows from the simple algebra.
**Proof of Part 8**
\[\|z(x)\|_{2} = \|u_{2}(x)+\mathbf{1}_{n}\|\] \[\leq \|u_{2}(x)\|_{2}+\|\mathbf{1}_{n}\|_{2}\] \[\leq \sqrt{n}\cdot(\exp(R^{2})+1)\] \[\leq 2\sqrt{n}\exp(R^{2})\]
where the first step follows from the the definition of \(z(x)\) (see Definition 3.1), the second step follows from **Part 8** of Fact 3.5, the third step follows from **Part 1** of Lemma 8.2, and the last step follows from Fact 3.7.
**Proof of Part 9**
\[|\alpha(x)^{-2}| = |\alpha(x)^{-1}|^{2}\] \[\leq \beta^{-2}\]
where the first step follows from simple algebra, and the last step follows from **Part 4** of Lemma 8.2.
**Proof of Part 10**
\[\|\widetilde{c}(x)\|_{2}= \|K(x)^{\top}c(x)\|_{2}\]
\[\leq \|K(x)\|\|c(x)\|_{2}\] \[\leq 3\sqrt{n}R_{f,2}\cdot 2R_{f,2}\] \[\leq 10\sqrt{n}R_{f,2}^{2},\]
where the first step follows from Definition of \(\widetilde{c}(x)\), the second step follows from **Part 7** of Fact 3.6, the third step follows from **Part 6 and 7** of Lemma 8.2, and the last step follows from simple algebra.
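The norm bounds of Lemma 8.2 can likewise be checked numerically. The sketch below verifies Parts 1, 2, 5, 6, 7, 8, and 10 on random instances satisfying the assumptions; it is illustrative only, and the concrete choices (\(\beta=0.09\), \(R=4\), seed, scaling) are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(5)
n, d, R, beta = 6, 4, 4.0, 0.09
Rf2 = 2 * np.sqrt(n) * beta ** -1 * np.exp(R ** 2)   # R_{f,2}

for _ in range(10):
    A = 0.3 * rng.normal(size=(n, d))
    x = 0.3 * rng.normal(size=d)
    b = rng.dirichlet(np.ones(n))                    # ||b||_1 = 1
    assert np.linalg.norm(A, 2) <= R and np.linalg.norm(x) <= R

    u2 = np.exp(A @ x)
    u = u2 + A @ x
    alpha = u.sum()
    assert alpha >= beta                             # assumption of the lemma
    f = u / alpha
    c = f - b
    z = u2 + 1.0
    K = np.eye(n) - np.outer(f, np.ones(n))
    ctil = K.T @ c

    assert np.linalg.norm(u2) <= np.sqrt(n) * np.exp(R ** 2)        # Part 1
    assert np.linalg.norm(u) <= 2 * np.sqrt(n) * np.exp(R ** 2)     # Part 2
    assert np.linalg.norm(f) <= Rf2                                 # Part 5
    assert np.linalg.norm(c) <= 2 * Rf2                             # Part 6
    assert np.linalg.norm(K, 2) <= 3 * np.sqrt(n) * Rf2             # Part 7
    assert np.linalg.norm(z) <= 2 * np.sqrt(n) * np.exp(R ** 2)     # Part 8
    assert np.linalg.norm(ctil) <= 10 * np.sqrt(n) * Rf2 ** 2       # Part 10
```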
### A core Tool: Lipschitz Property for Several Basic Functions
In this section, we present the Lipschitz property for the functions we analyze.
**Lemma 8.3** (Basic Functions Lipschitz Property).: _Let \(R\geq 4\). Let \(A\in\mathbb{R}^{n\times d}\) and \(x,y\in\mathbb{R}^{d}\) satisfy \(\|A\|\leq R\), \(\|x\|_{2}\leq R\), and \(\|y\|_{2}\leq R\). Let \(b\in\mathbb{R}^{n}\) satisfy \(\|b\|_{1}\leq 1\). Let \(u_{1}(x),u_{2}(x),u(x),f(x),c(x),z(x),v_{i}(x)\in\mathbb{R}^{n}\) and \(\alpha(x),L(x),\beta_{i}(x)\in\mathbb{R}\) be defined as in Definition 3.1. Let \(K(x),B(x),B_{\mathrm{mat}}(x),B_{\mathrm{rank}}(x),B_{\mathrm{diag}}(x)\in \mathbb{R}^{n\times n}\) and \(\widetilde{c}(x)\in\mathbb{R}^{n}\) be defined as in Definition 6.1. Let \(\beta\in(0,0.1)\), and let \(\langle\exp(Ax),\mathbf{1}_{n}\rangle\), \(\langle\exp(Ay),\mathbf{1}_{n}\rangle\), \(\langle\exp(Ax)+Ax,\mathbf{1}_{n}\rangle\), and \(\langle\exp(Ay)+Ay,\mathbf{1}_{n}\rangle\) each be greater than or equal to \(\beta\). Let \(R_{f}=6\beta^{-2}\cdot n\cdot\exp(3R^{2})\) and \(R_{f,2}=2\sqrt{n}\beta^{-1}\exp(R^{2})\)._
_Then, we have_
* _Part 1._ \(\|Ax-Ay\|_{2}\leq R\cdot\|x-y\|_{2}\)__
* _Part 2._ \(\|\exp(Ax)-\exp(Ay)\|_{2}\leq R\exp(R^{2})\cdot\|x-y\|_{2}\)__
* _Part 3._ \(|\alpha(x)-\alpha(y)|\leq 2\sqrt{n}R\exp(R^{2})\|x-y\|_{2}\)__
* _Part 4._ \(|\alpha(x)^{-1}-\alpha(y)^{-1}|\leq\beta^{-2}\cdot|\alpha(x)-\alpha(y)|\)__
* _Part 5._ \(\|f(x)-f(y)\|_{2}\leq R_{f}\cdot\|x-y\|_{2}\)__
* _Part 6._ \(\|c(x)-c(y)\|_{2}\leq R_{f}\cdot\|x-y\|_{2}\)__
* _Part 7._ \(\|z(x)-z(y)\|_{2}\leq R\exp(R^{2})\|x-y\|_{2}\)__
* _Part 8._ \(\|K(x)-K(y)\|\leq\sqrt{n}\cdot R_{f}\cdot\|x-y\|_{2}\)__
* _Part 9._ \(\|\operatorname{diag}(z(x))-\operatorname{diag}(z(y))\|\leq R\exp(R^{2})\|x-y\|_ {2}\)__
* _Part 10._ \(|\alpha(x)^{-2}-\alpha(y)^{-2}|\leq 4\beta^{-3}\sqrt{n}R\exp(R^{2})\cdot\|x-y\|_ {2}\)__
* _Part 11._ \(\|\widetilde{c}(x)-\widetilde{c}(y)\|_{2}\leq 5\sqrt{n}\cdot R_{f}\cdot R_{f,2} \cdot\|x-y\|_{2}\)__
* _Part 12._ \(\|\operatorname{diag}(\widetilde{c}(x)\circ u_{2}(x))-\operatorname{diag}( \widetilde{c}(y)\circ u_{2}(y))\|\leq 10n\cdot R_{f}\cdot R_{f,2}\cdot\exp(2R^{2})\|x-y\|_ {2}\)__
Proof.: **Proof of Part 1**
\[\|Ax-Ay\|_{2} \leq \|A\|\|x-y\|_{2}\] \[\leq R\cdot\|x-y\|_{2}\]
where the first step follows from **Part 7** of Fact 3.6, and the last step follows from \(\|A\|\leq R\).
**Proof of Part 2**
\[\|\exp(Ax)-\exp(Ay)\|_{2} \leq \exp(R^{2})\cdot\|Ax-Ay\|_{2}\] \[\leq R\exp(R^{2})\cdot\|x-y\|_{2}\]

where the first step follows from the entrywise bound \(|e^{a}-e^{b}|\leq e^{\max(|a|,|b|)}\cdot|a-b|\) together with \(\|Ax\|_{\infty}\leq R^{2}\) and \(\|Ay\|_{\infty}\leq R^{2}\), and the last step follows from **Part 1** of Lemma 8.3.

**Proof of Part 3**

\[|\alpha(x)-\alpha(y)| = |\langle u(x)-u(y),\mathbf{1}_{n}\rangle|\] \[\leq \sqrt{n}\cdot\|u(x)-u(y)\|_{2}\] \[\leq \sqrt{n}\cdot(\|\exp(Ax)-\exp(Ay)\|_{2}+\|Ax-Ay\|_{2})\] \[\leq \sqrt{n}\cdot(R\exp(R^{2})+R)\cdot\|x-y\|_{2}\] \[\leq 2\sqrt{n}R\exp(R^{2})\|x-y\|_{2}\]

where the first step follows from the definition of \(\alpha(x)\) (see Definition 3.1), the second step follows from the Cauchy–Schwarz inequality, the third step follows from the definition of \(u(x)\) and the triangle inequality, the fourth step follows from **Part 1** and **Part 2** of Lemma 8.3, and the last step follows from \(\exp(R^{2})\geq 1\).

**Proof of Part 4**

\[|\alpha(x)^{-1}-\alpha(y)^{-1}| = \frac{|\alpha(x)-\alpha(y)|}{|\alpha(x)|\cdot|\alpha(y)|}\] \[\leq \beta^{-2}\cdot|\alpha(x)-\alpha(y)|\]

where the first step follows from simple algebra, and the last step follows from **Part 3** of Lemma 8.2 applied to both \(x\) and \(y\).

**Proof of Part 5**

We have

\[\|f(x)-f(y)\|_{2} = \|\alpha(x)^{-1}u(x)-\alpha(y)^{-1}u(y)\|_{2}\] \[\leq \|\alpha(x)^{-1}u(x)-\alpha(x)^{-1}u(y)\|_{2}+\|\alpha(x)^{-1}u(y)-\alpha(y)^{-1}u(y)\|_{2}\] \[= |\alpha(x)^{-1}|\cdot\|u(x)-u(y)\|_{2}+|\alpha(x)^{-1}-\alpha(y)^{-1}|\cdot\|\exp(Ay)+Ay\|_{2} \tag{5}\]

where the first step follows from the definition of \(f(x)\) (see Definition 3.1), the second step follows from the triangle inequality, and the last step follows from the definition of \(u(x)\) (see Definition 3.1).

For the first term in the above, we have

\[|\alpha(x)^{-1}|\cdot\|u(x)-u(y)\|_{2} \leq \ \beta^{-1}\cdot\|u(x)-u(y)\|_{2}\] \[\leq \ \beta^{-1}\cdot(\|\exp(Ax)-\exp(Ay)\|_{2}+\|Ax-Ay\|_{2})\] \[\leq \ \beta^{-1}\cdot(R\exp(R^{2})\|x-y\|_{2}+R\cdot\|x-y\|_{2})\] \[= \ \beta^{-1}\cdot(R\exp(R^{2})+R)\cdot\|x-y\|_{2}\] \[\leq \ 2\beta^{-1}\cdot R\exp(R^{2})\cdot\|x-y\|_{2} \tag{6}\]
where the first step follows from \(\alpha(x)\geq\beta\), the second step follows from **Part 8** of Fact 3.5, the third step follows from **Part 1 and Part 2** of Lemma 8.3, the fourth step follows from simple algebra, and the last step follows from Fact 3.7.
For the second term in the above, we have
\[|\alpha(x)^{-1}-\alpha(y)^{-1}|\|\exp(Ay)+Ay\|_{2}\] \[\leq \ \beta^{-2}\cdot|\alpha(x)-\alpha(y)|\cdot\|\exp(Ay)+Ay\|_{2}\] \[\leq \ \beta^{-2}\cdot|\alpha(x)-\alpha(y)|\cdot 2\sqrt{n}\exp(R^{2})\] \[\leq \ \beta^{-2}\cdot 2R\exp(R^{2})\cdot\|x-y\|_{2}\cdot\sqrt{n}\cdot 2 \sqrt{n}\exp(R^{2})\] \[= \ 4\beta^{-2}\cdot R\cdot n\exp(2R^{2})\cdot\|x-y\|_{2} \tag{7}\]
where the first step follows from the result of **Part 4** of Lemma 8.3, the second step follows from the result of **Part 2** of Lemma 8.2, the third step follows from the result of **Part 3** of Lemma 8.3, and the last step follows from simple algebra.
Combining Eq. (5), Eq. (6), and Eq. (7) together, we have
\[\|f(x)-f(y)\|_{2} \leq \ 2\beta^{-1}\cdot R\exp(R^{2})\cdot\|x-y\|_{2}\] \[+ \ 4\beta^{-2}\cdot n\cdot R\exp(2R^{2})\cdot\|x-y\|_{2}\] \[\leq \ 6\beta^{-2}\cdot n\cdot\exp(3R^{2})\cdot\|x-y\|_{2}\]
where the last step follows from \(\beta^{-1}\geq 1\), \(n\geq 1\), \(R\geq 4\), and Fact 3.7.
**Proof of Part 6**
\[\|c(x)-c(y)\|_{2} = \ \|f(x)-f(y)\|_{2}\] \[\leq \ R_{f}\cdot\|x-y\|_{2}\]
where the first step follows from the definition of \(c(x)\) (see Definition 3.1), and the last step follows from **Part 5** of Lemma 8.3.
**Proof of Part 7**
\[\|z(x)-z(y)\|_{2} = \ \|u_{2}(x)+\mathbf{1}_{n}-u_{2}(y)-\mathbf{1}_{n}\|\] \[= \ \|u_{2}(x)-u_{2}(y)\|_{2}\] \[\leq \ R\exp(R^{2})\|x-y\|_{2}\]
where the first step follows from the definition of \(z(x)\) (see Definition 3.1), the second step follows from simple algebra, and the last step follows from the definition of \(u_{2}(x)\) (see Definition 3.1) and **Part 2** of Lemma 8.3.
**Proof of Part 8**
\[\|K(x)-K(y)\| = \ \|(I_{n}-f(x)\cdot\mathbf{1}_{n}^{\top})-(I_{n}-f(y)\cdot \mathbf{1}_{n}^{\top})\|\] \[= \ \|-(f(x)-f(y))\cdot\mathbf{1}_{n}^{\top}\|\] \[\leq \ \|f(x)-f(y)\|_{2}\cdot\|\mathbf{1}_{n}^{\top}\|_{2}\] \[\leq \ \sqrt{n}\cdot R_{f}\cdot\|x-y\|_{2}\]
where the first step follows from the definition of \(K(x)\), the second step follows from the simple algebra, the third step follows from **Part 9** of Fact 3.5, and the last step follows from **Part 5** of Lemma 8.3.
**Proof of Part 9**
\[\|\operatorname{diag}(z(x))-\operatorname{diag}(z(y))\| = \|\operatorname{diag}(z(x)-z(y))\|\] \[\leq \|z(x)-z(y)\|_{\infty}\] \[\leq \|z(x)-z(y)\|_{2}\] \[\leq R\exp(R^{2})\|x-y\|_{2}\]
where the first step follows from the simple algebra, the second step follows from **Part 2** of Fact 3.5, the third step follows from **Part 4** of Fact 3.5, and the last step follows from **Part 7** of Lemma 8.3.
**Proof of Part 10**
\[|\alpha(x)^{-2}-\alpha(y)^{-2}| = |(\alpha(x)^{-1}-\alpha(y)^{-1})(\alpha(x)^{-1}+\alpha(y)^{-1})|\] \[\leq |\alpha(x)^{-1}-\alpha(y)^{-1}||\alpha(x)^{-1}+\alpha(y)^{-1}|\] \[\leq \beta^{-2}\cdot|\alpha(x)-\alpha(y)|\cdot|2\beta^{-1}|\] \[\leq 2\beta^{-3}|\alpha(x)-\alpha(y)|\] \[\leq 4\beta^{-3}\sqrt{n}R\exp(R^{2})\cdot\|x-y\|_{2}\]
where the first step follows from the simple algebra, the second step follows from the simple algebra, the third step follows from **Part 4** of Lemma 8.2 and **Part 4** of Lemma 8.3, the fourth step follows from the simple algebra, and the last step follows from **Part 3** of Lemma 8.3.
**Proof of Part 11**
\[\|\widetilde{c}(x)-\widetilde{c}(y)\|_{2}\] \[= \|K(x)^{\top}c(x)-K(y)^{\top}c(y)\|\] \[\leq \|K(x)^{\top}c(x)-K(y)^{\top}c(x)\|+\|K(y)^{\top}c(x)-K(y)^{\top} c(y)\|\] \[\leq \|K(x)^{\top}-K(y)^{\top}\|\cdot\|c(x)\|_{2}+\|K(y)^{\top}\|\cdot \|c(x)-c(y)\|_{2}\] \[\leq \sqrt{n}R_{f}\cdot\|x-y\|_{2}\cdot 2R_{f,2}+3\sqrt{n}\cdot R_{f,2} \cdot R_{f}\cdot\|x-y\|_{2}\] \[\leq 5\sqrt{n}\cdot R_{f}\cdot R_{f,2}\cdot\|x-y\|_{2}\]
where the first step follows from the definition of \(\widetilde{c}(x)\), the second step follows from the triangle inequality, the third step follows from **Part 7** of Fact 3.6, the fourth step follows from **Part 6, 8** of Lemma 8.3 and **Part 6, 7** of Lemma 8.2, and the last step follows from the simple algebra.
**Proof of Part 12**
\[\|\operatorname{diag}(\widetilde{c}(x)\circ u_{2}(x))-\operatorname {diag}(\widetilde{c}(y)\circ u_{2}(y))\|\] \[\leq \|\operatorname{diag}(\widetilde{c}(x))\operatorname{diag}(u_{2}( x))-\operatorname{diag}(\widetilde{c}(y))\operatorname{diag}(u_{2}(y))\|\] \[\leq \|\operatorname{diag}(\widetilde{c}(x))\operatorname{diag}(u_{2} (x))-\operatorname{diag}(\widetilde{c}(x))\operatorname{diag}(u_{2}(y))\|\] \[+ \|\operatorname{diag}(\widetilde{c}(y))\operatorname{diag}(u_{2} (x))-\operatorname{diag}(\widetilde{c}(y))\operatorname{diag}(u_{2}(y))\|\] \[\leq \|\operatorname{diag}(\widetilde{c}(x))-\operatorname{diag}( \widetilde{c}(y))\|\|\operatorname{diag}(u_{2}(x))\|\] \[+ \|\operatorname{diag}(\widetilde{c}(y))\|\|\operatorname{diag}(u_{ 2}(x))-\operatorname{diag}(u_{2}(y))\|\] \[\leq \|\widetilde{c}(x)-\widetilde{c}(y)\|_{2}\cdot\|u_{2}(x)\|_{2}+ \|\widetilde{c}(y)\|_{2}\cdot\|u_{2}(x)-u_{2}(y)\|_{2}\] \[\leq 5\sqrt{n}\cdot R_{f,2}\cdot R_{f}\|x-y\|_{2}\sqrt{n}\exp(R^{2})+ 4\sqrt{n}R\exp(R^{2})\|x-y\|_{2}\] \[\leq 10n\cdot R_{f}\cdot R_{f,2}\cdot\exp(2R^{2})\|x-y\|_{2}\]
where the first step follows from Fact 3.3, the second step follows from triangle inequality, the third step follows from Fact 3.6, the fourth step follows from Fact 3.5, the fifth step follows from **Part 2,11** of Lemma 8.3 and **Part 1,10** of Lemma 8.2, and the last step follows from \(R_{f}>1\), \(R_{f,2}>1\), \(n>1\), \(\beta^{-1}>1\), and Fact 3.7.
### Summary of Four Steps
In this section, we summarize the four steps which are discussed in the next four sections, respectively.
**Lemma 8.4**.: _If the following conditions hold_
* \(G_{1}(x)=\alpha(x)^{-2}\cdot\operatorname{diag}(z(x))\cdot K(x)^{\top}K(x) \cdot\operatorname{diag}(z(x))\)__
* \(G_{2}(x)=-\alpha(x)^{-2}\cdot z(x)\cdot\widetilde{c}(x)^{\top}\cdot \operatorname{diag}(z(x))\)__
* \(G_{3}(x)=-\alpha(x)^{-2}\cdot\operatorname{diag}(z(x))\cdot\widetilde{c}(x) \cdot z(x)^{\top}\)__
* \(G_{4}(x)=\alpha(x)^{-1}\cdot\operatorname{diag}(\widetilde{c}(x)\circ u_{2}(x))\)__
_Then, we have_
\[\sum_{i=1}^{4}\|G_{i}(x)-G_{i}(y)\|\leq 20\beta^{-5}n^{3.5}\exp(7R^{2})\|x-y \|_{2}\]
Proof.: \[\sum_{i=1}^{4}\|G_{i}(x)-G_{i}(y)\| \leq 5\beta^{-5}n^{3.5}\exp(7R^{2})\cdot\|x-y\|_{2}\] \[+ 4\beta^{-5}n^{3}\exp(7R^{2})\cdot\|x-y\|_{2}\] \[+ 4\beta^{-5}n^{3}\exp(7R^{2})\cdot\|x-y\|_{2}\] \[+ 2\beta^{-4}n^{3}\cdot\exp(7R^{2})\|x-y\|_{2}\] \[\leq 20\beta^{-5}n^{3.5}\exp(7R^{2})\|x-y\|_{2}\]
where the first step follows from Lemmas 8.5, 8.6, 8.7, and 8.8, and the last step follows from \(\beta^{-1}>1\), \(n>1\), and \(R>4\).
Calculation: Step 1 Lipschitz for Matrix Function \(\alpha(x)^{-2}\cdot\operatorname{diag}(z(x))^{\top}\cdot K(x)^{\top}K(x) \cdot\operatorname{diag}(z(x))\)
In this section, we analyze the first step, namely the Lipschitz for the matrix function \(\alpha(x)^{-2}\cdot\operatorname{diag}(z(x))^{\top}\cdot K(x)^{\top}K(x) \cdot\operatorname{diag}(z(x))\).
**Lemma 8.5**.: _Let \(G_{1}(x)=\alpha(x)^{-2}\cdot\operatorname{diag}(z(x))\cdot K(x)^{\top}K(x) \cdot\operatorname{diag}(z(x))\)._
_Then we have_
\[\|G_{1}(x)-G_{1}(y)\|\leq 5\beta^{-5}n^{3.5}\exp(7R^{2})\cdot\|x-y\|_{2}\]
Proof.: We define
\[G_{1,1} := \alpha(x)^{-2}\operatorname{diag}(z(x))K(x)^{\top}K(x) \operatorname{diag}(z(x))\] \[- \alpha(y)^{-2}\operatorname{diag}(z(x))K(x)^{\top}K(x) \operatorname{diag}(z(x))\] \[G_{1,2} := \alpha(y)^{-2}\operatorname{diag}(z(x))K(x)^{\top}K(x) \operatorname{diag}(z(x))\] \[- \alpha(y)^{-2}\operatorname{diag}(z(y))K(x)^{\top}K(x) \operatorname{diag}(z(x))\] \[G_{1,3} := \alpha(y)^{-2}\operatorname{diag}(z(y))K(x)^{\top}K(x) \operatorname{diag}(z(x))\] \[- \alpha(y)^{-2}\operatorname{diag}(z(y))K(y)^{\top}K(x) \operatorname{diag}(z(x))\] \[G_{1,4} := \alpha(y)^{-2}\operatorname{diag}(z(y))K(y)^{\top}K(x) \operatorname{diag}(z(x))\] \[- \alpha(y)^{-2}\operatorname{diag}(z(y))K(y)^{\top}K(y) \operatorname{diag}(z(x))\] \[G_{1,5} := \alpha(y)^{-2}\operatorname{diag}(z(y))K(y)^{\top}K(y) \operatorname{diag}(z(x))\] \[- \alpha(y)^{-2}\operatorname{diag}(z(y))K(y)^{\top}K(y) \operatorname{diag}(z(y))\]
we have
\[G_{1}(x)-G_{1}(y)=G_{1,1}+G_{1,2}+G_{1,3}+G_{1,4}+G_{1,5}\]
We first bound \(\|G_{1,1}\|\).
\[\|G_{1,1}\|\] \[= \|\alpha(x)^{-2}\operatorname{diag}(z(x))K(x)^{\top}K(x) \operatorname{diag}(z(x))\] \[- \alpha(y)^{-2}\operatorname{diag}(z(x))K(x)^{\top}K(x) \operatorname{diag}(z(x))\|\] \[\leq |\alpha(x)^{-2}-\alpha(y)^{-2}|\cdot\|\operatorname{diag}(z(x))K (x)^{\top}K(x)\operatorname{diag}(z(x)\|\] \[\leq |\alpha(x)^{-2}-\alpha(y)^{-2}|\] \[\cdot \|\operatorname{diag}(z(x))\|\cdot\|K(x)^{\top}\|\|K(x)\|\cdot\| \operatorname{diag}(z(x))\|\] \[\leq |\alpha(x)^{-2}-\alpha(y)^{-2}|\cdot\|z(x)\|_{\infty}^{2}\cdot\| K(x)\|^{2}\] \[\leq 4\beta^{-3}\sqrt{n}R\exp(R^{2})\cdot\|x-y\|_{2}\cdot\|z(x)\|_{2 }^{2}\cdot(3\sqrt{n}R_{f,2})^{2}\] \[\leq 4\beta^{-3}\sqrt{n}R\exp(R^{2})\cdot\|x-y\|_{2}\cdot(2\sqrt{n} \exp(R^{2}))^{2}\cdot(3\sqrt{n}R_{f,2})^{2}\] \[\leq 200\beta^{-3}n^{2.5}\exp(4R^{2})\cdot(2\sqrt{n}\beta^{-1}\exp(R ^{2}))^{2}\|x-y\|_{2}\] \[\leq \beta^{-5}n^{3.5}\exp(7R^{2})\|x-y\|_{2}\]
where the first step follows from the definition of \(G_{1,1}\), the second step follows from **Part 6** of Fact 3.6, the third step follows from **Part 4** of Fact 3.6, the fourth step follows from **Part 2** of Fact 3.5, the fifth step follows from **Part 4** of Fact 3.5, **Part 10** of Lemma 8.3, and **Part 7** of Lemma 8.2, the sixth step follows from **Part 8** of Lemma 8.2, the seventh step follows from the definition of \(R_{f,2}\), and the last step follows from simple algebra.
Next, we bound \(\|G_{1,2}\|\).
\[\|G_{1,2}\|\] \[= \|\alpha(y)^{-2}\operatorname{diag}(z(x))K(x)^{\top}K(x) \operatorname{diag}(z(x))\] \[- \alpha(y)^{-2}\operatorname{diag}(z(y))K(x)^{\top}K(x) \operatorname{diag}(z(x))\|\] \[\leq \|\operatorname{diag}(z(x))-\operatorname{diag}(z(y))\|\]
\[\cdot\ \|\alpha(y)^{-2}K(x)^{\top}K(x)\operatorname{diag}(z(x))\|\] \[\leq R\exp(R^{2})\|x-y\|_{2}\cdot|\alpha(y)^{-2}\|\|K(x)^{\top}\|\|K( x)\|\cdot\|\operatorname{diag}(z(x))\|\] \[\leq R\exp(R^{2})\|x-y\|_{2}\cdot\beta^{-2}\cdot 2\sqrt{n}\exp(R^{2}) \cdot(3\sqrt{n}R_{f,2})^{2}\] \[\leq 24\beta^{-2}\cdot n^{1.5}\cdot R\exp(3R^{2})\cdot(2\sqrt{n} \beta^{-1}\exp(R^{2}))^{2}\|x-y\|_{2}\] \[\leq \beta^{-4}n^{2.5}\exp(6R^{2})\|x-y\|_{2}\]
where the first step follows from the Definition of \(G_{1,2}\), the second step follows from the **Part 4** of Fact 3.6, the third step follows from **Part 4 and 6** of Fact 3.6 and **Part 9** of Lemma 8.3, the fourth step follows from the **Part 7, 8, 9** of Lemma 8.2, the fifth step follows from the definition of \(R_{f,2}\), and the last step follows from simple algebra.
Next, we bound \(\|G_{1,3}\|\).
\[\|G_{1,3}\|\] \[= \|\alpha(y)^{-2}\operatorname{diag}(z(y))K(x)^{\top}K(x) \operatorname{diag}(z(x))\] \[- \alpha(y)^{-2}\operatorname{diag}(z(y))K(y)^{\top}K(x) \operatorname{diag}(z(x))\|\] \[\leq \|K(x)^{\top}-K(y)^{\top}\|\cdot\|\alpha(y)^{-2}\operatorname{ diag}(z(y))K(x)\operatorname{diag}(z(x))\|\] \[\leq \|K(x)^{\top}-K(y)^{\top}\|\|\alpha(y)^{-2}|\cdot\|\operatorname{ diag}(z(y))\|\cdot\|K(x)\|\] \[\cdot\ \|\operatorname{diag}(z(x))\|\] \[\leq \sqrt{n}\cdot R_{f}\cdot\|x-y\|_{2}\cdot\beta^{-2}\cdot\|z(y)\|_ {2}\cdot\|z(x)\|_{2}\cdot 3\sqrt{n}R_{f,2}\] \[\leq 3n\cdot\beta^{-2}\cdot R_{f}\cdot R_{f,2}\cdot(2\sqrt{n} \cdot\exp(R^{2}))^{2}\cdot\|x-y\|_{2}\] \[\leq 12\beta^{-2}\cdot n^{2}\cdot\exp(2R^{2})\cdot(6\beta^{-2} \cdot n\cdot\exp(3R^{2}))\cdot(2\sqrt{n}\beta^{-1}\exp(R^{2}))\cdot\|x-y\|_{2}\] \[\leq \beta^{-5}\cdot n^{3.5}\cdot\exp(7R^{2})\cdot\|x-y\|_{2}\]
where the first step follows from the Definition of \(G_{1,3}\), the second step follows from **Part 4** of Fact 3.6, the third step follows from **Part 4, 6** of Fact 3.6, the fourth step follows from **Part 8** of Lemma 8.3, **Part 9** of Lemma 8.2, and **Part 3** of Fact 3.5, the fifth step follows from **Part 8** of Lemma 8.2, the sixth step follows from the definition of \(R_{f},R_{f,2}\), and the last step follows from the simple algebra.
The proof for \(G_{1,4}\) is similar to that for \(G_{1,3}\), and the proof for \(G_{1,5}\) is similar to that for \(G_{1,2}\), so we omit them.
Then, by combining all results we get
\[\|G_{1}(x)-G_{1}(y)\| = \|G_{1,1}+G_{1,2}+G_{1,3}+G_{1,4}+G_{1,5}\|\] \[\leq \beta^{-5}n^{3.5}\exp(7R^{2})\|x-y\|_{2}\] \[+ 2\beta^{-4}n^{2.5}\exp(6R^{2})\|x-y\|_{2}\] \[+ 2\beta^{-5}n^{3.5}\exp(7R^{2})\|x-y\|_{2}\] \[\leq 5\beta^{-5}n^{3.5}\exp(7R^{2})\cdot\|x-y\|_{2}\]
where the first step follows from the Definitions of \(G_{1,1},G_{1,2},G_{1,3},G_{1,4},G_{1,5}\), the second step follows from previous results, and the last step follows from simple algebra
Calculation: Step 2 Lipschitz for Matrix Function \(\alpha(x)^{-2}\cdot z(x)\cdot\widetilde{c}(x)^{\top}\cdot\operatorname{diag}(z (x))\)
In this section, we analyze the second step, namely the Lipschitz for the matrix function \(\alpha(x)^{-2}\cdot z(x)\cdot\widetilde{c}(x)^{\top}\cdot\operatorname{diag}(z (x))\).
**Lemma 8.6**.: _Let \(G_{2}(x)=-\alpha(x)^{-2}\cdot z(x)\cdot\widetilde{c}(x)^{\top}\cdot\operatorname{diag}(z(x))\)._
_Then we have_
\[\|G_{2}(x)-G_{2}(y)\|\leq 4\beta^{-5}n^{3}\exp(7R^{2})\cdot\|x-y\|_{2}\]
Proof.: We define
\[G_{2,1} := -(\alpha(x)^{-2}\cdot z(x)\cdot\widetilde{c}(x)^{\top}\cdot \operatorname{diag}(z(x))\] \[- \alpha(y)^{-2}\cdot z(x)\cdot\widetilde{c}(x)^{\top}\cdot \operatorname{diag}(z(x)))\] \[G_{2,2} := -(\alpha(y)^{-2}\cdot z(x)\cdot\widetilde{c}(x)^{\top}\cdot \operatorname{diag}(z(x))\] \[- \alpha(y)^{-2}\cdot z(y)\cdot\widetilde{c}(x)^{\top}\cdot \operatorname{diag}(z(x)))\] \[G_{2,3} := -(\alpha(y)^{-2}\cdot z(y)\cdot\widetilde{c}(x)^{\top}\cdot \operatorname{diag}(z(x))\] \[- \alpha(y)^{-2}\cdot z(y)\cdot\widetilde{c}(y)^{\top}\cdot \operatorname{diag}(z(x)))\] \[G_{2,4} := -(\alpha(y)^{-2}\cdot z(y)\cdot\widetilde{c}(y)^{\top}\cdot \operatorname{diag}(z(x))\] \[- \alpha(y)^{-2}\cdot z(y)\cdot\widetilde{c}(y)^{\top}\cdot \operatorname{diag}(z(y)))\]
Then let's prove \(G_{2,1}\) first
\[\|G_{2,1}\|\] \[= \|-(\alpha(x)^{-2}\cdot z(x)\cdot\widetilde{c}(x)^{\top}\cdot \operatorname{diag}(z(x))\] \[- \alpha(y)^{-2}\cdot z(x)\cdot\widetilde{c}(x)^{\top}\cdot \operatorname{diag}(z(x)))\|\] \[\leq |\alpha(x)^{-2}-\alpha(y)^{-2}|\cdot\|z(x)\cdot\widetilde{c}(x)^{ \top}\cdot\operatorname{diag}(z(x))\|\] \[\leq 4\beta^{-3}\sqrt{n}R\exp(R^{2})\cdot\|x-y\|_{2}\cdot\|z(x)\|_{2 }\cdot\|\widetilde{c}(x)^{\top}\|_{2}\] \[\cdot \|z(x)\|_{2}\] \[\leq 4\beta^{-3}\sqrt{n}R\exp(R^{2})\cdot\|x-y\|_{2}\cdot 4n\cdot \exp(2R^{2})\cdot 10\sqrt{n}R_{f,2}^{2}\] \[\leq 160\beta^{-3}n^{2}\cdot\exp(4R^{2})\cdot(2\sqrt{n}\beta^{-1} \exp(R^{2}))^{2}\|x-y\|_{2}\] \[\leq \beta^{-5}n^{3}\cdot\exp(7R^{2})\|x-y\|_{2}\]
where the first step follows from definition of \(G_{2,1}\), the second step follows from **Part 6** of Fact 3.6, the third step follows from **Part 10** of Lemma 8.3 and **Part 7** of Fact 3.6, the fourth step follows from **Part 8, 10** of Lemma 8.2, the fifth step follow from the definition of \(R_{f,2}\), and the last step follows from the simple algebra.
Let's prove \(G_{2,2}\)
\[\|G_{2,2}\|\] \[= \|-(\alpha(y)^{-2}\cdot z(x)\cdot\widetilde{c}(x)^{\top}\cdot \operatorname{diag}(z(x))\] \[- \alpha(y)^{-2}\cdot z(y)\cdot\widetilde{c}(x)^{\top}\cdot \operatorname{diag}(z(x)))\|\] \[\leq \|z(x)-z(y)\|_{2}\cdot\|\alpha(y)^{-2}\cdot\widetilde{c}(x)^{ \top}\cdot\operatorname{diag}(z(x))\|_{2}\] \[\leq R\exp(R^{2})\cdot\|x-y\|_{2}\cdot|\alpha(y)^{-2}|\cdot\| \widetilde{c}(x)^{\top}\|_{2}\cdot\|z(x)\|_{2}\] \[\leq R\exp(R^{2})\cdot\|x-y\|_{2}\cdot\beta^{-2}\cdot 10\sqrt{n}R_{f,2} ^{2}\cdot 2\sqrt{n}\exp(R^{2})\] \[\leq 20\beta^{-2}n\exp(3R^{2})\cdot(2\sqrt{n}\beta^{-1}\exp(R^{2}))^ {2}\|x-y\|_{2}\]
\[\leq\beta^{-4}n^{2}\exp(6R^{2})\|x-y\|_{2}\]
where the first step follows from the definition of \(G_{2,2}\), the second step follows from **Part 9** of Fact 3.5, the third step follows from **Part 7** of Lemma 8.3 and **Part 2, 4, and 7** of Fact 3.5, the fourth step follows from **Part 8, 9, and 10** of Lemma 8.2, the fifth step follows from the definition of \(R_{f,2}\) and Fact 3.7, and the last step follows from the simple algebra.
Let's prove \(G_{2,3}\)
\[\|G_{2,3}\| = \|-(\alpha(y)^{-2}\cdot z(y)\cdot\widetilde{c}(x)^{\top}\cdot \operatorname{diag}(z(x))\] \[- \alpha(y)^{-2}\cdot z(y)\cdot\widetilde{c}(y)^{\top}\cdot \operatorname{diag}(z(x)))\|\] \[\leq \|\widetilde{c}(x)^{\top}-\widetilde{c}(y)^{\top}\|_{2}\cdot\| \alpha(y)^{-2}\cdot z(y)\cdot\operatorname{diag}(z(x))\|_{2}\] \[\leq 5\sqrt{n}\cdot R_{f}\cdot R_{f,2}\cdot\|x-y\|_{2}\cdot|\alpha(y )^{-2}|\cdot\|z(y)\|_{2}\cdot\|z(x)\|_{2}\] \[\leq 5\sqrt{n}\cdot R_{f}\cdot R_{f,2}\cdot\beta^{-2}4n\exp(2R^{2}) \cdot\|x-y\|_{2}\] \[\leq \beta^{-5}n^{3}\exp(7R^{2})\|x-y\|_{2}\]
where the first step follows from the definition of \(G_{2,3}\), the second step follows from **Part 9** of Fact 3.5, the third step follows from **Part 11** of Lemma 8.3, the fourth step follows from **Part 8 and 9** of Lemma 8.2, and the last step follows from \(R_{f}=6\beta^{-2}\cdot n\cdot\exp(3R^{2})\) and \(R_{f,2}=2\sqrt{n}\beta^{-1}\exp(R^{2})\).
Let's prove \(G_{2,4}\)
\[\|G_{2,4}\| = \|-(\alpha(y)^{-2}\cdot z(y)\cdot\widetilde{c}(y)^{\top}\cdot \operatorname{diag}(z(x))\] \[- \alpha(y)^{-2}\cdot z(y)\cdot\widetilde{c}(y)^{\top}\cdot \operatorname{diag}(z(y)))\|\] \[\leq \|\alpha(y)^{-2}\cdot z(y)\cdot\widetilde{c}(y)^{\top}\|\|\operatorname {diag}(z(x))-\operatorname{diag}(z(y))\|\] \[\leq |\alpha(y)^{-2}|\cdot\|z(y)\|_{2}\cdot\|\widetilde{c}(x)^{\top} \|_{2}\cdot R\exp(R^{2})\cdot\|x-y\|_{2}\] \[\leq \beta^{-2}\cdot 2\sqrt{n}\cdot\exp(R^{2})\cdot 10\sqrt{n}R_{f,2}^ {2}\cdot\|x-y\|_{2}\] \[\leq \beta^{-4}n^{2}\exp(4R^{2})\|x-y\|_{2}\]
where the first step follows from the definition of \(G_{2,4}\), the second step follows from **Part 4** of Fact 3.6, the third step follows from **Part 7** of Fact 3.5 and **Part 9** of Lemma 8.3, the fourth step follows from **Part 8, 9, and 10** of Lemma 8.2, and the last step follows from \(R_{f,2}=2\sqrt{n}\beta^{-1}\exp(R^{2})\).
Finally, by combining the above results, we can get
\[\|G_{2}(x)-G_{2}(y)\| = \|G_{2,1}+G_{2,2}+G_{2,3}+G_{2,4}\|\] \[\leq \beta^{-5}n^{3}\cdot\exp(7R^{2})\|x-y\|_{2}\] \[+ \beta^{-4}n^{2}\exp(6R^{2})\|x-y\|_{2}\] \[+ \beta^{-5}n^{3}\exp(7R^{2})\|x-y\|_{2}\] \[+ \beta^{-4}n^{2}\exp(4R^{2})\|x-y\|_{2}\] \[\leq 4\beta^{-5}n^{3}\exp(7R^{2})\cdot\|x-y\|_{2}\]
where the first step follows from the definitions of \(G_{2,1},G_{2,2},G_{2,3},G_{2,4}\), the second step follows from previous results, and the last step follows from simple algebra.
### Calculation: Step 3 Lipschitz for Matrix Function \(\alpha(x)^{-2}\cdot\operatorname{diag}(z(x))\cdot\widetilde{c}(x)\cdot z(x)^{\top}\)
In this section, we analyze the third step, namely the Lipschitz property of the matrix function \(\alpha(x)^{-2}\cdot\operatorname{diag}(z(x))\cdot\widetilde{c}(x)\cdot z(x)^{\top}\).
**Lemma 8.7**.: _Let \(G_{3}(x)=\alpha(x)^{-2}\cdot\operatorname{diag}(z(x))\cdot\widetilde{c}(x)\cdot z(x)^{\top}\)._
_Then we have_
\[\|G_{3}(x)-G_{3}(y)\|\leq 4\beta^{-5}n^{3}\exp(7R^{2})\cdot\|x-y\|_{2}\]
Proof.: The proof of \(\|G_{3}(x)-G_{3}(y)\|\) is similar to \(\|G_{2}(x)-G_{2}(y)\|\), so we omit it here.
### Calculation: Step 4 Lipschitz for Matrix Function \(\alpha(x)^{-1}\cdot\operatorname{diag}(\widetilde{c}(x)\circ u_{2}(x))\)
In this section, we analyze the fourth step, namely the Lipschitz property of the matrix function \(\alpha(x)^{-1}\cdot\operatorname{diag}(\widetilde{c}(x)\circ u_{2}(x))\).
**Lemma 8.8**.: _Let \(G_{4}(x)=\alpha(x)^{-1}\cdot\operatorname{diag}(\widetilde{c}(x)\circ u_{2}(x))\)._
_Then we have_
\[\|G_{4}(x)-G_{4}(y)\|\leq 2\beta^{-4}n^{3}\cdot\exp(7R^{2})\|x-y\|_{2}\]
Proof.: We define
\[G_{4,1} :=\alpha(x)^{-1}\cdot\operatorname{diag}(\widetilde{c}(x)\circ u _{2}(x))\] \[-\alpha(y)^{-1}\cdot\operatorname{diag}(\widetilde{c}(x)\circ u _{2}(x))\] \[G_{4,2} :=\alpha(y)^{-1}\cdot\operatorname{diag}(\widetilde{c}(x)\circ u _{2}(x))\] \[-\alpha(y)^{-1}\cdot\operatorname{diag}(\widetilde{c}(y)\circ u _{2}(y))\]
Let's prove \(G_{4,1}\) first,
\[\|G_{4,1}\|\] \[= \|\alpha(x)^{-1}\cdot\operatorname{diag}(\widetilde{c}(x)\circ u _{2}(x))-\alpha(y)^{-1}\cdot\operatorname{diag}(\widetilde{c}(x)\circ u_{2}( x))\|\] \[\leq |\alpha(x)^{-1}-\alpha(y)^{-1}|\cdot\|\operatorname{diag}( \widetilde{c}(x))\|\cdot\|\operatorname{diag}(u_{2}(x))\|\] \[\leq 4\beta^{-2}\cdot R\cdot n\exp(2R^{2})\cdot\|x-y\|_{2}\cdot\| \widetilde{c}(x)\|_{2}\cdot\|u_{2}(x)\|_{2}\] \[\leq 4\beta^{-2}\cdot R\cdot n\exp(2R^{2})\cdot\|x-y\|_{2}\cdot 1 0\sqrt{n}R_{f,2}^{2}\sqrt{n}\exp(R^{2})\] \[\leq \beta^{-4}n^{3}\exp(7R^{2})\|x-y\|_{2}\]
where the first step follows from the definition of \(G_{4,1}\), the second step follows from **Part 6** of Fact 3.6, the third step follows from Fact 3.3, the fourth step follows from **Part 4** of Lemma 8.3 and **Part 2 and 4** of Fact 3.5, the fifth step follows from **Part 1, 10** of Lemma 8.2, and the last step follows from Fact 3.7 and \(R_{f,2}=2\sqrt{n}\beta^{-1}\exp(R^{2})\).
Then let's prove \(G_{4,2}\)
\[\|G_{4,2}\|\] \[= \|\alpha(y)^{-1}\cdot\operatorname{diag}(\widetilde{c}(x)\circ u _{2}(x))-\alpha(y)^{-1}\cdot\operatorname{diag}(\widetilde{c}(y)\circ u_{2}( y))\|\] \[\leq \|\operatorname{diag}(\widetilde{c}(x)\circ u_{2}(x))- \operatorname{diag}(\widetilde{c}(y)\circ u_{2}(y))\||\alpha(y)^{-1}|\] \[\leq 10\beta^{-1}n\cdot R_{f}\cdot R_{f,2}\cdot\exp(2R^{2})\|x-y\|_{2}\] \[\leq \beta^{-4}n^{2.5}\cdot\exp(7R^{2})\|x-y\|_{2}\]
where the first step follows from the definition of \(G_{4,2}\), the second step follows from **Part 7** of Fact 3.5, the third step follows from **Part 12** of Lemma 8.3 and **Part 4** of Lemma 8.2, and the last step follows from \(R_{f}=6\beta^{-2}\cdot n\cdot\exp(3R^{2})\) and \(R_{f,2}=2\sqrt{n}\beta^{-1}\exp(R^{2})\).
By combining the above results, we can get
\[\|G_{4}(x)-G_{4}(y)\|\] \[= \|G_{4,1}+G_{4,2}\|\] \[\leq \beta^{-4}n^{3}\exp(7R^{2})\|x-y\|_{2}+\beta^{-4}n^{2.5}\cdot\exp (7R^{2})\|x-y\|_{2}\] \[\leq 2\beta^{-4}n^{3}\cdot\exp(7R^{2})\|x-y\|_{2}\]
where the first step follows from the definitions of \(G_{4,1},G_{4,2}\), the second step follows from previous results, and the last step follows from simple algebra.
## 9 Main Result
Now, we present our main theorem and algorithm.
```
1:procedureIterativeSoftResidualRegression(\(A\in\mathbb{R}^{n\times d},b\in\mathbb{R}^{n},w\in\mathbb{R}^{n},\epsilon,\delta\))\(\triangleright\)Theorem 9.1
2: Choose \(x_{0}\) (suppose it satisfies Definition A.4)
3: We use \(T\leftarrow\log(\|x_{0}-x^{*}\|_{2}/\epsilon)\) to denote the number of iterations.
4:for\(t=0\to T\)do
5:\(D\gets B_{\text{diag}}(x_{t})+\text{diag}(w\circ w)\)
6:\(\widetilde{D}\leftarrow\textsc{SubSample}(D,A,\epsilon_{1}=\Theta(1),\delta_ {1}=\delta/T)\)\(\triangleright\)Lemma A.8
7:\(g\gets A^{\top}(f(x_{t})\langle c(x_{t}),f(x_{t})\rangle+\text{diag}(f(x_ {t}))c(x_{t}))\)
8:\(\widetilde{H}\gets A^{\top}DA\)
9:\(x_{t+1}\gets x_{t}+\widetilde{H}^{-1}g\)
10:endfor
11:\(\widetilde{x}\gets x_{T+1}\)
12:return\(\widetilde{x}\)
13:endprocedure
```
**Algorithm 1** Main algorithm for solving the Soft-Residual Regression problem in Definition 1.4.
**Theorem 9.1** (Main theorem).: _Let \(A\) be an arbitrary matrix in \(\mathbb{R}^{n\times d}\). Let \(b\) and \(w\) be arbitrary vectors in \(\mathbb{R}^{n}\). Let \(f(x)=\langle\exp(Ax)+Ax,\mathbf{1}_{n}\rangle^{-1}(\exp(Ax)+Ax)\in\mathbb{R}^{n}\) be defined as in Definition 3.1. Let \(x^{*}\) be the optimal solution of_
\[\|\langle\exp(Ax)+Ax,\mathbf{1}_{n}\rangle^{-1}(\exp(Ax)+Ax)-b\|_{2}^{2},\]
_for \(g(x^{*})=\nabla f(x^{*})=\mathbf{0}_{d}\) and \(\|x^{*}\|_{2}\leq R\), with \(R>4\). Suppose \(\|A\|\leq R\), each entry of \(b\) is greater than or equal to \(0\), \(\|b\|_{1}\leq 1\), \(w_{i}^{2}\geq 100+l/\sigma_{\min}(A)^{2}\) for all \(i\in[n]\), and \(M=n^{1.5}\exp(30R^{2})\)._
_Let \(x_{0}\) denote an initial point for which it holds that \(M\|x_{0}-x^{*}\|_{2}\leq 0.1l\)._
_Then for all accuracy parameter \(\epsilon\in(0,0.1)\) and failure probability \(\delta\in(0,0.1)\), there exists a randomized algorithm (Algorithm 1) such that, with probability at least \(1-\delta\), it runs \(T=\log(\|x_{0}-x^{*}\|_{2}/\epsilon)\) iterations and outputs a vector \(\widetilde{x}\in\mathbb{R}^{d}\) such that_
\[\|\widetilde{x}-x^{*}\|_{2}\leq\epsilon,\]
_and the time cost per iteration is_
\[O((\mathrm{nnz}(A)+d^{\omega})\cdot\mathrm{poly}(\log(n/\delta))).\]
_Here \(\omega\) denotes the exponent of matrix multiplication. Currently \(\omega\approx 2.373\)[20, 12, 14]._
Proof.: The proof follows from Lemma A.8, Lemma A.10 and Lemma A.11.
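As a reference for experimentation, the following is a minimal dense Python sketch of the loop in Algorithm 1. It is not the algorithm analyzed in Theorem 9.1: the gradient is obtained by central finite differences instead of the closed form on line 7, the data-dependent block \(B_{\operatorname{diag}}(x_{t})\) of the Hessian is dropped (only the \(\operatorname{diag}(w\circ w)\) term is kept), and no subsampling is performed, so the stated per-iteration cost does not apply. All variable names are illustrative.

```python
import numpy as np

def f(A, x):
    # f(x) = <exp(Ax) + Ax, 1_n>^{-1} (exp(Ax) + Ax), as in Theorem 9.1
    z = np.exp(A @ x) + A @ x
    return z / z.sum()

def loss(A, b, x):
    r = f(A, x) - b
    return 0.5 * float(r @ r)

def num_grad(A, b, x, eps=1e-6):
    # central finite differences; a stand-in for the closed-form gradient on line 7 of Algorithm 1
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (loss(A, b, x + e) - loss(A, b, x - e)) / (2 * eps)
    return g

def newton_like(A, b, w, x0, T=20):
    x = x0.astype(float).copy()
    for _ in range(T):
        D = np.diag(w * w)              # placeholder for B_diag(x_t) + diag(w o w)
        H = A.T @ D @ A                 # PSD Hessian surrogate; the w^2 term keeps it well conditioned
        g = num_grad(A, b, x)
        # descent-style Newton step; Algorithm 1 writes x_{t+1} <- x_t + H^{-1} g
        # under its own sign convention for g
        x = x - np.linalg.solve(H, g)
    return x

# tiny smoke test on random data
rng = np.random.default_rng(0)
A = 0.1 * rng.normal(size=(20, 5))
b = np.abs(rng.normal(size=20))
b /= b.sum()
w = np.full(20, 10.0)
x_hat = newton_like(A, b, w, np.zeros(5))
print(loss(A, b, np.zeros(5)), "->", loss(A, b, x_hat))
```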
## 10 Conclusion
In this paper, we propose a unified scheme that combines softmax regression and ResNet by analyzing the regression problem
\[\|\langle\exp(Ax)+Ax,\mathbf{1}_{n}\rangle^{-1}(\exp(Ax)+Ax)-b\|_{2},\]
where \(A\in\mathbb{R}^{n\times d}\) and \(b\in\mathbb{R}^{n}\). The softmax regression focuses on analyzing \(\exp(Ax)\), and the ResNet focuses on analyzing \(F(x)+x\). We combine the two and study \(\exp(Ax)+Ax\).
Specifically, we formally define this regression problem. We show that the Hessian matrix is positive semidefinite with the loss function \(L(x)\). We analyze the Lipschitz properties and approximate Newton's method. Our unified scheme builds a connection between two previously thought unrelated areas in machine learning, providing new insight into the loss landscape and optimization for the emerging over-parametrized neural networks.
In the future, researchers may run experiments with the proposed unified scheme on large datasets to test our theoretical analysis. Moreover, extending the current analysis to multi-layer networks is another promising direction. We believe that our unified perspective on softmax regression and ResNet will inspire more discoveries at the intersection of the theory and practice of deep learning.
|
2307.00152 | LensLeech: On-Lens Interaction for Arbitrary Camera Devices | Cameras provide a vast amount of information at high rates and are part of
many specialized or general-purpose devices. This versatility makes them
suitable for many interaction scenarios, yet they are constrained by geometry
and require objects to keep a minimum distance for focusing. We present the
LensLeech, a soft silicone cylinder that can be placed directly on or above
lenses. The clear body itself acts as a lens to focus a marker pattern from its
surface into the camera it sits on. This allows us to detect rotation,
translation, and deformation-based gestures such as pressing or squeezing the
soft silicone. We discuss design requirements, describe fabrication processes,
and report on the limitations of such on-lens widgets. To demonstrate the
versatility of LensLeeches, we built prototypes to show application examples
for wearable cameras, smartphones, and interchangeable-lens cameras, extending
existing devices by providing both optical input and output for new
functionality. | Christopher Getschmann, Florian Echtler | 2023-06-30T22:04:06Z | http://arxiv.org/abs/2307.00152v1 | # LensLeech: On-Lens Interaction for Arbitrary Camera Devices
###### Abstract.
Cameras provide a vast amount of information at high rates and are part of many specialized or general-purpose devices. This versatility makes them suitable for many interaction scenarios, yet they are constrained by geometry and require objects to keep a minimum distance for focusing. We present the LensLeech, a soft silicone cylinder that can be placed directly on or above lenses. The clear body itself acts as a lens to focus a marker pattern from its surface into the camera it sits on. This allows us to detect rotation, translation, and deformation-based gestures such as pressing or squeezing the soft silicone. We discuss design requirements, describe fabrication processes, and report on the limitations of such on-lens widgets. To demonstrate the versatility of LensLeeches, we built prototypes to show application examples for wearable cameras, smartphones, and interchangeable-lens cameras, extending existing devices by providing both optical input and output for new functionality.
Mobile Interfaces, Elastomer Sensors, Optical Widgets
cameras touch-sensitive, or introduce novel optical attachments for smartphones.
In summary, we contribute:
* a tangible deformation sensor to create buttons, knobs, and d-pads, combining soft body, optical elements, and sensing pattern in a single object
* discussions on the design and fabrication of on-lens widgets
* an image processing pipeline for analyzing position, rotation, and deformation
* application examples for integration with new and existing devices
Many research approaches aim at providing novel functionality with new or existing sensors for future devices, often built on the assumption or requirement of a possible miniaturization and integration of this external sensing hardware into a new device with a new form factor. However, we explicitly aim at retrofitting existing and well-proven interaction techniques to sensors that make them available both to legacy devices today as well as new ones in the future. This could help to extend the lifetime of devices in circulation by improving their usability and reducing incentives to update to newer hardware prematurely. A user study (with regard to the input capabilities of the widgets) is not presented as these input modalities are well understood and can be directly applied to this new form factor.
The remainder of this paper is organized as follows: related work is discussed with a broad overview of vision-based elastomer sensors and on-lens/around-lens interaction techniques, then we explain our concept of soft silicone attachments for on-lens interaction sensing. The image processing pipeline and fabrication procedure are summarized subsequently. We build upon that by presenting a set of scenarios and prototypes created to show real-world applications. Finally, we discuss the limitations of using soft silicone attachments for on-lens interaction and conclude with specific directions for future work.
## 2. Related Work
Relevant to the presented work are both optical deformation sensors primarily developed for robotic applications as well as human-computer interaction techniques and prototypes that gather input from the space on and around camera lenses.
### Optical Elastomer Sensors
Optical deformation sensing of soft materials is performed either by measuring light altered by the surface or by detecting displacement of high-contrast markers, on the surface or encapsulated in the material. Surface deformation measurements have been proposed based on total internal reflection (Krause et al., 2015), Lambertian reflection (Krause et al., 2015; Krause et al., 2016; Krause et al., 2017; Krause et al., 2018; Krause et al., 2018) and polarization (Krause et al., 2018). The most common type of sensor, the _GelSight_ family, makes use of Lambertian reflection by coating the clear elastomer with a reflective membrane. Multispectral illumination from below allows to derive deformation depth and thus a detailed 2.5d geometry of the reflective surface. For marker-based sensing, high-contrast points are painted on the clear surface of the elastomer (Krause et al., 2016; Krause et al., 2018; Krause et al., 2018), on the interior of an opaque hull for TacTip sensors (Krause et al., 2018; Krause et al., 2018) or colored balls are directly encapsulated in the soft material (Krause et al., 2018; Krause et al., 2018; Krause et al., 2018).
These sensors have been used extensively for tactile sensing in robotic applications, mounting the sensor on the end effector to measure gripping force and detect slipping. For this, the sensor assembly is designed as a monolithic unit consisting of camera sensor, lens, and elastomer block. While mirrors (Krause et al., 2016; Krause et al., 2018) and fish-eye lenses (Krause et al., 2018) have been used to shorten optical paths to create more compact grippers these sensors are still of considerable size and rely on a tight integration of all components, making them incompatible with arbitrary cameras. Modular approaches offer only exchangeable elastomers while still using a specialized camera (Krause et al., 2018). Additionally, all gel-based sensors with the exception of the sensor by Obinata et al. (Obinata et al., 2018) and _Fingervision_(O'Connor et al., 2018) block environmental light and require white, RGB or ultraviolet illumination by integrated LEDs. For a detailed overview refer to the reviews by Shimonomura (Shimonura, 2018) and Abad et al. (Abad et al., 2018).
In the domain of human-computer interaction, elastomer sensors have been used for interactive surfaces (Abad et al., 2018), clay-like projection displays (Krause et al., 2018) and tangibles on tabletops (Krause et al., 2018; Krause et al., 2018) to support novel interaction techniques.
### On-Lens/Around-Lens Interaction
Placing a fingertip directly on a smartphone camera lens has been proposed as an interaction technique in _LensGestures_(O'Connor et al., 2018). The unfocused environmental light passing through a finger's tissue is used to approximate finger positions and recognize gestures. _CamTrackPoint_(O'Connor et al., 2018) improves on this concept by providing tactile feedback. A spring-actuated plastic ring is integrated with a smartphone case directly over the lens for the finger to rest on. The thin ring blocks light with a sharp transition to black and provides a higher precision compared to tracking the blurred finger. A proof-of-concept for more complex on-lens input techniques is presented by Watanabe et al. (Watanabe et al., 2018): soft and optically clear toys with a reflective surface coating are placed on the camera while a neural network is trained to recognize deformation/gestures from internal reflections observed through a hole in the bottom. This represents the simplest and most basic on-lens widget: unfocused, untagged, unpowered and depending on natural illumination, but very easy to manufacture and not obstructing the camera when not in use.
Interaction in the space around lenses requires mirrors to both shorten the optical path and redirect light. _Clipwidgets_(O'Connor et al., 2018) makes use of a conical mirror in a bulky smartphone case to read the state of physical widgets such as buttons and sliders. Similar approaches have been presented for back-of-device interaction concepts with smartphones (Krause et al., 2018; Krause et al., 2018; Krause et al., 2018). Without relying on physical input objects _Handsee_(O'Connor et al., 2018) utilizes a prism to track hands touching and floating above a smartphone display while _Surroundsee_(O'Connor et al., 2018) tracks objects in the whole room with a circular 360-degree mirror above the smartphone camera.
Similar techniques have been used without mirrors or lenses in the context of tangibles with silicone feet for pressure sensing (Krause et al., 2018), deformation sensing on small wearables (O'Connor et al., 2018), and surface position sensing with fibers (O'Connor et al., 2018). Other work that is related to the presented concept is _Bokode_(Krause et al., 2018), a marker made of a lenslet and microfilm which magnifies a grid of 2D barcodes into the defocused lens of a camera and _Sauron_(Krause et al., 2018), a design tool to integrate cameras in hollow objects that read the state of mechanical input elements.
While physical input similar to the LensLeech can be achieved on smartphones in particular by simply redirecting electrodes of the capacitive touchscreen (Leses, 2012; Wang et al., 2013; Wang et al., 2014), the LensLeech is not limited to touchscreens and can be applied across a range of devices, see the application examples in section 5.
Since on-lens interaction concepts such as _CamTrackPoint_ and _LensGestures_ make use of unfocused light, they are limited in their expressiveness due to the low amount of information available. While they are suitable for smartphones and their scratch-resistant camera assemblies, these concepts translate poorly to interchangeable-lens cameras or action cams with lens front elements often using coatings sensitive to scratches or prints from fingertips. This is one of the fundamental issues we intend to address with our generalizable approach.
## 3. The Design of on-lens Widgets
We propose that any physical attachment enabling on or around-lens interaction with both existing and future devices should--ideally--adhere to these basic design considerations:
* **safe** to use near or on optical components and providing credible reassurance to the user about this. This is a prerequisite for user acceptance.
* **non-invasive**, requiring no hardware modifications of the host device or its camera. This ensures compatibility with existing devices that benefit most from optical attachments.
* **passive** and unpowered, requiring only ambient illumination (if possible) to reduce size and complexity.
* **universal**; compatible with arbitrary camera/lens combinations across a wide range of device types.
Elastomer sensors in general fulfill the first and most important of these requirements by virtue of their nature: they are soft. However, existing sensors fall short in most or all other points.
As discussed, these sensors combine camera and elastomer in a permanent assembly with a fixed position and rotation, limiting the way objects can interact with them. Additionally, they make use of known sensor and lens combinations to allow camera calibration and optimizations of sensor geometry (for example by backprojecting through a calibrated lens to find optimal marker point placements). This makes these sensors more precise and reliable but prevents them from being used with arbitrary lenses and cameras. Finally, most sensors require constant internal illumination. Reflective membranes (GelSight) and rubber skins (TacTip) are blocking ambient light to avoid interference. Only sensors relying solely on point patterns (Wang et al., 2013; Wang et al., 2014) can tolerate ambient illumination.
We propose an elastomer sensor design suitable for interaction sensing. The LensLeech is a tangible soft input device that resembles the gel part of an elastomer sensor. Our all-silicone design combines a lens, compliant body, and a colored marker pattern in a single unit (see fig. 2). This addresses all design requirements at the cost of reduced reliability and precision compared to elastomer sensor assemblies that are designed to measure precise gripping forces on robot actuators.
The small form factor of the LensLeech attachment (33mm diameter, 25.5mm height) makes it easy to grip it with two fingers and place it on a camera. By using the lower surface of the clear silicone body as a lens, light reflected by the deformation-sensing pattern on the surface is collimated and can be focused on the sensor at any distance from the camera. This makes it possible to place the silicone foot of the LensLeech directly on or slightly above the front element of a wide range of lenses. The combined optical system of sensor, camera lens, silicone lens, and deformation sensing pattern is limited by the field of view of the camera, its entrance pupil, and the distance to the silicone attachment. This is discussed in more detail in section 6 (Limitations).
#### Marker Pattern
When choosing a marker pattern for deformation sensing, we need to take into account that a positive lens required to move the focal point to the surface of the silicone body will introduce a strong magnification effect. This amplifies any defects or irregularities in the pattern and requires the fabrication of very small features. The most precise and reliable method is the deposition of single droplets of silicone paint. This makes a point pattern the preferred choice.
A point pattern is used by other optical tactile sensors such as TacTip and GelSight as well, however, these sensors are fixed assemblies that can compute deviations from a static reference frame. This does not apply to a silicone sensor that can be moved and rotated freely, thus a method to align the currently visible region of interest with the overall marker grid is required.
A common method for identifying sections of point grids is the use of two-dimensional DeBruijn sequences. These are sequences that contain every subsequence of a defined size at most once. Printed as microdots on paper, these have been used for position-tracking with digital pens (Dez et al., 2012) (encoding bits as a displacement from a regular grid) and tangibles (Dez et al., 2013) (encoding bits as black and white). However, unlike a rigid piece of paper which allows displacement coding of dots, the soft silicone is easily bent or compressed and requires coding by color or contrast.
Figure 2. Illustrative ray diagram of the combined optical system. The field of view inside the silicone (dashed line) depends on the field of view of the camera, the position and diameter of the entrance pupil, as well as the distance between silicone and camera lens.
While a hexagonal arrangement of points offers the densest packing, it is incompatible with a 2D-DeBruijn sequence. Hence, we computed a DeBruijn-like pattern with 7-point hexagons instead of 3x3 matrices using a brute-force approach. Each overlapping hexagonal sliding window in the pattern is unique in the given rotation (see fig. 3). An optimal pattern contains only hexagons that are unique in all rotations, which simplifies pattern matching during image processing; however, this requires a minimum of three colors at a suitable pattern size. Higher robustness to adverse lighting conditions and a less error-prone fabrication with only two different paints is the reason why a less-than-optimal two-color pattern is preferable.
Our hexagon patterns consist of 127 points and require 91 unique sliding windows. This is sufficient to cover the visible region of the silicone attachment even when placed on a wide-angle camera. While other sensors such as TacTip and GelSight Wedge use the same or similar number of points, only a subset of points is visible at a time for our application due to the magnification of the silicone lens. If a unique center hexagon is enforced during pattern generation up to 28 patterns can be discerned. This allows to map different silicone attachments to specific input modalities.
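The brute-force search is not spelled out above, but the pattern constraints pin down the geometry: 127 points form a hexagon of radius 6 in axial hex coordinates, and its 91 interior cells each carry a full 7-point window. The sketch below is one plausible randomized local search under these assumptions (two colors, uniqueness only in the given orientation); the search strategy, iteration budget, and convergence behavior are illustrative and not taken from the actual implementation.

```python
import random

NEIGHBORS = [(1, 0), (1, -1), (0, -1), (-1, 0), (-1, 1), (0, 1)]  # axial hex-grid offsets

def hex_disk(radius):
    # all axial cells within a hexagon of the given radius (127 cells for radius 6)
    return [(q, r) for q in range(-radius, radius + 1)
                   for r in range(max(-radius, -q - radius), min(radius, -q + radius) + 1)]

def window(colors, cell):
    # 7-point sliding window: center color followed by the 6 neighbor colors in a fixed order
    q, r = cell
    return (colors[cell],) + tuple(colors[(q + dq, r + dr)] for dq, dr in NEIGHBORS)

def collisions(colors, inner):
    seen, coll = {}, 0
    for cell in inner:
        w = window(colors, cell)
        coll += seen.get(w, 0)          # every pair sharing a window counts as one collision
        seen[w] = seen.get(w, 0) + 1
    return coll

def search_pattern(radius=6, n_colors=2, iters=500_000, seed=0):
    rng = random.Random(seed)
    cells = hex_disk(radius)
    inner = hex_disk(radius - 1)        # 91 full 7-point windows for radius 6
    colors = {c: rng.randrange(n_colors) for c in cells}
    best = collisions(colors, inner)
    for _ in range(iters):
        if best == 0:
            return colors               # every sliding window is unique in this orientation
        cell = rng.choice(cells)
        old = colors[cell]
        colors[cell] = rng.randrange(n_colors)
        cost = collisions(colors, inner)
        if cost <= best:
            best = cost                 # accept improving or sideways moves, otherwise revert
        else:
            colors[cell] = old
    return None                         # no collision-free assignment found within the budget
```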
#### Image Processing
The DeBruijn-like point pattern is color-coded in blue and green to offer a high contrast across the range of human skin tones. Conveniently, fingertips show even less variation, since skin tones differ less on the palm side of the hand. The detection and classification of the points is the first step of the image processing pipeline. Background removal is performed by thresholding in the HSV color space. The diffuse top surface of the silicone body improves this step considerably without blocking any ambient light. From these point candidates, colors are extracted and classified by thresholding the two classes in the hue component of the HSV colorspace using Otsu's method (Otsu, 2010). This is robust to errors in white balance caused by tinted ambient illumination or fingertips and computationally less expensive than other classification methods. Robustness is especially important since most auto white balance algorithms overshoot for several dozen frames when a finger is placed on the point pattern.
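A minimal OpenCV sketch of this segmentation and two-class hue split could look as follows; the saturation/value limits and the use of connected components for blob extraction are illustrative choices, not the values used in the actual pipeline.

```python
import cv2
import numpy as np

def classify_points(bgr, sat_min=60, val_min=40):
    """Find candidate dots and split them into two color classes (blue vs. green)."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    hue = hsv[:, :, 0]
    # Background removal: keep only sufficiently saturated and bright pixels.
    mask = cv2.inRange(hsv, (0, sat_min, val_min), (180, 255, 255))
    # One connected component per printed dot (a real pipeline would also filter by blob area).
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
    if n <= 1:
        return np.empty((0, 2)), np.empty(0, dtype=int)
    hues = np.array([np.median(hue[labels == i]) for i in range(1, n)], dtype=np.uint8)
    # Otsu's method on the hue values separates the two classes without fixed color thresholds.
    thr, _ = cv2.threshold(hues.reshape(-1, 1), 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    classes = (hues > thr).astype(int)   # 0 = one pattern color, 1 = the other
    return centroids[1:], classes
```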
For pattern matching each detected point is grouped with its 6 closest neighbors and all 6 rotation variants are checked against a lookup table. Correct rotation is assumed when the highest number of matches is found between neighboring sliding windows in the camera image and ground truth pattern. Given an optimal pattern, only one rotation would result in a match (in the absence of any errors), yet this computationally-expensive step is necessary to limit the pattern to only two colors. This makes the pipeline's processing speed highly dependent on the number of detected points. On a laptop computer (2.3 GHz 8-Core Intel i9) 34 frames per second are processed when 39 points are visible (30% of the pattern) and 21 FPS when 69 points (70%) are visible. Smartphone performance numbers are not reported since the Android implementation performs segmentation on-device but outsources the pattern-matching step of the pipeline to a server.
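A sketch of this matching step, with illustrative data structures: each detected dot is grouped with its six nearest neighbors, the neighbors are ordered by angle so that a rotation of the pattern becomes a cyclic shift, and all six shifts are scored against a precomputed lookup table of ground-truth windows.

```python
import numpy as np

def observed_windows(points, classes):
    # Build a 7-symbol window per detected dot: its own color class followed by the
    # classes of its 6 nearest neighbors, ordered counter-clockwise by angle.
    windows = []
    for i, p in enumerate(points):
        dist = np.linalg.norm(points - p, axis=1)
        nbr = np.argsort(dist)[1:7]                  # six closest neighbors
        diff = points[nbr] - p
        nbr = nbr[np.argsort(np.arctan2(diff[:, 1], diff[:, 0]))]
        windows.append((int(classes[i]),) + tuple(int(classes[j]) for j in nbr))
    return windows

def best_rotation(windows, lookup):
    # Score all six cyclic rotations of the observed windows against the ground-truth
    # lookup table (a set of 7-tuples) and return the rotation index with the most hits.
    hits = [0] * 6
    for center, *ring in windows:
        for k in range(6):
            if (center,) + tuple(ring[k:] + ring[:k]) in lookup:
                hits[k] += 1
    return int(np.argmax(hits)), hits
```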
The input gestures are derived directly from the matched point pattern (see fig. 4). A press on the top is recognized by detecting locally increased distances between neighboring points, pushing sideways by computing the centroid of all detected points, rotation by Kabsch's algorithm (Kabsch, 2013), and squeeze by a global change of point distances along the squeeze axis. The gesture detection relies on algorithm implementations in SciPy (Kabsch, 2013), while processing of image data is done using OpenCV (Deng et al., 2015).
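The geometric estimates themselves are compact; a numpy sketch of the rotation (Kabsch), push (centroid shift), and squeeze estimates is shown below. The press detector (a local increase of neighbor distances) and all thresholds are omitted, and the squeeze measure used here (anisotropy of the point spread) is only one possible way to capture the axis-dependent change in point distances described above.

```python
import numpy as np

def kabsch_rotation_2d(p_ref, p_cur):
    # In-plane rotation angle (degrees) between two matched (N, 2) point sets in the same order.
    a = p_ref - p_ref.mean(axis=0)
    b = p_cur - p_cur.mean(axis=0)
    u, _, vt = np.linalg.svd(a.T @ b)             # SVD of the 2x2 cross-covariance
    d = np.sign(np.linalg.det(vt.T @ u.T))        # guard against reflections
    r = vt.T @ np.diag([1.0, d]) @ u.T            # optimal rotation matrix
    return np.degrees(np.arctan2(r[1, 0], r[0, 0]))

def push_vector(p_ref, p_cur):
    # Lateral push: shift of the centroid of the currently visible points.
    return p_cur.mean(axis=0) - p_ref.mean(axis=0)

def squeeze_ratio(p_cur):
    # Squeeze: ratio of point spread along the two principal axes (about 1.0 when undeformed).
    cov = np.cov((p_cur - p_cur.mean(axis=0)).T)
    eig = np.linalg.eigvalsh(cov)                 # ascending eigenvalues
    return float(np.sqrt(eig[1] / (eig[0] + 1e-12)))
```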
Before discussing how these input types inform examples for real-world application, the fabrication process of both silicone body and color-coding pattern is described briefly.
## 4. Fabrication
The clear silicone body is created by mixing, degassing, and pouring liquid silicone (Trollfactory Type 19) into a mold and letting it cure. The mold itself requires two precisely manufactured features. The lower cavity is an optical surface (sufficiently smooth to refract light for imaging applications) to create the spherical convex lens of 7.5mm radius for focusing. The curved top surface of 30mm radius diffuses light. Both surfaces are CNC-milled from acrylic before being ground and polished. For this, the spherical surface of the acrylic part is coated with lapping paste and pressed against a rotating steel ball of matching radius (widely available as high-precision replacement parts for large ball bearings). After polishing the acrylic plates are fastened to a 3d-printed center part to complete the mold (see fig. 5). Once cured and de-molded the point pattern is applied to the clear silicone body with two 3d-stencils milled from acrylic (one per color). The stencil is fabricated by drilling a duplicate of the mold top part with a circuit board drill (1.0mm) to create channels. The soft body is pressed into the matching cavity of the stencil from below and the pigmented silicone can be poured on the channels (see fig. 6) before removing the remaining air from the channels in a vacuum chamber. The stencil guarantees correct placement and uniform point size. Only silicone itself bonds reliably to cured silicone parts, thus uncured silicone mixed with color pigments is the most suitable paint. The main issues in this process are ensuring that the high-viscosity silicone reliably fills the channels and avoiding oversaturation of the silicone with pigments in a silicone oil solution, which may inhibit the curing process. A mixture (by weight) of Smooth-On's Psycho Paint silicone with 15 percent dry UV-reactive pigment powder and 25 percent solvent (toluene) to lower the viscosity worked best. After 24 hours the pigmented silicone binds reliably to the optically-clear silicone body, creating a point pattern on the surface that is flexible and wear-resistant.
Figure 4. The four types of input: a) Pressing on the silicone body b) lateral pushing in any direction c) rotation on the optical axis d) squeezing the silicone.
Figure 3. Each hexagonal sliding window appears only once. Some sliding windows are unique in all six orientations, some can be found in multiple locations when rotated.
The choice of lens curvature during mold production is a trade-off. A mold for a lens with a stronger curvature is more demanding in fabrication but the shorter focal length allows to reduce the height of the silicone body. At the same time, it decreases the depth of field and the field of view, allowing to track a lower number of points in the pattern. Additionally, interacting with the LensLeech deforms both top surface and lens. A strong press on the top will reduce the height of the body by several millimeters depending on the hardness of the silicone. If the height of the silicone body does not match the focal length the light will not exit the system collimated, resulting in a pattern that would be out of focus. In reality, this is rarely an issue since autofocus cameras can compensate for this, fixed-focus cameras often have a sufficient depth of field, and the image processing pipeline is robust to low levels of blurring.
A paraxial approximation of the focal length can be obtained using the lensmaker's equation. Only the refraction of the first surface is relevant for the LensLeech geometry, so a thin, plano-convex lens in air (\(d=0,R_{2}=\infty\)) can be assumed:
\[\frac{1}{f}=(n-1)\left(\frac{1}{R_{1}}-\frac{1}{R_{2}}+\frac{(n-1)d}{nR_{1}R_ {2}}\right)=\frac{n}{R_{1}}-\frac{1}{R_{1}}\]
We have chosen a radius of \(R_{1}=7.5mm\) for the lens surface and assume that Trollfactory Type 19 has a refractive index of \(n=1.41\), similar to other platinum-cure silicones. This would result in a total focal length of 18.29mm. To account for the deformation during interaction and the strong curvature of the lens we increase the height of the silicone body by a factor of 1.3 to 25mm. While the LensLeech should be able to touch the glass surface of the camera lens, the silicone lens surface in the center requires an air gap to refract light. Thus we extend the foot around the lens by 1.0mm to account for deformations (cross section can be seen in fig. 2). This provided the most reliable results during testing for high forces when pressing and squeezing while still keeping the point pattern well within the depth of field of most cameras when not deformed.
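These numbers follow directly from the simplified lensmaker's equation; a few lines of Python reproduce them (with the assumed refractive index stated above):

```python
R1 = 7.5          # radius of the lens surface in mm
n = 1.41          # assumed refractive index of the clear silicone
f = R1 / (n - 1)  # plano-convex thin lens in air: 1/f = (n - 1) / R1
print(f"focal length ~ {f:.2f} mm")   # ~18.29 mm, matching the value given above
```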
While fabrication is the primary challenge to making close-focus silicone lens attachments, a thorough description would go beyond the scope of this paper. Please refer to the companion repository1 for detailed information about the fabrication process.
Figure 5. Cross section of the mold. The curved surfaces are ground and polished each with a precision steel ball of the required curvature. The optical surface is polished to a 2-micron finish and the diffuse surface to 40 microns. All three sections of the mold are aligned with metal dowel pins (not pictured). Liquid silicone is poured through a horizontal channel in the 3d-printed plastic part.
Figure 6. Cross section of the stenciling fixture. The silicone body is pressed upwards against the curved surface to create a seal. Once locked in place by clamps, the liquid pigmented silicone is poured into the recess at the top of the stencil and makes its way through the micro-drilled channels. When the stencil is lifted a small domed blob of partially-cured silicone paint remains on the surface. The stencil and fixture for the silicone body are aligned with metal dowel pins to ensure precise placement for each consecutive stencil and color.
## 5. Application Examples
Tactile on-lens input can be used in a variety of scenarios for devices with cameras in many sizes. We present two application examples that show how on-lens interaction can be utilized to make input on both large and small cameras more convenient and transfer well-established tangible interaction techniques to smartphones.
**Interactive Lens Caps for Digital Cameras**
Small action cameras offer a very limited number of buttons, a tiny display (if any), and only optionally, a touch interface on this display. While there are techniques to facilitate touch input on very small displays (Bradley et al., 2016), it may be cumbersome. Adjusting settings is often performed via a companion app on a smartphone which is paired with the wearable camera. However, there are scenarios in which the phone is unavailable and direct interaction with the device itself may be favorable. These can be casual, everyday situations like wearing gloves or very specific use cases such as interacting with a camera enclosed in a waterproof housing while swimming or diving. By adopting a lens cap or protective storage case that integrates a LensLeech (see fig. 7), we can add tangible controls such as a rotation knob or a d-pad, and extend the number of buttons on the device for easier navigation through nested menus. However, note that in the case of underwater usage when the LensLeech is pressed against a waterproof housing, a different silicone lens curvature will be required due to the refractive index of water.
This concept extends to larger cameras as well. Many digital consumer cameras are infamous for convoluted menus and poor usability in general. Browsing recorded videos and photos, changing settings in nested menus, and entering credentials to set up wireless connections requires prolonged attention and interaction while the sensor itself is not in use during these tasks. By integrating a silicone attachment into a lens cap we can leverage the unused hardware without interfering with the primary use case of the camera. Rotating or pushing the silicone sideways could be used to traverse large lists of settings or captured footage (see fig. 8). While the soft silicone can touch lens coatings without damaging them, the lens cap in this case prevents direct contact and may provide reassurance to the user for these very expensive lenses.
**Hybrid Viewfinders for Smartphones**
While the LensLeech provides optical input to camera-based devices, it can be combined with other components that offer optical output as well. This allows to create complex passive optical add-ons to existing devices. The hybrid viewfinder slides over the top section of a smartphone covering the front lens and a portion of the display. By adding a beamsplitter prism to a camera viewfinder, a section of the covered display can be reflected into the viewfinder's optical path (see fig. 9a) to create a hybrid optical/electronic viewfinder for smartphone photography. By integrating the LensLeech into the viewfinder attachment, the front camera can be used for input while the rear camera takes images and optionally provides data for the viewfinder overlay (see fig. 9c). Rotating the LensLeech changes the data overlay and pressing it triggers image capture. This allows optical input and output with no hardware modifications, transforming a smartphone into a modern rangefinder-style camera.
## 6. Evaluation & Limitations
When tested with artificially generated images (rendered images of the deformation point pattern with a pinhole aperture instead of a lens for focusing) the rotational error is negligible at an average of 0.03 degrees. More relevant and considerably more challenging is the real-world performance under low-light conditions and ambient light with color casts. As an evaluation setup, three cameras with an attached LensLeech (Pixel 3a smartphone, Sony A6000 + Sony 20mm 2.8 digital still camera, Raspberry Pi V1 embedded camera) were placed in complete darkness facing a display showing a subset (201 images, three per category) of the MIT indoor scene recognition dataset (Pascala et al., 2016). These were artificially darkened and brightened to simulate a low-light environment (resulting in a total of 1206 images). The illuminance of each scene was measured at the surface of the LensLeech with a TSL2591 ambient light sensor. Detection performance depends on the combination of sensor, lens, and environment, but in general, it can be observed that above 150 lux reliable operation can be expected (see fig. 10). Indoor lighting conditions usually exceed 150 lux while 300-500 lux are recommended for office work (Doming et al., 2016).
Figure 8. a) The silicone attachment is placed in a 3d-printed lens cap with a spring-loaded mechanism to allow lateral movement and rotation. b) The lens cap on an interchangeable-lens camera.
Figure 7. a) Cross section: the silicone attachment can be extended with a 3d-printed shoe matched to the specific device so it slides over the protruding wide-angle lens of an action camera. b) The LensLeech could be used as a rotating knob, press to confirm, squeeze to cancel.
The main limitation when using any optical attachment on lenses is the entrance pupil diameter and its distance from the first surface of the lens. The entrance pupil is a virtual opening within the lens barrel through which all entering light rays pass. Size and position within lenses can vary across lens designs, even when an image of a distant object taken with different lenses would look identical (see fig. 11). The silicone attachment (in the size as presented) works well on small and medium-sized lenses but requires a different geometry on very large lenses such as professional photography or videography lenses with large front elements and entrance pupils for better low-light performance. As a rule of thumb: if the image of the aperture seen through the front element of the lens is considerably larger than the silicone lens (12mm in diameter) the number of visible points is strongly reduced. In general, a lower bound of 19 points is required to reliably recognize input gestures through the soft widget. Additional limiting factors on the optical system are the field of view of the lens and the curvature of the first glass element of the lens. A camera with a narrow field of view will reduce the number of visible points, similar to a large entrance pupil. This makes the presented concept more suitable for medium to wide-angle systems such as webcams, smart home devices, smartphones, and wearable cameras. If the LensLeech is used with hard attachments (such as the lens cap) it does not sit directly on the glass and a strong lens curvature is not an issue.
Figure 11: Three lenses with an identical field of view of 84\({}^{\circ}\) but increasing entrance pupil diameters: a) Google Pixel 3a front camera b) Sony SEL-P1650 lens (16mm focal length) c) Sigma 16mm 1.4 DC DN (16mm focal length). For all three images, the LensLeech is resting directly on the front element of the camera lens.
Figure 10: Mean number of identified points in relation to ambient illumination strength. Error bars specify standard deviation. Monochromatic or environmental light with a strong color tint will result in an undetectable point pattern regardless of illumination strength, this is the main cause for outliers in the plot. Note: the maximum number of points visible to the camera varies across devices depending on pupil size and field of view.
Figure 9: a) Cross section of the hybrid viewfinder. The beamsplitter overlays the light emitted by a section of the smartphone screen (blue) over the viewfinder image of the world (yellow), while the silicone attachment rests on the front camera (red). b) The finder can be slid over the top section of the smartphone. The silicone attachment provides rich tangible input to control settings and take a photo without visual confirmation as a touchscreen would require. c) View through the finder showing an overlay of the selection menu and a digital spirit level.
## 7. Discussion
Compared to other on-lens interaction concepts such as _CamTrackPoint_(Cordbaum et al., 2017) and _LensGestures_(Krishnan et al., 2017) that process unfocused light, the LensLeech is less limited in the amount of information it provides but it requires ambient light as well. While the LensLeech performs well in most situations, LEDs (such as smartphone flashlights and autofocus-assist lights of still cameras) or screens of devices can be used to provide additional artificial illumination. This can be seen in the hybrid viewfinder: a section of the covered display illuminates the point pattern from below. Depending on the device and application scenario this might not be a viable option. For usage within a predefined space, near-ultraviolet flood lights can be installed to brighten the UV-reactive pigments in the point pattern (see fig. 12) with little interference to the brightness of the environment.
In general, an attachment solely made from silicone is simple, robust, and, to an extent, expendable. Material cost per piece is about 2 USD/EUR when fabricated in small quantities. This makes the LensLeech comparable to other inexpensive attachments for mobile devices that extend I/O capabilities, such as Google Cardboard (Cordbaum et al., 2017) or Nintendo Labo (Nintendo Labo, 2017). Similar to the limited lifetime of corrugated cardboard, a LensLeech and its point pattern may eventually suffer from wear and tear after extended usage.
A possible negative perception of letting an object touch the front element of the lens is not an issue when used on smartphone lenses. The application examples show that in other scenarios it makes sense to use device-dependent additions such as a rigid shoe on protruding lens barrels for action cameras or lens caps on interchangeable-lens cameras.
## 8. Future Work
A LensLeech is uniformly made from a single silicone formula with consistent Shore hardness throughout the whole body. By making use of a two-stage mold, the lower part of the body containing the lens could be molded separately with a harder type of clear silicone, resulting in a lower deformation of the lens when compressed. Also by integrating air-filled cavities and compliant elements in multi-stage molds, tactile feedback can be provided, resulting in a sensation when a certain amount of force is applied.
The limitation of large entrance pupil sizes can be circumvented by replacing the single silicone lens with a grid of smaller lenses. This requires a different fabrication technique for the mold and a point pattern that is aligned with camera lens angle and microlens position. This limits the silicone attachment compatibility to only a single lens, yet this may not be an issue for applications such as model-specific lens caps.
While only a single type of LensLeech is presented, the concept is versatile. With additional illumination, microlens arrays would allow emulating a small touchscreen on the top surface while using an angled surface geometry makes sensing fingerprints possible. In the near future, the emergence of cameras under displays in smartphones would allow the use of silicone attachments as tangible input and output devices in tabletop-like scenarios. The display can be used both for illuminating the LensLeech to sense in dark spaces as well as to display output in or on the body itself by refracting and redirecting the light.
## 9. Conclusion
We presented the LensLeech, a soft silicone attachment that allows sensing of pushing, pressing, rotating, and squeezing when placed directly on or above the lenses of arbitrary cameras. This makes it possible to add tangible input methods to a wide range of existing and new devices, especially small action or lifelogging cameras and smartphones. We have shown application examples ranging from small, body-worn devices to lens caps for large cameras and complex smartphone attachments. While the attachments are limited in their compatibility mainly by lens geometry, the low-light performance allows them to be used with only ambient illumination, without any need for hardware modifications, on a wide range of existing devices. This simple and inexpensive approach opens up an interaction space on lenses for rich input that was previously inaccessible, with a range of further applications in the (soft) robotics domain.
## Reproduction Note
The example applications, source code, CAD models of molds and fixtures, a detailed description of the fabrication process, and data/scripts for generating plots are available publicly:
[https://github.com/volzotan/LensLeech](https://github.com/volzotan/LensLeech)
## Acknowledgments
This work was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) through project EC437/1-1.
|
2309.07391 | EnCodecMAE: Leveraging neural codecs for universal audio representation
learning | The goal of universal audio representation learning is to obtain foundational
models that can be used for a variety of downstream tasks involving speech,
music and environmental sounds. To approach this problem, methods inspired by
works on self-supervised learning for NLP, like BERT, or computer vision, like
masked autoencoders (MAE), are often adapted to the audio domain. In this work,
we propose masking representations of the audio signal, and training a MAE to
reconstruct the masked segments. The reconstruction is done by predicting the
discrete units generated by EnCodec, a neural audio codec, from the unmasked
inputs. We evaluate this approach, which we call EnCodecMAE, on a wide range of
tasks involving speech, music and environmental sounds. Our best model
outperforms various state-of-the-art audio representation models in terms of
global performance. Additionally, we evaluate the resulting representations in
the challenging task of automatic speech recognition (ASR), obtaining decent
results and paving the way for a universal audio representation. | Leonardo Pepino, Pablo Riera, Luciana Ferrer | 2023-09-14T02:21:53Z | http://arxiv.org/abs/2309.07391v2 | # EnCodecMAE: Leveraging Neural Codecs for Universal Audio Representation Learning
###### Abstract
The goal of universal audio representation learning is to obtain foundational models that can be used for a variety of downstream tasks involving speech, music or environmental sounds. To approach this problem, methods inspired by self-supervised models from NLP, like BERT, are often used and adapted to audio. These models rely on the discrete nature of text, hence adopting this type of approach for audio processing requires either a change in the learning objective or mapping the audio signal to a set of discrete classes. In this work, we explore the use of EnCodec, a neural audio codec, to generate discrete targets for learning a universal audio model based on a masked autoencoder (MAE). We evaluate this approach, which we call EnCodecMAE, on a wide range of audio tasks spanning speech, music and environmental sounds, achieving performance comparable to or better than leading audio representation models.
Leonardo Pepino\({}^{\star\dagger}\), Pablo Riera\({}^{\star}\), Luciana Ferrer\({}^{\star}\)

\({}^{\star}\)Instituto de Investigacion en Ciencias de la Computacion (ICC), CONICET-UBA, Argentina
\({}^{\dagger}\)Departamento de Computacion, FCEyN, Universidad de Buenos Aires (UBA), Argentina

**Index Terms**: audio representations, self-supervised, transformer, music, speech
Footnote †: Correspondence: [email protected]. This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 101007666
## 1 Introduction
The search for universal audio models that can be used to solve a wide range of tasks related not only to speech, but also to music and environmental sounds, has received increased attention in recent years [1, 2, 3, 4, 5, 6]. Some works pretrain these models in a supervised way using existing large labeled datasets like Audioset [7, 8]. Other works use self-supervised learning (SSL), which is an effective way to leverage large unlabeled audio datasets. SSL methods resort to learning a pretext task, like masked language modelling, which consists of predicting masked parts of an utterance [9, 10] or contrasting the output on a masked time step with the correct output and a set of distractors taken from the same utterance [11]. These pretext tasks operate at the frame level and learn relationships between parts of an utterance. Contrastive learning has been also used to learn instance-level embeddings, by creating positive examples for a certain audio through augmentation techniques or by taking temporally-close fragments, and negative examples by extracting fragments from different audio samples [12, 13]. BYOL-A [14] is another recently proposed method to learn instance-level representations of audio signals without requiring negative examples, by minimizing the distance between two augmented views of a spectrogram encoded by a teacher and a student network.
The pretrained upstream models can be adapted to downstream tasks using relatively small labelled datasets by feeding the activations from one or more of its layers to a downstream network. The pretrained model can be frozen [2, 15] or fine-tuned [1, 4]. When the task of interest has instance-level labels but the pretrained model generates frame-level representations, an operation that pools over time is applied to generate instance-level representations [1, 2, 4].
Our work follows the ideas originally proposed in BERT [16] for NLP and then adapted for speech in HuBERT [9] and DiscreteBERT [10], which consist of masking regions from the input signal and learning to predict discrete targets corresponding to the masked regions. In HuBERT the discrete targets are obtained from k-means clustering of MFCCs and, in a second stage, from internal representations. In this work we propose to use the discrete units learned by EnCodec [17], a general audio neural codec, as targets for our model, and then introduce, as in HuBERT, an additional target by clustering internal representations. We use the Masked Autoencoder (MAE) architecture [18], which enables efficient pretraining.
Our proposed model, EnCodecMAE is novel in several aspects: (i) it uses EnCodec to represent the audio signals and generate discrete targets for the masked language modelling pretext task; (ii) the self-training stage originally proposed in HuBERT is explored for tasks and audio signals beyond speech; (iii) a MAE architecture is used with one-dimensional signals as input, instead of patches of spectrograms as in prior work [4, 5]. EnCodecMAE outperforms other SSL methods in music-related tasks and performs comparably to other methods in speech and environmental downstream tasks.
## 2 Proposed Model
In this work we propose EnCodecMAE, a masked autoencoder that takes EnCodec representations as input and as targets during learning. The EnCodecMAE architecture is depicted in Figure 1.
An audio signal is fed to EnCodec's encoder, resulting in a sequence of embeddings. Sinusoidal positional embeddings are added to this sequence. Similarly to wav2vec 2.0 and HuBERT, a proportion \(M_{\text{prop}}\) of the embeddings is randomly masked, sampling the starting indices for the mask without replacement and masking the subsequent \(M_{\text{gap}}\) time steps for every sampled index. In contrast to wav2vec 2.0 and HuBERT, the masked embeddings are discarded from the sequence instead of being replaced by a mask token. As a consequence, the resulting sequence, which contains only the visible embeddings, is shorter than the original one. The masked sequence is passed through an encoder and then expanded to the original size by inserting mask tokens in the masked regions. Sinusoidal positional embeddings are added again to inform the decoder of the positions of the masked regions that need to be reconstructed, and the resulting sequence is input to a decoder. The model is trained to minimize a weighted cross-entropy loss between the posteriors output by the decoder and the discrete targets generated by EnCodec's residual vector quantization (RVQ) layer. The following sections explain the components of the model. Source code will be available at [https://github.com/habla-liaa/encodecmae](https://github.com/habla-liaa/encodecmae).
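To make the masking procedure concrete, the sketch below samples a mask with the parameters \(M_{\text{prop}}\) and \(M_{\text{gap}}\) described above and drops the masked frames before encoding. It is an illustrative reconstruction, not the authors' released code; the frame count corresponds to 4 s of EnCodec output at 75 Hz, and the mask-token handling is only indicated.

```python
import numpy as np

def sample_mask(seq_len, m_prop=0.5, m_gap=15, rng=None):
    """Boolean mask over frames: starts drawn without replacement, each start
    masking the next m_gap frames, until about m_prop of the frames are covered."""
    rng = rng or np.random.default_rng()
    mask = np.zeros(seq_len, dtype=bool)
    for s in rng.permutation(seq_len):
        if mask.mean() >= m_prop:
            break
        mask[s:s + m_gap] = True
    return mask

T, D = 300, 128                      # 4 s of EnCodec embeddings at 75 Hz, 128-dim
x = np.random.randn(T, D)            # stand-in for the EnCodec encoder output
mask = sample_mask(T)
visible = x[~mask]                   # shorter sequence fed to the MAE encoder
expanded = np.zeros_like(x)          # before decoding, re-expand to full length
expanded[~mask] = visible            # encoder outputs would go here
expanded[mask] = 0.0                 # a learned mask token in the real model
```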
### EnCodec
EnCodec [17] is a neural audio codec consisting of an encoder-decoder architecture with a quantized latent space. EnCodec achieves a high compression rate while also minimizing the perceptible distortion of the decoded audios, obtaining higher MUSHRA scores than hand-crafted codecs and Soundstream [19] using the same bitrate, both for speech and music signals. This suggests that the quantized latent space from EnCodec contains most of the perceptually-relevant information for reconstructing not only speech, but also more complex signals like music.
The EnCodec architecture consists of a CNN encoder with 4 convolutional blocks that downsample the audio signal from 24kHz to 75 Hz, followed by 2 LSTM layers and a final convolutional layer. The encoder's output is a sequence of 128-dimensional vectors, which are passed through a residual vector quantization (RVQ) block with 32 codebooks, each with 1024 possible values. RVQ maps the input to the closest entry in the first codebook. Then, the residual is computed and mapped to the closest entry of the second codebook. This process is repeated for every codebook, increasingly refining the quantized outputs. Finally, the decoder mirrors the encoder architecture, upsampling the encoded signal with transposed convolutions and recovering the original audio.
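As a rough illustration of how the RVQ stage produces the discrete targets used later in training, the following sketch quantizes a sequence of embeddings with a cascade of codebooks, each stage encoding the residual left by the previous one. The codebooks here are random placeholders; EnCodec's are learned, with 32 codebooks of 1024 entries each (of which this work uses 8).

```python
import numpy as np

def rvq_encode(z, codebooks):
    """Residual vector quantization: each stage maps the current residual to its
    nearest codebook entry and passes the remainder to the next stage."""
    residual = z.copy()
    indices = []
    for cb in codebooks:                                   # cb has shape (K, D)
        d = np.linalg.norm(residual[:, None, :] - cb[None, :, :], axis=-1)
        idx = d.argmin(axis=1)                             # (T,) nearest entries
        indices.append(idx)
        residual = residual - cb[idx]
    return np.stack(indices)                               # (Q, T) discrete targets

T, D, K, Q = 300, 128, 1024, 8
codebooks = [np.random.randn(K, D) for _ in range(Q)]
targets = rvq_encode(np.random.randn(T, D), codebooks)     # indices in [0, K-1]
```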
### Masked autoencoders
Masked autoencoders (MAE) were proposed in [18] as a self-supervised model for computer vision and an alternative to Vision Transformers (ViT), and have been recently used for audio representation learning [4, 5, 6], using spectrogram patches as input and as targets during training. They are trained in a similar way to BERT, by masking parts of the input signal and predicting the masked parts from the unmasked ones. In contrast to BERT, which replaces the masked regions with a mask token, MAEs discard the masked regions before feeding the sequence to the encoder. Before decoding, the encoder's output is expanded to its original length by adding mask tokens. We use the asymmetric encoder-decoder architecture proposed in [18], with a lightweight decoder and a larger encoder. Unlike traditional autoencoders, our MAE does not directly generate an estimate of the input signal. Rather, it generates posteriors for indices into a set of codebooks with which the input signal can be reconstructed.
As the masked regions are discarded, the sequence length of the encoder input is shorter than the original sequence length. This makes the training of MAEs very efficient. Moreover, as no mask tokens are seen by the encoder, the mismatch between the training and inference stages observed in BERT is reduced. As usually done with transformers, positional embeddings are added to the encoder input. Importantly, this is done before masking so that the remaining embeddings carry information on their absolute location in the sequence. The decoder receives the full set of encoded visible parts, with mask tokens added at the time steps where the input was discarded. The mask token is a shared embedding learned during training. Positional embeddings are summed to the resulting sequence, providing information of the position of each mask token. The decoder consists of a stack of transformer layers followed by a classification
Figure 1: EnCodecMAE architecture. EnCodec’s encoder transforms the audio signal into a sequence of embeddings to which positional encoding is added. A percentage of the frames are masked and discarded, and the resulting sequence is processed by the MAE encoder. Before feeding the encoder output to the decoder, mask tokens are inserted in the positions that were dropped. The loss is finally computed between the posteriors generated by the MAE’s decoder and the discrete targets produced by the EnCodec’s RVQ block.
layer with softmax activation. This layer predicts the discrete targets from the EnCodec's RVQ, corresponding to neural codec representations of the input audio. This is subtly different from the original MAE as the decoder is not designed to output an estimate of the encoder's input before masking, but a discretized version of it. In our setup, for the encoder we use 5 transformer layers in the small model, 10 in the base, and 20 in the large. All our models use only 2 layers in the decoder as in [18]. The dimension of the embeddings is 768 for the small and base models, and 1024 for the large model.
### Training procedure
The training procedure consists of two stages. Initially, the targets for the model are generated by the EnCodec RVQ layer. These consist of an index between 1 and 1024 for each time step \(t\) and codebook \(q\), \(y_{q}(t)\). The loss function is a weighted cross-entropy, with tunable weights for masked versus unmasked time steps and each individual codebook:
\[L=\sum_{q=0}^{Q}\gamma_{q}\left[\alpha\sum_{t\in M}C(y_{q}(t),z_{q}(t))+\beta \sum_{t\notin M}C(y_{q}(t),z_{q}(t))\right]\]
where \(z_{q}(t)\) is the vector of 1024 posteriors generated by the decoder for time step \(t\) and codebook \(q\), \(M\) is the set of masked indices, and \(\alpha=\delta/|M|\) and \(\beta=(1-\delta)/(T-|M|)\) are weights given to the masked and unmasked regions, respectively, with \(\delta\) being a tunable parameter and \(T\) the length of the sequence. The loss corresponding to each codebook stream \(q\) is, in turn, weighted by \(\gamma_{q}\). The loss for each codebook and time step is given by \(C(y_{q}(t),z_{q}(t))=-\log(z_{q,y_{q}(t)}(t))\), i.e., the negative logarithm of \(z_{q}(t)\) evaluated at the corresponding target index. Finally, as in HuBERT, after 500k training steps, a self-training stage is performed. We extract embeddings from the last encoder layer for 10k randomly-sampled audio signals and train a k-means model with \(k=1024\), quantize the pretraining dataset, and add the resulting stream to the \(Q\) quantizers. This new stream is given a weight equal to \(\sum_{q}\gamma_{q}\) and the model is trained for 150k additional steps. As in HuBERT, for the large model we train the k-means model using activations from the base model after self-training.
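A minimal PyTorch sketch of this loss is given below: the cross-entropy of each codebook stream is weighted by \(\gamma_{q}\), and masked versus unmasked frames receive the weights \(\alpha\) and \(\beta\) defined above. Tensor shapes and the exact reduction are assumptions made for illustration rather than a copy of the released code.

```python
import torch
import torch.nn.functional as F

def encodecmae_loss(logits, targets, mask, gamma, delta=0.9):
    """logits: (Q, T, K) decoder posteriors; targets: (Q, T) RVQ indices (long);
    mask: (T,) True where the frame was masked; gamma: (Q,) stream weights."""
    Q, T, K = logits.shape
    n_masked = mask.sum().clamp(min=1)
    alpha = delta / n_masked                       # weight of masked frames
    beta = (1.0 - delta) / (T - n_masked).clamp(min=1)
    ce = F.cross_entropy(logits.reshape(Q * T, K), targets.reshape(Q * T),
                         reduction="none").reshape(Q, T)
    w = torch.where(mask, alpha, beta)             # per-frame weight
    return (gamma[:, None] * ce * w[None, :]).sum()
```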
## 3 Experimental Setup
### Pretraining
For pretraining we used a mixture of datasets composed of Audioset [20], Free Music Archive [21] and LibriLight [22]. Audioset contains over 2 million 10-second audio clips from YouTube videos covering 527 different audio events. We downloaded more than 1.6M audios, which amounts to around 4500 hours. Free Music Archive consists of 106,574 30-second tracks covering 161 genres for a total of over 800 hours of music. We used the medium-size split from LibriLight containing 6000 hours of speech from audiobooks.
In all our experiments we trained with 4-second segments by randomly cropping the audio samples. At inference time, longer audios are chunked to 4 seconds and the resulting embeddings are concatenated. Our models are trained for 500k steps using AdamW optimizer with a weight decay of 0.05, \(\beta_{1}=0.9\), \(\beta_{2}=0.95\), a fixed learning rate of 1e-4, and a batch size of 128. EnCodec's parameters are downloaded from Hugging Face1 and remain unchanged during training. As targets for training we used 8 of the 32 quantizers available in EnCodec since all relevant acoustic information seemed to be preserved when listening to reconstructed speech, music and environmental sounds. We selected the training hyperparameters based on a base model's performance on HEAREval's validation data, after 150k training steps. This process resulted in the following values: \(\delta=0.9\), \(M_{\text{prop}}=0.5\) and \(M_{\text{gap}}=15\). We set each weight \(\gamma_{q}\) to be the average quantization error for the \(q\)-th codebook computed over 150 random training samples, normalized to sum to one over all codebooks. This gave a small improvement on validation data compared to using uniform \(\gamma_{q}\) values. For feature extraction we found that the last layer of the encoder gave the best results on validation data.
Footnote 1: [https://huggingface.co/facebook/encode_24khz](https://huggingface.co/facebook/encode_24khz)
### Downstream evaluation
We evaluate our models following the HEAREval procedure [15] and a subset of its instance-level tasks. The embedding for each instance is obtained by averaging the frame-level activations from the last encoder layer. The resulting embedding is fed to a multilayer perceptron and a grid search over hyperparameters is performed as described in [15]. The upstream model is frozen, so only the downstream model parameters are trained for each evaluation task. For music, we evaluate in the 50-hour subset of the NSynth Pitch [23] dataset (NS), which consists of instrument notes. The goal of this task is to classify each sound into 88 possible pitch values. We also evaluate on music genre classification (GC), using the GTZAN Genre dataset [24], which consists of 1000 30-second audio tracks categorized into 10 genres. For speech, we evaluate on the Google Speech Commands dataset v2 (SC) [25], where each utterance has to be classified into 10 possible commands or noise or unknown, and on the CREMA-D dataset [26] for emotion classification into 6 categories. For environmental sounds, we evaluate on the Freesound Dataset 50K (FSD) [27], which contains 100 hours of labeled sound events belonging to one or more of 200 possible classes from the Audioset ontology. We also evaluate in the ESC-50 dataset (ESC) [28], which is a collection of 2000 audio samples each with a single acoustic event belonging to one of 50 possible classes. We report accuracy for all tasks except FSD, where mean average precision (mAP) is used.
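The downstream protocol therefore amounts to mean-pooling the frame-level features of the frozen encoder and training a small classifier on top, roughly as sketched below; the layer widths and class count are placeholders, since HEAREval selects the probe hyperparameters by grid search.

```python
import torch
import torch.nn as nn

def clip_embedding(frame_features):        # (T, D) frozen-encoder activations
    return frame_features.mean(dim=0)      # instance-level embedding, shape (D,)

probe = nn.Sequential(nn.Linear(768, 512), nn.ReLU(), nn.Linear(512, 10))
logits = probe(clip_embedding(torch.randn(300, 768)))   # class scores for one clip
```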
## 4 Results and Discussion
In Table 1, downstream metrics for our models and the best models in the HEAREval benchmark are shown. Comparing all EnCodec-based results, a system that uses EnCodec representations directly as input to the downstream model (called EC in the table) gets the worst performance for all tasks. These results suggest that the performance obtained by our proposed method is not solely due to the use of the EnCodec representations but also to the addition of the MAE. Interestingly, even a randomly initialized base MAE (ECMAE-R) already improves the EnCodec's performance for almost every task, suggesting that a random mixing and projection of embeddings might improve the representations. Further pretraining the MAE (ECMAE-B) gives large improvements compared to the random baseline, showing that the proposed pretext task and discrete targets are useful for learning audio representations. Increasing the size of the model results in improvements for almost all the downstream tasks, and adding the self-training stage (+ ST) gives another performance boost in speech and environmental tasks, though slightly degrading performance on music-related tasks.
The bottom two blocks show results for other pretrained models. Fuse H and Fuse W [29] generate embeddings by averaging the activations from all the transformer layers of HuBERT XL and Wav2Vec 2.0 Large models, respectively. BYOL-S [30] is a version of BYOL-A [14] trained only in speech samples from Audioset. We also included models with supervised pretraining: Cat HWC [29], which concatenates Fuse H, Fuse W and CREPE [31] embeddings, with the latter trained for pitch prediction in a supervised way, and PaSST [7], which is an efficient audio spectrogram transformer pre-trained for acoustic event detection in Audioset.
Results show that our models perform better than other unsupervised models and Cat HWC in the music tasks. For NS, even the small model (ECMAE-S) outperforms models specifically trained for this task (not shown in the table): CREPE which reaches 90% accuracy in this set, and a Wav2Vec2 model pretrained with music data which obtains 76% accuracy as feature extractor and 90% accuracy with full finetuning in NSynth [32]. For the music genre classification task (GC), PaSST [7] outperforms our models, possibly because it benefits from the supervised pretraining, as many categories from the Audioset ontology are related to music genres. Further, OpenL3 [33], not included in the table, also has slightly better results (87.9%) than our models in GC.
For speech-related tasks, our Large model reaches competitive results, especially after applying the self-training stage (+ST). For the emotion recognition task (ER), only Fuse H outperforms our models, possibly because it was trained on more speech data (60k hours instead of 6k) and contains more than 3 times the number of parameters of our Large model. Notably, our Small model gets better performance than Fuse W in ER while being 6 times smaller. In the Speech Commands dataset (SC), our models with the self-training stage get results comparable to BYOL-A and BYOL-S, but are still behind Fuse H and Fuse W. In contrast, for environmental tasks, our models perform worse than BYOL-A and BYOL-S, but outperform Fuse H and Fuse W. We hypothesize that this might be due to the use of spectrograms in BYOL models, which might benefit these tasks by learning relationships between parts of spectrograms. This hypothesis is supported by the BYOL-A paper, where using random crop and resize of spectrograms gives better results for UrbanSound8K [34] than augmentations that do not manipulate the 2D structure of spectrograms. More evidence is found in [5], where masking patches of spectrograms, instead of only the time axis, improves the performance on Audioset. In future work, to further understand these behaviors, we aim to analyze the effect of the input and target representations. Further, we will explore different downstream models and finetuning strategies to maximize the utility of the upstream models.
## 5 Conclusions
We introduce EnCodecMAE, a new universal audio representation model pretrained in a self-supervised way to predict discrete targets from masked segments of audio, following a masked autoencoder (MAE) design. The main novelty of our method is that the inputs and targets for the MAE are obtained from EnCodec, a neural audio codec. Our model achieves a performance comparable with other audio representation models in a wide range of tasks. Notably, our model exhibits remarkable performance in music related tasks, in particular, on the NSynth pitch prediction dataset where it performs better than CREPE, a model specifically trained for this task. Moreover, our largest model can be trained in 5 days on 2 RTX 3090 GPUs due to its efficient design.
\begin{table}
\begin{tabular}{l|c c|c c|c c|c} & \multicolumn{2}{c|}{Music} & \multicolumn{2}{c|}{Speech} & \multicolumn{2}{c|}{Env} & \#Par \\
 & NS & GC & SC & ER & FSD & ESC & \\ \hline
EC & 61.6 & 67.2 & 34.3 & 46.7 & .202 & 38.7 & 7.4M \\
ECMAE-R & 74.7 & 71.0 & 64.7 & 44.0 & .239 & 40.8 & 94M \\
ECMAE-S & 91.3 & 84.4 & 90.9 & 69.6 & .420 & 71.8 & 51M \\
ECMAE-B & **91.8** & 85.0 & 92.1 & 70.8 & .440 & 74.3 & 94M \\
+ ST & 91.4 & 84.1 & 93.9 & 73.0 & .450 & 77.3 & 94M \\
ECMAE-L & 91.3 & **86.2** & 92.7 & 73.0 & .454 & 74.5 & 269M \\
+ ST & 89.8 & 86.0 & 94.3 & 74.4 & .456 & 75.8 & 269M \\ \hline
Fuse H \(\dagger\) & 68.8 & 79.6 & 95.7 & **75.2** & .413 & 74.3 & 1000M \\
Fuse W \(\dagger\) & 60.6 & 79.3 & **96.9** & & .403 & 69.5 & 317M \\
BYOL-A & 68.1 & 84.8 & 92.9 & 66.0 & **.510** & **83.7** & 5.3M \\
BYOL-S \(\dagger\) & 71.2 & 83.7 & 94.8 & 65.7 & .508 & 80.5 & 5.3M \\ \hline
Cat HWC \(\dagger\) & 88.5 & 80.5 & 96.0 & 74.7 & .420 & 73.3 & 1339M \\
PaSST \(\dagger\) & 54.1 & 88.3 & 63.9 & 61.0 & .641 & 94.7 & 86.2M \\
\end{tabular}
\end{table}
Table 1: Performance in selected HEAREval downstream tasks. The first two blocks are models pretrained in a fully unsupervised way, while the bottom models use supervised pretraining. EC stands for EnCodec, ST for self-training, and S, B, L for small, base, and large, respectively. Results marked with \(\dagger\) are taken from [15]. BYOL-A was benchmarked using its official implementation [14]. |
2309.10835 | Analysing race and sex bias in brain age prediction | Brain age prediction from MRI has become a popular imaging biomarker
associated with a wide range of neuropathologies. The datasets used for
training, however, are often skewed and imbalanced regarding demographics,
potentially making brain age prediction models susceptible to bias. We analyse
the commonly used ResNet-34 model by conducting a comprehensive subgroup
performance analysis and feature inspection. The model is trained on 1,215
T1-weighted MRI scans from Cam-CAN and IXI, and tested on UK Biobank
(n=42,786), split into six racial and biological sex subgroups. With the
objective of comparing the performance between subgroups, measured by the
absolute prediction error, we use a Kruskal-Wallis test followed by two
post-hoc Conover-Iman tests to inspect bias across race and biological sex. To
examine biases in the generated features, we use PCA for dimensionality
reduction and employ two-sample Kolmogorov-Smirnov tests to identify
distribution shifts among subgroups. Our results reveal statistically
significant differences in predictive performance between Black and White,
Black and Asian, and male and female subjects. Seven out of twelve pairwise
comparisons show statistically significant differences in the feature
distributions. Our findings call for further analysis of brain age prediction
models. | Carolina Piçarra, Ben Glocker | 2023-09-19T14:40:19Z | http://arxiv.org/abs/2309.10835v1 | # Analysing race and sex bias in brain age prediction
###### Abstract
Brain age prediction from MRI has become a popular imaging biomarker associated with a wide range of neuropathologies. The datasets used for training, however, are often skewed and imbalanced regarding demographics, potentially making brain age prediction models susceptible to bias. We analyse the commonly used ResNet-34 model by conducting a comprehensive subgroup performance analysis and feature inspection. The model is trained on 1,215 T1-weighted MRI scans from Cam-CAN and IXI, and tested on UK Biobank (n=42,786), split into six racial and biological sex subgroups. With the objective of comparing the performance between subgroups, measured by the absolute prediction error, we use a Kruskal-Wallis test followed by two post-hoc Conover-Iman tests to inspect bias across race and biological sex. To examine biases in the generated features, we use PCA for dimensionality reduction and employ two-sample Kolmogorov-Smirnov tests to identify distribution shifts among subgroups. Our results reveal statistically significant differences in predictive performance between Black and White, Black and Asian, and male and female subjects. Seven out of twelve pairwise comparisons show statistically significant differences in the feature distributions. Our findings call for further analysis of brain age prediction models.
## 1 Introduction
The global population growth and longer life expectancy are linked to the rising prevalence of age-related neurodegenerative and neuropsychiatric diseases [1, 2, 3]. As a result, there is an increasing need to establish connections between brain ageing and disease processes, to better understand their mechanisms and enable early detection and diagnosis. Significant research efforts have focused on investigating the potential of brain-predicted age as an indicator of how an individual's brain health may deviate from the norm [4, 5]. As a neuroimaging-driven biomarker, it has the potential of containing a broad spectrum of brain characteristics in a single measurement [6]. Several studies have proposed brain age prediction for the characterisation of neuropathology [7, 8], epilepsy [9], as well as an indicator of clinical risk factors [10, 11]. Most studies used structural MRI, due to its common use in clinical settings and high resolution, capturing even small structural variations in brain anatomy. Deep learning (DL), and in particular convolutional neural networks (CNNs), are widely used models for brain age prediction from MRI [12, 13]. Studies rely on well-established datasets, including the UK Biobank [14], the Cambridge Centre for Ageing Neuroscience
(Cam-CAN) dataset [15], IXI [16], the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset [17], the Open Access Series of Imaging Studies (OASIS) [18], among others. These datasets tend to be skewed and biased regarding ethnic and racial diversity, with a majority of White subjects. When models are trained on data with unbalanced demographics, the performance may degrade in relevant subgroups [19]. Thus, it is important to test such models for potentially disparate performance across subgroups. In this study, we analyse a ResNet-34 brain age prediction model by conducting a comprehensive statistical subgroup performance analysis and feature inspection.
## 2 Materials and methods
**Datasets.** For training and validation of the brain age prediction model, we used the Cam-CAN [15] and the IXI dataset with healthy volunteers. For testing, the UK Biobank dataset was selected due to its size and availability of race and biological sex information. The demographics for each dataset are available in Table 1. Patient racial information is not provided for the Cam-CAN dataset. However, considering that the data collection took place in Cambridge (United Kingdom), we assume the majority of volunteers were White. All scans from Cam-CAN and IXI were pre-processed by us using the following steps: 1) Lossless image reorientation using the direction information from the image header; 2) Skull stripping with ROBEX v1.21[20]; 3) Intensity-based rigid registration to MNI atlas ICBM 152 2009a Nonlinear Symmetric2; 4) Bias field correction with N4ITK3[21]. The UK Biobank images were already skull-stripped and bias field corrected, and only the registration to MNI space was performed by us.
Footnote 1: [https://www.nitrc.org/projects/robar](https://www.nitrc.org/projects/robar)
Footnote 2: [http://nist.mmi.mcgill.ca/?p=904](http://nist.mmi.mcgill.ca/?p=904)
Footnote 3: [https://itk.org](https://itk.org)
**Model.** We adapted the conventional ResNet-34 model [22] for age regression from 3D images. ResNet stands for Residual Network and is a type of CNN model with residual connections, a distinctive architecture designed to address the vanishing gradient problem during deep network training. We trained this model with whole preprocessed T1-weighted MRI images. The data was augmented through a composition of transformations, including random horizontal flip, contrast change, addition of Gaussian noise with random parameters and motion artifacts.
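For reference, a minimal 3D residual block of the kind needed to adapt ResNet-34 to volumetric input is sketched below, together with a single-output regression head; it is not the authors' exact implementation, and the full network would stack [3, 4, 6, 3] such blocks as in the standard ResNet-34 layout.

```python
import torch
import torch.nn as nn

class BasicBlock3D(nn.Module):
    """ResNet basic block with 3D convolutions for volumetric MRI input."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.conv1 = nn.Conv3d(in_ch, out_ch, 3, stride, 1, bias=False)
        self.bn1 = nn.BatchNorm3d(out_ch)
        self.conv2 = nn.Conv3d(out_ch, out_ch, 3, 1, 1, bias=False)
        self.bn2 = nn.BatchNorm3d(out_ch)
        self.down = (nn.Sequential(nn.Conv3d(in_ch, out_ch, 1, stride, bias=False),
                                   nn.BatchNorm3d(out_ch))
                     if stride != 1 or in_ch != out_ch else nn.Identity())

    def forward(self, x):
        out = torch.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return torch.relu(out + self.down(x))

# Regression head mapping the last feature map to a single predicted age.
head = nn.Sequential(nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(512, 1))
```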
### Bias analysis
We divided our statistical bias analysis into two parts, each focusing on a specific aspect. The first part aimed to assess bias in predictive performance, while the latter delved deeper into the model to examine biases in the generated features. To ensure a sufficient sample size for each subgroup, we considered the Chinese subjects to be part of the Asian group, and excluded all subjects with race classified as "Other" (which includes "Mixed"). We then further divided
each racial subgroup ("White", "Asian" and "Black") into "Female" and "Male", resulting in six test set subgroups.
**Absolute performance assessment.** We calculated the absolute error of prediction, using it as the main performance metric. With the goal of comparing the performance between all subgroups, we then progressed by verifying the assumptions necessary to perform an Analysis of Variance (ANOVA), i.e. the assumption of normality - through visual inspection of the absolute error distribution and Shapiro-Wilk tests - and the assumption of homogeneity of variances, through Levene's test. The assumption of sample independence is met from the experimental design, as all subgroups are constituted by different subjects. Given that not all assumptions were met, we progressed by using the non-parametric Kruskal-Wallis test to compare the absolute error medians of all subgroups. Further pairwise comparisons were completed using the post-hoc Conover-Iman test. Since the Kruskal-Wallis test is the non-parametric equivalent of the one-way ANOVA, we conducted two Conover-Iman tests in order to take into consideration our two factors, race and biological sex. Although the Kruskal-Wallis test can handle unbalanced data, when the differences are large its power is reduced, which may lead to inconsistent/intransitive results [23]. In order to ensure the validity and consistency of our results, we balanced the data by randomly selecting a sample from each subgroup with equal sample size. The sample size chosen for each subgroup was 126, i.e. the size of the smallest group (Black female subjects). After calculating the mean absolute error for each subgroup sample, we repeated the random sampling ten times to estimate the standard deviation. We then repeated the statistical procedure, verifying ANOVA's assumptions, and upon rejection of normality following with the Kruskal-Wallis test and corresponding
\begin{table}
\begin{tabular}{l c c c} \hline \hline & **Cam-CAN** & **IXI** & **UK Biobank** \\ \hline
**N** & 652 & 563 & 42,786 \\ \hline
**Age (years)** & & & \\ Mean \(\pm\) SD & 54.3\(\pm\)18.6 & 48.6\(\pm\)16.5 & 64.0\(\pm\)7.7 \\ Range & 18 - 88 & 20 - 86 & 44 - 82 \\ \hline
**Sex** & & & \\ Female/Male & 330/332 & 313/250 & 20,206/22,580 \\ \hline
**Race** & & & \\ White & — & 451 & 41,417 \\ Black & — & 14 & 286 \\ Asian & — & 50 & 454 \\ Chinese & — & 14 & 122 \\ Other & — & 34 & 507 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Demographic information of all datasets used.
post-hoc Conover-Iman tests.
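The test sequence described above can be reproduced with standard Python tooling, for example as in the sketch below (assuming a pandas DataFrame with one row per subject and the scikit-posthocs package for the Conover-Iman test); the column names are illustrative.

```python
from scipy import stats
import scikit_posthocs as sp   # assumed dependency providing posthoc_conover

def subgroup_tests(df):
    """df: pandas DataFrame with columns 'abs_err', 'race' and 'sex'."""
    groups = [g["abs_err"].values for _, g in df.groupby(["race", "sex"])]
    normality = [stats.shapiro(g).pvalue for g in groups]   # per-subgroup Shapiro-Wilk
    equal_var = stats.levene(*groups).pvalue                # homogeneity of variances
    overall = stats.kruskal(*groups).pvalue                 # Kruskal-Wallis test
    by_race = sp.posthoc_conover(df, val_col="abs_err", group_col="race")
    by_sex = sp.posthoc_conover(df, val_col="abs_err", group_col="sex")
    return normality, equal_var, overall, by_race, by_sex
```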
**Model features assessment.** Additionally, we assessed if the features generated by the model were biased using the framework for feature exploration proposed by Glocker et al. [24]. This strategy consists of passing each test set scan through our model up to the penultimate layer, extracting its output features and subsequently inputting them to a principal component analysis (PCA) model in order to reduce their dimensionality. The PCA projections consist of a new set of dimensions (also called "modes") which capture the directions of the largest variation in the high-dimensional feature space. Given that our model was trained to predict age, it is expected that the strongest separation for samples in different age groups is seen in the first PCA modes. We plotted the distribution of samples in PCA space (first four modes) through kernel density estimation plots, split by the three demographic attributes of interest (age, biological sex, and race). Age was divided into five brackets to facilitate visual analysis. This was accompanied by two-sample Kolmogorov-Smirnov tests to compare the feature distributions of all possible pairwise combinations across race, sex, and age subgroups, in the first four modes of PCA. To decrease the number of statistical tests to be completed, for this step age was divided into only two brackets, [40-60] and [60-90]. To account for multiple testing and the consequent type 1 error inflation, the p-values were adjusted using the Benjamini-Yekutieli procedure. We considered statistical significance at a 95% confidence level.
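A compact sketch of this feature-inspection pipeline is shown below, using scikit-learn for the PCA projection, SciPy for the two-sample Kolmogorov-Smirnov tests and statsmodels for the Benjamini-Yekutieli correction; array layouts and variable names are assumptions for illustration.

```python
import numpy as np
from scipy.stats import ks_2samp
from sklearn.decomposition import PCA
from statsmodels.stats.multitest import multipletests

def feature_shift_tests(feats, labels, n_modes=4):
    """feats: (n_subjects, d) penultimate-layer features; labels: subgroup per subject."""
    proj = PCA(n_components=n_modes).fit_transform(feats)
    labels = np.asarray(labels)
    names = sorted(set(labels))
    pvals, pairs = [], []
    for m in range(n_modes):
        for i, a in enumerate(names):
            for b in names[i + 1:]:
                p = ks_2samp(proj[labels == a, m], proj[labels == b, m]).pvalue
                pvals.append(p)
                pairs.append((m + 1, a, b))
    # Benjamini-Yekutieli adjustment for multiple testing
    reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_by")
    return list(zip(pairs, p_adj, reject))
```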
## 3 Results
Figure 1 shows the age distribution for each subgroup, including all samples available in the test set. Within White subjects, we can observe a tendency of younger male and older female subjects, whereas for Black subjects, we find the opposite.
**Absolute performance assessment.** Our first analysis involved conducting a Shapiro-Wilk test to evaluate whether the subgroups' prediction errors followed a normal distribution, and Levene's test to verify whether all groups to be compared had equal variance. The resulting p-values from the six Shapiro-Wilk tests were below the defined significance level of 0.05. This indicates that we can confidently reject the null hypothesis that the population from which each sample is drawn follows a normal distribution. For visual confirmation, the distribution of the absolute error for each subgroup, along with the corresponding p-values from the Shapiro-Wilk tests are given in the Appendix (Figure A.1). On the other hand, Levene's test returned a p-value of \(5.82\times 10^{-52}\), confirming that we have sufficient evidence to reject the null hypothesis and conclude that not all samples come from populations with equal variances. Upon the rejection of both assumptions for two-way ANOVA, we proceeded by conducting a Kruskal-Wallis test. The resulting p-value was \(6.99\times 10^{-116}\), leading us to reject the null
hypothesis, i.e. that the population medians are all equal. The resulting p-values from the two Conover-Iman tests conducted for further pairwise comparisons were as follows: White vs Asian: 0.022; White vs Black: 0.017; Black vs Asian: 0.0015; female vs male: \(4.20\times 10^{-118}\). As suspected, the p-values from both the Kruskal-Wallis and the Conover-Iman test were notably low. The plot in Figure 2 shows the age distribution of the random samples from each subgroup (n=126), taken to ensure the robustness of our statistical procedure. It reveals a similar pattern to that observed when examining all subjects in the test set, showing higher prevalence of younger White males and older White females, while the opposite is seen for Black subjects.
The first plot in Figure 3 shows the mean absolute error (MAE) for each subgroup sample, accompanied by its corresponding error bar, calculated from ten random samples of the same dimension. The adjacent plot illustrates the disparity in absolute error concerning the average absolute error across all subjects. Notably, we see a considerable under-performance of the model for Black male subjects. The model achieves its highest performance on White female and Asian subjects.
Similarly to the tests conducted with all test set subjects, the p-values resulting from the Shapiro-Wilk tests for normality testing indicated that we could reject the null hypothesis that the samples come from a normally-distributed population, with a significance level of 0.05. Contrarily, the Levene's test produced a p-value of 0.78, suggesting that we cannot reject the null hypothesis of equal variances across all samples. However, as not all ANOVA assumptions were met, we proceeded with the Kruskal-Wallis test, which yielded a p-value of 0.0015. With a p-value below our defined significance level (0.05), we reject the null hypothesis and have sufficient evidence to suggest that the differentiating factors
Figure 1: Age distribution of all subjects in the test set and of each racial subgroup, separated by biological sex. Overlapping lines show the probability density curves.
among subgroups lead to statistically significant differences in the model's performance. The resulting p-values from the two post-hoc Conover-Iman tests were the following: White vs Asian: 0.76; White vs Black: 0.022; Black vs Asian: 0.013; female vs male: 0.008. These outcomes reveal statistically significant disparities in the model's performance between White and Black subjects and Black and Asian subjects, as well as between female and male subjects.
**Model features assessment.** Proceeding to the examination of bias in the model's features, the kernel density estimation plots presented in Figure 4 show the density distribution of each age, race, and biological sex subgroup as generated by PCA in its first four modes. These plots include all subjects available in our test set. Here, we can infer that the PCA modes of primary interest are modes 1, 2 and 3, as they show the strongest separation between age groups, aligning with the model's training objective. Therefore, we are particularly interested in examining subgroup differences in these modes. It is nevertheless worth noting that in PCA mode 4 there is a clear separation between racial subgroups. However, these disparities might not be of primary concern as these features may not be
Figure 3: Left: MAE, considering only a random sample of 126 subjects for each subgroup. Right: Relative difference in brain age prediction performance across patient subgroups. Difference calculated in relation to the average absolute error across all subjects (i.e. all random samples, with a total of 756 subjects). For both plots, the error bars were created by repeating the random sampling ten times and calculating the standard deviation.
Figure 2: Age distribution of a random sample (n=126) from each racial subgroup, separated by biological sex. Overlapping lines show the probability density curves.
informative for age prediction.
The adjusted p-values from the two-sample Kolmogorov-Smirnov tests, conducted to compare the distributions of each pair of subgroups, can be found in Table 1 of the Appendix. Similarly to the procedure described above, the tests in which all subjects available in the test set were used yielded notably low p-values and rendered almost all pairwise comparisons statistically significant, with only three exceptions: White vs Black in mode 2, Asian vs White in mode 3, and Black vs Asian in mode 3. The adjusted p-values from the new Kolmogorov-Smirnov tests, including only an equal-sized sample of each subgroup, are shown in Table 2. When looking at the first three PCA modes and comparing racial subgroups, we find five of the nine pairwise comparisons between marginal distributions to be statistically significant. For biological sex, on the other hand, two of the three comparisons were statistically significant.
## 4 Discussion and conclusion
In this study, we aimed to thoroughly investigate the potential race and biological sex bias in a model for brain age prediction from MRI predominantly trained on White subjects. The statistical tests conducted to evaluate the model's absolute performance reveal statistically significant differences for Black subjects, compared to both Asian and White subjects, as well as differences between male and female subjects. When looking back at the model's average performance per subgroup (Figure 3), we can observe the same evidence, concluding that the
Figure 4: Kernel density estimation plots depicting the density distribution of each age, race, and biological sex subgroup across the first four PCA modes of the feature space.
negatively affected groups are Black and male subjects. One possible explanation might be the fact that these are the two most underrepresented groups in our training set, which contained 582 male (versus 643 female) subjects, and only 14 Black (versus 451 White and 64 Asian) subjects. The racial distribution of the training set accounts only for the IXI dataset, as race information was not available for the Cam-CAN dataset; here, we assumed that Cam-CAN is predominantly White. Additionally, the results of our statistical analysis of the model's features suggest that some of the features that encode information useful for age prediction also allow for the separation of both racial and biological sex subgroups.
In practice, recent research primarily focuses on assessing the correlation between the brain age gap and neurological disorders or clinical risks. This gap represents the model's prediction error, which can be attributed to noise (model accuracy, data quality) and physiology. When evaluating the latter, it is crucial to distinguish disorder-related changes from inherent biological differences due to sex or ethnicity. This study reveals statistically significant disparities of one to two years on average between ethnicity subgroups. Depending on the biomarker application, these deviations matter. For instance, Cole et al. (2018) [10] found a 6.1% rise in mortality risk between ages 72-80 per extra predicted brain year. Accounting for ethnicity could be vital in such cases.
Lange et al. [6] have previously reported that the metrics used to evaluate brain age prediction performance, including MAE, are significantly affected by discrepancies in the age range of the training and testing datasets. One limitation of our study is hence the limited age range of UK Biobank (44-82) - our test set - when compared to the broad range encompassed by the training set (18-88), which is desired for an age prediction model. As a consequence, we might observe a lower overall age prediction performance than the state-of-the-art. Nevertheless, given that our primary goal was to compare the model's performance across subgroups, and that the age range is similar across the random samples of each
\begin{table}
\begin{tabular}{l|c||c|c|c|c} \hline \hline \multicolumn{2}{c}{Age 40-60/60-90} & Asian/White & Black/Asian & White/Black & Female/Male \\ \hline PCA mode 1 & \(<\)0.0001 & 0.0065 & 0.011 & \(<\)0.0001 & 0.39 \\ PCA mode 2 & \(<\)0.0001 & 0.011 & 0.0065 & 1 & 0.006 \\ PCA mode 3 & \(<\)0.0001 & 0.52 & 0.52 & 0.13 & 0.0037 \\ PCA mode 4 & 0.09 & 0.25 & \(<\)0.0001 & \(<\)0.0001 & 0.025 \\ \hline \hline \end{tabular}
\end{table}
Table 2: P-values resulting from two-sample Kolmogorov-Smirnov tests which compared marginal distributions from the pairs of subgroups indicated, across the first four PCA modes. Results including a random sample of each subgroup, all with equal size. The p-values were adjusted for multiple testing using the Benjamini-Yekutieli procedure. Significance level is set to 0.05. Statistically significant results are coloured with red.
test subgroup (Figure 2), we can assume that this evaluation remains meaningful despite age range variations in training and test sets.
Another limitation of our study is the use of a single model type, one combination of datasets and a specific type of input features (T1-weighted MRI scans). However, we believe that our findings are relevant to further motivate a systematic bias assessment, including a diverse range of commonly employed models, such as other CNN models, ensembles, or simpler machine learning models like XGBoost, as these have been shown to have comparable performance to more complex DL models [25]. Another interesting avenue for exploration would be to examine whether the similar biases persist when employing MRI-derived features, e.g. white and grey matter maps or volumes of subcortical structures.
Our results suggest that training brain age prediction on imbalanced data leads to significant differences in subgroup performance. We call for comprehensive bias assessment in other brain age prediction models, as these have emerged as important diagnostic and prognostic clinical tools.
## 5 Acknowledgments
B.G. is grateful for the support from the Royal Academy of Engineering as part of his Kheiron Medical Technologies/RAEng Research Chair in Safe Deployment of Medical Imaging AI. C.P. gratefully acknowledges financial support provided by the UKRI London Medical Imaging & Artificial Intelligence Centre for Value Based Healthcare. |
2309.05586 | Extra invariant and plasma inhomogeneity to improve zonal flow | Zonal flows are known to diminish turbulent transport in magnetic fusion.
Interestingly, there is an adiabatic invariant that implies the emergence of
zonal flow. The paper shows that if this invariant is decreasing (due to some
external factors) then the emerging zonal flow is better. It is also shown that
the plasma inhomogeneity can lead to the decrease of the adiabatic invariant. A
simple condition for such decrease is found. | Alexander Balk | 2023-09-11T16:14:19Z | http://arxiv.org/abs/2309.05586v1 | # Extra invariant and plasma inhomogeneity to improve zonal flow
###### Abstract
Zonal flows are known to diminish turbulent transport in magnetic fusion. Interestingly, there is an adiabatic invariant that implies the emergence of zonal flow. The paper shows that if this invariant is decreasing (due to some external factors) then the emerging zonal flow is better. It is also shown that the plasma inhomogeneity can lead to the decrease of the adiabatic invariant. A simple condition for such decrease is found.
Fusion plasma, Transport barriers, drift/Rossby waves, Additional conservation.
## I Introduction
One of the problems of nuclear fusion is to decrease transport of heat and particles from the core of a fusion device to its walls. In devices with magnetic confinement, the decreased transport could be achieved by creating strong alternating poloidal flow (often called "zonal" flow in analogy with hydrodynamics). This is reviewed in [1; 2]. The possibility to decrease transport is based on the idea [3] that alternating zonal/poloidal flow interrupts ballistic transport (decreases mean free path) across the flow. At the same time, the poloidal flow does not contribute to the radial transport.
We consider zonal/poloidal flow not as the flow with exactly zero poloidal wave number (\(q=0\)), but as the turbulence whose poloidal wave number is much less than the radial wave number (\(|q|\ll|p|\)). The zonal flow turbulence is a continuous continuation of other plasma turbulence with the entire spectrum of wave vectors \(\mathbf{k}=(p,q)\). So, zonal flow that efficiently reduces transport, should have a large amount of large-scale energy tightly concentrated around the \(p\)-axis. This energy will eventually lead to the coherent zonal flow with exactly zero poloidal number (like the emergence of large vortex in 2D hydrodynamics), but this process is not considered in the present paper. The problem of interaction between the drift turbulence and the coherent zonal flow was studied for decades and is intensively studied now, which is reviewed in [4].
We suppose that a significant mode of plasma dynamics is represented by drift waves with the dispersion relation
\[\omega(\mathbf{k})=\frac{\nu+\mu k^{2}}{1+k^{2}}\,q, \tag{1}\]
where the wave vector \(\mathbf{k}=(p,q)\) is non-dimensionalized by the Larmor (Rossby) radius \(\rho\), and \(k^{2}=p^{2}+q^{2}\). The interaction of these waves can have different forms, and we do not need to specify it. The frequencies \(\mu\) and \(\nu\) can depend on the coherent part of the poloidal/zonal flow [5; 6; 7]. More about this is in Section III.
The waves similar to the plasma drift waves occur in different problems. As well known, the waves with the same dispersion relation are the Rossby waves in atmosphere and ocean [8]; then often \(\nu=\beta\rho\) (\(\beta\) is the Coriolis parameter) and \(\mu=0\). The dynamics of Rossby waves leads to the generation of zonal flows in atmospheres and oceans on several planets. An example of this phenomenon are Jupiter's stripes. Much slower waves (slower than the relatively slow Rossby waves) also appear in astrophysical magnetohydrodynamics [9; 10; 11; 12]. Such waves are often called the magnetic Rossby waves. In this case, \(\nu=0\) and \(\mu\rho^{3}=B_{0}^{2}/\beta\) (\(B_{0}\) is the basic zonal magnetic field, measured in the velocity units). These waves occur in the upper layer of Earth's liquid iron outer core ("ocean of the core" [13]). They also take place in Sun's tachocline.
It is interesting that the emergence of zonal/poloidal flow follows from the extra conservation. To describe this extra invariant, let us first recall the question considered by L. Boltzmann [14; 15] about binary collisions
\[\mathbf{p}_{1}+\mathbf{p}_{2} = \mathbf{p}_{3}+\mathbf{p}_{4}, \tag{2a}\] \[\mathbf{p}_{1}^{2}+\mathbf{p}_{2}^{2} = \mathbf{p}_{3}^{2}+\mathbf{p}_{4}^{2} \tag{2b}\]
(\(\mathbf{p}_{i}\) are the momentum vectors of the two particles before and after the collision), see also [16; 17]. Does there exist an independent quantity \(\phi(\mathbf{p})\) which is conserved in the collisions (2ab)
\[\phi(\mathbf{p}_{1})+\phi(\mathbf{p}_{2})=\phi(\mathbf{p}_{3})+\phi(\mathbf{p }_{4})? \tag{2c}\]
In other words, does there exist a function \(\phi(\mathbf{p})\) such that the equation (2c) holds for any vectors \(\mathbf{p}_{i}\) (\(i=1,2,3,4\)) bound by the relations (2ab)? And the function \(\phi(\mathbf{p})\) is not a mere linear combination of the momentum \(\mathbf{p}\) and energy \(\mathbf{p}^{2}\). Boltzmann had shown that such quantity does not exist.
Similarly, one can consider resonance interactions of drift/Rossby waves
\[\mathbf{k}_{1} = \mathbf{k}_{2}+\mathbf{k}_{3}, \tag{3a}\] \[\omega(\mathbf{k}_{1}) = \omega(\mathbf{k}_{2})+\omega(\mathbf{k}_{3}) \tag{3b}\]
(\(\mathbf{k}_{i}\) are the wave vectors) and pose the question: Does there exist an independent quantity \(\eta(\mathbf{k})\) which is conserved in the interactions (3ab)
\[\eta(\mathbf{k}_{1})=\eta(\mathbf{k}_{2})+\eta(\mathbf{k}_{3})? \tag{3c}\]
In other words, does there exist a function \(\eta(\mathbf{k})\) such that the equation (3c) holds for any vectors \(\mathbf{k}_{i}\) (\(i=1,2,3\))
bound by the relations (3ab)? And the function \(\eta({\bf k})\) is not a mere linear combination of the momentum \({\bf k}\) and energy \(\omega({\bf k})\). For the dispersion relation (1) such function does exist, namely [18; 19]
\[\eta({\bf k})=\arctan\frac{p+q\sqrt{3}}{k^{2}}-\arctan\frac{p-q\sqrt{3}}{k^{2}}\,. \tag{4}\]
Thus, in addition to the energy and momentum, a system of drift/Rossby waves has an extra invariant \({\cal I}=\int\eta_{\bf k}N_{\bf k}\,d{\bf k}\). The function \(N_{\bf k}=\varepsilon_{\bf k}/\omega_{\bf k}\) is the wave action spectrum (or the phase space density of quasi-particles), \(\varepsilon_{\bf k}\) is the energy spectrum. (To have positive \(N_{\bf k}\), one needs to restrict wave vectors \({\bf k}\) to half-plane \(q>0\).)
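The statement can be checked numerically: fixing two wave vectors up to one free component and solving the frequency-resonance condition yields a triad on which \(\eta\) closes to round-off, while a generic function of \(\mathbf{k}\) would not. The parameter values and the root bracket in the sketch below are chosen for illustration only.

```python
import numpy as np
from scipy.optimize import brentq

nu, mu = 1.0, 0.3                        # illustrative values in the dispersion (1)

def omega(p, q):
    k2 = p * p + q * q
    return (nu + mu * k2) * q / (1.0 + k2)

def eta(p, q):                           # extra conserved quantity, Eq. (4)
    k2 = p * p + q * q
    r3 = np.sqrt(3.0)
    return np.arctan((p + r3 * q) / k2) - np.arctan((p - r3 * q) / k2)

# Resonant triad k1 = k2 + k3, omega(k1) = omega(k2) + omega(k3):
# fix k2 = (p2, q2) and q3, then solve the frequency mismatch for p3.
p2, q2, q3 = 2.0, 1.0, 0.7

def mismatch(p3):
    return omega(p2 + p3, q2 + q3) - omega(p2, q2) - omega(p3, q3)

p3 = brentq(mismatch, -2.0, -0.5)        # bracket chosen by inspection
p1, q1 = p2 + p3, q2 + q3

print(mismatch(p3))                                  # ~ 0 by construction
print(eta(p1, q1) - eta(p2, q2) - eta(p3, q3))       # ~ 0: eta closes on the triad
```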
The extra invariant stems from the integrability work by V. Zakharov and E. Shulman, published in a series of papers throughout 1980s (in particular, [20; 21]). They found that conservation in resonance interactions implies cancellation of small divisors, and therefore, leads to an additional approximate invariant of the corresponding Hamiltonian system. An important feature of their theory is that one can make physical predictions without knowing the exact form of nonlinearity. It is only required that the wave dynamics is Hamiltonian (which is usually the case for a physical system). Some aspects of this theory turned out [22] to be related to the web geometry [23]. This allowed to show [17] that the Rossby wave extra invariant \({\cal I}\) (or the extra conserved quantity \(\eta\)) is _unique:_ All invariants are linear combinations of the energy, the enstrophy, and the extra invariant \({\cal I}\).
The extra conservation implies the emergence of zonal flow [24], see also Section II. Here we will see even more: If the extra invariant is decreasing with time (due to some external factors), then the emerging zonal flow is better; it diminishes transport even more efficiently.
The extra invariant and its physical implications were established through the work of several people, in particular, [25; 26; 27; 28; 29; 30; 31; 32; 33]. In [33], the adiabatic invariant \(I\) was called "zonostrophy" ("zonostrophic instability" is unrelated).
## II Decreasing extra invariant leads to efficient zonal flow.
First, let us note the energy accumulation at large scales. This is obvious for Rossby waves when \(\mu=0\) and \(\nu\neq 0\); then the energy follows the inverse cascade. But even in general, the energy accumulation at large scales appears to be true. For instance, in the classic 3-dimensional hydrodynamic turbulence, the energy follows the direct cascade, but the large scales carry most of the energy (the Kolmogorov spectrum has infrared divergence). Similar situation occurs for the gravity waves on the ocean surface [39]. This system conserves two integrals: the energy and the wave action. The wave action follows the inverse cascade, while the energy follows the direct cascade. Still, most of the energy is carried by large scales (the Kolmogorov-Zakharov energy spectrum, determined by the wave action flux, has infrared divergence). In both of these situations, most of the energy flows towards large \(k\) and dissipates there. However, a small fraction of the energy reaches large scales, and the energy accumulates there. Similar phenomenon seems to hold [11; 12] for the turbulence of magnetic Rossby waves, when \(\nu=0\) and \(\mu\neq 0\). Then the energy and the enstrophy trade places; and so, there is direct cascade of energy, but the energy accumulates at small \(k\). When both parameters \(\mu\) and \(\nu\) in (1) are non-zero, then there is no clear direction for the energy cascade, but presumably, the energy accumulates at large scales.
Now recall the well known conserved quantities for the system of interacting drift waves. These are the energy \({\cal E}\) and the zonal momentum \({\cal M}\)
\[{\cal E}=\int\omega_{\bf k}\;N_{\bf k}\;d{\bf k},\qquad{\cal M}=\int q\;N_{\bf k }\;d{\bf k}. \tag{5}\]
The enstrophy is a linear combination of \({\cal E}\) and \({\cal M}\). The \(p\)-momentum is not a real physical invariant because the function \(p/\omega_{\bf k}\) has a singularity (when \(q\to 0\)); see [32] for details. To show the emergence of zonal flow, we need to consider the following linear combination of the extra invariant \({\cal I}\) with \({\cal E}\) and \({\cal M}\)
\[\tilde{\cal I}={\cal I}-2\sqrt{3}\,\frac{{\cal E}-\mu\,{\cal M}}{\nu-\mu}=\int \tilde{\eta}_{\bf k}\,N_{\bf k}\,d{\bf k},\] (6a) where \[\tilde{\eta}_{\bf k}=\eta_{\bf k}-2\sqrt{3}\,\frac{\omega_{\bf k}-\mu\,q}{\nu -\mu}=\eta_{\bf k}-2\sqrt{3}\,\frac{p}{1+k^{2}}. \tag{6b}\]
Like \(\eta_{\bf k}\), the function \(\tilde{\eta}_{\bf k}\) is independent of \(\mu\) and \(\nu\). Let us write the modified extra invariant (6) in terms of the energy spectrum: \(\tilde{\cal I}=\int R_{\bf k}\,\varepsilon_{\bf k}\;d{\bf k}\), where the ratio \(R_{\bf k}=\tilde{\eta}_{\bf k}/\omega_{\bf k}\) shows how much extra invariant \(\tilde{\cal I}\) is carried by the unit amount of energy with wave vector \({\bf k}=(p,q)\). The contour plot of \(R_{\bf k}\) is presented in figure 1.
Figure 1 shows that the accumulated large scale energy should concentrate near the \(p\)-axis (poloidal/zonal flow). Indeed, if during the energy transfer towards large scales the energy accumulated away from the \(p\)-axis, then the extra invariant \(\tilde{\cal I}\) would significantly increase (in contradiction to the extra conservation). This is because -- according to the figure -- \(R_{\bf k}\) is small at large \(k\) but large at small \(k\), unless \(q\) is close to zero. This is also supported by the asymptotic behavior:
\[\frac{(\nu+\mu k^{2})R_{\bf k}}{8\sqrt{3}}=\left\{\begin{array}{ll}\frac{q^{2 }(5p^{2}+q^{2})}{5k^{8}}+O(k^{-6})&(k\to\infty),\\ \frac{q^{2}}{p^{2}(1+p^{2})^{2}}+O(q^{4})&(q\to 0).\end{array}\right. \tag{7}\]
Now suppose the extra invariant \(\tilde{\cal I}\) is decreasing because of some external factors, while the energy \({\cal E}\) is not decreasing. Then a lesser amount of the extra invariant is available for the spreading of the energy away from the \(p\)-axis, and the energy has to concentrate more tightly around the \(p\)-axis. This means that the emerging zonal flow
is more unidirectional, and it contributes less to the radial transport. In addition, the zonal flow interrupts the radial transport, decreasing the mean free path. It is worthwhile to note that the transport decrease occurs for transport by any plasma mode (not just by drift waves).
Actually, one needs to decrease the extra invariant as much as possible, without decreasing the energy. This is relatively easy because the energy kernel (1) and the extra invariant kernel (4) are quite different.
There are at least two ways to decrease the extra invariant without decreasing the energy.
First, one can introduce some dissipation and pumping in different areas of the \(\mathbf{k}\)-plane, so that the extra invariant \(\tilde{\mathcal{I}}\) is dissipated, while the energy \(\mathcal{E}\) is pumped. This approach was studied in [40].
The present paper reports another opportunity which is due to the inhomogeneity of fusion plasma.
## III Evolution of the invariant by the inhomogeneous wave kinetic equation
Let us describe the turbulence of the drift/Rossby waves by the wave action spectrum \(N(p,q,x,y,t)\) obeying the wave kinetic equation
\[\frac{\partial N}{\partial t}+\frac{\partial(\omega,N)}{\partial(\mathbf{k}, \mathbf{x})}=St[N]\,, \tag{8}\]
where we use the notation
\[\frac{\partial(\mathcal{F},\mathcal{G})}{\partial(u,v)}=\frac{\partial \mathcal{F}}{\partial u}\cdot\frac{\partial\mathcal{G}}{\partial v}\,-\,\, \frac{\partial\mathcal{F}}{\partial v}\cdot\frac{\partial\mathcal{G}}{ \partial u}\]
for arbitrary functions \(\mathcal{F}\), \(\mathcal{G}\) of arbitrary variables \(u\), \(v\) (vector or scalar). The l.h.s. in equation (8) describes the slow refraction of waves, while the r.h.s. (stoss-term or collision integral) describes changes in \(N\) due to the 3-wave resonance interactions (3). We assume that the interactions occur locally in physical space, and so, the three interacting waves have the same plasma parameters (the inhomogeneity does not enter \(St[N]\)).
Let us consider the evolution, under equation (8), of the integral
\[\tilde{\mathcal{I}}=\int\tilde{\eta}(p,q)\;N(p,q,x,y,t)\;dp\,dq. \tag{9}\]
We multiply equation (8) by \(\tilde{\eta}\) and integrate over \(d\mathbf{k}\). Since the quantity \(\tilde{\eta}\) is conserved in the 3-wave resonance interactions, the term \(St[N]\tilde{\eta}\) integrates to zero. The l.h.s. in (8) leads to the flow of \(\tilde{\mathcal{I}}\) in the physical plane (the divergence of the flux) and to a source/sink term
\[\frac{\partial\tilde{\mathcal{I}}}{\partial t}+\frac{\partial}{\partial \mathbf{x}}\cdot\int\frac{\partial\omega}{\partial\mathbf{k}}\,\tilde{\eta}\,N \,d\mathbf{k}=\int S\,N\,d\mathbf{k} \tag{10}\]
with the kernel
\[S=\frac{\partial(\omega,\tilde{\eta})}{\partial(\mathbf{k},\mathbf{x})}=-\frac{\partial\omega}{\partial x}\frac{\partial\tilde{\eta}}{\partial p}=\frac{p\,(\nu^{\prime}+k^{2}\mu^{\prime})}{(1+k^{2})^{3}[(p+\sqrt{3}q)^{2}+k^{4}][(p-\sqrt{3}q)^{2}+k^{4}]}\,, \tag{11}\]
where the frequencies \(\mu\) and \(\nu\) in the dispersion relation (1) slowly depend on the radial coordinate \(x\); prime denotes the \(x\)-derivative.
If the function \(\tilde{\eta}\) were replaced by \(\omega\), then the last integral in (10) would vanish, and we would have the energy conservation (energy flows throughout the \(xy\)-plane). So, the inhomogeneous part of the kinetic equation can decrease the extra invariant \(\tilde{\mathcal{I}}\), while the energy \(\mathcal{E}\) is always conserved by the entire kinetic equation.
The parameters of the dispersion relation (1) depend on the zonal flow velocity \(U(x)\)[5; 6; 7]
\[\mu=U(x)/\rho,\qquad\nu=\rho[\kappa+U^{\prime\prime}(x)]+U(x)/\rho\,. \tag{12}\]
The radial \(x\) and poloidal \(y\) coordinates are considered in a generalized sense: the radial direction is the direction of the anti-gradient of some quantity \(Q\), and the poloidal direction is orthogonal to the radial direction and to the confining magnetic field (the contour lines \(Q=\)const are not necessarily circles). The constant \(\kappa\) is determined by the local value of the gradient \(\nabla Q\) (\(\kappa\) is analogous to the \(\beta\)-parameter in hydrodynamics).
The \(U^{\prime\prime}\) term in the second expression (12) is often neglected, but its significance has recently been re-considered [5; 6; 7].
We see from (10)-(11) that the invariant \(\tilde{\mathcal{I}}\) would decrease if the drift wave turbulence spectrum had a significant asymmetric part
\[N_{a}(p,q,x,y,t)=N(p,q,x,y,t)-N(-p,q,x,y,t)\]
Figure 1: Contour plot of the ratio \(R_{\mathbf{k}}=\tilde{\eta}_{\mathbf{k}}/\omega_{\mathbf{k}}\) in logarithmic scale: the color represents the values of \(\ln(R_{\mathbf{k}})\). The figure is qualitatively the same for various values of the parameters \(\mu\) and \(\nu\) in (1); for this particular plot, \(\mu=1,\nu=2\). The graph is shown only for positive \(p,q\), due to the symmetries \(R(p,q)=R(-p,q)=R(p,-q)\).
\[[\rho^{2}U^{\prime\prime\prime}+(1+k^{2})U^{\prime}]N_{a}<0 \tag{13}\]
(most of the factors in (11) are positive and cancel out). The condition (13) is especially simple if the \(U^{\prime\prime}\) term in (12) is neglected. Then \(\tilde{\mathcal{I}}\) decreases when \(U^{\prime}N_{a}<0\). This means: if the zonal flow velocity \(U(x)\) is decreasing, the turbulence spectrum should be skewed towards positive \(p\) (the spectrum \(N\) should be bigger for positive \(p\) than for negative \(p\)); if the zonal flow velocity \(U(x)\) is increasing, the turbulence spectrum should be skewed towards negative \(p\). In general, the condition (13) involves the length scale. The condition (13) can be used to create a transport barrier at a certain location.
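To make this criterion concrete, the following Python/NumPy sketch evaluates the source term of Eq. (10) with the kernel \(S\) of Eq. (11), using \(\mu^{\prime}=U^{\prime}/\rho\) and \(\nu^{\prime}=\rho U^{\prime\prime\prime}+U^{\prime}/\rho\) from Eq. (12) (treating \(\kappa\) as locally constant). The values of \(\rho\), \(U^{\prime}\), \(U^{\prime\prime\prime}\), and the toy spectrum skewed towards positive \(p\) are illustrative assumptions, not data from the paper; with a decreasing zonal flow (\(U^{\prime}<0\)) and such a skew, the computed rate comes out negative, i.e., the extra invariant decreases.

```python
import numpy as np

rho = 1.0                      # assumed length-scale parameter
Up, Uppp = -0.5, 0.0           # assumed U'(x) < 0, U'''(x) neglected
mu_p = Up / rho                # mu' from Eq. (12)
nu_p = rho * Uppp + Up / rho   # nu' from Eq. (12), kappa taken locally constant

def S(p, q):
    """Source kernel of Eq. (11)."""
    k2 = p**2 + q**2
    den = (1 + k2)**3 * ((p + np.sqrt(3)*q)**2 + k2**2) * ((p - np.sqrt(3)*q)**2 + k2**2)
    return p * (nu_p + k2 * mu_p) / den

p = np.linspace(-5.0, 5.0, 401)
q = np.linspace(0.01, 5.0, 200)
P, Q = np.meshgrid(p, q, indexing="ij")
N = np.exp(-(P - 0.8)**2 - (Q - 1.0)**2)        # toy spectrum skewed towards positive p
rate = np.sum(S(P, Q) * N) * (p[1] - p[0]) * (q[1] - q[0])
print("d(I~)/dt (source term only) =", rate)    # negative for this U' and skew
```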
## IV Conclusion
The dynamics of interacting drift waves has an adiabatic invariant; Section I. The presence of this extra invariant implies the emergence of zonal flow, which diminishes the transport of heat and particles (by any plasma mode, not just by drift waves), and so serves as a transport barrier. The present paper makes two points:
1). If the extra invariant is decreasing (due to some external factors), while the energy is not decreasing, then the emerging zonal flow is a better transport barrier; Section II. The extra invariant kernel differs so much from the energy kernel that it is relatively easy to decrease the extra invariant without decreasing the energy.
2). The plasma inhomogeneity can decrease the extra invariant, while preserving the energy; Section III. To see this, we have applied the inhomogeneous wave kinetic equation (the three waves involved in the collision integral having the same plasma parameters) and derived a simple condition (13) for the decrease of the extra invariant.
|
2309.04698 | Advancements in Upper Body Exoskeleton: Implementing Active Gravity
Compensation with a Feedforward Controller | In this study, we present a feedforward control system designed for active
gravity compensation on an upper body exoskeleton. The system utilizes only
positional data from internal motor sensors to calculate torque, employing
analytical control equations based on Newton-Euler Inverse Dynamics. Compared
to feedback control systems, the feedforward approach offers several
advantages. It eliminates the need for external torque sensors, resulting in
reduced hardware complexity and weight. Moreover, the feedforward control
exhibits a more proactive response, leading to enhanced performance. The
exoskeleton used in the experiments is lightweight and comprises 4 Degrees of
Freedom, closely mimicking human upper body kinematics and three-dimensional
range of motion. We conducted tests on both hardware and simulations of the
exoskeleton, demonstrating stable performance. The system maintained its
position over an extended period, exhibiting minimal friction and avoiding
undesired slewing. | Muhammad Ayaz Hussain, Ioannis Iossifidis | 2023-09-09T06:39:38Z | http://arxiv.org/abs/2309.04698v1 | Advancements in Upper Body Exoskeleton: Implementing Active Gravity Compensation with a Feedforward Controller +
###### Abstract
In this study, we present a feedforward control system designed for active gravity compensation on an upper body exoskeleton. The system utilizes only positional data from internal motor sensors to calculate torque, employing analytical control equations based on Newton-Euler Inverse Dynamics. Compared to feedback control systems, the feedforward approach offers several advantages. It eliminates the need for external torque sensors, resulting in reduced hardware complexity and weight. Moreover, the feedforward control exhibits a more proactive response, leading to enhanced performance. The exoskeleton used in the experiments is lightweight and comprises 4 Degrees of Freedom, closely mimicking human upper body kinematics and three-dimensional range of motion. We conducted tests on both hardware and simulations of the exoskeleton, demonstrating stable performance. The system maintained its position over an extended period, exhibiting minimal friction and avoiding undesired slewing.
## 1 Introduction
In recent years, exoskeleton technology has emerged as a promising avenue for enhancing human mobility and assisting individuals with mobility impairments. These wearable robotic systems have shown great potential in various applications, ranging from augmenting strength and endurance for industrial workers to aiding patients with neurological disorders in regaining mobility. Among the key challenges the exoskeleton developers face is the effective management of gravity-related forces, which can be especially demanding in upper-body exoskeletons due to their complex kinematics and range of motion. As the designed exoskeleton is aimed towards rehabilitating patients, it has to be lightweight, and the wearer should experience the motion smoothly and with minimal muscle effort.
To address this challenge, the present study focuses on the development of an innovative feedforward control system for active gravity compensation on an upper body exoskeleton. Since the reference is time-varying and not known a priori, the torque has to be computed at every instant of time; therefore, a feedforward controller is chosen. Unlike traditional feedback torque control systems, which rely on external sensors and whose error-driven corrective actions can introduce jerk and integral windup, the feedforward approach aims to predict and counteract gravitational forces using only the positional data obtained from the motor's internal sensors. This unique feature significantly reduces hardware complexity and weight, paving the way for more practical and efficient exoskeleton designs.
The proposed control system leverages analytical control equations based on Newton-Euler Inverse Dynamics, enabling real-time calculation of the corresponding torque to counteract gravity. By avoiding the need for external torque sensors,
the system achieves a more proactive and responsive control, leading to improved performance and stability during operation.
The upper body exoskeleton at the core of this study boasts a lightweight and ergonomic design, mimicking human upper body kinematics and providing a three-dimensional range of motion. The system's four Degrees of Freedom offer versatility and adaptability, making it suitable for various activities and applications.
To evaluate the effectiveness of the feedforward control system, comprehensive testing was conducted on both hardware and simulations of the designed exoskeleton. The system's ability to maintain its position for an extended period, even in the presence of minimal friction, was observed, demonstrating the efficacy and robustness of the developed control mechanism, which is discussed in Section 7.
Overall, the implementation of a feedforward control system for active gravity compensation on an upper body exoskeleton holds great promise in advancing the field of wearable robotics as it can be implemented on most manipulator arms as they have position sensors by default. Its potential to enhance human-machine interactions and its application in rehabilitation and assistive technologies present exciting prospects for addressing real-world mobility challenges and improving the quality of life for a diverse range of users.
## 2 Related Works
The gravity compensation has been studied and implemented on several rehabilitative exoskeletons, both upper-body and lower-body, and in both the active and passive domains. The authors of [1] coined the terms Zero-G and One-G in the context of the upper body exoskeleton (CyberForce Exoskeleton), where One-G meant that only the weight of the rehabilitative exoskeleton is compensated, whereas Zero-G meant that both the weight of the exoskeleton and the weight of the patient's arm are compensated. They calculated the amount of force exerted by the robot using haptic feedback, discretizing the workspace into cubes and calculating the force at each of their vertices. Afterward, the authors performed electromyography to study muscle activity and fatigue in both Zero-G and One-G scenarios in virtual environments, and it was found that muscle fatigue in the Zero-G scenario was lower than in the One-G scenario and almost similar to the fatigue when the subject is resting. In our paper, we use the same Zero-G and One-G terminology as the authors of that paper.
In [2], the authors looked into the improvement in the work area of the hemiparetic arm by performing passive gravity-compensated reach training (using the Freebal device) on seven chronic stroke patients for 6 weeks in 18 half-hour sessions, and they reported a mean increase in the Fugl-Meyer Assessment of 3 points, which indicates improved motor functioning, balance, and joint functioning in patients suffering from post-stroke hemiplegia by means of gravity compensation.
In [3], the authors discussed the modeling, control, and kinematic and design evaluation of the HARMONY exoskeleton. The control system was based on a recursive Newton-Euler algorithm, formulated as part of the dynamic model of the robot, with a feed-forward torque that compensates for the robot dynamics. The exoskeleton has five active degrees of freedom and one degree of freedom each for the elbow and wrist joints using Series Elastic Actuators. The feedback of those actuators was utilized by a PID controller to control the torque output.
The authors of [3] in [4] presented a strategy for the control of the shoulder mechanism of an upper-body exoskeleton (HARMONY) for promoting the scapulohumeral rhythm, using the inverse dynamics model to compensate for the dynamics of the robot (including the gravity force). According to the authors, the main contributor to the torque required to provide the therapeutic movement was the effect of gravity, since the movement has low velocity and the inertial forces can therefore be ignored. Firstly, it was confirmed in every configuration that the feed-forward torque with a zero-torque reference compensated the majority of the weight of the robot against gravity. To constrain the shoulder mechanism, a coupling torque was added to follow the upper arm link with the given angular ratio. The trajectory of the shoulder mechanism with respect to the upper arm link angles was tracked throughout the elevation with a nearly zero coupling torque. On the contrary, when a force is applied to the shoulder girdle mechanism while the operator elevates the arm (which mimics an abnormal scapulohumeral rhythm), the shoulder mechanism exerts a gentle force to recover a normal coordinated angle safely.
In [5], the authors implemented feedforward, model-based arm weight compensation on the upper and lower arm using the rehabilitation robot ARMin. They performed an evaluation of their requirements, according to which their exoskeleton should provide freedom of movement, no additional disturbances, scalability, and applicability to other systems. Afterward, the experimental results were verified using EMG measurements. They installed EMG electrodes on 6 different muscles and performed EMG measurements in 4 different positions on 3 subjects. They found that weight compensation reduces the effort of the subject wearing the weight-compensated robot by an average of 26% across the whole workspace. Their feedforward, model-based weight compensation method took the torque data from the robot as the input and performed the suitable actuation. Their method is quite similar to our approach; the only difference is that their feedforward control system is based on torque inputs, which are then used to perform the weight compensation.
The manufacturers and authors of the HomeRehab Robotic Device [6] and [7] have implemented analytical and machine learning methods to perform gravity compensation. Their 3 DoF robot, which consists of closed-chain kinematics, is not an actual wearable exoskeleton, but it tries to accomplish the task of rehabilitation of stroke patients as well as perform gravity compensation in 2 or 3 dimensions [6]. Their control system is based on feedback from an external, low-cost force sensor in the end effector. Afterward, they devised analytical equations for gravity compensation and compared them with their PID controller and several machine learning algorithms. According to their results, it was difficult to ascertain which performed better for the determination of gravity compensation torques: ML methods or analytical equations. Analytical methods can calculate the gravity compensation torque even outside the workspace, whereas ML methods require training data to learn the compensating torques and cannot perform outside the workspace they are trained for. This drawback is essentially an overfitting problem, but it can still be mitigated by using deep learning methods and more training data.
In the current work, we developed a 4 DoF exoskeleton that performs active gravity compensation in 3 DoF over a 3-dimensional workspace by implementing a feedforward control system that uses only the positional inputs from the motors' internal sensors to control the output torques. The above-mentioned works either used feedback controllers (PD, PID, etc.) with external sensors or used feedforward controllers with internal torque sensors for performing active or passive gravity compensation.
Even though in our case, using internal positional sensors and a feedforward controller made the modeling equations of the system more complex, it also contributed to making the system more lightweight and responsive.
## 3 Theory
To determine the joint torques and forces necessary to generate the desired joint positions' corresponding acceleration, we employ the concept of inverse dynamics. In this study, we have adopted the Newton-Euler Inverse Dynamics approach, which relies on the forces exerted on each individual link. Its recursive computational structure, particularly its treatment of rotational dynamics, makes it a suitable choice [8].
For an \(n\)-DoF rigid serial link manipulator, the Newton-Euler Inverse Dynamics equation can be formulated as shown below (the external wrench term vanishes when no external forces act on the end-effector):
\[\tau=M(\theta)\ddot{\theta}+C(\theta,\dot{\theta})+F(\dot{\theta})+G(\theta)+J (\theta)^{T}W \tag{1}\]
In this equation:
* \(\theta\), \(\dot{\theta}\), and \(\ddot{\theta}\) represent \(N\times 1\) vectors representing joint positions, velocities, and accelerations, respectively.
* \(M(\theta)\) is the \(N\times N\) symmetric joint space inertia matrix.
* \(C(\theta,\dot{\theta})\) is an \(N\times N\) matrix representing the symmetric Centrifugal and Coriolis forces.
* \(F(\dot{\theta})\) represents an \(N\times 1\) vector representing joint frictions.
* \(G(\theta)\) represents an \(N\times 1\) vector representing the gravitational influence on each joint.
* \(J(\theta)\) represents a \(6\times N\) matrix, which is the Jacobian Matrix transforming the force at the End-Effector (located at the \(N^{th}\) link) into joint torques.
* \(W\) is a \(6\times 1\) vector which consists of \([F_{x}F_{y}F_{z}M_{x}M_{y}M_{z}]^{T}\), where \(F\) and \(M\) represent the forces and moments, respectively, on the \(N^{th}\) link (End-Effector).
* \(\tau\) is the \(N\times 1\) vector consisting of torque at each joint
In Equation 1, we should take into consideration the different terms of that equation in the context of the exoskeleton. Based on the calculations, experimentation, and literature review, it is evident that the inertia term, \(M(\theta)\ddot{\theta}\), which depends upon the acceleration, length, mass, and radius of the human and the robot, has a very negligible impact on the overall gravity compensation, as all of those variables are quite small. This can be confirmed by [9], where the authors performed a mathematical analysis of the moment of inertia of the human arm at a fixed position while considering the human arm as the frustum of a cone (truncated cone). Similarly, the numeric values of the inertia generated by the human body are presented in [10], which also indicates the low values for the human arm.
Similarly, [11] studied the effects of velocity-dependent torques on the human arm at different velocities and came to the conclusion that when the movement of the human arm is slow, gravity plays a major role in the dynamics, whereas at higher velocities the velocity-related torques take over. Our work is aimed toward the rehabilitation of stroke patients using the exoskeleton; therefore, its overall velocity is kept low. We also observed the same: the influence of gravity is much higher than the effect of inertia or centrifugal force, since the exoskeleton design is lightweight and the subject cannot move his/her arm at a higher velocity.
The Coriolis and centrifugal forces represented by \(C(\theta,\dot{\theta})\) do not influence the robot in any meaningful way. The viscous and Coulomb frictions \(F(\dot{\theta})\), stemming from the internal friction of the motors, exist in the system, but their overall effect does not influence the gravity compensation. The term \(J(\theta)^{T}W\) is due to the end-effector, which the current exoskeleton does not have. The effect of gravity, represented by \(G(\theta)\) in Equation 1, has the most influence on the exoskeleton dynamics: each pose has a different orientation of the joints and therefore a different effect of gravity, which is calculated in Equations 2 to 5 and, with gyroscopic stabilization, in Equations 6 to 9.
## 4 Design of Exoskeleton
In this study, an upper-body exoskeleton is developed, which consists of 4 high power-to-weight ratio Brushless DC (BLDC) Motors on each side. The shoulder joints are represented by 3 Motors mounted on the top side of the Exoskeleton, and the fourth motor represents the elbow joint in Figures 1 and 2. Each motor has position, velocity, and current sensors. The overall assembly including both the left and right sides of the exoskeleton, the wearing assembly, power pack, electric and CAN bus connections, circuitry, and controllers weighs around 8 kg and is highly mobile and ergonomic as weight is quite distributed. The workspace of the exoskeleton is 3-dimensional, therefore it is capable of movement and can perform gravity compensation in multiple planes and directions. The exoskeleton is connected to the user's arms with cuffs at the bicep and forearm so that the user can control the robotic arm as well as the motion of the shoulder joints.
user. Consequently, a Feedforward Controller was devised due to its ability to offer the desired advantages. This controller derives gravity-compensating torques for each joint by utilizing input data from the internal position sensor of each motor.
One of the key advantages of implementing the Feedforward Controller is its obviation of the need for an additional external sensor in the exoskeleton hardware, such as a wearable torque sensor. This decision not only keeps costs and complexity at a minimum but also enhances ergonomic considerations.
Furthermore, [11] conducted a comparison between open-loop (feedforward) and closed-loop (feedback) controllers, highlighting that open-loop controllers, which precisely compute joint torques analytically, exhibit superior reactivity when compared to their feedback (closed-loop) counterparts. This equates to reduced computation time.
Additionally, since the system's dynamics were meticulously taken into account during the derivation of the system's modeling equations in the exoskeleton design, it effectively overcame the typical limitation of feedforward systems, which necessitate careful consideration of all system parameters to be controlled.
The inverse dynamics equations employed in this scenario are outlined below. When the subject is stationary, particularly when seated or immobile, the associated torque for each motor joint is determined as follows. These equations are specifically applicable when the robot's design adheres to the DH Parameters, as detailed in Table 1.
Figure 2: Exoskeleton with the wearer
Figure 3: Feedforward Scheme for the control of Exoskeleton for Equations 2 to 5
\[\tau_{1} =c_{1}g[d_{1}(m_{1}+m_{2}+m_{3}+m_{4})\] \[+l_{2}(m_{2}+m_{3}+m_{4})\sin(\theta_{2})\] \[+l_{3}(m_{3}+m_{4})\cos(\theta_{3})\] \[+l_{4}m_{4}\cos(\theta_{3}+\theta_{4})]\sin(\theta_{1}) \tag{2}\] \[\tau_{2} =c_{2}g[d_{2}(m_{2}+m_{3}+m_{4})+l_{3}(m_{3}+m_{4})\] \[\cdot\cos(\theta_{3})+l_{4}m_{4}\cos(\theta_{3}+\theta_{4})]\sin( \theta_{2})\] (3) \[\tau_{3} =c_{3}g[d_{3}(m_{3}+m_{4})\sin(\theta_{3})\] \[+l_{4}(m_{4})\sin(\theta_{3}+\theta_{4})]\cos(\theta_{2})\] (4) \[\tau_{4} =c_{4}gd_{4}(m_{4})\sin(\theta_{3}+\theta_{4})\cos(\theta_{2}) \tag{5}\]
Where \(l_{n}\) is the length of the corresponding link, \(d_{n}\) is the distance to its center of gravity, g = 9.8 \(m/s^{2}\) is the acceleration due to gravity, \(m_{n}\), \(\theta_{n}\), and \(\tau_{n}\) are the mass, angular position, and torque, respectively, at the \(n\)-th joint, and \(c_{n}\) is a constant to adjust/vary the value of the torque for the \(n\)-th joint (by default, it should be kept at 1).
\(\theta_{n}\) is the angle the motor joint makes against the gravity vector.
\(\theta_{n}=[\theta_{1},-\pi/2,0,0]\) (for any \(\theta_{1}\)) corresponds to the straight downward position from joint 2 onwards (the position of minimum potential energy). \(l_{1},l_{2},l_{3},l_{4}\) represent the corresponding lengths of the links as shown in Figure 1.
For Joint 1, if the wearer is not moving, or by default, its axis is parallel with the gravity vector, and there is no need for gravity compensation in this case: with \(\sin\theta_{1}=0\) (i.e. its z-axis is parallel to the gravity vector), Equation 2 gives \(\tau_{1}=0\). Joint 2 is orthogonal to the subsequent joints 3 and 4. So initially, when Joints 3 and 4 are at the default position (i.e. \((\theta_{3},\theta_{4})=(0,0)\)), it works as a simple planar arm with the combined mass and lengths of all subsequent joints. But when Joints 3 and 4 are at some non-zero angle, the combined torque affecting Joint 2 should be smaller, since the moment arm is shorter than at the default position.
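For reference, Equations 2 to 5 can be evaluated directly in software. The following Python sketch implements them verbatim; the link lengths are taken from the \(a\) column of Table 1, while the masses, centers of gravity, and the example pose are placeholder values for illustration only, not the actual parameters of the exoskeleton.

```python
import numpy as np

G = 9.8  # m/s^2

def gravity_torques(theta, m, l, d, c=(1.0, 1.0, 1.0, 1.0)):
    """Feedforward gravity-compensation torques of Equations 2 to 5 (stationary wearer).
    theta: joint angles [th1..th4] in rad; m, l, d: per-link masses, lengths, centers of gravity."""
    th1, th2, th3, th4 = theta
    m1, m2, m3, m4 = m
    l1, l2, l3, l4 = l
    d1, d2, d3, d4 = d
    tau1 = c[0]*G*(d1*(m1+m2+m3+m4) + l2*(m2+m3+m4)*np.sin(th2)
                   + l3*(m3+m4)*np.cos(th3) + l4*m4*np.cos(th3+th4)) * np.sin(th1)
    tau2 = c[1]*G*(d2*(m2+m3+m4) + l3*(m3+m4)*np.cos(th3)
                   + l4*m4*np.cos(th3+th4)) * np.sin(th2)
    tau3 = c[2]*G*(d3*(m3+m4)*np.sin(th3) + l4*m4*np.sin(th3+th4)) * np.cos(th2)
    tau4 = c[3]*G*d4*m4*np.sin(th3+th4) * np.cos(th2)
    return np.array([tau1, tau2, tau3, tau4])

# Example call with assumed masses/centers of gravity (placeholders, not the real hardware values):
tau = gravity_torques(theta=[0.0, -np.pi/4, 0.3, 0.5],
                      m=[0.4, 0.5, 0.4, 0.3],
                      l=[0.05, 0.13, 0.30, 0.30],
                      d=[0.025, 0.065, 0.15, 0.15])
print(tau)  # tau[0] = 0 here because sin(theta1) = 0
```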
If the subject is wearing the exoskeleton and is mobile, then stabilization must be provided using the outputs of the gyroscopes mounted on the flat surface near the shoulder. The angles of pitch (bowing) and yaw (sideways movement) of the human body, as shown in Figure 4, must be accounted for; they are represented by \(\beta\) and \(\phi\) respectively. The following set of equations (Equations 6 to 9), for which the default angles are \((\beta,\phi)=(0,0)\), is built on top of the above-mentioned Equations 2 to 5 with the input from the gyroscopic sensors taken into account. In this case, \(\tau_{1}\) should be considered non-zero, as the joint 1 axis may or may not be aligned with the gravity vector; it is represented by \(\tau_{1m}\).
Figure 4: Bowing and Tilting, Represented by \(\beta\) and \(\phi\) respectively
\[\tau_{1m} =c_{1}g[d_{1}(m_{1}+m_{2}+m_{3}+m_{4})\] \[+l_{2}(m_{2}+m_{3}+m_{4})\sin(\theta_{2}+\phi)\] \[+l_{3}(m_{3}+m_{4})\cos(\theta_{3}+\beta)\] \[+l_{4}m_{4}\cos(\theta_{3}+\theta_{4}+\beta)]\sin(\theta_{1})\cos( \phi)\cos(\beta) \tag{6}\] \[\tau_{2m} =c_{2}g[d_{2}(m_{2}+m_{3}+m_{4})\] \[+l_{3}(m_{3}+m_{4})\cos(\theta_{3}+\beta)+l_{4}m_{4}\cos(\theta_{ 3}+\theta_{4}+\beta)]\] \[\cdot\sin(\theta_{2}+\phi)\cos(\theta_{1})\cos(\phi)\cos(\beta)\] (7) \[\tau_{3m} =c_{3}g[d_{3}(m_{3}+m_{4})\sin(\theta_{3}+\beta)\] \[+l_{4}(m_{4})\sin(\theta_{3}+\theta_{4}+\beta)]\cos(\theta_{2}+\phi)\] \[\cdot\cos(\theta_{1})\cos(\phi)\cos(\beta)\] (8) \[\tau_{4m} =c_{4}gd_{4}(m_{4})\sin(\theta_{3}+\theta_{4}+\beta)\] \[\cdot\cos(\theta_{2}+\phi)\cos(\theta_{1})\cos(\phi)\cos(\beta) \tag{9}\]
Where \(\tau_{nm}\) is the torque at the respective \(n\)-th joint.
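Equations 6 to 9 extend the previous sketch with the body pitch \(\beta\) and tilt \(\phi\) measured by the gyroscopes. The function below is again a direct, illustrative transcription of the equations; parameter values would be supplied as in the earlier example.

```python
import numpy as np

G = 9.8  # m/s^2

def gravity_torques_mobile(theta, beta, phi, m, l, d, c=(1.0, 1.0, 1.0, 1.0)):
    """Gravity-compensation torques of Equations 6 to 9 for a mobile wearer (bowing beta, tilting phi)."""
    th1, th2, th3, th4 = theta
    m1, m2, m3, m4 = m
    l1, l2, l3, l4 = l
    d1, d2, d3, d4 = d
    cb = np.cos(phi) * np.cos(beta)          # common cos(phi)cos(beta) factor
    tau1 = c[0]*G*(d1*(m1+m2+m3+m4) + l2*(m2+m3+m4)*np.sin(th2+phi)
                   + l3*(m3+m4)*np.cos(th3+beta) + l4*m4*np.cos(th3+th4+beta)) * np.sin(th1) * cb
    tau2 = c[1]*G*(d2*(m2+m3+m4) + l3*(m3+m4)*np.cos(th3+beta)
                   + l4*m4*np.cos(th3+th4+beta)) * np.sin(th2+phi) * np.cos(th1) * cb
    tau3 = c[2]*G*(d3*(m3+m4)*np.sin(th3+beta)
                   + l4*m4*np.sin(th3+th4+beta)) * np.cos(th2+phi) * np.cos(th1) * cb
    tau4 = c[3]*G*d4*m4*np.sin(th3+th4+beta) * np.cos(th2+phi) * np.cos(th1) * cb
    return np.array([tau1, tau2, tau3, tau4])
```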
## 6 Implementation of Feedforward Controller
In this section, we shall discuss the implementation of the control equations in Simulation and on actual hardware.
### Simulation and Data-Driven Model for Gravity Compensation Equation
The Feedforward Controller was first tested on an equivalent representation of the exoskeleton in Matlab and Simulink using the Robotics Toolbox by Peter Corke [12]. It comes with a wide range of functions that enable users to effortlessly visualize and regulate the motion and dynamics of individual serial-linked robotic manipulators. To create the linkages, the functions Link(dh, options) and SerialLink(L) were utilized, first to construct the individual links and then to connect them. The Robotics Toolbox utilizes the Denavit-Hartenberg (DH) parameters to assign the robotic link configuration and properties \([a_{n},\alpha_{n},d_{n},\theta_{n}]\). For the designed Exoskeleton, the DH parameter table is shown in Table 1. The dynamic parameters of the exoskeleton were defined afterward, including link mass, viscous friction, Coulomb friction, motor inertia, inertia matrix, gravitational vector, center of gravity, and gear ratio. The offset of \(-\pi/2\) rad is provided to joint 2 such that by default the exoskeleton hangs downward like a human arm from joint 2 onwards.
After defining the robot, the gravload function of Robotics Toolbox was used to simulate the effect of gravity from any starting position of the robot. The gravload function is based on RNE (Recursive Newton Euler) which calculates the inverse dynamics of the system under consideration. The output of the gravload function was used to compare the accuracy of the analytical function and Adaptive Network-based Fuzzy Inference System (ANFIS) controller in the simulation. The ANFIS controllers are based on the combination of Artificial Neural Networks(ANN) and Fuzzy Inference System(FIS). The ANN part does the classification of the input, whereas the FIS part of ANFIS decides the output based on linguistic models.
Then the model is given the command to move within the workspace, while the gravload function and the analytical equations (Equations 3 to 5) calculate the amount of torque exerted on every joint, at different joint angles and at certain intervals, under the influence of gravity. Using the torque data generated by the gravload function at different joint angles, an ANFIS controller model was trained. Afterward, a comparison was made between the outputs of the implemented gravload function, the analytical equations, and the ANFIS controller. Based on the comparison, it was found that the analytical equations performed much better than the ANFIS controller, almost identically to the benchmark gravload function. The resulting Root Mean Square Error (RMSE) is given in Section 7.1.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline & \(a(m)\) & \(\alpha(rad)\) & \(d(m)\) & \(\theta(rad)\) & \(offset(rad)\) \\ \hline \hline Joint 1 & 0.05 & \(-\pi/2\) & 0 & \(\theta_{1}\) & 0 \\ \hline Joint 2 & 0.13 & \(\pi/2\) & 0 & \(\theta_{2}\) & \(-\pi/2\) \\ \hline Joint 3 & 0.3 & 0 & 0 & \(\theta_{3}\) & 0 \\ \hline Joint 4 & 0.3 & 0 & 0 & \(\theta_{4}\) & 0 \\ \hline \end{tabular}
\end{table}
Table 1: DH Parameters of the Exoskeleton
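For readers without access to MATLAB, a comparable benchmark can be obtained in Python by differentiating the gravitational potential energy derived from the DH forward kinematics, which is conceptually what a gravity-load computation does. The sketch below uses the DH parameters of Table 1 together with assumed link masses, mid-link centers of mass, and gravity along the negative z-axis of the base frame; because these assumptions differ from the real hardware parameters and conventions, its numerical output is illustrative and will not reproduce the gravload values reported here.

```python
import numpy as np

# Standard DH parameters (a, alpha, d, offset) from Table 1.
DH = [(0.05, -np.pi/2, 0.0,  0.0),
      (0.13,  np.pi/2, 0.0, -np.pi/2),
      (0.30,  0.0,     0.0,  0.0),
      (0.30,  0.0,     0.0,  0.0)]
MASS = [0.4, 0.5, 0.4, 0.3]          # assumed link masses (kg), placeholders
G_VEC = np.array([0.0, 0.0, -9.8])   # gravity assumed along -z of the base frame

def dh_matrix(theta, a, alpha, d):
    ct, st, ca, sa = np.cos(theta), np.sin(theta), np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st*ca,  st*sa, a*ct],
                     [st,  ct*ca, -ct*sa, a*st],
                     [0.,     sa,     ca,    d],
                     [0.,     0.,     0.,   1.]])

def com_positions(q):
    """Centers of mass in the base frame, assumed at the midpoint of each link."""
    T, coms = np.eye(4), []
    for (a, alpha, d, off), qi in zip(DH, q):
        T_next = T @ dh_matrix(qi + off, a, alpha, d)
        coms.append(0.5 * (T[:3, 3] + T_next[:3, 3]))   # midpoint approximation
        T = T_next
    return coms

def potential_energy(q):
    return -sum(m * (G_VEC @ p) for m, p in zip(MASS, com_positions(q)))

def gravity_load(q, h=1e-6):
    """Gravity torques as the numerical gradient of the potential energy w.r.t. joint angles."""
    q, tau = np.asarray(q, float), np.zeros(4)
    for j in range(4):
        dq = np.zeros(4); dq[j] = h
        tau[j] = (potential_energy(q + dq) - potential_energy(q - dq)) / (2 * h)
    return tau

print(gravity_load([0.0, -np.pi/4, 0.3, 0.5]))
```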
### Implementation of Analytical Equation on Hardware
Following the verification of analytical equations through simulation trials, they were subsequently put into practice on the exoskeleton. A comprehensive hardware description can be found in Section 10. The controller setup involved utilizing an Arduino Uno equipped with a CAN bus shield, responsible for managing four BLDC Motors on each side through the CAN bus communication protocol. For an evaluation of the exoskeleton's stability during the execution of gravity compensation, please refer to Section 7.2.
## 7 Results
This section is divided into the simulation and hardware parts. In the simulation part, both the Analytical Equations for calculating the torque corresponding to each joint as stated in Equations 3 to 5, and the output generated by ANFIS controller were compared with the gravload function (mentioned in Section 6.1) of Robotics toolbox. Afterward, in the hardware part, the result of the stability analysis while performing the active gravity compensation at multiple joint positions of the exoskeleton is discussed.
### Results from Simulation
The joint torques calculated by the inverse dynamics function of the Robotics Toolbox (gravload) on the open kinematic chain (the Exoskeleton) were compared with the Analytical Equations 3 to 5 and with ANFIS for each joint, for 1053 different positions spanning the entire workspace of the Exoskeleton. The Root Mean Squared Error (RMSE) of the analytical function and of ANFIS was calculated to assess their accuracy, and it was inferred that the analytical function has negligible RMSE (for Joint 2, Joint 3, and Joint 4: \(6.06e^{-16}\), \(4.06e^{-16}\), and \(8.03e^{-17}\), respectively), whereas for the ANFIS controller the RMSE for Joint 2, Joint 3, and Joint 4 was \(1.71e^{-3}\), \(1.32e^{-3}\), and \(3.15e^{-4}\), respectively, which is significantly higher than for the analytical equations. ANFIS takes around 380 seconds on average to train to predict one joint torque from the simulation data on an AMD Ryzen 5950HX processor with 32 GB RAM and an Nvidia RTX 3080 GPU in Matlab.
### Results from Hardware
Gravity compensation means that the robot should hold its particular position without succumbing to the influence of gravity and collapsing. Therefore, it can be ascertained by verifying that there is no variation in position when active gravity compensation is implemented. In the first subplot of Figure 5, the positions of the three joints of the robot (Joints 2, 3, and 4) that are not parallel with the gravity vector are shown. They would fall to their lowest potential energy state without gravity compensation. However, when gravity compensation is applied, they stay at their respective positions, indicating that gravity's overall effect is nullified.
The middle section of Figure 5 highlights a disturbance in positions and velocities that corresponds to the deliberate manipulation of every joint by the user within the 17-second to 20-second timeframe, in which Joint 2 was elevated whereas Joints 3 and 4 were moved downwards (as seen in the position and velocity subgraphs). Following this movement, all the joints maintained their respective positions and their velocities were around zero.
In Figure 6, the stability analysis was performed for multiple joint orientations of the exoskeleton. Each position was held for about a minute, and then the joints were rotated to a new position and were tested for gravity compensation for that particular position before transitioning to a new position. In total, 13 different positions are shown here for reference. After moving to a new position, the joints held their position as there was no change in position, implying zero overall velocity. The video of hardware performing the gravity compensation at One-G is available at [13].
## 8 Discussion
This study focuses on the conceptualization and control system of a lightweight upper-body exoskeleton. The hardware was kept simple to reduce the weight, such as the use of high power-to-weight ratio BLDC Motors with built-in driver circuitry in the joints, lightweight carbon fiber for the link construction, and a decision to forgo external sensors due to the use of a Feedforward controller.
The analytical equations were opted for over the data-driven approach of the ANFIS controller due to the latter's lower accuracy. There are shortcomings of ANFIS controllers, such as the curse of dimensionality, which means that if the number of features increases, then it requires an exponentially longer time to train [14]. Also, ANFIS controllers are Multiple Input Single Output (MISO) systems, which means that we have to train one ANFIS for every joint, each taking the positions \((\theta_{1},\theta_{2},\theta_{3},\theta_{4})\) of all 4 Joints and calculating the torque \((\tau_{n})\) for that joint. If we add the stabilization data from the gyroscope, then it has more input features \((\theta_{1},\theta_{2},\theta_{3},\theta_{4},\beta,\phi)\) and would be much slower to train. Therefore, in our case, we have to put 4 ANFIS controllers into our microcontroller (each outputting the corresponding torque for one motor), which requires a bit more processing than the system of analytical equations. Since ANFIS needs the positional data and corresponding torques of the motors to train, data generation from the actual hardware was a time-consuming step, whereas in simulation the data and feature generation was easier to do compared to the hardware of the exoskeleton. The analytical function worked perfectly in simulation, as it generated almost negligible RMSE and the robot held its position under the influence of gravity even with minimal friction (emanating from the motors); the actual hardware was totally stable, as it did not drift from its given position for any given time frame.
## 9 Conclusion and Future Works
In this paper, we have successfully demonstrated the feasibility of gravity compensation on an upper body exoskeleton or similar robot using the Feedforward Analytical Function based on Newton-Euler Inverse Dynamic Equations. Through rigorous testing on both simulations and a physical hardware prototype of the exoskeleton, the controller based on these equations exhibited flawless performance. Notably, in comparison to the data-driven ANFIS controller, the analytical solution proved to be more accurate in the simulation, leading us to disregard the ANFIS controller for the exoskeleton hardware.
Looking ahead, if flexible or elastic links are employed in the system, the analytical solution may become impractical due to uncertainties introduced by elasticity. In such cases, data-driven approaches like ANFIS could be considered instead.
Figure 5: Stability analysis during the active gravity compensation of the Exoskeleton for two different positions
Currently, the exoskeleton prototype effectively compensates for its own weight (One-G). However, future iterations will require more powerful motors to enable compensation for the wearer's arm weight as well as the robot's weight (Zero-G). The gravity compensation equations will remain unchanged, with adjustments made only to the corresponding mass constants to accommodate the varying arm mass. While the exoskeleton controller currently provides gravity compensation, we propose the development of an additional feedback control loop to complement the existing feed-forward loop. This new loop will be designed to control the exoskeleton's position and velocity using external sensors such as IMU, EMG, or EEG sensors, or VR glasses. The objective is to create a cascade control system and determine if stroke patients, while wearing the exoskeleton, can effectively control it using external signals apart from their arm muscles. This would open up possibilities for rehabilitation or other tasks.
In conclusion, our research showcases the efficacy of the Feedforward Analytical Function in gravity compensation for an upper body exoskeleton and outlines a promising direction for further enhancing its control capabilities through the integration of external sensors in a cascade control system.
## 10 Appendix
### Hardware Specifications
#### 10.1.1 Motors
On each side of the exoskeleton, there are four BLDC Motors, three of them are AK-60-6 (Joints 1,3 and 4) and one AK-80-9 Motor (Joint 2) from T-motors.
The AK-60-6 is a BLDC motor designed for use in robotics applications. It is a high-performance motor with a voltage rating of 24 volts, a rated current of 7.4 amps, a peak torque of 9 Nm, and a weight of 315 grams. The T-Motor AK-80-9 is a larger derivative of the AK-60-6 and has a voltage rating of 48 volts, a rated current of 10.3 amps, a peak torque of 18 Nm, and a weight of 485 grams. They have a compact size, inbuilt driver circuitry, low weight, reliability, and a high power-to-weight ratio, making them suitable for use in small robots, robotic arms, or other mechanisms that require precise and high-force movement, such as an upper-body exoskeleton.
Figure 6: Stability analysis during the active gravity compensation of the Exoskeleton for multiple positions
#### 10.1.2 Links and Brackets
Lightweight Carbon Fiber tubes are used in the links and the brackets of the motor mounts. The 3D-printed Motor mounts are built for 190\({}^{\circ}\) (approx. 3.3 rad) movements. They are durable, have a high strength-to-weight ratio, corrosion resistance, and low thermal expansion.
#### 10.1.3 Controller, Communication, and Power Supply
The control system was implemented on an Arduino Uno which is based on the ATmega328P microcontroller. The programming was done using Arduino Integrated Development Environment(IDE) which supports a user-friendly simplified version of the C++ programming language. As the motors communicated via CAN bus protocol, the CAN shield was used on top of the microcontroller.
The whole setup was powered by a TATTU Smart LiPo battery (22.2 V, 6 cells, 222 Wh), which weighs around 1.4 kg. It is compact, lightweight, and has a high energy density, high output, and long lifespan.
|
2309.15959 | Linear Progressive Coding for Semantic Communication using Deep Neural
Networks | We propose a general method for semantic representation of images and other
data using progressive coding. Semantic coding allows for specific pieces of
information to be selectively encoded into a set of measurements that can be
highly compressed compared to the size of the original raw data. We consider a
hierarchical method of coding where a partial amount of semantic information is
first encoded a into a coarse representation of the data, which is then refined
by additional encodings that add additional semantic information. Such
hierarchical coding is especially well-suited for semantic communication i.e.
transferring semantic information over noisy channels. Our proposed method can
be considered as a generalization of both progressive image compression and
source coding for semantic communication. We present results from experiments
on the MNIST and CIFAR-10 datasets that show that progressive semantic coding
can provide timely previews of semantic information with a small number of
initial measurements while achieving overall accuracy and efficiency comparable
to non-progressive methods. | Eva Riherd, Raghu Mudumbai, Weiyu Xu | 2023-09-27T19:16:25Z | http://arxiv.org/abs/2309.15959v1 | # Linear Progressive Coding for Semantic Communication using Deep Neural Networks
###### Abstract
We propose a general method for semantic representation of images and other data using progressive coding. Semantic coding allows for specific pieces of information to be selectively encoded into a set of measurements that can be highly compressed compared to the size of the original raw data. We consider a hierarchical method of coding where a partial amount of semantic information is first encoded a into a coarse representation of the data, which is then refined by additional encodings that add additional semantic information. Such hierarchical coding is especially well-suited for semantic communication i.e. transferring semantic information over noisy channels. Our proposed method can be considered as a generalization of both progressive image compression and source coding for semantic communication. We present results from experiments on the MNIST and CIFAR-10 datasets that show that progressive semantic coding can provide timely previews of semantic information with a small number of initial measurements while achieving overall accuracy and efficiency comparable to non-progressive methods.
semantic communication, compressed sensing, compressed learning, neural network, classification
## I Introduction
We consider the general problem of (linear) progressive semantic representation of data using deep neural networks for efficient data storage and communication.
Semantic encoding means that we do not wish to store or transmit data in its raw form; instead, we wish to selectively encode certain meaningful information ("message") contained in the data. We consider the case where the message can be organized in a hierarchical sequence of categories. We seek to design a progressive encoding scheme where a coarse initial description of the message is augmented by refining descriptions. Given storage and communication constraints, our goal is to explore the tradeoffs between the amount of resources required for the initial coarse description and subsequent refinements.
### _Related Work_
The idea of progressive coding has been most well-developed in the area of image processing. The concept of progressive image coding or compression was originally popularized [1] for efficiently transmitting images over slow Internet connections. Standards such as JPEG 2000 [2] allowed for encoding and transmitting images in a gradual manner, allowing for the display of lower-resolution versions while higher-resolution details are progressively transmitted.
This first generation of progressive image coding methods was primarily based on wavelet and frequency-domain representations [3]. While some of these early works also attempted to take into account the human visual system to optimize the encoding [4], the ability to minimize perceptual distortions [5] has been significantly enhanced by the more recent introduction of convolutional neural networks [6].
In fact, the new capabilities from neural networks have led to image coding methods that combine progressive encoding with _semantic data representation_[7, 8], wherein each step in progressive coding offers an enhanced image by adding some meaningful information that was previously missing. Simultaneously, the introduction of deep neural networks has also led to a renewed interest in semantic information processing with various types of data [9, 10, 11]. The idea is to use neural networks to selectively extract meaningful pieces of information from raw data for storage, processing and transmission. In communication engineering, this represents a major departure [12] from the previously dominant Shannon model [13], in which semantics are ignored.
Recent work on _compressed learning_[14] explores extracting semantic information from images using only a small number of measurements. It is well-known from the classical theory of compressed sensing that natural data such as images [15, 16, 17] can be recovered from under-sampled measurements by taking advantage of sparsity. Compressed learning seeks to extract semantic information, rather than the image itself, from a minimal number of measurements [18, 19].
### _Contributions and Findings_
In this paper, we integrate the ideas of progressive coding and compressed learning in semantic communications. Specifically, we use linear compressed measurements (the benefits of linearity are explained later) to efficiently encode semantic information in a progressive fashion for classification purposes at the receiver. Thus, an initial small number of samples (measurements) is used to encode information for coarse classification, and later more samples are used to encode information for fine-grained classification. Deep neural networks can be used to train the projections for such measurements and to perform classification using these compressed measurements.
We report on a series of experiments on the MNIST and CIFAR-10 image datasets to illustrate this concept. In both experiments, we perform an initial coarse classification using a smaller number of samples, followed by a more detailed classification with more samples. Some key findings from these experiments are as follows.
1. We show that the raw signal data can be very significantly compressed into a small number of measurements to encode the semantic information of interest. This is consistent with the literature on semantic coding. Furthermore, the measurements involved only linear projections of the raw image data.
2. Our progressive classifiers are comparable in complexity (measured by the number of layers and neurons) and achieve a similar performance (measured by classification accuracy) to non-progressive classifiers from the literature using the same number of measurements. Of course, our progressive classifiers are also able to provide a quick preview of a coarse classification.
3. There is a tradeoff between the accuracy of the initial coarse classification and the number of measurements used for the coarse encoding. The less obvious observation is that useful levels of accuracy can be achieved with a surprisingly small number of measurements. As an extreme case, for the MNIST dataset we can make an initial prediction about an image label with \(90\%\) accuracy with _just one single linear measurement_.
## II Problem Statement
Let \(\mathbf{X}\in\mathbb{R}^{N}\) be a vector in a high-dimensional space such as a vectorized set of image pixels. Let \(\mathbf{A}_{k}\in\mathbb{R}^{M_{k}\times N},\ k=1\ldots K\), represent a sequence of measurement matrices that produce the sequence of measurements \(\mathbf{U}_{k}\doteq\mathbf{A}_{k}\mathbf{X}\). The measurements \(\mathbf{U}_{k},\ k=1\ldots K\), are transmitted over a noisy channel \(P(\mathbf{V}_{k}|\mathbf{U}_{k})\) and the resulting noisy measurements \(\mathbf{V}_{k},\ k=1\ldots K\) are processed by machine-learning or other prediction algorithms to produce a sequence of predicted labels \(\mathbf{\hat{Y}}_{1}\doteq g_{1}(\mathbf{V}_{1}),\ \mathbf{\hat{Y}}_{2} \doteq g_{2}(\mathbf{V}_{1},\mathbf{V}_{2}),\ \ldots,\ \mathbf{\hat{Y}}_{k} \doteq g_{k}(\mathbf{V}_{1},\mathbf{V}_{2},\ldots,\mathbf{V}_{k}),\ \ldots,\ \mathbf{\hat{Y}}_{K} \doteq g_{K}(\mathbf{V}_{1},\mathbf{V}_{2},\ldots,\mathbf{V}_{K})\).
The true labels \(\mathbf{Y}_{k}=f_{k}(\mathbf{X}),\ k=1\ldots K\) represent a sequence of refinements of semantic information contained in \(\mathbf{X}\), where \(\mathbf{Y}_{1}\) and \(\mathbf{Y}_{K}\) represents a very coarse-grained and fine-grained label respectively. Our goal is to eventually recover the fine-grained label \(\mathbf{Y}_{K}\). However, we would also like to obtain quick previews and successive refinements in the form of the coarse-grained labels \(\mathbf{Y}_{1},\ \mathbf{Y}_{2},\ldots\) similar to how progressive image coding gradually generates a high resolution image by refining an initial low resolution image.
In general, we aim to have accurate predictions of \(\mathbf{Y}_{k}\), \(k=1\ldots K\). We prioritize having timely classifications for the \(\mathbf{Y}_{k}\)'s with lower index \(k\) using the earliest-received batches of samples at the communication receiver. The utility of the communication receiver can be modeled by a weighted sum of mutual information:
\[\sum_{k=1}^{K}\lambda_{k}I(\mathbf{V}_{1},\mathbf{V}_{2},\ \ldots,\mathbf{V}_{k}; \mathbf{Y}_{k}), \tag{1}\]
where the \(\lambda_{k}\)'s are adjustable non-negative parameters putting different priorities on the different grain-level tasks. For example, if \(K=2\), \(\lambda_{1}\gg\lambda_{2}>0\) implies that the first set of measurements \(\mathbf{V}_{1}\) needs to give the highest accuracy for decoding label \(\mathbf{Y}_{1}\); moreover, conditioned on that, the second batch of measurements is required to give the highest accuracy for decoding label \(\mathbf{Y}_{2}\) when combined with the existing first batch of measurements.
Our goal is to design a progressive encoding (sampling) scheme that optimizes (1). Note that in general we could use non-linear projections (for example, projections through a neural network) of \(\mathbf{X}\) to obtain these compressed projections \(\mathbf{U}_{k}\). However, besides linear measurements being simple to implement in low-power sensors or devices, we particularly propose linear projections for the following reasons. Firstly, when the total number of (noiseless) samples \(\sum_{k=1}^{K}M_{k}=N\), one can simply use a matrix inverse to fully recover the full data \(\mathbf{X}\); however, for general non-linear measurements, we do not have efficient algorithms that theoretically guarantee fully recovering \(\mathbf{X}\). Secondly, linear measurements can be more robust against adversarial attacks when compared with non-linear measurements obtained through neural networks. Even when the number of linear samples \(\sum_{k=1}^{K}M_{k}\ll N\), one can still use sparsity-based compressed sensing to recover the full signal with (adversarial) robustness guarantees.
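The first of these points is easy to verify numerically. The short NumPy sketch below stacks the stage-wise measurement matrices and inverts the resulting square system once \(\sum_{k}M_{k}=N\); the random Gaussian projections and the dimensions used here are illustrative stand-ins for trained encoders.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64                                        # signal dimension (illustrative)
M = [8, 24, 32]                               # per-stage measurement counts, sum(M) == N
X = rng.standard_normal(N)

A = [rng.standard_normal((m, N)) for m in M]  # stage-wise linear encoders (random stand-ins)
U = [a @ X for a in A]                        # noiseless progressive measurements

A_stack = np.vstack(A)                        # becomes an N x N matrix once all stages arrive
X_hat = np.linalg.solve(A_stack, np.concatenate(U))
print(np.allclose(X_hat, X))                  # True: full data recovered by matrix inversion
```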
## III General architecture of (Linear) progressive semantic coding
Figure 1 describes a general architecture for linear progressive coding for semantic communication. In this architecture, the first batch of linear samples \(\mathbf{U}_{1}\) is optimized (trained) to give the best performance for the coarser-level Semantic Task 1. Then, with \(\mathbf{U}_{1}\) fixed, we train another batch of linear samples \(\mathbf{U}_{2}\) such that, when combined with \(\mathbf{U}_{1}\), we have the best performance for the finer-level Semantic Task 2. Note that we cannot re-optimize the first batch of samples \(\mathbf{U}_{1}\) for Semantic Task 2, because these \(\mathbf{U}_{1}\) samples are already optimized for Task 1 and then fixed. This extends to more levels of tasks.
Fig. 1: Illustration of progressive coding for semantic communication
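A minimal PyTorch sketch of this architecture is given below for the two-task case (\(K=2\)): two bias-free linear layers play the roles of the measurement matrices \(\mathbf{A}_{1}\) and \(\mathbf{A}_{2}\), and two small classification heads play the roles of the decoders \(g_{1}\) and \(g_{2}\). The layer sizes and measurement counts are illustrative choices, not the exact architecture used in the experiments reported below.

```python
import torch
import torch.nn as nn

class ProgressiveSemanticCoder(nn.Module):
    """Two-stage linear encoder with a coarse and a fine classification head (illustrative sketch)."""
    def __init__(self, n_pixels=784, m1=4, m2=12, n_coarse=2, n_fine=10):
        super().__init__()
        self.A1 = nn.Linear(n_pixels, m1, bias=False)   # first batch of linear measurements (A1)
        self.A2 = nn.Linear(n_pixels, m2, bias=False)   # second batch of linear measurements (A2)
        self.g1 = nn.Sequential(nn.Linear(m1, 128), nn.ReLU(), nn.Linear(128, n_coarse))
        self.g2 = nn.Sequential(nn.Linear(m1 + m2, 256), nn.ReLU(), nn.Linear(256, n_fine))

    def forward(self, x):                     # x: flattened images, shape [batch, n_pixels]
        v1 = self.A1(x)                       # V1 (channel noise could be added here)
        v2 = self.A2(x)                       # V2
        y1 = self.g1(v1)                      # coarse prediction from V1 only
        y2 = self.g2(torch.cat([v1, v2], 1))  # fine prediction from (V1, V2)
        return y1, y2
```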
## IV Numerical Results
We designed and performed a series of experiments with the MNIST and CIFAR-10 datasets to demonstrate our idea of progressive semantic coding.
### _Experiment Setup_
Figure 2 shows the experiment design in block diagram form. A transmitter takes a high-dimensional input signal (e.g., an MNIST or CIFAR-10 image) and performs a small number \(M_{1}\) of linear measurements on the input signal. These measurements are sent over a noisy communication link to a receiver, which feeds the noisy measurements to a neural network classifier to produce a coarse initial classification.
The transmitter then performs an additional number of \(M_{2}\) linear measurements on the input signal which are also then sent to the receiver over the noisy link. The receiver feeds all \(M_{1}+M_{2}\) noisy measurements into a second neural network classifier that produces a final fine-grained classification that represents a refinement of the initial prediction.
### _Noise-free Experiments with the MNIST dataset_
The MNIST dataset is a widely used collection of \(28\times 28\) pixel grayscale images of handwritten digits (0-9) designed for training and evaluating machine learning models for digit recognition. For our progressive coding experiment, we split up the digit recognition task into a 2-step process: first we perform a coarse prediction of whether the digit in the image is even or odd, and in the second step, refine the initial coarse even/odd prediction into a full 0-9 digit prediction.
We now describe the training process used for the experiment. We first trained an end-to-end neural network for the coarse prediction. The weights of the linear encoder that produces \(M_{1}\) linear measurements as well as the reprojection and prediction layers that produce the coarse prediction are optimized using stochastic gradient descent during this training. For the fine prediction, we perform another round of training, where the weights for the initial \(M_{1}\) measurements are kept fixed, while the weights for the second set of \(M_{2}\) measurements as well as the reprojection and prediction layers of the second neural network are optimized for the full digit recognition.
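A sketch of this two-phase training procedure (our illustration under stated assumptions: the ProgressiveCoder module from the earlier sketch, a hypothetical train_loader of MNIST batches, and arbitrary epoch and learning-rate choices) freezes the first measurement matrix before the second phase:

```python
import torch

def train(model, loader, params, target_fn, head, epochs=5, lr=1e-3):
    """Optimize only `params`; `head` selects the coarse (0) or fine (1) output."""
    loss_fn = torch.nn.CrossEntropyLoss()
    opt = torch.optim.SGD(params, lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            out = model(x.flatten(1))[head]
            loss = loss_fn(out, target_fn(y))
            loss.backward()
            opt.step()

model = ProgressiveCoder(m1=5, m2=10)

# Phase 1: train encoder1 + head1 end-to-end for the coarse even/odd task.
phase1 = list(model.encoder1.parameters()) + list(model.head1.parameters())
train(model, train_loader, phase1, target_fn=lambda y: y % 2, head=0)

# Phase 2: freeze the first M1 measurements, then train encoder2 + head2
# for the fine 0-9 digit recognition task.
for p in model.encoder1.parameters():
    p.requires_grad_(False)
phase2 = list(model.encoder2.parameters()) + list(model.head2.parameters())
train(model, train_loader, phase2, target_fn=lambda y: y, head=1)
```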
We performed an extensive set of experiments for various different values of \(M_{1}=a,M_{2}=b\) under noise-free conditions i.e. the inputs \(\mathbf{V}_{k}\) to the decoder are identical to the measurements \(\mathbf{U}_{k}\). The results are shown in Table I. A brief description of the entries in Table I follows. Each row of Table I has \(11\) columns of accuracy numbers for a sequence of experiments for a common set of \((a,b)\) parameter values. Col 1 shows the accuracy of the coarse (even/odd) prediction with \(M_{1}=a\) measurements all optimized for coarse prediction, and Col 2 shows the fine (\(0-9\)) digit prediction accuracy using the same \(M_{1}=a\) measurements as Col 1. Col 3 and Col 4 show coarse and fine prediction accuracy respectively with \(M_{1}=a\) measurements all optimized for fine prediction.
Col 5 shows coarse prediction accuracy with \(M_{1}=a\) measurements optimized for coarse prediction, and Col 6 shows fine prediction accuracy with \(M_{2}=b\) additional measurements optimized for fine prediction.
Col 7 shows the coarse prediction accuracy using a single batch of \(a+b\) measurements all optimized for coarse prediction, and Col 8 shows the accuracy of fine predictions based on the same measurements as Col 7. Col 9 and Col 10 show the accuracy of coarse and fine predictions using a single batch of \(a+b\) measurements all optimized for fine prediction.
**Discussion.** The coarse and fine prediction accuracy numbers reported in Columns 5, 6 respectively of Table I represent the performance of our proposed progressive coding method under noise-free conditions. The remaining columns provide various benchmarks for comparison. Remarkably, it is possible to achieve \(90\%\) accuracy for even/odd prediction based on just one linear measurement i.e. a neural network decoder is able to predict whether the digit in the image is even or odd with \(90\%\) accuracy using just one well-chosen linear projection of the pixels of the image! Column 7 serves as an upper-bound for the accuracy of the coarse prediction using a total of \(M_{1}+M_{2}=a+b\) measurements, and likewise Column 10 serves as an upper-bound for fine prediction using \(M_{1}+M_{2}=a+b\) total measurements.
Consider the row corresponding to \(a=5,\ b=10\) in Table I. Col 1 shows that the \(M_{1}=a=5\) initial measurements achieve even/odd prediction accuracy of more than \(97\%\). However, from Col 2, we see that these initial \(5\) measurements, being optimized for even/odd prediction, can only achieve a \(74\%\) accuracy for \(0-9\) digit prediction. This number improves very substantially to almost \(97\%\) with the addition of \(M_{2}=b=10\) additional measurements as seen from Col 6. This overall accuracy is based on a total of \(M_{1}+M_{2}=a+b=15\) measurements of which \(5\) are optimized for the initial coarse prediction task. If we optimize all \(15\) measurements for the fine \(0-9\) digit prediction task, the accuracy improves only slightly as seen from Col 10.
Fig. 2: MNIST Neural Network Architecture
The difference between Col 6 and Col 10 can be thought of as a penalty for the progressive coding: the slightly lower accuracy in Col 6 is the price we pay for being able to make a quick even/odd prediction. We can see that this penalty is consistently small.
### _Effect of Channel Noise_
The results in Table I were from experiments under noise-free conditions which are of course not realistic for a communication setting. In general, the neural network classifier does not have access to the linear measurements \(\mathbf{U}_{k}\) directly, but only to noise corrupted copies \(\mathbf{V}_{k}\) of these measurements.
To study the effect of noise, we modified the noise-free experiments by retraining the classifiers with noisy measurements. Specifically, for a fixed noise level \(\sigma_{w}^{2}\), we added several random realizations of white Gaussian noise to the measurements from each training image: \(\mathbf{V}_{k}\equiv\mathbf{U}_{k}+\mathbf{W}_{k},\ \mathbf{W}_{k}\sim N( \mathbf{0},\sigma_{w}^{2}\mathbb{I}_{M_{k}})\). We then retrained the weights for the reprojection and prediction layers for the coarse and fine prediction networks, and then tested the accuracy of the newly trained networks with noisy measurements on test images. This process was repeated for several different noise levels \(\sigma_{w}^{2}\). Note that the linear measurements were not modified by this training process. In particular, we use the same linear measurements as the noise-free experiments for new experiments with noise.
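A sketch of this retraining step (our illustration; the noise level, epoch count, and the model and train_loader names from the earlier sketches are assumptions, and for brevity both heads are retrained jointly here) keeps the linear measurement matrices frozen and re-optimizes only the reprojection and prediction layers on noise-corrupted measurements:

```python
import torch

sigma_w = 0.1   # noise standard deviation corresponding to one tested SNR level

def noisy_forward(model, x):
    # V_k = U_k + W_k: white Gaussian noise added to each batch of measurements.
    u1 = model.encoder1(x)
    u2 = model.encoder2(x)
    v1 = u1 + sigma_w * torch.randn_like(u1)
    v2 = u2 + sigma_w * torch.randn_like(u2)
    return model.head1(v1), model.head2(torch.cat([v1, v2], dim=-1))

# The linear measurements stay fixed; only the prediction heads are retrained.
for p in list(model.encoder1.parameters()) + list(model.encoder2.parameters()):
    p.requires_grad_(False)

head_params = list(model.head1.parameters()) + list(model.head2.parameters())
opt = torch.optim.Adam(head_params, lr=1e-3)
loss_fn = torch.nn.CrossEntropyLoss()

for _ in range(5):                              # a few epochs per noise level
    for x, y in train_loader:
        opt.zero_grad()
        coarse, fine = noisy_forward(model, x.flatten(1))
        loss = loss_fn(coarse, y % 2) + loss_fn(fine, y)
        loss.backward()
        opt.step()
```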
Figure 3 shows the accuracy of the coarse and fine prediction as a function of SNR. As expected, the accuracy improves with SNR and essentially matches the performance in the noise-free case for SNRs above \(13\) dB or so.
### _CIFAR-10 Results_
For the CIFAR-10 dataset, we focus on the classification of 4 classes: deer, horse, automobile, and truck. The coarse task is to classify whether the image shows an animal or a vehicle. The fine task is to further distinguish whether it is a deer or a horse, and an automobile or a truck. We adopt the same progressive architecture for the coarse and fine classifications. If we use \(M_{1}=102\) and \(M_{2}=102\), we obtain \(87.6\%\) accuracy for coarse classification using the first \(M_{1}\) measurements optimized for coarse classification, and \(71.1\%\) accuracy for fine classification using all \(M_{1}+M_{2}\) measurements. However, if \(M_{2}=922\), the accuracies increase to \(96.9\%\) and \(92.6\%\), respectively, when these \(M_{1}+M_{2}\) measurements are used for coarse and fine classification. In other words, with fewer samples one can already achieve decent accuracy for a quicker coarse classification.
TABLE I: MNIST Results. For each \((a,b)\) configuration, columns Col 1 to Col 11 report the coarse and fine prediction accuracies defined in the text.
Fig. 3: Coarse and Fine Classification Accuracy vs SNR. |
2309.07385 | Multi-dimensional Speech Quality Assessment in Crowdsourcing | Subjective speech quality assessment is the gold standard for evaluating
speech enhancement processing and telecommunication systems. The commonly used
standard ITU-T Rec. P.800 defines how to measure speech quality in lab
environments, and ITU-T Rec. P.808 extended it for crowdsourcing. ITU-T Rec.
P.835 extends P.800 to measure the quality of speech in the presence of noise.
ITU-T Rec. P.804 targets the conversation test and introduces perceptual speech
quality dimensions which are measured during the listening phase of the
conversation. The perceptual dimensions are noisiness, coloration,
discontinuity, and loudness. We create a crowdsourcing implementation of a
multi-dimensional subjective test following the scales from P.804 and extend it
to include reverberation, the speech signal, and overall quality. We show the
tool is both accurate and reproducible. The tool has been used in the ICASSP
2023 Speech Signal Improvement challenge and we show the utility of these
speech quality dimensions in this challenge. The tool will be publicly
available as open-source at https://github.com/microsoft/P.808. | Babak Naderi, Ross Cutler, Nicolae-Catalin Ristea | 2023-09-14T02:04:02Z | http://arxiv.org/abs/2309.07385v1 | # Multi-dimensional Speech Quality Assessment in Crowdsourcing
###### Abstract
Subjective speech quality assessment is the gold standard for evaluating speech enhancement processing and telecommunication systems. The commonly used standard ITU-T Rec. P.800 defines how to measure speech quality in lab environments, and ITU-T Rec. P.808 extended it for crowdsourcing. ITU-T Rec. P.835 extends P.800 to measure the quality of speech in the presence of noise. ITU-T Rec. P.804 targets the conversation test and introduces perceptual speech quality dimensions which are measured during the listening phase of the conversation. The perceptual dimensions are noisiness, coloration, discontinuity, and loudness. We create a crowdsourcing implementation of a multi-dimensional subjective test following the scales from P.804 and extend it to include reverberation, the speech signal, and overall quality. We show the tool is both accurate and reproducible. The tool has been used in the ICASSP 2023 Speech Signal Improvement challenge and we show the utility of these speech quality dimensions in this challenge. The tool will be publicly available as open-source at [https://github.com/microsoft/P.808](https://github.com/microsoft/P.808).
Babak Naderi, Ross Cutler, Nicolae-Catalin Ristea Microsoft Corporation, Redmond, USA
babaknaderi \(|\) ross.cutler \(|\) nristea @microsoft.com
**Index Terms**: speech quality assessment, subjective test, crowdsourcing, perceptual dimensions, signal quality.
## 1 Introduction
Audio telecommunication systems, such as remote collaboration systems, smartphones, and telephones, are now ubiquitous and essential tools for work and personal use. Audio engineers and researchers have been working to improve the speech quality of these systems, with the goal of making them as good as or better than face-to-face communication. However, there is still room for improvement, as it is still common to hear frequency response distortions, isolated and non-stationary distortions, loudness issues, reverberation, and background noise in audio calls.
Subjective speech quality assessment is the gold standard for evaluating speech enhancement processing and telecommunication systems, and the ITU-T has developed several recommendations for subjective speech quality assessment. ITU-T P.800 [1] describes lab-based methods for the subjective determination of speech quality, including the Absolute Category Rating (ACR). ITU-T P.808 [2] describes a crowdsourcing approach for conducting subjective evaluations of speech quality. It provides guidance on test material, experimental design, and a procedure for conducting listening tests in the crowd. The methods are complementary to the laboratory-based evaluations described in P.800. An open-source implementation of P.808 is described in [3]. ITU-T P.835 [4] provides a subjective evaluation framework that gives standalone quality scores of speech (SIG) and background noise (BAK) in addition to the overall quality (OVRL). An open-source implementation of P.835 is described in [5]. Perceptual dimensions for speech quality are identified in [6] and extended to noisiness, coloration, discontinuity, and loudness in [7]. These dimensions have been extensively studied in conversational tests [8, 9, 10] and are the focus of more recent multi-dimensional speech quality assessment standards, namely ITU-T P.863.2 [11] and P.804 [12] (listening phase) (Table 1).
Intrusive objective speech quality assessment tools such as Perceptual Evaluation of Speech Quality (PESQ) [13] and Perceptual Objective Listening Quality Analysis (POLQA) [14] require a clean reference of speech. Non-intrusive objective speech quality assessment tools like ITU-T P.563 [15] do not require a reference, though it has low correlation to subjective quality [16]. Newer neural net-based methods, such as [16, 17, 18, 19] provide better correlations to subjective quality. NISQA [20] is an objective metric for P.804, but the correlation to subjective quality is not sufficient to use as a challenge metric.
Lab-based subjective testing in practice is slow due to the recruitment of test subjects and the limited number of test subjects, and expensive due to paying qualified test subjects and the cost of the test lab. The speed and cost result in the vast majority of research papers not using subjective tests but rather objective functions that are not well correlated to subjective opinion. An alternative to lab-based subjective tests is to crowdsource the testing. We introduce a crowdsourced multi-dimensional speech quality assessment tool that extends P.804 by adding SIG, OVRL, and reverberation (see Table 1). We show the tool is both accurate compared to lab results and reproducible. The tool has been successfully used in the ICASSP 2023 Speech Signal Improvement challenge [21].
In Section 2 we describe the implementation of the tool. In Section 3 we provide accuracy and reproducibility analysis.
In Section 4 we provide an example usage of the tool. In Section 5 we discuss conclusions and future work.
## 2 Implementation
We extended the P.808 Toolkit[3] to include a test template for a multi-dimensional quality assessment. The toolkit provides scripts for preparing the test, including packing the test clips in small test packages, preparing the reliability check questions, and analyzing the results. We ask participants to rate the perceptual quality dimensions of speech namely coloration, discontinuity, noisiness, and loudness, and also reverberation, Signal Quality, and Overall quality of each audio clip. In the following, each section of the test template, as seen by participants, is described. These sections are predefined and only the audio clips under the test will be changed from one study to another.
In the first section, the participant's eligibility and their device suitability are tested, and a qualification is assigned to those who pass, which remains valid for the entire experiment. The participant's hearing ability is evaluated through a digit-triplet test [22]. Moreover, we test whether their listening device supports the required bandwidths (i.e., full-band, wide-band, and narrow-band); details are in Section 2.1.
Next, the participant's environment and device are tested using a modified JND test [23] in which they must select, in four questions, which stimulus from a pair has better quality. A temporal certificate is issued to participants who pass this section; it expires after two hours, after which the section must be repeated. Detailed instructions are given in the next section, including an introduction to the rating scales and multiple samples for each perceptual dimension. Participants are required to listen to all samples the first time. Figure 1 illustrates how the rating scale for quality dimensions is presented to participants. In addition, we used a 5-point Likert scale for signal quality and overall quality as specified by ITU-T Rec. P.835. In the Training section, participants first adjust the playback loudness to a comfortable level by listening to a provided sample and then rate 7 audio clips. This section is similar to the Ratings section, but the platform provides live feedback based on their ratings. Completing this section assigns a temporal certificate to the participant that is valid for one hour. Last is the Ratings section, where participants listen to ten audio clips plus two gold standard and trapping questions and cast their votes on each scale. The gold standard questions are ones for which the experimenter already knows the answer (either excellent or bad quality), and participants are expected to vote on each scale with only a minor deviation from the known answer [22]. Trapping questions are clips in which a synthetic voice is overlaid on a normal clip, asking participants to provide a specific vote to show their attention [24]. For this test, we provide scripts for creating the trapping clips, which ask participants to select answers reflecting the best or worst quality on all scales. To rate an audio clip, the participant must first listen to the clip to its end and can then start casting their votes; during that time, the audio is played back in a loop. After participants finish a test set, they can continue with the next one, where only the Ratings section is shown as long as the other temporal certificates are still valid. Once a certificate expires, the corresponding section is shown again when they start the next test set.
### Survey optimization
We utilized the multi-scale template in various research studies and improved it by incorporating feedback from experts and test participants.
**Descriptive adjectives:** Understanding the perceptual dimensions might not be intuitive for naive test participants; therefore, the P.804 recommendation includes a set of descriptive adjectives to describe the presence or absence of each quality dimension. We expanded this list through multiple preliminary studies, in which participants were asked to listen to samples from each perceptual dimension and name three adjectives that best describe them. For each dimension, we selected the top three most frequently selected terms and presented them below each pole of the scale, as shown in Figure 1. The list of selected terms is reported in Table 2. We used discrete scales for the dimensions to be consistent with the Signal and Overall scales.
**Bandwidth check:** This test ensures the participant's device supports the expected bandwidth. The test consists of five samples, each with two parts separated by a deep tone. The second part is the same as the first, but in three of the samples it is superimposed with additive noise. Participants should listen to each sample and select whether both parts have the same or different quality. We filtered the white noise with the following bandpass filters: 3.5-22 kHz (all devices should reproduce the noise), 9.5-22 kHz (super-wide-band or full-band is supported), and 15-22 kHz (full-band is supported).
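A sketch of how such band-limited noise stimuli could be generated (our illustration, not the toolkit's code; the sampling rate, duration, and filter order are assumptions):

```python
import numpy as np
from scipy import signal

fs = 48_000                      # assumed sampling rate (Hz)
duration = 2.0                   # seconds of noise per check sample

def bandpass_noise(low_hz, high_hz, order=6):
    """White Gaussian noise band-limited to [low_hz, high_hz]."""
    rng = np.random.default_rng(0)
    noise = rng.standard_normal(int(fs * duration))
    sos = signal.butter(order, [low_hz, high_hz], btype="bandpass", fs=fs, output="sos")
    return signal.sosfiltfilt(sos, noise)

# The three band-limited noises used in the bandwidth check.
noise_all = bandpass_noise(3_500, 22_000)    # audible on every device
noise_swb = bandpass_noise(9_500, 22_000)    # audible with (super-)wide-band support
noise_fb  = bandpass_noise(15_000, 22_000)   # audible only with full-band support
```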
**Gold questions:** Gold questions are widely used in crowdsourcing [22]. Here we observed that gold questions combining the strong presence of an impairment on one dimension with the clear absence of impairment on another dimension best reveal an inattentive participant.
**Randomization:** We randomize the presentation order of scales for each participant. However, the Signal and Overall quality are always presented at the end. The randomized order is kept for each participant until a new round of training is required.

Figure 1: Sub-dimensions are rated on a 5-point discrete scale with descriptive adjectives on poles.
## 3 Validation
### Reproducibility
We used a subset of the blind test set from the ICASSP SIG 2023 challenge [21] for our reproducibility test. We selected 50 audio clips from the challenge, which were processed by 18 models; together with the degraded source clips, this leads to 950 audio clips. We repeated our crowdsourcing test 5 times with mutually exclusive groups of workers, on separate days, on Amazon Mechanical Turk. We calculated the Mean Opinion Scores (MOS) per clip and per model and report the correlations between the different runs for each scale, at both clip and model level, in Table 3. The results show a strong correlation between different runs at the model level.
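A sketch of this aggregation and inter-run correlation analysis (our illustration; the file name, column names, and integer run labels are assumptions about how the raw votes might be stored):

```python
import pandas as pd

# Assumed long-format ratings: one row per individual vote, with columns
# run (1..5), clip_id, model, scale (e.g. "Coloration"), and rating (1..5).
ratings = pd.read_csv("ratings.csv")

def mos(df, level):
    """Mean Opinion Score per clip or per model, for every run and scale."""
    return (df.groupby(["run", level, "scale"])["rating"]
              .mean()
              .unstack("run"))        # one column per repetition of the test

mos_clip = mos(ratings, "clip_id")
mos_model = mos(ratings, "model")

# Pearson correlation between run 1 and run 2 for each scale, at model level.
for scale, grp in mos_model.groupby(level="scale"):
    r = grp[1].corr(grp[2], method="pearson")
    print(f"{scale}: PCC(run 1, run 2) = {r:.3f}")
```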
### Accuracy
In a separate experiment, a subset of the data employed in Section 3.1 was assessed by expert listeners. This subset comprises 20 degraded source clips and their enhanced versions generated by 9 models from the ICASSP SIG 2023 challenge. Table 4 presents the correlation between MOS values provided by experts and by crowdsourcing, indicating a robust correlation for all dimensions except coloration and reverberation. Further investigation reveals poor agreement among the experts on these dimensions (\(ICC_{2k}=.256\) and \(ICC_{2k}=.267\) for reverberation and coloration, respectively).
## 4 Usage
The ICASSP 2023 Speech Signal Improvement Challenge [21] aimed to encourage research in enhancing speech signal quality in communication systems, a persistent issue in audio communication and conferencing. Participants were provided with test and blind sets, and winners were determined through this multi-dimensional subjective test. Both the test and blind sets have 500 samples, encompassing a diverse range of speech distortions, including frequency response distortions, bandwidth limitations, reverberation, and packet loss. Overall, 9 teams participated in this challenge, and the reported results are based on the 9×500 processed clips used in the subjective test.
We compared the correlation between quality scores collected using this survey (P.804) and P.835-based [5] subjective tests for all entries, which are reported in Table 5. A robust correlation was observed in the shared scores between the two subjective methodologies. Regarding team rankings, the only swap occurred between two teams when utilizing scores from the P.835 test, resulting in a tied rank based on P.804 ratings. Moreover, we compute the PCC between the subjective P.804 metrics and the metrics obtained using DNSMOS P.835 [18] and NISQA [20]. The correlations vary from PCC \(0.478\) to \(0.700\), highlighting the ongoing need for a subjective test to precisely assess speech quality.
In addition, we conducted an Exploratory Factor Analysis (EFA) [25] to explore the underlying relationships among quality dimensions and to assess whether there is shared variance among the sub-dimensions. We applied the Maximum Likelihood extraction method with Varimax rotation, extracting three factors as suggested by the Scree plot. Bartlett's test of sphericity yielded a significant result, and the KMO value of 0.65 indicated that the data was suitable for exploratory factor analysis. The factor loadings of the quality scores on each factor are shown in Table 6. In total, the three factors accounted for \(62\%\) of the variance in the data. Factor 1 primarily represented signal quality, with high loadings from Signal, Coloration, and Loudness. Discontinuity formed a separate factor, with some cross-loading from Signal, suggesting limited shared variance between Discontinuity ratings and both Coloration and Loudness. As anticipated, Noisiness constituted a distinct factor orthogonal to the others, with an additional loading from Reverberation. Considering all mentioned factors, we highlight the importance of adding the Signal and Reverberation dimensions to P.804, since they contribute substantially to orthogonal factors.
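A sketch of this factor analysis (our illustration; it assumes a per-clip MOS table with one column per dimension and the third-party factor_analyzer package, whose exact API may differ between versions):

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

# Assumed wide table: one row per clip, one column per quality-dimension MOS.
scores = pd.read_csv("per_clip_mos.csv")[
    ["Coloration", "Discontinuity", "Loudness", "Noisiness", "Reverberation", "Signal"]
]

# Suitability checks reported in the text.
chi_square, p_value = calculate_bartlett_sphericity(scores)
_, kmo_total = calculate_kmo(scores)
print(f"Bartlett p = {p_value:.3g}, KMO = {kmo_total:.2f}")

# Maximum-likelihood EFA with Varimax rotation and three factors.
fa = FactorAnalyzer(n_factors=3, method="ml", rotation="varimax")
fa.fit(scores)

loadings = pd.DataFrame(fa.loadings_, index=scores.columns,
                        columns=["Factor 1", "Factor 2", "Factor 3"])
print(loadings.where(loadings.abs() > 0.3, ""))   # report loadings > 0.3, as in Table 6
```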
| Area | Description | Possible source |
| --- | --- | --- |
| Noisiness | Background noise, circuit noise, coding noise; BAK | Coding, circuit or background noise; device |
| Coloration | Frequency response distortions | Bandwidth limitation, resonances, unbalanced freq. response |
| Discontinuity | Isolated and non-stationary distortions | Packet loss; processing; non-linearities |
| Loudness | Important for the overall quality and intelligibility | Automatic gain control; mic distance |
| Reverberation | Room reverberation of speech and noise | Rooms with high reverberation |
| Speech Signal | Overall signal quality | |
| Overall | Overall quality | |

Table 1: Speech quality areas from the P.804 listening phase (the first four) plus three additional areas.
Table 2: Labels on each scale's pole and descriptive adjectives provided to participants for the Discontinuity, Loudness, Noisiness, Coloration, and Reverberation scales. Terms used in ITU-T Rec. P.804 are marked in red.
Additionally, a majority of the sub-dimensions exert an influence on the quality of the signal and the overall quality. To investigate the indirect and total effects of the sub-dimensions on the overall quality, a mediation analysis was conducted, with signal quality serving as the mediator variable. The outcomes of this analysis are presented in Table 7, which reveals that Coloration had the highest total effect on the overall quality of this dataset.
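A sketch of such a mediation analysis for a single sub-dimension (our illustration; a simple Baron-Kenny-style decomposition using statsmodels on the assumed per-clip MOS table, not the authors' exact procedure):

```python
import pandas as pd
import statsmodels.formula.api as smf

scores = pd.read_csv("per_clip_mos.csv")   # assumed columns: Coloration, ..., Signal, Overall

def mediation_effects(df, dim, mediator="Signal", outcome="Overall"):
    """Decompose the effect of `dim` on `outcome` through `mediator`."""
    a = smf.ols(f"{mediator} ~ {dim}", data=df).fit().params[dim]      # dim -> mediator
    model_y = smf.ols(f"{outcome} ~ {dim} + {mediator}", data=df).fit()
    b = model_y.params[mediator]                                       # mediator -> outcome
    direct = model_y.params[dim]                                       # direct effect of dim
    indirect = a * b                                                   # effect routed via mediator
    return direct, indirect, direct + indirect                         # total effect

for dim in ["Coloration", "Discontinuity", "Loudness", "Noisiness", "Reverberation"]:
    d, i, t = mediation_effects(scores, dim)
    print(f"{dim}: direct={d:.2f}, indirect={i:.2f}, total={t:.2f}")
```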
## 5 Conclusions
This paper describes an open-source toolkit designed for multi-dimensional subjective speech quality assessment in crowdsourcing. We detail the various sections of the test template and present evidence that the ratings collected using this toolkit are both valid and reproducible. Additionally, we demonstrated that the toolkit can be used to rank speech enhancement models in large-scale subjective tests and that it can provide insights into the effect of each perceptual dimension on overall quality. Results also showed that coloration, discontinuity, and noisiness form three orthogonal factors that the other dimensions load onto. Future work includes improving the survey training and scale descriptions to improve coloration and reverberation accuracy with respect to expert raters.
| Quality score | Factor 1 | Factor 2 | Factor 3 |
| --- | --- | --- | --- |
| Coloration | 0.787 | | |
| Discontinuity | | 0.936 | |
| Loudness | 0.476 | | |
| Noisiness | | | 0.742 |
| Reverberation | | | 0.413 |
| Signal | 0.824 | 0.481 | |

Table 6: Loadings of the quality dimensions on the three-factor structure using the Maximum Likelihood extraction method with Varimax rotation. KMO = 0.65. Only factor loadings >0.3 are presented.
Table 3: Pearson correlation between different runs of the reproducibility test at clip and model level, reported for each scale (Coloration, Discontinuity, Loudness, Noisiness, Reverberation, Signal, and Overall) and for each pair of runs.
Table 7: Effects of sub-dimensions on Overall quality, considering Signal quality as mediator.
| Dimension | PCC | SRCC | Kendall Tau-b | Tau-b95 |
| --- | --- | --- | --- | --- |
| Background/Noisiness | 0.964 | 0.926 | 0.825 | 0.853 |
| Signal | 0.954 | 0.933 | 0.801 | 0.914 |
| Overall | 0.965 | 0.940 | 0.825 | 0.822 |

Table 5: Correlations between subjective scores obtained from the P.804 and P.835 subjective tests on shared dimensions at model level for all entries. Tau-b95 is Kendall Tau-b applied to the corrected rank order obtained by considering the 95% confidence interval of the subjective scores according to [26].
2307.00146 | Bluefish: Composing Diagrams with Declarative Relations | Diagrams are essential tools for problem-solving and communication as they
externalize conceptual structures using spatial relationships. But when picking
a diagramming framework, users are faced with a dilemma. They can either use a
highly expressive but low-level toolkit, whose API does not match their
domain-specific concepts, or select a high-level typology, which offers a
recognizable vocabulary but supports a limited range of diagrams. To address
this gap, we introduce Bluefish: a diagramming framework inspired by
component-based user interface (UI) libraries. Bluefish lets users create
diagrams using relations: declarative, composable, and extensible diagram
fragments that relax the concept of a UI component. Unlike a component, a
relation does not have sole ownership over its children nor does it need to
fully specify their layout. To render diagrams, Bluefish extends a traditional
tree-based scenegraph to a compound graph that captures both hierarchical and
adjacent relationships between nodes. To evaluate our system, we construct a
diverse example gallery covering many domains including mathematics, physics,
computer science, and even cooking. We show that Bluefish's relations are
effective declarative primitives for diagrams. Bluefish is open source, and we
aim to shape it into both a usable tool and a research platform. | Josh Pollock, Catherine Mei, Grace Huang, Elliot Evans, Daniel Jackson, Arvind Satyanarayan | 2023-06-30T21:34:52Z | http://arxiv.org/abs/2307.00146v4 | # Bluefish: A Relational Grammar of Graphics
###### Abstract
The Grammar of Graphics (GoG) has become a popular format for specifying visualizations because it unifies different chart types into a consistent, modular, and customizable framework. But its benefits have not yet reached the broader class of data-driven graphic representations--from annotated charts and hierarchical visualizations to molecular structure diagrams, Euclidean geometry, and mathematical formulae. These graphics are still developed using rigid typologies, monolithic tools, or specialized grammars that lack the customizability and generality of the GoG. In response, we present Bluefish, a _relational_ grammar of graphics that extends the benefits of the GoG to this larger domain. Bluefish provides two key abstractions: user-extensible, domain-specific elements (e.g., mathematical expressions, chemical atoms, or program state stack frames); and perceptual groupings (also known as Gestalt relations) like proximity, nesting, and linking. Users compose these primitives within a Bluefish specification, which the language runtime compiles to a _relational scenegraph_: a formal representation of a graphic that, compared to traditional tree-based scenegraphs, better preserves semantic relationships between visual elements. To illustrate its flexibility, we show that Bluefish can represent data-driven graphic representations across a diverse range of domains while closely aligning with domain-specific vocabulary. Moreover, to demonstrate the affordances of Bluefish's relational scenegraph, we develop a prototype screen reader tool that allows blind and low-vision users to traverse a diagram without significant additional scaffolding.
Data Visualization, Grammar of Graphics, Perceptual Groupings.
## 1 Introduction
When designing the Grammar of Graphics (GoG), Leland Wilkinson set out _"to produce a package that could draw every statistical graphic [he] had ever seen"_[77]. In contrast to prior approaches at the time--which provided chart _typologies_ that offered only a fixed collection of charts with limited customization--Wilkinson constructed a _compositional language_ with six primitives: data, transforms, marks, scales, guides, and coordinate systems. As a result, the GoG affords visualization authors a large expressive gamut without imposing a heavy specification burden, and its design has influenced modern visualization languages like Tableau's VizQL [24, 65], ggplot2 [76], and Vega-Lite [62].
However, Wilkinson's GoG and its descendants have largely focused on _statistical_ graphics (i.e., graphic representations that compare quantities [73]). To author a broader class of _data-driven graphic representations_--from annotated charts and hierarchical visualizations to molecular structure diagrams, Euclidean geometry, and mathematical formulae--users must either invoke layout algorithms as data transformations (a mechanism that violates the clean separation between data values and visual variables) or fall back to more brittle approaches including rigid typologies and monolithic tools. For example, Mermaid [68] is a popular JavaScript library for Markdown-based diagramming but offers a fixed palette of diagram types including flowcharts, class and state diagrams, and Gantt charts, each with only a handful of customization options (e.g., only some diagram types allow customizing the curve style of arrows). While a new class of domain-specific languages has emerged to fill this gap [44], these languages further fragment the landscape of tools. For instance, SmilesDrawer [51], GoTree [38], and Gosling [41] offer different abstractions for the shared concept of annotation. This fragmentation creates a burden both for language developers, who must reimplement similar primitives, and for diagram authors, who must learn multiple often inconsistent and incompatible methods of accomplishing the same goal. Moreover, this fragmentation often causes authors to shepherd a diagram back-and-forth between separate tools [6, 7]--a tedious, manual operation that is difficult to automate, must be repeated every time the diagram is updated, and can lead to visual inconsistencies that ultimately impact the reader. A unified language for data-driven graphic representations can empower both end-users and grammar designers to author complex graphics using consistent, compositional abstractions.
In this paper, we introduce _Bluefish_, a grammar that extends the benefits of compositional specification to data-driven graphic representations beyond statistical charts. Bluefish diagrams are authored with two classes of primitives (Sect. 3). The first are user-extensible _elements_: the unique vocabulary for a particular visual domain (e.g., marks in visualization, atoms and bonds in a molecular structure, stack frames in a visualization of program state, etc.). The second set of primitives are _perceptual groupings_: a choice of visual arrangement of elements that conveys information about the relationships between them (a concept also known as Gestalt relations [74]). For example, in an annotated chart, the _proximity_ of a text label to a dot mark associates the two together; in a chemical diagram, a line _connects_ two atoms together as part of their bond; and, in a piece of augmented math notation [25], _similar colors_ relate a symbol to its definition.
Figure 1: Data-driven graphic representations built with Bluefish. These graphics run the gamut from statistical charts to diagrams, and are constructed with user-extensible, domain-specific elements and a consistent, universal set of perceptual grouping primitives. From left to right: an UpSet plot [37], U.S. Polio incidence rates from the Charticulator gallery [53], ChartAccent [52], Python Tutor [23], an annotated diagram of an Aspirin molecule, and the Euclidean geometric proof of the Pythagorean Theorem.
Bluefish is embedded in JavaScript, and diagrams are authored in JSX, a syntax extension to JavaScript popularized by the React library. Bluefish specifications are compiled into a _relational scenegraph_: a runtime data structure used for layout and rendering (Sect. 4). In contrast to traditional tree-based scenegraphs, Bluefish's relational scenegraph retains more semantic information about the relationships between visual elements (e.g., containment and connection). To lay out the scenegraph, Bluefish adopts _local propagation_ rather than global constraint solvers for a more modular and extensible runtime architecture.
To evaluate Bluefish's expressivity, we construct a gallery of examples spanning the gamut from statistical charts through to broader data-driven graphic representations such as the Python Tutor [23] program state visualization and the Euclidean geometric proof of the Pythagorean theorem. These examples demonstrate that Bluefish enables a highly compositional authoring workflow, where new custom graphical elements or perceptual groups can be constructed as combinations of existing primitives. Moreover, our examples show how Bluefish can serve as the foundation of higher-level and domain-specific grammars -- for instance, we implement a GoG called Bluefish Plot (modeled off Observable Plot [14] and Victory Charts [15]), and detail how a series of abstraction steps can transform a specification of a tree diagram into the GoTree visualization grammar [38].
We further develop a prototype screen reader tool to demonstrate how Bluefish's relational scenegraph can enable more accessible data-driven graphic representations. In particular, as the scenegraph reflects the relational structure of the diagram, novel screen reader structures can be constructed that enable navigation that is more fluid and aligned with the semantics of the graphic domain. We illustrate these affordances by making two examples from our gallery (ChartAccent [52] and an annotated Aspirin molecule) accessible. In doing so, we find that, as compared to existing state-of-the-art approaches [9, 83], users can more easily move between annotations and data values, and traverse the bonds and rings of the chemical molecule.
## 2 Related Work
### Graphic Representations & Perceptual Groups
In visualization, when we refer to "graphics" we often mean statistical graphics [77]. Famed psychologist Barbara Tversky offers a broad definition: a graphic is a visual artifact consisting of "elements that are systematically arranged in space" [69]. Yuri Engelhardt defines a scope in between the two, called _graphic representations_ [73], with ten primary types (including charts, tables, maps, pictures, and written text, among others). These definitions inspire the scope and design of Bluefish: we focus on a set of _data-driven_ graphic representations (i.e., those amenable to programmatic generation) via two classes of primitives, _elements_ and _perceptual groupings_. Perceptual groupings, also known as Gestalt relations [74], can not only spatially arrange elements but also denote other relationships including containment and connection. Perceptual groups cover a broad range of phenomena, including temporal grouping (such as when several elements move in the same direction) and complex emergent phenomena like continuity or closure. Bluefish focuses on a class of groupings commonly found in _static_ data-driven graphical representations [54, 69, 73, 74]. We illustrate these groupings in Fig. 2.
### Visualization, Diagramming, and UI Toolkits
**Visualization Grammars.** Fig. 2 summarizes how different visualization grammars surface perceptual groupings. GoG-based systems like VizQL [24], ggplot2 [76], and Vega-Lite [62] do not explicitly encode perceptual groupings. Rather, perceptual groupings occur as a result of other primitives the grammars offer -- for instance, spatial proximity via ordinal scales or facets, similar attributes and alignment via the visual encoding mapping, etc. As a result, and as we show in Fig. 3, it can be difficult to rapidly explore alternate perceptual groupings. To address this limitation, researchers have developed domain-specific visualization grammars such as SetCoLa [26] and GoTree [38] with explicit grouping operators. While useful, these operators model perceptual groups as a system of constraints. As a result, these operators can only be composed in limited ways. In contrast, Bluefish offers explicit perceptual grouping operators modeled as _components_ -- an approach that facilitates accretive design, where new components can be rapidly created as combinations of existing ones. We elaborate on these differences and their implications in Sect. 3.3.2.
**UI Toolkits and Embedded Grammars.** Modern user interface toolkits like React, SwiftUI, and Jetpack Compose have converged on the concept of a _component_ to author UIs in a modular way. A UI component colocates related pieces of structure, content, and styling. Crucially, components can be nested, which produces new components that work the same way as primitive components (such as HTML elements) supplied by the system. This allows UI authors to build and share libraries of reusable functionality tailored to their specific problems, and to seamlessly compose many different libraries together. This approach stands in contrast to visualization systems which, with rare exceptions [45], offer a fixed palette of mark types. As a result, many developers have sought to embed the GoG in UI frameworks. Two prominent examples include Swift Charts [1] (embedded in SwiftUI) and Victory Charts [15] (embedded in React). These libraries take advantage of the UI component model to provide additional extensibility for custom marks and axes, and high-level components for common chart types. These frameworks often also provide operators for some perceptual grouping -- typically spatial layout and alignment. However, as with SetCoLa [26] and GoTree [38], these operators are modeled as constraints, which limits their expressivity. Moreover, this relational information is not preserved in the resultant interface (e.g., as annotations on the Document Object Model nodes). As a result, downstream tools (such as assistive technologies) are unable to provide semantically-aligned representations of visual content.
**Diagramming Languages.** The idea of composing primitive elements to create new ones has also been embraced by diagramming languages and toolkits including Haskell Diagrams [80], Diagrammar [21], Manim [55], and Penrose [81]. These libraries provide primitives for many perceptual groups, but their APIs lack consistency and often expose perceptual groups as constraints. For example, Manim provides a next_to method for shapes that is similar to Row and Col components in Bluefish.
Fig. 2: The static perceptual groupings we scope to, how they manifest in various visualization systems, and how we implement them in Bluefish.
However, next_to does not produce a new component; rather, it returns the modified object for method chaining. Penrose programs are split into Substance and Style files, analogous to HTML and CSS, respectively. As a result, Penrose does not have a concept of a component. We provide a closer comparison to Penrose in Sect. 5.2. Additionally, some developers have created high-level diagram typologies, such as Mermaid [68]. While these tools provide for low-viscosity authoring, they trade off customization and expressivity.
**Layout Strategies.** In addition to differences in user-facing APIs, these systems make different implementation decisions to lay out a user's specification. Diagramming systems and domain-specific visualization grammars often select global solvers like linear programming or gradient descent for their expressivity. However, UI toolkits and some diagramming languages opt for simpler, dataflow-based techniques like local propagation [64]. In fact, SwiftUI replaced the Auto Layout engine (based on the global linear programming solver Cassowary [2]) with a dataflow technique. Bluefish uses local propagation for layout because it affords low viscosity and integration with external layout algorithms. We detail this design decision in Sect. 4.2.
## 3 The Bluefish Language Design
We scope Bluefish to support static, data-driven graphic representations via two classes of primitives: _elements_, which determine the graphical entities that comprise the diagram; and _perceptual groupings_, which define relationships between the elements. Bluefish surfaces these primitives through JSX, an extension to JavaScript popularized by the React library that emulates XML and HTML syntax. JSX provides two types of tags: self-closing tags (e.g., <Rect />), which are well-suited for instantiating simple elements, and block or container tags (e.g., <Group>...</Group>) for composition and perceptual grouping.
### _Design Goals_
Bluefish's abstraction design is motivated by the following goals:
**Vocabulary Correspondence.** A user should be able to specify a graphic with primitive elements that map closely to their domain. We adapt this goal from Ma'ayan & Ni et al. [42] as well as the Cognitive Dimensions of Notations [8]. While marks like text, rectangles, circles, and lines provide the foundations for graphic elements, they present a greater articulatory distance [28] than primitives tailored to the author's domain (e.g., atoms, bonds, and rings for diagrams of chemical molecules). However, we recognize that it is impossible for language designers to anticipate every possible primitive for every possible domain. And even if one could produce such a collection, doing so would impose a large maintenance burden on language developers. Thus, when authoring graphic representations, users must be able to both _use_ and _create_ domain-specific elements.
**Creation By Composition.** Domain-specific elements are often created by composing simpler elements. For example, a piece of math notation is often a deeply nested collection of basic symbols that are composed using lines, superscript and subscript positioning, and horizontal spacing. These kinds of elements are typically called _glyphs_ in the visualization literature [46, 60]. But, with the exception of Atlas [39] and GoG libraries embedded in UI frameworks, visualization grammars do not provide abstraction mechanisms for creating glyphs that work like built-in marks [79, 44, 78]. Instead, to compose complex graphics, users of visualization grammars typically concatenate many specifications together. Bluefish provides a set of design patterns, and leverages JSX notation, to allow users to create new elements by composing together existing ones.
**Universal Composition.** Counterbalancing the focus of the previous goals, the mechanisms for composition should afford authors the ability to cut across and cohere visual elements from disparate domains as part of a single diagram. Just as basic graphical marks are shared across domains, we aim to design a set of _universal_ patterns for composition. This goal is motivated by a call from Park et al. who argue for the need to _"unify many of these existing grammars, while retaining both the simplicity and power of the original ones"_[49]. In Bluefish, we achieve universal composition through a standard library of _perceptual groupings_ -- a set of operators drawn from the broader class of Gestalt relations to indicate some visual elements as _"going together more strongly than others"_[74]. Moreover, to ensure _consistent_ composition [8], invoking a perceptual grouping produces a new element that can participate in further compositions with other perceptual groupings.
**Low Viscosity.** Users should be able to make atomic edits to the specification in order to rapidly explore alternate designs [8]. In Bluefish, low viscosity manifests in two ways. First, the semantics of the perceptual grouping operators are consistent with one another: users should be able to trivially replace one perceptual grouping with another, and produce a valid graphic without having to refactor the specification. Second, the behavior of perceptual grouping operators are determined entirely by their input properties and children. As a result, a user is able to locally reason about the behavior of a perceptual grouping as an isolated component.
### _User-Extensible Visual Elements_
Bluefish provides a base set of elements that reflect native SVG: Circle, Line, Rect, and Text. Users can also define new elements by composing existing ones via the affordances of JSX notation. For example, the original version of Python Tutor uses a combination of divs and a table to represent a stack frame (Fig. 1). As this combination of elements is a meaningful semantic unit of a runtime program state visualization, an author might choose to encapsulate this code in a reusable JSX component like so:
```jsx
const StackFrame = ({ header, vars }) => (
  <div>
    <div>{header}</div>
    <table>
      {vars.map(({ name, value }) => (
        <tr>
          <td>{name}</td>
          <td>{value}</td>
        </tr>
      ))}
    </table>
  </div>
);
```
This component may then be used like a native element:
<StackFrame header="Global Frame" vars={[{ name: 'x', value: 5 }, { name: 'y', value: 6 }]} />
Occasionally, a user may need to define a new base element that is not a composition of others (for example, an Image). For such cases, Bluefish provides a lower-level element definition API. Once registered, these custom elements are indistinguishable from those built in (in fact, all of Bluefish's base elements are written with this API).
### _Perceptual Grouping Operators_
Perceptual groupings (also known as Gestalt relations [74]) are ways of visually arranging elements to convey relationships between them. These groupings can cover a broad range of phenomena. For instance, some groupings unfold temporally, such as _common fate_ when several elements move in the same direction or towards the same place. Other groupings deal with complex emergent visual phenomena, such as _closure_ where an incomplete element is nevertheless perceived as whole. At present, and as shown in Fig. 2, Bluefish focuses on perceptual groupings commonly found in static graphics including spatial _proximity_, _similar attribute_, _alignment_ (i.e., similar position), element _connectedness_, and _common region_. These groupings are exposed as block or container tags in the JSX notation (i.e., a component with children). Thus, to express a grouping in Bluefish, we write
<PerceptualGroup><Child />...<Child /></PerceptualGroup>
where each Child instantiates a particular visual element (e.g., text, shapes, or domain-specific glyphs and primitives).
#### 3.3.1 Perceptual Grouping Definitions
**Spatial Proximity (<Distribute>).** Spatial proximity manifests groups in one of two ways. First, among pairs or small collections of elements, spatial proximity can take the form of _relative spacing_. For instance, elements A and B may be grouped together if no element C is closer to A or B than they are to each other. On the other hand, by chaining together many elements with roughly equal spacing, we can create a region of _uniform spacing_ to collectively group the elements.
We represent spatial proximity using a Distribute component that evenly spaces its children. The component takes a direction
(horizontal or vertical), a spacing attribute to dictate the size of the gaps between children, and a total attribute to specify the sum of gaps and children sizes. If both attributes are specified, Distribute computes the space available for each child element and sizes it accordingly. Bluefish does not provide a relative spacing component as enforcing it requires reasoning about _all_ elements in a graphic to ensure there is no closer element. This component would violate our design goal of low viscosity, since its behavior is not defined exclusively by its input properties and children.
**Similar Attribute (<AlignProps>).** Elements may appear grouped if they share common traits like color, size, or angle. Similar attribute grouping is invoked via the AlignProps component, which takes two parameters: props and values. The component inspects its children to identify whether the given props are specified on any child element; if so, their values are propagated to all other children. If none of the children define the given props, the corresponding values are propagated to the children instead.
**Spatial Alignment (<Align>).** Elements may also be grouped together if they are spatially aligned. We implement this using an Align component. In the typical case, a user specifies an alignment prop on the Align component. This alignment can be in just one direction, such as left, or in both the x and y directions, such as topCenter. To override this alignment for a specific component, a user can provide the guidePrimary prop to that component with a different alignment. Although spatial alignment can be conceived of as applying a _similar attribute_ to positions, doing so would require exposing granular coordinate properties like left and topCenter on individual elements. We instead follow D3's principle of _ecosystem compatibility_ [13], opting for elements to be positioned and sized via an API more native to vector graphics (i.e., x, y, width, height, etc.). As a result, AlignProps is more difficult to reuse, and Bluefish provides Align for spatial alignment.
**Element Connectedness (<Link>).** When elements are connected by a line, they can be perceived as being related. Some researchers argue this relationship emerges from two alignments that group the connector's endpoints with the elements. Others argue that the connection itself creates a perceptual group [74]. The former can be expressed using a pair of Align components, and the latter is implemented with Link. The advantage of creating a custom component for this grouping is that a user needs to write a single component, and we can infer which sides to align the endpoints to based on the positions of the elements. Bluefish's Link component follows the line element interface in Scalable Vector Graphics (SVG). However, whereas the position of an SVG line is determined by an explicit set of start and end coordinates, Bluefish's Link determines its positions via its children. The current implementation of this component only supports two child elements, and it only renders an SVG line.
**Common Region (<Contain> & <Background>).** Elements lying within a contour or nested inside one another are often perceived as being grouped together. Thus, common region can be interpreted in one of two ways. First, if element A contains element B (i.e., A's visual boundary entirely wraps around B), we perceive A and B as being grouped. On the other hand, if A contains B, C, and D, we perceive B, C, and D as grouped, and A as the _reification_ of that grouping.
Bluefish provides two components to support these two forms of common region grouping. The Contain component takes exactly two children, with the first designated as the container and the second as the contained element -- whichever child does not have a specified size is resized to fit around, or fit within, the other. An optional padding attribute may be specified to provide spacing between the two elements. In contrast, the Background component takes an arbitrary number of children, and a required element attribute specifies an additional element that is used as the background.
#### 3.3.2 Design Considerations and Rationale
The design of Bluefish's perceptual grouping operators follows directly from the design goals described above in Sect. 3.1. In particular, as research has shown [54, 69, 73], perceptual groups are a fundamental feature of graphic representations regardless of the underlying domain or context. We see further evidence of this in the fact that, as shown in Fig. 2, many of the primitives found in domain-specific and domain-agnostic toolkits reflect perceptual groupings. However, there are two key differences between how perceptual groups are realized in prior toolkits and how they are realized in Bluefish.

Fig. 3: Perceptual group viscosity in Vega-Lite and Bluefish: We first define a basic chart in both languages. To add an annotation in Vega-Lite we first filter the data and use an xOffset encoding to place a text mark. In Bluefish we also filter the data, but place the label using Align and Distribute groupings. To move this annotation outside the plot in Vega-Lite, we have to introduce a new param, switch the xOffset to an x encoding, and add a rule mark that implicitly connects to the data and label. In Bluefish, we simply name the plot, use that name to change the target of the Distribute, and add a Link grouping. Finally, to highlight this annotation in Vega-Lite, we need to manually compute padding to define the encoding of a text mark. In Bluefish, we use a Contain grouping.
First, for several of these prior approaches, perceptual groupings occur implicitly through the use of their primitives. For example, with the GoG [77], spatial proximity is conveyed through ordinal scales, facets, or offset encoding channels; attribute similarity is similarly implicit in the visual encoding adopted across not only the GoG but domain-specific approaches like SetCoLa [26] and GoTree [38] as well. In contrast, Bluefish makes perceptual groupings an explicit set of operators. And, in doing so, Bluefish is able to facilitate a less _viscous_[8] authoring process. For instance, consider the sequence of changes shown in Fig. 3 which iteratively redesign the perceptual groupings used to associate a textual label with its target point mark -- starting with spatial proximity and alignment, changing the definition of proximity and adding element connectedness, and finally aligning attributes as well. As perceptual grouping is an implicit result of primitives offered by the GoG (and its descendants including Vega-Lite as shown in the figure), each of these steps requires making edits across the entire specification rather than atomic edits to a localized portion. Moreover, in the case of Vega-Lite specifically, although values for horizontal alignment and similar colors can be hardcoded into the specification, doing so risks introducing inconsistencies when iteratively exploring alternate designs. Ensuring consistency, however, requires extracting the shared value as a param--a step that introduces a level of indirection. In contrast, by providing explicit perceptual grouping operators, Bluefish allows users to explore alternate designs by making only targeted changes to the specification. Shared values required for a perceptual group are defined in situ, reducing the need for indirection.
Second, where prior toolkits offer explicit grouping mechanisms (e.g., SetCoLa's and GoTree's primitives for spatial proximity, alignment, connectedness, and common region), they model them as _constraints_ -- i.e., functions to be solved by a constraint engine. As a result, users are limited to only declarative statements about perceptual groups and are not able to operate on them further -- for instance, composing them into reusable glyphs, or referencing a perceptual grouping as part of a downstream operation. Moreover, because element connectedness and common region are expressed with elements, a constraint-based API cannot express perceptual groups consistently. In contrast, Bluefish exposes perceptual groupings as _components_, a design decision that has syntactic and semantic consequences. Syntactically, modeling perceptual groupings as components allows for a _closeness of mapping_[8]: a perceptual group is defined by wrapping a container tag around participating elements, akin to how groupings are defined in other languages likely to be familiar to diagram authors, including HTML (e.g., <div> tags) and SVG (i.e., <g> tags). Semantically, as perceptual groups are components, diagram authors can treat them as higher-order elements, using them wherever they might have other elements. For instance, the final step in Fig. 3 wraps a _common region_ grouping around the _connectedness_ grouping of the label and dot mark.
#### 3.3.3 Limitations and Custom Groups
Bluefish's standard library of perceptual groupings is not intended to be exhaustive. Rather, it captures common groupings for static data-driven graphics [73, 74, 54], informed by our design goals. For instance, Bluefish does not support grouping by element _symmetry_, because it is typically a global property of the placement and attributes of all elements in a region, rather than a local, composable grouping [74]. Other emergent visual phenomena such as _continuity_ and _closure_ also emerge from global interactions between elements. Finally, Bluefish's current implementation (detailed in Sect. 4) relies primarily on reasoning about rectangular bounding boxes; this makes it difficult to support groupings like _parallelism_ which would require reasoning about more complex shape representations such as parallel lines or curves.
To account for these limitations, Bluefish allows users to author their own perceptual grouping operators via custom components. For instance, a user could combine existing perceptual groups to create a new, compound group (such as combining Align and Distribute into a Row or Col component). Alternatively, for even more customization, users can write components that invoke custom layout algorithms. We provide examples in Sect. 5.3. If these components are written using Bluefish's component and layout interfaces, they compose just as well as Bluefish's standard collection of groupings. In fact, our standard perceptual groupings are implemented with the same API.
|
2307.16724 | Revisiting Quantum Optimal Control Theory: New Insights for the
Canonical Solutions | In this study, we present a revision of the Quantum Optimal Control Theory
(QOCT) originally proposed by Rabitz et al (Phys. Rev. A 37, 4950-4964 (1988)),
which has broad applications in physical and chemical physics. First, we
identify the QOCT equations as the Euler-Lagrange equations of the functional
associated to the control scheme. In this framework we prove that the extremal
functions found by Rabitz are not continuous, as it was claimed in previous
works. Indeed, we show that the costate is discontinuous and vanishes after the
measurement time. In contrast, we demonstrate that the driving field is
continuous. We also identify a new set of continuous solutions to the QOCT.
Overall, our work provides a significant contribution to the QOCT theory,
promoting a better understanding of the mathematical solutions and offering
potential new directions for optimal control strategies. | Katherine Castro, Ignacio R. Solá, Juan J. Omiste | 2023-07-31T14:45:33Z | http://arxiv.org/abs/2307.16724v1 | # Revisiting Quantum Optimal Control Theory: New Insights for the Canonical Solutions
###### Abstract
In this study, we present a revision of the Quantum Optimal Control Theory (QOCT) originally proposed by Rabitz et al [1], which has broad applications in physical and chemical physics. First, we identify the QOCT equations as the Euler-Lagrange equations of the functional associated with the control scheme. In this framework we prove that the extremal functions found by Rabitz are not continuous, as was claimed in previous works. Indeed, we show that the costate is discontinuous and vanishes after the measurement time. In contrast, we demonstrate that the driving field is continuous. We also identify a new set of continuous solutions to the QOCT. Overall, our work provides a significant contribution to the QOCT theory, promoting a better understanding of the mathematical solutions and offering potential new directions for optimal control strategies.
## 1 Introduction
Minimization and optimization are standard tools in mathematics that we encounter in many areas of science [2, 3, 4] ranging from health and biology to space exploration or even social behaviour [5]. Furthermore, most of the fundamental equations in physics may be derived from a variational principle, which aims to minimize a quantity, for instance, the action or the energy [6, 4, 7]. Prominent examples are Lagrangian mechanics, general relativity or geometric optics. In atomic and molecular physics, let us highlight the use of the so-called Dirac-Frenkel variational principle to propagate quantum systems with many degrees of freedom [8], which leads to the Multiconfigurational Time-Dependent Hartree and related methods [9, 10, 11, 12, 13].
In the literature, Optimal Control is the methodology designed to obtain a driving field such that a given observable is optimal, i. e., a maximum or a minimum; to this end a functional is designed and its extremal solutions are explored, ensuring that certain constraints are fulfilled [14, 15, 16, 17, 18]. It is said to be Quantum Optimal Control if the Schrödinger equation drives the dynamics [1]. This methodology has been widely used to control molecular orientation [19, 20, 21], state transitions and population transfer [22, 23, 24, 25, 26], ultracold atom systems [27, 28], ionization [29] or even entanglement [30, 31]. It has also been used to drive Bose-Einstein condensates [32] using the classical [33] and Gross-Pitaevskii [34] approximations, or to drive dissipative systems [35, 36, 37, 38].
The most used algorithms are based on the gradient of the functional with respect to the field [39, 40, 41], which can be used to ensure a fast monotonic increase in the objective. This is the case of Krotov's method [42, 43, 44, 35, 45, 46], as well as of many algorithms developed by Rabitz and coworkers [47, 48, 49, 50, 51, 52, 53]. There is continuously ongoing research on and improvement of Krotov's method [54, 55, 56]; of particular interest are algorithms that provide accelerated convergence of quantum control [49, 54] and strategies to smooth the control parameters of the driving field [34]. In this context, there are also studies on the landscape, i. e., the topological and geometric structure defined by the functional [57, 15, 58].
The Quantum Optimal Control Theory (QOCT) is a well-tested method that has proven to be efficient and accurate in controlling the dynamics of a quantum system [43]. In this work we revisit the underlying theory to rigorously obtain the control variational equations. To do so, we also derive the Euler-Lagrange equations, explaining the theory of variations in detail and pedagogically. The latter can be of interest to many physicists and chemists, as well as graduate students, who do not fully understand the theory of variations and can benefit from it to, for instance, develop numerical methods.
This paper is structured as follows: In Section 2, we apply the Euler-Lagrange equations to obtain a set of QOCT equations, also known as equations of motion, which differ by a term from the equations obtained in the literature [1, 43, 39]. In Section 3, we present the necessary conditions to obtain continuous solutions and demonstrate that the previously labeled "canonical solutions" are discontinuous in the costate, but not in the wavefunction or the driving field under certain conditions. Our concluding remarks are summarized in Section 4. Additionally, in the appendix, we provide a brief introduction to the theory of variations, derive the Euler-Lagrange equations, and explain how to impose constraints using Lagrange multipliers. We also include some proofs relevant to QOCT mentioned in the main text.
## 2 Quantum Optimal control
Quantum optimal control is a family of methods to optimize a specific observable for a quantum system. This involves imposing a cost, which is a constraint or condition on the field driving the system. In this section, we derive the QOCT equations by utilizing the Euler-Lagrange equations (ELE). The ELE are derived using the theory of variations, and their solutions correspond to the stationary functions. Specifically, the ELE corresponding to the functional
\[\int_{\Omega}\mathcal{F}[y,y_{x_{1}},y_{x_{2}},\ldots,y_{x_{n}},x_{1},x_{2}, \ldots,x_{n},\Omega^{\prime}]\mathrm{d}\Omega^{\prime} \tag{1}\]
are given by
\[\frac{\partial}{\partial y}\mathcal{F}+\sum_{k=1}^{m}(-1)^{k}\sum_{j=1}^{n} \frac{\mathrm{d}^{k}}{\mathrm{d}x_{j}^{k}}\frac{\partial}{\partial y_{x_{j}^{ k}}}\mathcal{F}=0, \tag{2}\]
where \(y_{x_{j}^{m}}(x_{1},x_{2},\ldots,x_{n})\equiv\frac{\partial^{m}}{\partial x_{j}^{m}}y\). The ELE are derived in Appendices A and B.
Our objective is to optimize the expectation value of the observable \(O\) while minimizing a cost functional and satisfying the time-dependent Schrödinger equation (TDSE). To achieve this, we define a functional \(J\) and then seek its extremal points. The functional \(J\) may be divided into three terms:
\[J[\psi,\psi^{*},\chi,\chi^{*},\varepsilon]=J_{\mathrm{opt}}[\psi,\psi^{*}]+J_{ \mathrm{cost}}[\varepsilon]+J_{\mathrm{TDSE}}[\psi,\psi^{*},\chi,\chi^{*}, \varepsilon]. \tag{3}\]
First, the functional corresponding to optimizing the expectation value of the operator \(O\)
\[J_{\mathrm{opt}}[\psi,\psi^{*}]=\int_{\Omega}\psi^{*}(\Omega,T)O\psi(\Omega,T) \mathrm{d}\Omega, \tag{4}\]
where \(\psi(\Omega,t)\) is the wavefunction which describes the quantum system and \(\Omega\) denotes the degrees of freedom (Euler angles, Cartesian coordinates of one- or many-body systems, ...). Then, we set the _cost_, i. e., a condition on the field, as [39]
\[J_{\mathrm{cost}}[\varepsilon]=-\alpha\int\limits_{0}^{T}\left[\varepsilon(t )-\varepsilon_{\mathrm{ref}}(t)\right]^{2}\mathrm{d}t, \tag{5}\]
where \(\alpha\) is a penalty factor. It is worth noting that \(J_{\mathrm{cost}}[\varepsilon]\) may be made more complex by including a mask function [39]. However, for the sake of simplicity and without loss of generality, we will consider this simpler case. Finally, the functional which ensures that the TDSE is fulfilled reads as
\[J_{\mathrm{TDSE}}[\psi,\psi^{*},\chi,\chi^{*},\varepsilon]=-2\,\mathrm{Im} \int\limits_{0}^{\widehat{T}}\left[\int_{\Omega}\chi^{*}(\Omega,t)\left(i \frac{\partial}{\partial t}-H(\Omega,\varepsilon)\right)\psi(\Omega,t) \mathrm{d}\Omega\right]\mathrm{d}t, \tag{6}\]
where \(\chi(\Omega,t)\) is the Lagrange multiplier. In this work, we set the Hamiltonian to \(H=-\frac{1}{2}\nabla^{2}+V(\Omega,\epsilon(t))\), where \(V(\Omega,\epsilon(t))\) is the internal potential plus the interaction with a driving field, \(\epsilon(t)\). In contrast to the literature, we take the upper limit of the time interval to be \(\widehat{T}\), a value greater than the time at which the function to optimize is evaluated, \(T\), denoted as the _measurement time_[43]. Specifically, we choose \(\widehat{T}>T\), as shown in Fig. 1, to effectively manage the discontinuity caused by the Dirac delta in the QOCT equations, as described below.
To obtain the extremal points we derive the corresponding set of Euler-Lagrange equations (2). In the case of \(\epsilon(t)\), we find
\[2\,\mathrm{Im}\int_{\Omega}\chi^{*}(\Omega,t)\frac{\partial}{\partial\epsilon}H(\Omega,\epsilon)\psi(\Omega,t)\mathrm{d}\Omega-2\alpha\left[\epsilon(t)-\epsilon_{\mathrm{ref}}(t)\right]=0\Rightarrow \tag{7}\] \[\epsilon(t)=\epsilon_{\mathrm{ref}}(t)+\frac{1}{\alpha}\,\mathrm{Im}\int_{\Omega}\chi^{*}(\Omega,t)\frac{\partial}{\partial\epsilon}V(\Omega,\epsilon)\psi(\Omega,t)\mathrm{d}\Omega. \tag{8}\]
Now, by taking variations on the wavefunction \(\psi(\Omega,t)\), we determine the Lagrange multiplier \(\chi^{*}(\Omega,t)\). Note that only the parts of the functional (3) that have a direct dependence on \(\psi(\Omega,t)\) are relevant. As \(\psi(\Omega,t)\) and \(\psi^{*}(\Omega,t)\) are independent of each other (for further information, refer to Appendix C), the relevant term reads as
\[j_{\psi}[\chi^{*},\epsilon,\psi,\Omega,t]=i\chi^{*}(\Omega,t)\left(i\frac{ \partial}{\partial t}-H(\Omega,\epsilon)\right)\psi(\Omega,t)+\psi^{*}(\Omega,t)O\psi(\Omega,t)\delta(t-T). \tag{9}\]
Then, the terms in the ELE are
\[\frac{\partial\,j_{\psi}}{\partial\psi} = i\chi^{*}(\Omega,t)\left[-V(\Omega,\epsilon)\right]+O\psi^{*}(\Omega,t)\delta(t-T), \tag{10}\] \[\frac{\partial\,j_{\psi}}{\partial\dot{\psi}} = -\chi^{*}(\Omega,t),\quad\frac{\mathrm{d}}{\mathrm{d}t}\frac{\partial\,j_{\psi}}{\partial\dot{\psi}}=-\dot{\chi}^{*}(\Omega,t), \tag{11}\] \[\frac{\partial\,j_{\psi}}{\partial\psi_{x_{j}}} = 0,\quad\frac{\partial\,j_{\psi}}{\partial\psi_{x_{j}^{2}}}=\frac{i}{2}\chi^{*}(\Omega,t),\quad\frac{\mathrm{d}^{2}}{\mathrm{d}x_{j}^{2}}\frac{\partial\,j_{\psi}}{\partial\psi_{x_{j}^{2}}}=\frac{i}{2}\chi^{*}_{x_{j}^{2}}(\Omega,t)\text{ for all }x_{j}, \tag{12}\]
where we use the notation \(\dot{\psi}(\Omega,t)\equiv\frac{\partial}{\partial t}\psi(\Omega,t)\) and \(\psi_{x_{j}^{n}}(\Omega,t)\equiv\frac{\partial^{n}}{\partial x_{j}^{n}}\psi(\Omega,t)\). Note that we have used Eq. (2) and the identity \(\int_{\Omega}\psi^{*}(\Omega,T)O\psi(\Omega,T)\mathrm{d}\Omega=\int_{\Omega}\int_{0}^{\widehat{T}}\psi^{*}(\Omega,t)O\psi(\Omega,t)\delta(t-T)\,\mathrm{d}t\,\mathrm{d}\Omega\) to derive the contributions to the ELE with high-order derivatives.
Figure 1: Sketch of a time interval illustrating the definition of \(T\) and \(\widehat{T}\).
After summing all the terms, we obtain
\[\left(i\frac{\partial}{\partial t}+H(\Omega,\epsilon)\right)\chi^{*}(\Omega,t)=- iO\psi^{*}(\Omega,t)\delta(t-T). \tag{13}\]
To obtain the ELE for \(\chi(\Omega,t)\) we can either take variations on \(\psi^{*}(\Omega,t)\) or simply take the complex conjugate of Eq. (13),
\[\left(i\frac{\partial}{\partial t}-H(\Omega,\epsilon)\right)\chi(\Omega,t)=- iO\psi(\Omega,t)\delta(t-T). \tag{14}\]
The TDSE of \(\psi^{*}(\Omega,t)\) is obtained by taking variations of \(\chi(\Omega,t)\). The relevant term reads
\[j_{\chi}[\psi^{*},\chi,\epsilon,\Omega,t]=-i\chi(\Omega,t)\left(i\frac{ \partial}{\partial t}+H(\Omega,\epsilon)\right)\psi^{*}(\Omega,t) \tag{15}\]
Each term in the ELE is
\[\frac{\partial j_{\chi}}{\partial\dot{\chi}} = 0,\qquad\frac{\mathrm{d}}{\mathrm{d}t}\frac{\partial j_{\chi}}{ \partial\dot{\chi}}=0 \tag{16}\] \[\frac{\partial j_{\chi}}{\partial\chi} = -i\left(i\frac{\partial}{\partial t}+H(\Omega,\epsilon(t))\right) \psi^{*}(\Omega,t). \tag{17}\]
By combining all the terms we derive the EOM
\[\left(i\frac{\partial}{\partial t}+H(\Omega,\epsilon)\right)\psi^{*}(\Omega, t)=0 \tag{18}\]
Finally, we take variations with respect to the Lagrange multiplier \(\chi^{*}(\Omega,t)\). As done above, we work with the relevant term
\[j_{\chi^{*}}[\psi,\chi^{*},\epsilon,\Omega,t]=i\chi^{*}(\Omega,t)\left(i\frac {\partial}{\partial t}-H(\Omega,\epsilon)\right)\psi(\Omega,t). \tag{19}\]
The terms in the associated ELE are
\[\frac{\partial j_{\chi^{*}}}{\partial\dot{\chi}^{*}} = 0,\qquad\frac{\mathrm{d}}{\mathrm{d}t}\frac{\partial j_{\chi^{*}}}{\partial\dot{\chi}^{*}}=0, \tag{20}\] \[\frac{\partial j_{\chi^{*}}}{\partial\chi^{*}} = i\left(i\frac{\partial}{\partial t}-H(\Omega,\epsilon)\right)\psi(\Omega,t)\,, \tag{21}\]
from which we obtain the TDSE for \(\psi(\Omega,t)\)
\[\left(i\frac{\partial}{\partial t}-H(\Omega,\epsilon)\right)\psi(\Omega,t)=0. \tag{22}\]
As expected, Eq. (22) is the complex conjugate of Eq. (18), which ensures that the functional \(J[\psi,\psi^{*},\chi,\chi^{*},\epsilon]\) is appropriate. Summing up, we derived the set of equations
\[\left(i\frac{\partial}{\partial t}-H(\Omega,\epsilon)\right)\psi( \Omega,t)=0, \tag{23}\] \[\left(i\frac{\partial}{\partial t}-H(\Omega,\epsilon)\right)\chi( \Omega,t)=-iO\psi(\Omega,t)\delta(t-T),\] (24) \[\epsilon(t)=\epsilon_{\mathrm{ref}}(t)+\frac{1}{\alpha}\operatorname {Im}\int_{\Omega}\chi^{*}(\Omega,t)\frac{\partial}{\partial\epsilon}H(\Omega, \epsilon)\psi(\Omega,t)\mathrm{d}\Omega, \tag{25}\]
and their complex conjugates.
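The coupled set (23)-(25) is implicit in \(\epsilon(t)\), since \(\psi\) and \(\chi\) themselves depend on the field. As a purely illustrative numerical sketch (and not the monotonically convergent algorithms of the literature), the following Python code solves the set self-consistently for an assumed two-level system with \(H(\epsilon)=H_{0}+\epsilon(t)\mu\), observable \(O=|1\rangle\langle 1|\) and \(\epsilon_{\mathrm{ref}}=0\), using the costate condition \(\chi(T^{-})=O\psi(T)\) derived below in Eq. (30). The model, the time grid and the damped fixed-point update are assumptions made only for this example.

```python
import numpy as np
from scipy.linalg import expm

# Assumed two-level model: H(eps) = H0 + eps(t) * mu, so dH/deps = mu
H0 = np.diag([0.0, 1.0])                  # bare energies
mu = np.array([[0.0, 1.0], [1.0, 0.0]])   # field-coupling operator
O = np.diag([0.0, 1.0])                   # observable: population of |1>

T, nt, alpha = 10.0, 1000, 1.0            # measurement time, grid, penalty
dt = T / nt
eps_ref = np.zeros(nt)                    # reference field (assumed zero)
psi0 = np.array([1.0, 0.0], dtype=complex)

eps = 0.1 * np.ones(nt)                   # initial guess for eps(t)
for it in range(40):
    # forward propagation of psi, Eq. (23)
    psi = np.empty((nt + 1, 2), dtype=complex)
    psi[0] = psi0
    for k in range(nt):
        psi[k + 1] = expm(-1j * (H0 + eps[k] * mu) * dt) @ psi[k]

    # backward propagation of chi, Eq. (24): chi(T^-) = O psi(T), chi = 0 for t > T
    chi = np.empty((nt + 1, 2), dtype=complex)
    chi[nt] = O @ psi[nt]
    for k in range(nt, 0, -1):
        chi[k - 1] = expm(1j * (H0 + eps[k - 1] * mu) * dt) @ chi[k]

    # stationarity condition (25), applied with an ad hoc damped update
    new_eps = eps_ref + np.imag(
        np.einsum('ki,ij,kj->k', chi[:nt].conj(), mu, psi[:nt])) / alpha
    eps = 0.7 * eps + 0.3 * new_eps

print("final <O> =", np.real(psi[-1].conj() @ O @ psi[-1]))
```

The forward and backward propagations realize Eqs. (23)-(24), while the field update implements the stationarity condition (25); the damping factor is an arbitrary choice and convergence is not guaranteed for every set of parameters.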
In the following sections we obtain two different types of solutions, namely, continuous and discontinuous solutions. The latter are related to the _canonical_ QOCT equations.
## 3 Extremal trajectories: Conditions and continuity
### Continuous solutions
Eq. (23) guarantees the continuity of \(\psi(\Omega,t)\), while the continuity of \(\chi(\Omega,t)\), and thus of \(\varepsilon(t)\), cannot be assured by Eq. (24). Here, we show that \(\chi(\Omega,t)\) is continuous if it fulfills \(\chi(\Omega,T)=\frac{i}{2\pi n}O\psi(\Omega,T)\), where \(n\in\mathbb{Z}-\{0\}\).
Proof.: Let us assume the ansatz \(\chi(\Omega,t)=\widetilde{\chi}(\Omega,t)e^{i\phi\Theta(t-T)}\), where \(\widetilde{\chi}(\Omega,t)\) is continuous and fulfils the TDSE, \(\phi\in\mathbb{C}\) and \(\Theta(t-T)\) is the Heaviside function. We substitute the ansatz into Eq. (24)
\[i\frac{\partial}{\partial t}\left(\widetilde{\chi}(\Omega,t)e^{ i\phi\Theta(t-T)}\right)-\hat{H}(\Omega,t)\widetilde{\chi}(\Omega,t)e^{i \phi\Theta(t-T)}=-iO\psi(\Omega,t)\delta(t-T)\] \[ie^{i\phi\Theta(t-T)}\frac{\partial}{\partial t}\widetilde{ \chi}(\Omega,t)+i\widetilde{\chi}(\Omega,t)\frac{\partial}{\partial t}e^{i \phi\Theta(t-T)}-\hat{H}(\Omega,t)\widetilde{\chi}(\Omega,t)e^{i\phi\Theta(t -T)}=-iO\psi(\Omega,t)\delta(t-T)\] \[e^{i\phi\Theta(t-T)}\left(i\frac{\partial}{\partial t}\widetilde {\chi}(\Omega,t)-\hat{H}(\Omega,t)\widetilde{\chi}(\Omega,t)\right)=\left( \phi\widetilde{\chi}(\Omega,t)e^{i\phi\Theta(t-T)}-iO\psi(\Omega,t)\right) \delta(t-T)\]
To remove the discontinuity, we may set
\[\lim_{t\to T}\phi\widetilde{\chi}(\Omega,t)e^{i\phi\Theta(t-T)}=iO\psi( \Omega,T)\Rightarrow\lim_{t\to T}\chi(\Omega,t)=\frac{i}{\phi}O\psi( \Omega,T) \tag{26}\]
Finally, \(\lim_{t\to T}\widetilde{\chi}(\Omega,t)e^{i\phi\Theta(t-T)}\) is defined if \(e^{i\phi\Theta(t-T)}\) is continuous, i. e., \(\phi=2\pi n\) with \(n\in\mathbb{Z}-\{0\}\). Therefore, \(\chi(\Omega,t)\) is continuous if and only if \(\chi(\Omega,T)=\frac{i}{2\pi n}O\psi(\Omega,T)\). Note that it is easy to prove that \(\varepsilon(t)\) is also continuous since \(\psi(\Omega,t)\) and \(\chi(\Omega,t)\) are continuous.
It should be noted that the condition of continuity is imposed at a specific time \(T\), which poses a challenge as \(T\) is not a boundary of the time interval (as illustrated in Fig. 1). Therefore, such solutions may not be appropriate from a formal standpoint, especially when extending the system to a finite _measurement time_. Let us also remark that the solution obtained here differs from the costate provided in the literature [43, 39, 46]. In the following section, we will provide evidence to demonstrate that the costate reported in earlier studies corresponds to a solution that is discontinuous.
### Relationship to Canonical _quantum optimal control equations_
In this section, we introduce a less restrictive condition for \(\chi(\Omega,t)\) at \(t=T\), allowing for discontinuity, and impose the additional condition \(\chi(\Omega,T^{\prime})=0\) for \(T^{\prime}>T\).
Proof.: Let us define \(\chi(\Omega,t)=\widetilde{\chi}(\Omega,t)f(t)\), where \(\widetilde{\chi}(\Omega,t)\) satisfies the TDSE \(\left(i\frac{\partial}{\partial t}-H(\Omega,\varepsilon)\right) \widetilde{\chi}(\Omega,t)=0\) and \(\widetilde{\chi}(\Omega,t)=\chi(\Omega,t)\) for \(t\in[0,T)\). We plug it in Eq. (14)
\[\left(i\frac{\partial}{\partial t}-H(\Omega,\varepsilon)\right) \left(\widetilde{\chi}(\Omega,t)f(t)\right)=-iO\psi(\Omega,t)\delta(t-T)\] \[i\widetilde{\chi}(\Omega,t)\frac{\partial}{\partial t}f(t)+f(t) \left(i\frac{\partial}{\partial t}\widetilde{\chi}(\Omega,t)-H(\Omega, \varepsilon)\widetilde{\chi}(\Omega,t)\right)=-iO\psi(\Omega,t)\delta(t-T)\] \[\widetilde{\chi}(\Omega,t)\frac{\partial}{\partial t}f(t)=-O\psi (\Omega,t)\delta(t-T) \tag{27}\]
Next, we integrate over time in the interval \([T-\delta,T+\delta]\) with \(\delta>0\) and take the limit \(\delta\to 0\)
\[\lim_{\delta\to 0}f(T+\delta)-\lim_{\delta\to 0}f(T-\delta)=-\frac{O\psi(\Omega,T)}{\widetilde{\chi}(\Omega,T)}. \tag{28}\]
Setting \(\chi(\Omega,t)=0\) for \(t>T\) and using that \(\chi(\Omega,t)=\widetilde{\chi}(\Omega,t)\) for \(t<T\), that is to say, \(\lim_{\delta\to 0}f(T+\delta)=0\) and \(\lim_{\delta\to 0}f(T-\delta)=1\), we obtain
\[-1=-\frac{O\psi(\Omega,T)}{\widetilde{\chi}(\Omega,T)}\Rightarrow\widetilde{ \chi}(\Omega,T)=O\psi(\Omega,T), \tag{29}\]
therefore, we obtain the boundary condition for the costate
\[\lim_{\delta\to 0}\chi(\Omega,T-\delta)=O\psi(\Omega,T). \tag{30}\]
Next, we plug Eq. (30) in Eq. (25)
\[\lim_{\delta\to 0}\varepsilon(T-\delta) =\varepsilon_{\text{ref}}(T)+\frac{1}{\alpha}\operatorname{Im} \int_{\Omega}O\psi^{*}(\Omega,T)\frac{\partial}{\partial\varepsilon}H(\Omega, \varepsilon)\psi(\Omega,T)\mathrm{d}\Omega\] \[=\varepsilon_{\text{ref}}(T)+\frac{1}{\alpha}\operatorname{Im} \left\langle\psi\left|O\frac{\partial}{\partial\varepsilon}H(\Omega, \varepsilon)\right|\psi\right\rangle \tag{31}\]
Note that the integral in Eq. (31) is real if \(O\frac{\partial}{\partial\varepsilon}H(\Omega,\varepsilon)\) is Hermitian, that is, \(\left[O,\frac{\partial}{\partial\varepsilon}H(\Omega,\varepsilon)\right]=0\). As a consequence, we have \(\lim_{\delta\to 0}\varepsilon(T-\delta)=\varepsilon_{\text{ref}}(T)\). Moreover, even though \(\chi(\Omega,t)\) is not continuous at \(t=T\), we have that \(\lim_{\delta\to 0}\varepsilon(T+\delta)=\varepsilon_{\text{ref}}(T)\), which ensures the continuity of \(\varepsilon(t)\) at \(t=T\).
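The reality argument used after Eq. (31) is easy to check numerically: if \(O\) and \(\partial H/\partial\varepsilon\) are Hermitian and commute, their product is Hermitian and \(\langle\psi|O\,\partial_{\varepsilon}H|\psi\rangle\) is real. The short sketch below is an added illustration only; the random matrices, the dimension and the state are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_hermitian(n):
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (a + a.conj().T) / 2

n = 4
O = rand_hermitian(n)
psi = rng.normal(size=n) + 1j * rng.normal(size=n)
psi /= np.linalg.norm(psi)

mu_commuting = O @ O + 2.0 * O      # a polynomial in O, so [O, mu] = 0
mu_generic = rand_hermitian(n)      # generically [O, mu] != 0

for label, mu in [("[O,mu]=0", mu_commuting), ("generic ", mu_generic)]:
    val = np.imag(psi.conj() @ O @ mu @ psi)
    print(label, " Im <psi|O mu|psi> =", val)
# the commuting case gives a value at machine precision, the generic case does not
```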
## 4 Conclusions
In this study, we have presented a comprehensive explanation of the theory of variations that is essential to derive the Quantum Optimal Control Theory (QOCT) equations. Specifically, we have derived the Euler-Lagrange equations for multiple variables and demonstrated a systematic approach to incorporate constraints in any multidimensional functional by utilizing the Lagrange multipliers.
Moreover, we have shown that the _canonical_ solutions of the QOCT equations, which are commonly reported in the literature, exhibit a discontinuity at the measurement time, as is commonly assumed. In contrast, we have identified a new set of continuous solutions of the QOCT equations. However, the lack of well-defined boundary conditions limits their practical applicability.
Finally, let us highlight that our work presents a foundation for developing novel optimal control strategies using our methodology.
J.J.O. gratefully acknowledges the funding from the Madrid Government (Comunidad de Madrid, Spain) under the Multiannual Agreement with Universidad Complutense de Madrid in the line Research Incentive for Young PhDs, in the context of the V PRICIT (Regional Programme of Research and Technological Innovation) (Grant: PR27/21-010), Project PID2019-106732GB-I00 (MINECO) and the Social European Fund and Juan de la Cierva-Incorporacion granted by the Ministerio de Ciencia e Innovacion. I.R.S. acknowledges support from MINECO PID2021-122796NB-I00. K.C. acknowledges support from MINECO PID2020-118180GB-I00, and Junta de Andalucia (Spain) grants A-FQM-441-UGR18 and P20-00164.
## Appendix A Theory of variations
As we have sketched above, the QOCT is a widely used procedure to maximize or minimize an observable of a quantum system by setting several constraints. Most of the theory developed in the literature addresses physicists and chemists, and lacks a rigorous mathematical framework, which demands clear and correct proofs. In particular, this is due to a misunderstanding of the concept of _variation_, which is defined in mathematics as _any function \(\eta(\Omega)\) that perturbs a given function \(f(\Omega)\), fulfilling that \(\eta(\Omega)\) vanishes on the border of the domain of \(f(\Omega)\)_. Let us also note that this vanishing condition may also apply to the derivatives of \(\eta(\Omega)\), as we explain in detail in Appendix B.
In the following sections we present the mathematical tools needed to correctly derive the EOM of QOCT. We first derive the Euler-Lagrange equations (ELE), which are the equations fulfilled by the functions which optimize a given functional. By applying the ELE, we do not need to make explicit use of the variations, facilitating and clarifying the derivation of our set of QOCT equations. Next, we explain in detail how to add constraints to the functional, such as the total energy of the driving field.
### Derivation of Euler-Lagrange equations
The minimization principle consists in seeking the minimum of the functional
\[\int_{\Omega}\mathcal{F}[\widehat{y},\widehat{y}_{x_{1}},\widehat{y}_{x_{2}},\ldots,\widehat{y}_{x_{n}},\Omega^{\prime}]\mathrm{d}\Omega^{\prime}, \tag{A.1}\]
where \(\mathcal{F}\) is a functional of \(\widehat{y}(x_{1},x_{2},\ldots,x_{n})\), defined on an \(n\)-dimensional domain \(\Omega\), of its first derivatives and of \(\Omega^{\prime}=(x_{1},x_{2},\ldots,x_{n})\). Note that we use the notation \(y_{x_{i}}=\frac{\partial y}{\partial x_{i}}\). The Euler-Lagrange equations (ELE) are the conditions on the extremal functions, that is, the conditions that have to be satisfied by \(y(x_{1},\ldots,x_{n})\) in order to optimize the functional (A.1). They read as
\[\frac{\partial}{\partial\widehat{y}}\mathscr{F}(\widehat{y}, \widehat{y}_{x_{1}},\ldots,\widehat{y}_{x_{n}},x_{1},\ldots,x_{n})-\] \[\sum_{j=1}^{n}\frac{\mathrm{d}}{\mathrm{d}x_{j}}\frac{\partial}{ \partial\widehat{y}_{x_{j}}}\mathscr{F}(\widehat{y},\widehat{y}_{x_{1}}, \ldots,\widehat{y}_{x_{n}},x_{1},\ldots,x_{n})=0.\] (A.2)
Proof.: To seek these extremal functions we follow the same strategy as Lagrange [2]. We take

\[\widehat{y}(x_{1},x_{2},\ldots,x_{n},\xi)=y(x_{1},x_{2},\ldots,x_{n})+\xi\eta(x_{1},x_{2},\ldots,x_{n}),\] (A.3)

where \(\eta(x_{1},x_{2},\ldots,x_{n})\in\mathbb{C}\) is such that it vanishes on \(\partial\Omega\), i. e., the border of the domain \(\Omega\), and \(\xi\in\mathds{R}\). Let us note that we may find in the literature that the _variation_ is defined as \(\delta y(x_{1},x_{2},\ldots,x_{n})\equiv\xi\eta(x_{1},x_{2},\ldots,x_{n})\). Thus, \(\widehat{y}(x_{1},x_{2},\ldots,x_{n},\xi)\) and \(y(x_{1},x_{2},\ldots,x_{n})\) have the same value on \(\partial\Omega\) for all \(\eta(x_{1},x_{2},\ldots,x_{n})\). In Fig. A1, we illustrate how the variations affect a curve in a plane with the calculation of the shortest distance between two points. For the sake of simplicity we restrict our analysis to the 2 dimensional case, which can be easily generalized. The stationary function is a geodesic which corresponds to the straight line linking two points. Therefore, the problem is reduced to finding \(y(x_{1},x_{2},\ldots,x_{n})\) so that
\[\frac{\partial}{\partial\xi}\int_{\Omega}\mathscr{F}[\widehat{y},\widehat{y} _{x_{1}},\widehat{y}_{x_{2}},\Omega^{\prime}]\mathrm{d}\Omega^{\prime}\bigg{|} _{\xi=0}=0.\] (A.4)
We assume that \(y(x_{1},x_{2})\) is a stationary point of \(\int_{\Omega}\mathscr{F}[y,y_{x_{1}},y_{x_{2}},x_{1},x_{2}]\mathrm{d}x_{1}\mathrm{d}x_{2}\). Then, we study the functional close to this point by adding a _variation_ \(\delta y=\xi\eta(x_{1},x_{2})\), where \(\eta(x_{1},x_{2})\) is any function which vanishes at the border of the domain of \(y(x_{1},x_{2})\) (see Fig. A1). The variation ensures that \(\int_{\Omega}\mathscr{F}[y+\xi\eta,y_{x_{1}}+\xi\eta_{x_{1}},y_{x_{2}}+\xi\eta_{x_{2}},x_{1},x_{2}]\mathrm{d}x_{1}\mathrm{d}x_{2}\), viewed as a function of \(\xi\), has an extremum at \(\xi=0\), i. e., it fulfills the condition
\[\frac{\partial}{\partial\xi}\int_{\Omega}\mathscr{F}[\widehat{y},\widehat{y}_{ x_{1}},\widehat{y}_{x_{2}},x_{1},x_{2}]\mathrm{d}x_{1}\mathrm{d}x_{2}\bigg{|}_{ \xi=0}=0. \tag{11}\]
Expanding the derivative, we obtain
\[\frac{\partial}{\partial\xi}\int_{\Omega}\mathscr{F}[\widehat{y},\widehat{y}_{x_{1}},\widehat{y}_{x_{2}},x_{1},x_{2}]\mathrm{d}x_{1}\mathrm{d}x_{2}=\int\limits_{\Omega}\left[\frac{\partial\mathscr{F}}{\partial\widehat{y}}\frac{\partial\widehat{y}}{\partial\xi}+\frac{\partial\mathscr{F}}{\partial\widehat{y}_{x_{1}}}\frac{\partial\widehat{y}_{x_{1}}}{\partial\xi}+\frac{\partial\mathscr{F}}{\partial\widehat{y}_{x_{2}}}\frac{\partial\widehat{y}_{x_{2}}}{\partial\xi}\right]\mathrm{d}x_{1}\mathrm{d}x_{2}=\int\limits_{\Omega}\left[\frac{\partial\mathscr{F}}{\partial\widehat{y}}\,\eta+\frac{\partial\mathscr{F}}{\partial\widehat{y}_{x_{1}}}\,\eta_{x_{1}}+\frac{\partial\mathscr{F}}{\partial\widehat{y}_{x_{2}}}\,\eta_{x_{2}}\right]\mathrm{d}x_{1}\mathrm{d}x_{2},\]
where \(\mathscr{F}=\mathscr{F}(\widehat{y},\widehat{y}_{x_{1}},\widehat{y}_{x_{2}},x_{1},x_{2})\) and \(\eta=\eta(x_{1},x_{2})\). Integrating by parts the terms containing \(\eta_{x_{1}}\) and \(\eta_{x_{2}}\), this becomes
\[\int\limits_{\Omega}\frac{\partial\mathscr{F}}{\partial\widehat{y}}\,\eta\,\mathrm{d}x_{1}\mathrm{d}x_{2}+\left[\frac{\partial\mathscr{F}}{\partial\widehat{y}_{x_{1}}}\,\eta\right]_{\partial\Omega}-\int\limits_{\Omega}\frac{\mathrm{d}}{\mathrm{d}x_{1}}\frac{\partial\mathscr{F}}{\partial\widehat{y}_{x_{1}}}\,\eta\,\mathrm{d}x_{1}\mathrm{d}x_{2}+\left[\frac{\partial\mathscr{F}}{\partial\widehat{y}_{x_{2}}}\,\eta\right]_{\partial\Omega}-\int\limits_{\Omega}\frac{\mathrm{d}}{\mathrm{d}x_{2}}\frac{\partial\mathscr{F}}{\partial\widehat{y}_{x_{2}}}\,\eta\,\mathrm{d}x_{1}\mathrm{d}x_{2}.\]
The two boundary terms vanish because \(\eta(x_{1},x_{2})=0\) on the border of the domain, i. e., for \((x_{1},x_{2})\in\partial\Omega\). Therefore, the condition for the stationary point is
\[\int\limits_{\Omega}\left[\frac{\partial}{\partial\widehat{y}} \mathscr{F}(\widehat{y},\widehat{y}_{x_{1}},\widehat{y}_{x_{2}},x_{1},x_{2})- \frac{\mathrm{d}}{\mathrm{d}x_{1}}\frac{\partial}{\partial\widehat{y}_{x_{1}}} \mathscr{F}(\widehat{y},\widehat{y}_{x_{1}},\widehat{y}_{x_{2}},x_{1},x_{2})\right.\] \[\left.-\frac{\mathrm{d}}{\mathrm{d}x_{2}}\frac{\partial}{\partial \widehat{y}_{x_{2}}}\mathscr{F}(\widehat{y},\widehat{y}_{x_{1}},\widehat{y}_{x _{2}},x_{1},x_{2})\right]\eta(x_{1},x_{2})\mathrm{d}x_{1}\mathrm{d}x_{2}=0. \tag{21}\]
Since this condition has to be fulfilled for all \(\eta(x_{1},x_{2})\)
\[\frac{\partial}{\partial\widehat{y}}\mathscr{F}(\widehat{y},\widehat{y}_{x_{1}},\widehat{y}_{x_{2}},x_{1},x_{2})-\frac{\mathrm{d}}{\mathrm{d}x_{1}}\frac{ \partial}{\partial\widehat{y}_{x_{1}}}\mathscr{F}(\widehat{y},\widehat{y}_{x _{1}},\widehat{y}_{x_{2}},x_{1},x_{2})-\frac{\mathrm{d}}{\mathrm{d}x_{2}} \frac{\partial}{\partial\widehat{y}_{x_{2}}}\mathscr{F}(\widehat{y},\widehat{y}_{ x_{1}},\widehat{y}_{x_{2}},x_{1},x_{2})=0. \tag{22}\]
This result can be generalized for more than two variables following the same procedure, obtaining
\[\frac{\partial}{\partial y}\mathcal{F}(y,y_{x_{1}},\ldots,y_{x_{n}},x_{1},\ldots, x_{n})-\sum_{j=1}^{n}\frac{\mathrm{d}}{\mathrm{d}x_{j}}\frac{\partial}{\partial y _{x_{j}}}\mathcal{F}(y,y_{x_{1}},\ldots,y_{x_{n}},x_{1},\ldots,x_{n})=0, \tag{111}\]
where we have used that \(\widehat{y}(x_{1},\ldots,x_{n})=y(x_{1},\ldots,x_{n})\) for \(\xi=0\).
It is important to remark that it is not necessary to know \(y\) on the entire border if the ELE guarantee the existence and uniqueness of the solution.
Finally, let us now consider a functional depending on higher-order derivatives of the function, \(\mathcal{F}(\widehat{y},\{\widehat{y}_{x_{j}}\}_{j=1}^{n},\{\widehat{y}_{x_{j}^{2}}\}_{j=1}^{n},\ldots,\{\widehat{y}_{x_{j}^{m}}\}_{j=1}^{n},x_{1},\ldots,x_{n})\), where \(y_{x_{j}^{k}}\equiv\frac{\partial^{k}y}{\partial x_{j}^{k}}\). Thus, the ELE for a functional with higher-order derivatives read
\[\frac{\partial}{\partial y}\mathcal{F}+\sum_{k=1}^{m}(-1)^{k}\sum_{j=1}^{n} \frac{\mathrm{d}^{k}}{\mathrm{d}x_{j}^{k}}\frac{\partial}{\partial y_{x_{j}^ {k}}}\mathcal{F}=0, \tag{112}\]
where we have not explicitly indicated the arguments of the functional for the sake of clarity. A detailed proof can be found in Appendix B.
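As a quick consistency check of the higher-order Euler-Lagrange formula above (Eq. (2) of the main text, derived again as Eq. (B.4) in Appendix B), the following sympy sketch applies it to the assumed density \(\mathcal{F}=y_{x^{2}}^{2}/2-f(x)\,y\) and recovers the expected fourth-order equation \(y''''=f(x)\); the example and the symbol names are illustrative choices added here, not part of the original text.

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')
f = sp.Function('f')                     # an assumed source term

# placeholder symbols for y, y_x and y_{x^2}, so that F can be differentiated
Y, Yx, Yxx = sp.symbols('Y Yx Yxx')
F = Yxx**2 / 2 - f(x) * Y                # F(y, y_x, y_{x^2}, x)

subs_back = {Y: y(x), Yx: y(x).diff(x), Yxx: y(x).diff(x, 2)}
dF_dy = sp.diff(F, Y).subs(subs_back)
dF_dyx = sp.diff(F, Yx).subs(subs_back)
dF_dyxx = sp.diff(F, Yxx).subs(subs_back)

# dF/dy - d/dx (dF/dy_x) + d^2/dx^2 (dF/dy_{x^2}) = 0
ele = dF_dy - sp.diff(dF_dyx, x) + sp.diff(dF_dyxx, x, 2)
print(sp.simplify(ele))                  # -> Derivative(y(x), (x, 4)) - f(x)
```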
### Adding constraints
Adding a constraint may be mandatory to solve a given problem, and it is done by using the Lagrange multipliers [3]. In the context of calculus, to find the stationary point of a function \(y(x_{1},x_{2})\) fulfilling the condition \(g(x_{1},x_{2})=0\) we have to find the stationary point of
\[f(x_{1},x_{2},\lambda)=y(x_{1},x_{2})+\lambda g(x_{1},x_{2}). \tag{113}\]
The stationary points fulfill
\[\frac{\partial}{\partial x_{1}}f(x_{1},x_{2},\lambda)=\frac{\partial}{ \partial x_{2}}f(x_{1},x_{2},\lambda)=\frac{\partial}{\partial\lambda}f(x_{1 },x_{2},\lambda)=0, \tag{114}\]
where \(\lambda\) is the _Lagrange_ multiplier and the last equality ensures the constraint \(g(x_{1},x_{2})=0\).
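A minimal sympy illustration of the finite-dimensional rule just stated (added here; the particular function \(y=x_{1}+x_{2}\) and the unit-circle constraint are assumed examples): solving the three stationarity conditions recovers the constrained extrema.

```python
import sympy as sp

x1, x2, lam = sp.symbols('x1 x2 lambda', real=True)
y = x1 + x2                      # assumed function to optimise
g = x1**2 + x2**2 - 1            # assumed constraint g(x1, x2) = 0
f = y + lam * g                  # f(x1, x2, lambda) = y + lambda * g

stationary = sp.solve(
    [sp.diff(f, x1), sp.diff(f, x2), sp.diff(f, lam)],
    [x1, x2, lam], dict=True)
print(stationary)
# two points: (1/sqrt(2), 1/sqrt(2)) and (-1/sqrt(2), -1/sqrt(2)),
# the constrained maximum and minimum of y on the circle
```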
The extension to the theory of variations consists in taking a function as the Lagrange multiplier. In this case, we restrict ourselves to functionals with first derivatives only. The generalization can be done following the steps described in Appendix B. The multiplier function, \(\lambda(x_{1},x_{2},\ldots,x_{n})\), may depend on an arbitrary number of variables, depending on the constraint. Let us show how to construct the functionals based on these constraints with some illustrative examples in a two-dimensional space.
#### a.2.1 Constraint at each \((x_{1},x_{2})\in\Omega\)
First, we consider a constraint at each \((x_{1},x_{2})\in\Omega\), i. e., \(g(y,y_{x_{1}},y_{x_{2}},\ldots,x_{1},x_{2})=0\). Then, we sum the term
\[\int\limits_{\Omega}\lambda g(y,y_{x_{1}},y_{x_{2}},x_{1},x_{2})\mathrm{d}\Omega. \tag{2.22}\]
to the functional to be minimized, \(\int\limits_{\Omega}\mathcal{F}(y,y_{x_{1}},y_{x_{2}},x_{1},x_{2})\mathrm{d}\Omega\). Now, we apply the ELE to \(\lambda(x_{1},x_{2})\), obtaining \(g(y,y_{x_{1}},y_{x_{2}},\ldots,x_{1},x_{2})=0\). Note that we are not taking into account \(\int\limits_{\Omega}\mathcal{F}(y,y_{x_{1}},y_{x_{2}},\ldots,x_{1},x_{2})\mathrm{d}\Omega\) here, since it does not depend on \(\lambda\). Finally, in order to obtain the ELE corresponding to the \(y\)'s we use the functional with constraints
\[\mathcal{G}(y,y_{x_{1}},y_{x_{2}},x_{1},x_{2},\lambda)=\int\limits_{\Omega}\left[\mathcal{F}(y,y_{x_{1}},y_{x_{2}},x_{1},x_{2})+\lambda g(y,y_{x_{1}},y_{x_{2}},x_{1},x_{2})\right]\mathrm{d}\Omega \tag{2.23}\]
#### a.2.2 Constraint given by an integral in one variable of a function of two variables
Now we analyze another remarkable kind of constraint, given by _an integral in one variable of a function of two variables_, i. e., \(\int_{\Omega_{1}}g(y,\{y_{x_{i}}\},\{y_{x_{i},x_{j}}\},\ldots,x_{1},x_{2})\mathrm{d}x_{1}=0\). For example, the conservation of the norm of the wavefunction (integration in position space of the norm squared) for all times \(t\) is commonly applied in many-body physics, such as in the Multiconfigurational Time-Dependent Hartree-Fock and related self-consistent methods [11, 12, 10].
In this case we include the functional
\[\int\limits_{\Omega_{1}}\int\limits_{\Omega_{2}}\lambda(x_{2})g(y,y_{x_{1}},y_{x_{2}},x_{1},x_{2})\mathrm{d}x_{1}\mathrm{d}x_{2}, \tag{2.24}\]
where we assume that \(\Omega\) can be split into \(\Omega_{1}\) and \(\Omega_{2}\), with \(x_{j}\in\Omega_{j}\). Here we cannot directly apply the ELE as expressed above, since the multiplier depends on fewer variables than the dimension of \(\Omega\). However, we can still use Lagrange's methodology by adding a variation \(\xi\eta(x_{2})\) to the multiplier and differentiating with respect to \(\xi\).
\[\mathcal{G}(y,y_{x_{1}},y_{x_{2}},x_{1},x_{2},\xi,\eta)=\int\limits_{\Omega_{ 2}}\int\limits_{\Omega_{1}}(\lambda(x_{2})+\xi\eta(x_{2}))g(y,y_{x_{1}},y_{x_ {2}},x_{1},x_{2})\mathrm{d}x_{1}\mathrm{d}x_{2},\]
and then, differentiating with respect to \(\xi\)
\[\frac{\partial}{\partial\xi}\mathcal{G}(y,y_{x_{1}},y_{x_{2}},x_{ 1},x_{2},\xi,\eta) =\int\limits_{\Omega_{2}}\int\limits_{\Omega_{1}}\eta(x_{2})g(y,y_ {x_{1}},y_{x_{2}},x_{1},x_{2})\mathrm{d}x_{1}\mathrm{d}x_{2} \tag{2.25}\] \[=\int\limits_{\Omega_{2}}\eta(x_{2})\int\limits_{\Omega_{1}}g(y, y_{x_{1}},y_{x_{2}},x_{1},x_{2})\mathrm{d}x_{1}\mathrm{d}x_{2}. \tag{2.26}\]
Thus, \(\frac{\partial}{\partial\xi}\mathcal{G}(y,y_{x_{1}},y_{x_{2}},x_{1},x_{2},\xi,\eta)=0\) for all \(\eta(x_{2})\) if and only if the constraint \(\int\limits_{\Omega_{1}}g(y,y_{x_{1}},y_{x_{2}},x_{1},x_{2})\mathrm{d}x_{1}=0\) is fulfilled.
a.2.3 General case: Constraint given by an integral in \(m\) variables of a function of \(n\) variables
The strategy consists in including a Lagrange multiplier depending on the \(n-m\) variables which are not integrated, as can be generalized from the previous cases.
## Appendix B Derivation of Euler-Lagrange type equation for higher order derivatives
In this section we derive the ELE for a functional with higher order derivatives. First, we will derive the case of second and first order derivatives and one variable. Then, we will generalize the result.
Proof.: Let us take the functional \(\int\limits_{\Omega}\mathcal{F}(y,y_{x},y_{x^{2}},x)\mathrm{d}x\). We follow a similar procedure as in A.1 to compute the ELE. First, we evaluate the functional with the function \(\widehat{y}(x)=y(x)+\xi\eta(x)\), where \(y(x)\) is the extremal, \(\eta(x)\) is a function such that \(\eta(x)|_{\partial\Omega}=\eta_{x}(x)|_{\partial\Omega}=0\) and \(\xi\in\mathds{R}\)
\[\mathcal{I}[y+\xi\eta,y_{x}+\xi\eta_{x},y_{x^{2}}+\xi\eta_{x^{2}}]=\int\limits_{\Omega}\mathcal{F}(y+\xi\eta,y_{x}+\xi\eta_{x},y_{x^{2}}+\xi\eta_{x^{2}},x)\,\mathrm{d}x. \tag{B.1}\]
Since \(y(x)\) is an extremal of the functional \(\mathcal{I}\), then \(\frac{\mathrm{d}}{\mathrm{d}\xi}\mathcal{I}[y+\xi\eta,y_{x}+\xi\eta_{x},y_{x^{2}}+\xi\eta_{x^{2}}]|_{\xi=0}=0\). The derivative reads as
\[\frac{\mathrm{d}}{\mathrm{d}\xi}\mathcal{I}[y+\xi\eta,y_{x}+\xi\eta_{x},y_{x^{2}}+\xi\eta_{x^{2}}]=\int\limits_{\Omega}\left[\frac{\partial}{\partial\widehat{y}}\mathcal{F}(\widehat{y},\widehat{y}_{x},\widehat{y}_{x^{2}},x)\,\eta(x)+\frac{\partial}{\partial\widehat{y}_{x}}\mathcal{F}(\widehat{y},\widehat{y}_{x},\widehat{y}_{x^{2}},x)\,\eta_{x}(x)+\frac{\partial}{\partial\widehat{y}_{x^{2}}}\mathcal{F}(\widehat{y},\widehat{y}_{x},\widehat{y}_{x^{2}},x)\,\eta_{x^{2}}(x)\right]\mathrm{d}x, \tag{B.2}\]
where we have used that \(\widehat{y}(x)=y(x)+\xi\eta(x)\) and therefore \(\frac{\mathrm{d}}{\mathrm{d}\xi}\widehat{y}(x)=\eta(x)\). Now, we integrate by parts to remove any derivative of \(\eta(x)\)
\[\frac{\mathrm{d}}{\mathrm{d}\xi}\mathcal{I}[y+\xi\eta,y_{x}+\xi\eta_{x},y_{x^{2}}+\xi\eta_{x^{2}}]=\int\limits_{\Omega}\left[\frac{\partial}{\partial\widehat{y}}\mathcal{F}(\widehat{y},\widehat{y}_{x},\widehat{y}_{x^{2}},x)-\frac{\mathrm{d}}{\mathrm{d}x}\frac{\partial}{\partial\widehat{y}_{x}}\mathcal{F}(\widehat{y},\widehat{y}_{x},\widehat{y}_{x^{2}},x)+\frac{\mathrm{d}^{2}}{\mathrm{d}x^{2}}\frac{\partial}{\partial\widehat{y}_{x^{2}}}\mathcal{F}(\widehat{y},\widehat{y}_{x},\widehat{y}_{x^{2}},x)\right]\eta(x)\,\mathrm{d}x+\left[\frac{\partial}{\partial\widehat{y}_{x}}\mathcal{F}(\widehat{y},\widehat{y}_{x},\widehat{y}_{x^{2}},x)\,\eta(x)\right]_{\partial\Omega}+\left[\frac{\partial}{\partial\widehat{y}_{x^{2}}}\mathcal{F}(\widehat{y},\widehat{y}_{x},\widehat{y}_{x^{2}},x)\,\eta_{x}(x)\right]_{\partial\Omega}-\left[\frac{\mathrm{d}}{\mathrm{d}x}\frac{\partial}{\partial\widehat{y}_{x^{2}}}\mathcal{F}(\widehat{y},\widehat{y}_{x},\widehat{y}_{x^{2}},x)\,\eta(x)\right]_{\partial\Omega}. \tag{B.3}\]
Note that we integrated the last term of Eq. (B.2) twice by parts to obtain \(\eta(x)\) under the integral. We obtain a plus sign in front of the third term of the integral. Since
\(\eta(x)|_{\partial\Omega}=\eta_{x}(x)|_{\partial\Omega}=0\), the last three terms in Eq. (B.3) vanish. Finally, since Eq. (B.3) must be zero for all \(\eta(x)\), we obtain, for \(\xi=0\)
\[\frac{\partial}{\partial y}\mathcal{F}(y,y_{x},y_{x^{2}},x)-\frac{ \mathrm{d}}{\mathrm{d}x}\frac{\partial}{\partial y_{x}}\mathcal{F}(y,y_{x},y_{ x^{2}},x)+\frac{\mathrm{d}^{2}}{\mathrm{d}x^{2}}\frac{\partial}{\partial y_{x^{2}}} \mathcal{F}(y,y_{x},y_{x^{2}},x)=0.\] (B.4)
This proof can be generalized straightforwardly by imposing that \(\eta(x)\) and its first \(n-1\) derivatives vanish at \(\partial\Omega\) for a functional involving derivatives up to order \(n\). Then we get the Euler-Lagrange equation for an \(n\)-th order functional
\[\frac{\partial}{\partial y}\mathcal{F}(y,y_{x},\ldots,y_{x^{n}},x)+\sum_{k=1} ^{n}(-1)^{k}\frac{\mathrm{d}^{k}}{\mathrm{d}x^{k}}\frac{\partial}{\partial y_ {x^{k}}}\mathcal{F}(y,y_{x},\ldots,y_{x^{n}},x)=0\]
## Appendix C Independence of the complex conjugate in the functional
Differentiating a function with respect to its complex conjugate in a functional is a _confusing_ situation that we physicists have to face from time to time. Usually we try to bypass this uncomfortable dilemma by seeking an alternative way, as in the case of variations which involve the wave function and its complex conjugate. These cases are especially common in path integrals, QED [59] or many-body theory [60, 10], among others. Below, we show that in the derivation of the Euler-Lagrange equations from the functional (3) we can consider \(\Psi(\Omega,t)\) and \(\Psi^{*}(\Omega,t)\) as independent.
Proof.: Let us define \(\mathcal{G}(\Phi,\Psi,\chi,\zeta,\varepsilon)\) as
\[\mathcal{G}(\Phi,\Psi,\chi,\zeta,\varepsilon)= -i\int\limits_{0}^{\widehat{T}}\left[\int_{\Omega}\Phi(\Omega,t) \left(i\frac{\partial}{\partial t}-H(\Omega,\varepsilon)\right)\chi(\Omega, t)\mathrm{d}\Omega\right]\mathrm{d}t\] (C.1) \[+i\int\limits_{0}^{\widehat{T}}\left[\int_{\Omega}\zeta(\Omega,t )\left(i\frac{\partial}{\partial t}-H(\Omega,\varepsilon)\right)\Psi(\Omega, t)\mathrm{d}\Omega\right]\mathrm{d}t\] \[-\alpha\int\limits_{0}^{\widehat{T}}\left[\varepsilon(t)- \varepsilon_{\mathrm{ref}}(t)\right]^{2}\mathrm{d}t+\int_{\Omega}\Phi O\Psi \mathrm{d}\Omega\]
Applying the Euler-Lagrange equations for \(\chi,\zeta,\Phi,\Psi\) and \(\epsilon\) we obtain
\[i\frac{\partial}{\partial t}\Phi(\Omega,t)+H(\Omega,\epsilon)\Phi( \Omega,t)=0 \tag{13}\] \[i\frac{\partial}{\partial t}\Psi(\Omega,t)-H(\Omega,\epsilon)\Psi( \Omega,t)=0\] (14) \[i\frac{\partial}{\partial t}\chi(\Omega,t)-H(\Omega,\epsilon) \chi(\Omega,t)=-iO\Psi(\Omega,t)\] (15) \[i\frac{\partial}{\partial t}\zeta(\Omega,t)+H(\Omega,\epsilon) \zeta(\Omega,t)=-iO\Phi(\Omega,t)\] (16) \[\epsilon(t) =\epsilon_{\rm ref}(t)-\frac{1}{2\alpha}\left[-i\int_{\Omega} \Phi(\Omega,t)\frac{\partial}{\partial\epsilon}H(\Omega,\epsilon)\chi(\Omega, t)\mathrm{d}\Omega\right.\] (17) \[\left.+i\int_{\Omega}\zeta(\Omega,t)\frac{\partial}{\partial \epsilon}H(\Omega,\epsilon)\Psi(\Omega,t)\mathrm{d}\Omega\right]\]
By inspecting Eqs. (13) and (14) we check that \(\Phi^{*}(\Omega,t)\) and \(\Psi(\Omega,t)\) fulfil the same equation. Furthermore, if we set \(\Phi(\Omega,0)=\beta\Psi^{*}(\Omega,0)\), we find that \(\Phi(\Omega,t)=\beta\Psi^{*}(\Omega,t)\), being \(\beta\in\mathbb{C}\). Analogously, we get that \(\zeta(\Omega,t)=\xi\chi^{*}(\Omega,t)\), where \(\xi\in\mathbb{C}\). Next, we plug this in Eq. (17)
\[\epsilon(t)=\epsilon_{\rm ref}(t)-\frac{1}{2\alpha}\left[\beta \int_{\Omega}\Psi^{*}(\Omega,t)\frac{\partial}{\partial\epsilon}H(\Omega, \epsilon)\chi(\Omega,t)\mathrm{d}\Omega\right.\] \[\left.+\xi\int_{\Omega}\chi^{*}(\Omega,t)\frac{\partial}{\partial \epsilon}H(\Omega,\epsilon)\Psi(\Omega,t)\mathrm{d}\Omega\right]. \tag{18}\]
Note that for \(\beta=\xi=1\), we obtain
\[\left(i\frac{\partial}{\partial t}-H(\Omega,\epsilon)\right)\psi (\Omega,t)=0, \tag{19}\] \[\left(i\frac{\partial}{\partial t}-H(\Omega,\epsilon)\right)\chi (\Omega,t)=-iO\psi(\Omega,t)\delta(t-T),\] (20) \[\epsilon(t)=\epsilon_{\rm ref}(t)+\frac{1}{\alpha}\operatorname{ Im}\int_{\Omega}\chi^{*}(\Omega,t)\frac{\partial}{\partial\epsilon}H(\Omega, \epsilon)\psi(\Omega,t)\mathrm{d}\Omega, \tag{21}\]
and their complex conjugates.
Summing up, by setting \(\Phi(\Omega,0)=\Psi^{*}(\Omega,0)\) and \(\zeta(\Omega,0)=\chi^{*}(\Omega,0)\) we may derive the QOCT equations (23)-(25) as done in Sec. 2 by assuming that \(\Psi(\Omega,t)\) and \(\chi(\Omega,t)\) and their complex conjugates are independent.
|
2309.16288 | Lagrangian formalism and classical statistical ensemble | The Lagrangian formulation in the classical statistical mechanics is
introduced. A key important point is that one requires to replace the standard
real time with the imaginary time through the Wick's rotation. The area of a
constant energy-shell in the tangent bundle is preserved under the time
evolution. Consequently, a definition of the statistical ensemble can be
defined. | Sikarin Yoo-Kong | 2023-09-28T09:34:33Z | http://arxiv.org/abs/2309.16288v3 | ###### Abstract
The Lagrangian formulation of classical statistical mechanics is introduced. A key point is that one is required to replace the standard real time with an imaginary time through a Wick rotation. The area of a constant-energy shell in the tangent bundle is preserved under the time evolution. Consequently, a statistical ensemble can be defined.
**Lagrangian formalism and classical statistical ensemble**
Sikarin Yoo-Kong
_The Institute for Fundamental Study (IF),_
_Naresuan University (NU), Phitsanulok-Nakhon Sawan,_
_99 Moo 9, Tha Pho, Mueang Phitsanulok, 65000 Phitsanulok, Thailand._
_e-mail: [email protected]_
**Keywords**: Lagrangian, statistical ensemble, imaginary time.
## 1 Introduction
Classical statistical mechanics is concerned with describing the behavior of systems consisting of a large number of particles (e.g., atoms or molecules) where quantum effects are negligible. Traditionally, it is based on the Hamiltonian formulation of classical mechanics, which assumes that particles have well-defined positions and momenta [1]. In this context, the cotangent bundle (phase space) of a system, where each point represents a microstate, is a mathematical space that combines all possible positions and momenta of the particles. Liouville's theorem states that, in a Hamiltonian system with the Hamiltonian as the energy function, the hyper-volume of a constant-energy shell in the cotangent bundle is conserved over time. Consequently, one can define an ensemble: a set of different microstates that share certain macroscopic properties (like energy, volume, and particle number) [2].
Here comes a question: "Is there a Lagrangian approach to classical statistical mechanics?". To the limited knowledge of the author, there seem to have been many attempts to make a connection between Lagrangian mechanics and classical statistical mechanics, but not in a fashion similar to the Hamiltonian setup. Naively, if one tries to construct the classical statistical ensemble on the tangent bundle with a given Lagrangian \(L(\dot{q},q)=\dot{q}^{2}/2-V(q)\), where \((\dot{q},q)\) is a set of coordinates in the tangent bundle and \(V(q)\) is a potential energy, it can be immediately noticed that the Lagrangian cannot be used as the energy function, \(L(\dot{q},q)\neq E\). This problem therefore prevents us from proceeding further with formulating the ensemble on the tangent bundle. One interesting fact is that one can go to the quantum level by employing either the Hamiltonian approach or the Lagrangian approach (Feynman path integration with imaginary time) [3]. This leads to an incomplete big picture, see figure 1, since we do not have a proper Lagrangian formulation at the classical level of statistical mechanics.
In this work, we shall provide a way to formulate the classical statistical ensemble on the tangent bundle. To achieve this goal, imaginary time (the Wick rotation) must be applied. In section 2, the imaginary-time Lagrangian mechanics will be discussed and the classical statistical ensemble on the tangent bundle will be given. A simple example, the harmonic oscillator with one degree of freedom, will be used to illustrate the consistency of the physical quantities computed in both the Hamiltonian and the imaginary-time Lagrangian formulations. In section 3, a summary together with some remarks will be given.
## 2 Imaginary-time Lagrangian mechanics and classical statistical ensemble
In this section, we shall provide a way to construct the classical ensemble directly from the Lagrangian mechanics. For simplicity, we shall consider a system with one degree of freedom and the Lagrangian is given by
\[L(\dot{q},q;t)=\frac{\dot{q}^{2}}{2}-V(q)\;. \tag{2.1}\]
Under the Wick rotation on time variable: \(t\to i\tau\), the Lagrangian becomes
\[-L(\tilde{q},q)=\frac{\tilde{q}^{2}}{2}+V(q)\equiv E. \tag{2.2}\]
where \(\tilde{q}=dq/d\tau\). Interestingly, we see that the negative of the Lagrangian is now the total energy of the system. With (2.2), it is not difficult to see that
\[\tilde{q}=-\frac{\partial L}{\partial\tilde{q}}\;, \tag{2.3}\]
and the Euler-Lagrange equation is
\[\frac{\partial L}{\partial q}+\frac{d}{d\tau}\frac{\partial L}{\partial \tilde{q}}=0\;. \tag{2.4}\]
Inserting (2.3) into (2.4), one obtains
\[\tilde{\tilde{q}}=\frac{\partial L}{\partial q}\;. \tag{2.5}\]
We next would like to show that the area element of the tangent bundle is preserved under the time evolution. Consider an area element \(d\tilde{q}dq|_{\tau=0}\); at a later time \(\tau>0\) it becomes \(d\tilde{q}dq|_{\tau>0}\). Using the fact that \(q_{\tau}=q_{0}+\tilde{q}d\tau\) and \(\tilde{q}_{\tau}=\tilde{q}_{0}+\tilde{\tilde{q}}d\tau\), we have
\[d\tilde{q}dq|_{\tau>0} = \left(d\tilde{q}_{0}+d\tilde{\tilde{q}}\,d\tau\right)\left(dq_{0}+d\tilde{q}\,d\tau\right)=\left(d\tilde{q}_{0}+\frac{\partial\tilde{\tilde{q}}}{\partial\tilde{q}}d\tilde{q}_{0}\,d\tau\right)\left(dq_{0}+\frac{\partial\tilde{q}}{\partial q}dq_{0}\,d\tau\right)\approx d\tilde{q}dq|_{\tau=0}+\left(\frac{\partial\tilde{\tilde{q}}}{\partial\tilde{q}}+\frac{\partial\tilde{q}}{\partial q}\right)d\tilde{q}_{0}dq_{0}\,d\tau\;, \tag{2.6}\]
where only the terms contributing to the area element at first order in \(d\tau\) have been kept.
Using (2.3) and (2.5), equation (2.6) becomes
\[d\tilde{q}dq|_{\tau>0} = d\tilde{q}dq|_{\tau=0}\left[1+\left(\frac{\partial}{\partial\tilde{q}}\frac{\partial L}{\partial q}-\frac{\partial}{\partial q}\frac{\partial L}{\partial\tilde{q}}\right)d\tau\right]=d\tilde{q}dq|_{\tau=0}\;. \tag{2.7}\]
Therefore, the area on the tangent bundle is preserved under the time evolution and we shall treat this feature as a Lagrangian version of Liouville's theorem, see figure 2.
We are now prompted to define a density function of the states on the tangent bundle, \(\rho(\tilde{q},q;\tau)\), and it is not difficult to see that the density function satisfies the equation
\[0=\frac{\partial\rho}{\partial\tau}+\frac{\partial\rho}{\partial q}\frac{ \partial q}{\partial\tau}+\frac{\partial\rho}{\partial\tilde{q}}\frac{ \partial\tilde{q}}{\partial\tau}=\frac{\partial\rho}{\partial\tau}+\{\rho,L\}\;, \tag{2.8}\]
Figure 1: Hamiltonian and Lagrangian approaches in the classical and quantum statistical mechanics.
where \(\{*,L\}\) is treated as a Lagrange bracket1
Footnote 1: We note that this definition of the bracket is not the same with the Lagrangian bracket.
\[\{*,L\}=\left(\frac{\partial*}{\partial\tilde{q}}\frac{\partial L}{\partial q}- \frac{\partial*}{\partial q}\frac{\partial L}{\partial\tilde{q}}\right) \tag{2.9}\]
with the property \(\{*,L\}=-\{L,*\}\). Here, \(*\) is a function defined on the tangent bundle. An interesting fact is that the Lagrange bracket (2.9) provides a set of dynamical equations as follows
\[\tilde{q} = -\frac{\partial L}{\partial\tilde{q}}\;, \tag{2.10}\] \[\tilde{\tilde{q}} = \frac{\partial L}{\partial q}\;, \tag{2.11}\]
which are (2.3) and (2.5), respectively. One can note that the set of Eqs. (2.10) and (2.11) can be treated as a Lagrangian version of Hamilton's equations.
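The area preservation of Eq. (2.7) and the conservation of \(E=-L\) can be checked numerically. The sketch below is an added illustration (the harmonic potential \(V(q)=q^{2}/2\), the step size and the initial points are assumptions): it integrates Eqs. (2.10)-(2.11) with a leapfrog scheme and monitors the area of a small parallelogram of nearby points on the tangent bundle.

```python
import numpy as np

def flow(q, qt, dtau, nsteps):
    """Leapfrog integration of Eqs. (2.10)-(2.11) for V(q) = q**2/2."""
    for _ in range(nsteps):
        qt -= 0.5 * dtau * q      # dq~/dtau = dL/dq = -dV/dq = -q
        q += dtau * qt            # dq/dtau  = q~
        qt -= 0.5 * dtau * q
    return q, qt

# a small parallelogram on the tangent bundle spanned by (dq, 0) and (0, dqt)
q0, qt0, dq, dqt = 1.0, 0.3, 1e-4, 1e-4
corners = [(q0, qt0), (q0 + dq, qt0), (q0, qt0 + dqt)]
(qa, qta), (qb, qtb), (qc, qtc) = [flow(q, qt, 1e-3, 5000) for q, qt in corners]

area0 = dq * dqt
area1 = abs((qb - qa) * (qtc - qta) - (qc - qa) * (qtb - qta))
E0 = 0.5 * qt0**2 + 0.5 * q0**2          # E = -L = q~^2/2 + V(q)
E1 = 0.5 * qta**2 + 0.5 * qa**2

print("relative area change :", abs(area1 - area0) / area0)
print("relative energy drift:", abs(E1 - E0) / E0)   # both remain small
```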
Let \(dN(\tilde{q},q;\tau)\) be the number of points in an area element \(d\tilde{q}dq\) around a point \((\tilde{q},q)\). Then the probability density is given by
\[\rho(\tilde{q},q;\tau)d\tilde{q}dq=\lim_{N\to\infty}\frac{dN}{N}\;, \tag{2.12}\]
which satisfies the normalisation condition
\[\int_{\Gamma}d\Gamma\rho(\tilde{q},q;\tau)=1\;,\;\;\;d\Gamma=d\tilde{q}dq\;. \tag{2.13}\]
Any macroscopic quantity can be computed through
\[\langle\Omega(\tilde{q},q)\rangle=\int d\Gamma\rho(\tilde{q},q;\tau)\Omega( \tilde{q},q)\;, \tag{2.14}\]
which is treated as an ensemble average. Moreover, the time evolution of the ensemble average is given by
\[\frac{d}{d\tau}\langle\Omega(\tilde{q},q)\rangle=\int d\Gamma\frac{\partial\rho(\tilde{q},q;\tau)}{\partial\tau}\Omega(\tilde{q},q)=-\int d\Gamma\,\Omega(\tilde{q},q)\left(\frac{\partial\rho}{\partial\tilde{q}}\frac{\partial L}{\partial q}-\frac{\partial\rho}{\partial q}\frac{\partial L}{\partial\tilde{q}}\right)\;. \tag{2.15}\]
Here (2.8) has been used in the last step. Integrating by parts (the boundary terms vanish since \(\rho\) falls off at infinity), we obtain
\[\frac{d}{d\tau}\langle\Omega(\tilde{q},q)\rangle = \int d\Gamma\,\rho\left[\left(\frac{\partial\Omega}{\partial\tilde{q}}\frac{\partial L}{\partial q}-\frac{\partial\Omega}{\partial q}\frac{\partial L}{\partial\tilde{q}}\right)+\Omega\left(\frac{\partial}{\partial\tilde{q}}\frac{\partial L}{\partial q}-\frac{\partial}{\partial q}\frac{\partial L}{\partial\tilde{q}}\right)\right]=\int d\Gamma\,\rho\,\{\Omega,L\}=\langle\{\Omega,L\}\rangle\;. \tag{2.16}\]
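As a simple consistency check, choosing \(\Omega=q\) and \(\Omega=\tilde{q}\) in (2.16) gives
\[\frac{d}{d\tau}\langle q\rangle=\langle\{q,L\}\rangle=\langle\tilde{q}\rangle\;,\qquad\frac{d}{d\tau}\langle\tilde{q}\rangle=\langle\{\tilde{q},L\}\rangle=\Big\langle\frac{\partial L}{\partial q}\Big\rangle\;,\]
which are the ensemble-averaged versions of (2.10) and (2.11).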
Figure 2: Area under the time evolution on the tangent bundle.
For an equilibrium macroscopic state, the density function does not explicitly depend on time, \(\frac{\partial\rho}{\partial\tau}=0\), which demands
\[\{\rho_{eq},L\}=0\;. \tag{2.17}\]
A possible solution is therefore that \(\rho_{eq}\) depends on \((\tilde{q},q)\) only through the Lagrangian, \(\rho_{eq}=\rho_{eq}(L(\tilde{q},q))\). Indeed, one then finds that
\[\{\rho_{eq}(L),L\}=\rho^{\prime}_{eq}(L)\{L,L\}=0\;. \tag{2.18}\]
This means that \(\rho_{eq}\) is constant on the energy surface \(E=-L\) in the tangent bundle.
**Microcanonical ensemble**: For a system with fixed internal energy \(U=\langle E\rangle\), volume \(V\), and number of particles \(n\), the density function is given by
\[\rho(\tilde{q},q)=\frac{1}{\Sigma}\delta(L(\tilde{q},q)+U)\;. \tag{2.19}\]
The normalisation \(\int\rho(\tilde{q},q)d\Gamma=1\) demands
\[\Sigma(U)=\int d\Gamma\delta(L(\tilde{q},q)+U)\;. \tag{2.20}\]
Then the ensemble defined on the tangent bundle with energy less than or equal to \(U\) is given by
\[\Omega(U)=\int_{-L(\tilde{q},q)\leq U}d\Gamma=\int_{0}^{U}dE\Sigma(E)\;. \tag{2.21}\]
With the ensemble (2.21), the classical statistical entropy is given by
\[S(U,V,n)=k_{B}\ln\Omega(U)\;. \tag{2.22}\]
For two separate systems characterised by \((U_{1},V_{1},n_{1})\) and \((U_{2},V_{2},n_{2})\), the total ensemble is given by \(\Omega_{12}=\Omega_{1}\Omega_{2}\), resulting in
\[S_{12}=S_{1}+S_{2}\;, \tag{2.23}\]
which is known as the additive property of the entropy.
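Explicitly, the additivity follows from \(\Omega_{12}=\Omega_{1}\Omega_{2}\) and (2.22),
\[S_{12}=k_{B}\ln(\Omega_{1}\Omega_{2})=k_{B}\ln\Omega_{1}+k_{B}\ln\Omega_{2}=S_{1}+S_{2}\;.\]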
Now we are ready to make the connection between the microscopic and the macroscopic worlds. Let us start with the definition of the temperature,
\[\frac{1}{T}=\left(\frac{\partial S}{\partial U}\right)\Big{|}_{V,n}\;. \tag{2.24}\]
Allowing the two systems introduced above to exchange energy, the change of the total entropy is
\[dS_{12}=dS_{1}+dS_{2}\;. \tag{2.25}\]
At equilibrium, where \(dS_{12}=0\) and \(d(U_{1}+U_{2})=0\), one obtains
\[\frac{1}{T_{1}}=\left(\frac{\partial S_{1}}{\partial U_{1}}\right)\Big{|}_{V_ {1},n_{1}}=\left(\frac{\partial S_{2}}{\partial U_{2}}\right)\Big{|}_{V_{2},n _{2}}=\frac{1}{T_{2}}\;, \tag{2.26}\]
which gives the thermal equilibrium condition between the two systems, i.e., the zeroth law of thermodynamics.
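In more detail, writing \(dU_{2}=-dU_{1}\) at fixed volumes and particle numbers, the equilibrium condition reads
\[0=dS_{12}=\frac{\partial S_{1}}{\partial U_{1}}dU_{1}+\frac{\partial S_{2}}{\partial U_{2}}dU_{2}=\left(\frac{\partial S_{1}}{\partial U_{1}}-\frac{\partial S_{2}}{\partial U_{2}}\right)dU_{1}\;,\]
and since \(dU_{1}\) is arbitrary the bracket must vanish, which is exactly (2.26).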
**Example**: We shall now consider the imaginary-time Lagrangian for the harmonic oscillator
\[-L(\tilde{q},q)=\frac{1}{2}\left(\tilde{q}^{2}+q^{2}\right)=E\;. \tag{2.27}\]
This new form of the Lagrangian allows us to construct the classical ensemble on the tangent bundle. In the energy range \(E\to E+\delta E\), see figure 3, the number of microstates is given by
\[\delta\Omega=2\pi\delta E\;, \tag{2.28}\]
and, obviously, in the energy range \(0\to E\), the number of microstates is given by
\[\Omega(E)=\frac{1}{h}\int_{0}^{E}d\Omega=\frac{2\pi E}{h}\;. \tag{2.29}\]
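Two short cross-checks are in order. First, the region \(\frac{1}{2}(\tilde{q}^{2}+q^{2})\leq E\) is a disc of radius \(\sqrt{2E}\), whose area \(\pi(\sqrt{2E})^{2}=2\pi E\) is consistent with (2.28) and (2.29). Second, combining (2.29) with (2.22) and (2.24),
\[S(U)=k_{B}\ln\Omega(U)=k_{B}\ln\frac{2\pi U}{h}\;,\qquad\frac{1}{T}=\frac{\partial S}{\partial U}=\frac{k_{B}}{U}\;\;\Longrightarrow\;\;U=k_{B}T\;,\]
which is exactly the equipartition value used in the next step.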
Applying the relation \(U=\langle E\rangle=k_{B}T\) and the first law \(\delta Q=\delta U\), one finds
\[\frac{\delta\Omega}{\Omega}=\frac{\delta U}{U}=\frac{\delta Q}{k_{B}T}\to dS= \frac{\delta Q}{T}\sim\frac{\delta\Omega}{\Omega} \tag{2.30}\]
Finally, we find that the entropy is given by the logarithm of the total number of microstates, \(S=k_{B}\ln\Omega(E)\).
**Canonical ensemble**: A system 1, characterised by \((U_{1},V_{1},n_{1})\), is in thermal equilibrium at temperature \(T\) with a heat bath, labelled 2, characterised by \((E_{2},V_{2},n_{2})\), with the conditions \(E_{2}\gg E_{1}\), \(n_{2}\gg n_{1}\) and \(E_{12}=E_{1}+E_{2}\). Energy is allowed to be exchanged between the two systems, but particles are not. The Lagrangian of the total system is given by
\[L_{12}(\tilde{q}_{1},\tilde{q}_{2},q_{1},q_{2})=L_{1}(\tilde{q}_{1},q_{1})+L_{2}(\tilde{q}_{2},q_{2})\;. \tag{2.31}\]
Since the total system is isolated, the density function of the total system is given by
\[\rho_{12}(\tilde{q},q)=\frac{1}{\Sigma_{12}}\delta(L_{12}+E_{12})\;, \tag{2.32}\]
where \(\tilde{q}=(\tilde{q}_{1},\tilde{q}_{2})\) and \(q=(q_{1},q_{2})\) and
\[\Sigma_{12}=\int d\Gamma_{12}\delta(L_{12}+E_{12})\;. \tag{2.33}\]
Then the classical ensemble is given by
\[\Omega_{12}(E_{12})=\int d\Gamma_{12}\delta(L_{12}+E_{12})\;. \tag{2.34}\]
We are actually interested in studying the properties of system 1. Then one has to trace out system 2:
\[\rho_{1}(\tilde{q}_{1},q_{1}) = Tr_{2}\rho_{12}(\tilde{q},q) \tag{2.35}\] \[= \frac{\int d\tilde{q}_{2}\int dq_{2}\delta(L_{1}+L_{2}+E_{12})}{ \Omega_{12}(E_{12})}\] \[= \frac{\Omega_{2}(E_{12}+L_{1})}{\Omega_{12}(E_{12})}\;,\]
where \(\Omega_{2}(E_{2})=\Omega_{2}(E_{12}+L_{1})\) is the classical ensemble of system 2. With the condition \(E_{1}\ll E_{12}\), one can expand \(\ln\Omega_{2}(E_{2})\) around \(E_{2}=E_{12}\), resulting in
\[\ln\Omega_{2}(E_{2})\approx\ln\Omega_{2}(E_{12})+\frac{\partial\ln\Omega_{2}} {\partial E_{2}}\Big{|}_{E_{2}=E_{12}}L_{1}\;. \tag{2.36}\]
Figure 3: Ensemble in the energy range \(E\to E+\delta E\) on the tangent bundle.
Since systems 1 and 2 are in thermal equilibrium at temperature \(T\), we have
\[\Omega_{2}(E_{12}+L_{1})=\Omega_{2}(E_{12})\,e^{\frac{L_{1}}{k_{B}T}}\;. \tag{2.37}\]
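The exponentiation step uses \(S_{2}=k_{B}\ln\Omega_{2}\) together with the definition (2.24) of the temperature,
\[\frac{\partial\ln\Omega_{2}}{\partial E_{2}}\Big{|}_{E_{2}=E_{12}}=\frac{1}{k_{B}}\frac{\partial S_{2}}{\partial E_{2}}=\frac{1}{k_{B}T}\;,\]
so that exponentiating (2.36) indeed gives (2.37).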
Inserting (2.37) into (2.35), one gets
\[\rho_{1}(\tilde{q}_{1},q_{1})=\frac{\Omega_{2}(E_{12})}{\Omega_{12}(E_{12})}e^{\frac{L_{1}}{k_{B}T}}=\frac{e^{\beta L_{1}}}{\int d\Gamma_{1}e^{\beta L_{1}}}\propto e^{\beta L_{1}}\;, \tag{2.38}\]
where \(e^{\beta L_{1}}\) will be treated as a Lagrangian version of the Boltzmann factor and \(\beta=\frac{1}{k_{B}T}\).
What we have now is that, for any system in thermal equilibrium with its surroundings, the density function is given by
\[\rho(\tilde{q},q)=\frac{e^{\beta L(\tilde{q},q)}}{\int d\Gamma e^{\beta L( \tilde{q},q)}}\;, \tag{2.39}\]
and the canonical partition function \(Z\) is defined as
\[Z\equiv\frac{1}{h}\int d\Gamma\,e^{\beta L(\tilde{q},q)}\;. \tag{2.40}\]
Next, we consider
\[-\frac{\partial}{\partial\beta}\ln Z=-\frac{\int d\Gamma\,L\,e^{\beta L}}{\int d\Gamma\,e^{\beta L}}=\langle-L\rangle=U\;. \tag{2.41}\]
Then we employ the relation \(U=\frac{\partial}{\partial\beta}(\beta F)\), where \(F\) is the Helmholtz free energy. The final relation is
\[Z=e^{-\beta F}\;,\;\;\mbox{or}\;\;F(T,V,n)=-k_{B}T\ln Z(T)\;. \tag{2.42}\]
**Example**: We shall work out the harmonic oscillator with one degree of freedom again. The partition function is given by
\[Z=\frac{1}{h}\int_{-\infty}^{+\infty}d\tilde{q}\int_{-\infty}^{+\infty}dq\,e^{-\frac{\beta}{2}(\tilde{q}^{2}+q^{2})}=\frac{2\pi k_{B}T}{h}\;, \tag{2.43}\]
which is identical to the result obtained from the Hamiltonian approach.
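The Gaussian integral in (2.43) can also be checked numerically. The short Python sketch below evaluates \(\frac{1}{h}\int d\tilde{q}\,dq\,e^{\beta L}\) on a finite grid and compares it with the analytic value \(2\pi k_{B}T/h\); the grid sizes and the convention \(h=1\) are chosen here purely for illustration.

```python
# Numerical cross-check of the canonical partition function (2.43) for the
# harmonic oscillator, -L = (~q^2 + q^2)/2, in units where h = 1.
import numpy as np

def partition_function(beta, qmax=20.0, n=200001, h=1.0):
    """Z = (1/h) * Int d~q dq exp(beta*L); the double integral factorises."""
    q = np.linspace(-qmax, qmax, n)
    dq = q[1] - q[0]
    one_dim = np.sum(np.exp(-0.5 * beta * q**2)) * dq   # Int dq exp(-beta q^2/2)
    return one_dim**2 / h                                # same integral over ~q

for beta in (0.5, 1.0, 2.0):
    numeric = partition_function(beta)
    analytic = 2.0 * np.pi / beta        # 2*pi*k_B*T with k_B*T = 1/beta
    print(f"beta = {beta:3.1f}   numeric = {numeric:9.6f}   analytic = {analytic:9.6f}")
```

The numerical and analytic values agree to the accuracy of the grid.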
## 3 Concluding summary
We have provided a systematic way of constructing the classical statistical ensemble on the tangent bundle. The key ingredient is the Wick rotation from real time to imaginary time applied to the Lagrangian. This transformation gives a pair of first-order differential equations which can be viewed as the Lagrangian analogue of Hamilton's equations. With this new structure on the tangent bundle, the area element does not change under the time evolution, and this feature can be considered as the Lagrangian version of Liouville's theorem on the tangent bundle. With these ingredients, one can naturally construct the statistical ensemble on the tangent bundle, and the central quantity in this context, the statistical (Boltzmann) entropy, follows. For the well-known example of the harmonic oscillator with one degree of freedom, both approaches, the Hamiltonian one and the imaginary-time Lagrangian one, give identical results for the physical quantities, i.e. the entropy and the canonical partition function. We hope that, with this preliminary work, the missing piece in figure 1 is filled and the picture is completed. Moreover, this alternative approach on the Lagrangian side may expose new mathematical features, leading to a new playground for studying physics.
**Acknowledgements**
S. Yoo-Kong would like to express deep gratitude to colleagues for valuable discussions.
# On Some Unramified Families of Motivic Euler Sums
Ce Xu\({}^{a,1}\) and Jianqiang Zhao\({}^{b,2}\)
a. School of Mathematics and Statistics, Anhui Normal University, Wuhu 241002, P.R. China
b. Department of Mathematics, The Bishop's School, La Jolla, CA 92037, USA
Footnote 1: Email: [email protected]
Footnote 2: Email: [email protected]
**Abstract.** It is well known that sometimes Euler sums (i.e., alternating multiple zeta values) can be expressed as \(\mathbb{Q}\)-linear combinations of multiple zeta values (MZVs). In her thesis, Glanois presented a criterion for motivic Euler sums (MES) to be unramified, namely, expressible as \(\mathbb{Q}\)-linear combinations of motivic MZVs. By applying this criterion we present a few families of such unramified MES in two groups. In one such group we can further prove the concrete identities relating the MES to the motivic MZVs, determined up to a rational multiple of a motivic Riemann zeta value by a result of Brown.
**Keywords**: (motivic) multiple zeta values, (motivic) Euler sums.
**AMS Subject Classifications (2020):** 11M32; 11M99.
## 1 Introduction
The ubiquitous nature of multiple zeta values (MZVs) has attracted many mathematicians and theoretical physicists in recent years after the seminal works of Zagier [13] and Hoffman [8]. Their higher-level generalization is given by the _colored multiple zeta values_ (CMZVs) of level \(N\) defined as follows. Let \(\mathbb{N}\) be the set of positive integers and \(\mathbb{N}_{0}=\mathbb{N}\cup\{0\}\). For any composition \((s_{1},\ldots,s_{d})\in\mathbb{N}^{d}\) and \(N\)-th roots of unity \((\varepsilon_{1},\ldots,\varepsilon_{d})\) we define
\[\zeta\begin{pmatrix}s_{1},\ldots,s_{d}\\ \varepsilon_{1},\ldots,\varepsilon_{d}\end{pmatrix}:=\sum_{0<k_{1}<\cdots<k_{ d}}\frac{\varepsilon_{1}^{k_{1}}\cdots\varepsilon_{d}^{k_{d}}}{k_{1}^{s_{1}} \cdots k_{d}^{s_{d}}}.\]
To guarantee convergence we impose the condition that \((s_{d},\varepsilon_{d})\neq(1,1)\).
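For instance, at level \(N=2\) the depth one values
\[\zeta\begin{pmatrix}2\\ -1\end{pmatrix}=\sum_{k\geq 1}\frac{(-1)^{k}}{k^{2}}=-\frac{\pi^{2}}{12}=-\frac{1}{2}\zeta(2)\quad\text{and}\quad\zeta\begin{pmatrix}1\\ -1\end{pmatrix}=\sum_{k\geq 1}\frac{(-1)^{k}}{k}=-\log 2\]
illustrate the definition; the first is already a rational multiple of \(\zeta(2)\), the simplest instance of an Euler sum reducing to MZVs.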
The CMZVs have played a pivotal role in the theory of mixed Tate motives over \(\mathbb{Z}[\mu_{N}][1/N]\) (resp. \(\mathbb{Z}[\mu_{N}]\)), where \(\mu_{N}=\exp(2\pi i/N)\), for \(N=1,2,4,6,8\) (resp. \(N=6\)) as manifested by the works [4, 5, 6]. In fact, they first appeared unexpectedly in the computation of Feynman integrals in the 1990s. In particular, the level two MZVs, sometimes also called _Euler sums_, have been studied quite intensively in [1, 3, 7, 9]. To save space, |